content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Billing for Hasura¶ Having an active billing account is mandatory to create pro-tier clusters on Hasura. Billing accounts can be created and managed in the Hasura Dashboard. This section details the process of adding and modifying a billing account. Adding and activating billing account¶ Log in to Hasura Dashboard and add your payment details at. You will be asked to add credit card related information like card details, your name and the billing address. Your card will also be temporarily charged with $1 to verify the information provided. This transaction will be reversed immediately after the verification in a few minutes. Modifying billing information¶ You may want to modify your billing information to either change details like CVV, billing address, etc. or replace a saved card. Changing the saved card¶ To replace a saved card, please drop a note to [email protected] from your registered email address. We will work with our payment service provider to clear the details of your current card and notify you. You are then expected to add a replacement card within 72 hours. Failure to do so may lead to your pro-tier clusters being temporarily suspended. Changing billing information¶ To change your billing details like CVV, address, etc., please drop a note to [email protected] from your registered email address. You will be notified once the details have been updated. Cancelling your subscription¶ To cancel your subscription, simply delete your pro-tier clusters. Your card will be charged as usual at the end of the billing cycle for the outstanding amount. Transfer ownership of a cluster¶ Drop a note to [email protected] from your registered email address, copying the new owner.
https://docs.hasura.io/0.15/manual/billing/index.html
2018-06-18T03:18:04
CC-MAIN-2018-26
1529267860041.64
[]
docs.hasura.io
DBCC UPDATEUSAGE (Transact-SQL) Reports and corrects pages and row count inaccuracies in the catalog views. These inaccuracies may cause incorrect space usage reports returned by the sp_spaceused system stored procedure. We recommend the following: - Do not run DBCC UPDATEUSAGE routinely.. Result Sets DBCC UPDATEUSAGE returns (values may vary): DBCC execution completed. If DBCC printed error messages, contact your system administrator.2012)
https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-updateusage-transact-sql?view=sql-server-2017
2018-06-18T04:28:19
CC-MAIN-2018-26
1529267860041.64
[array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) ]
docs.microsoft.com
1. Drag a List component into the second row of the table layout. Using the Layout ribbon, change the list's Size to Fit Both. Change the List's Name to EmployeeList 2. On the Repository tab, locate the file xEmployee. Drag and drop the following fields into the List component to define its columns: xEmployeeIdentification, xEmployeeSurname, xEmployeeGivenNames, xEmployeePostalCode, xDepartmentCode and xEmployeeSalary. 3. Select the Details tab and click on the column heading for employee identification. Note that the column ColumnXEMPLOYID1 is now selected. a. Change ColumnCaption property to Number. b. Change ColumnCaptionType to Caption. c. Create an ItemGotSelection event routine for the list 4. Leave other column headings unchanged. Your list should now look like the following:
https://docs.lansa.com/14/en/lansa095/content/lansa/wbfeng01_0305.htm
2018-06-18T03:54:54
CC-MAIN-2018-26
1529267860041.64
[]
docs.lansa.com
You can use the placement policy to have vRealize Automation determine where to place machines when you deploy new blueprints. The placement policy uses the analytics of vRealize Operations Manager to identify workloads on your clusters so that it can suggest placement destinations. You must perform several steps before you can use the placement policy. In vRealize Automation, you create endpoints for the vRealize Operations Manager and vCenter Server instances. Then, you create a fabric group, and add reservations to your vCenter Server endpoint. To ensure that vRealize Operations Manager provides workload placement analytics to vRealize Automation, you must: Install the vRealize Automation Solution in the vRealize Operations Manager instance that is being used for workload placement. Configure vRealize Operations Manager to monitor the vCenter Server. To configure vRealize Automation and vRealize Operations Manager for workload placement, see Configuring Workload Placement. Locating the Placement Policy In your vRealize Automation instance, select . To use the workload placement analytics that vRealize Operations Manager provides, select Use vRealize Operations Manager for placement recommendations If you do not use the workload placement policy, vRealize Automation uses default placement method.
https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-BC3F6BFA-8F76-4975-8558-8C5C1639A827.html
2018-06-18T04:17:20
CC-MAIN-2018-26
1529267860041.64
[]
docs.vmware.com
glGetUniformLocation — Returns the location of a uniform variable program Specifies the program object to be queried. name Points to a null terminated string containing the name of the uniform variable whose location is to be queried. glGetUniformLocation may be queried by calling glGetUniformLocation for each field within the structure. The array element operator "[]" and the structure field operator "." may be used in name in by using the name of the array, or by using the name appended by "[0]".. GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. GL_INVALID_OPERATION is generated if program is not a program object. GL_INVALID_OPERATION is generated if program has not been successfully linked. glGetActiveUniform with arguments program and the index of an active uniform variable glGetProgram with arguments program and GL_ACTIVE_UNIFORMS or GL_ACTIVE_UNIFORM_MAX_LENGTH glGetUniform with arguments program and the name of a uniform variable open.gl - The Graphics Pipeline opengl-tutorial.org - Tutorial 13 : Normal Mapping opengl-tutorial.org - Tutorial 14 : Render To Texture Copyright © 2003-2005 3Dlabs Inc. Ltd. This material may be distributed subject to the terms and conditions set forth in the Open Publication License, v 1.0, 8 June 1999..
http://docs.gl/gl3/glGetUniformLocation
2018-06-18T03:32:29
CC-MAIN-2018-26
1529267860041.64
[]
docs.gl
For purposes of key definition precedence, the scope-qualified key definitions from a child scope are considered to occur at the location of the scope-defining element within the parent scope. Within a single key scope, key precedence is determined by which key definition comes first in the map, or by the depth of the submap that defines the key. This was true for all key definitions prior to DITA 1.3, because all key definitions were implicitly in the same key scope. Scope-qualified key names differ in that precedence is determined by the location where the key scope is defined. This distinction is particularly important when key names or key scope names contain periods. While avoiding periods within these names will avoid this sort of issue, such names are legal so processors will need to handle them properly. The following root map contains one submap and one key definition. The submap defines a key named "sample". <map> <!-- The following mapref defines the key scope "scopeName" --> <mapref href="submap.ditamap" keyscope="scopeName"/> <!-- The following keydef defines the key "scopeName.sample" --> <keydef keys="scopeName.sample" href="losing-key.dita"/> <!-- Other content, key definitions, etc. --> </map> <map> <keydef keys="sample" href="winning-key.dita"/> <!-- Other content, key definitions, etc. --> </map> When determining precedence, all keys from the key scope "scopeName" occur at the location of the scope-defining element -- in this case, the <mapref> element in the root map. Because the <mapref> comes first in the root map, the scope-qualified key name "scopeName.sample" that is pulled from submap.ditamap occurs before the definition of "scopeName.sample" in the root map. This means that in the context of the root map, the effective definition of "scopeName.sample" is the scope-qualified key definition that references winning-key.dita. The following illustration shows a root map and several submaps. Each submap defines a new key scope, and each map defines a key. In order to aid understanding, this sample does not use valid DITA markup; instead, it shows the content of submaps inline where they are referenced. <map> <!-- Start of the root map --> <mapref href="submapA.ditamap" keyscope="scopeA"> <!-- Contents of submapA.ditamap begin here --> <mapref href="submapB.ditamap" keyscope="scopeB"> <!-- Contents of submapB.ditamap: define key MYKEY --> <keydef keys="MYKEY" href="example-ONE.dita"/> </mapref> <keydef keys="scopeB.MYKEY" href="example-TWO.dita"/> <!-- END contents of submapA.ditamap --> </mapref> <mapref href="submapC.ditamap" keyscope="scopeA.scopeB"> <!-- Contents of submapC.ditamap begin here --> <keydef keys="MYKEY" href="example-THREE.dita"/> </mapref> <keydef keys="scopeA.scopeB.MYKEY" href="example-FOUR.dita"/> </map> The sample map shows four key definitions. From the context of the root scope, all have key names of "scopeA.scopeB.MYKEY". <mapref>to submapB.ditamap, so from the context of submapA.ditamap, the scope-qualified key name is "scopeB.MYKEY". The key scope "scopeA" is defined on the <mapref>to submapA.ditamap, so from the context of the root map, the scope-qualified key name is "scopeA.scopeB.MYKEY". <mapref>to submapA.ditamap, so from the context of the root map, the scope-qualified key name is "scopeA.scopeB.MYKEY". <mapref>to submapC.ditamap, so from the context of the root map, the scope-qualified key name is "scopeA.scopeB.MYKEY". Because scope-qualified key definitions are considered to occur at the location of the scope-defining element, the effective key definition is the one from submapB.ditamap (the definition that references example-ONE.dita).
http://docs.oasis-open.org/dita/dita/v1.3/errata01/os/complete/part3-all-inclusive/archSpec/base/example-keys-scope-defining-precedence.html
2018-06-18T03:55:38
CC-MAIN-2018-26
1529267860041.64
[]
docs.oasis-open.org
A workflow extends the extensibleService. This means that all workflows inherit properties and methods provided by the extensibleService. Extending a workflow allows you to add your own steps, remove existing steps, and inject custom data handling logic. Refer to inline documentation on what those properties and methods are. We highly recommend that you complete the Tutorial: Creating an Horizon Plugin if you have not done so already. If you do not know how to package and install a plugin, the rest of this tutorial will not make sense! In this tutorial, we will examine an existing workflow and how we can extend it as a plugin. Note Although this tutorial focuses on extending a workflow, the steps here can easily be adapted to extend any service that inherited the extensibleService. Examples of other extensible points include table columns and table actions. Remember that the goal of this tutorial is to inject our custom step into an existing workflow. All of the files we are interested in reside in the static folder. myplugin │ ├── enabled │ └── _31000_myplugin.py │ └── static └── horizon └── app └── core └── images ├── plugins │ └── myplugin.module.js │ └── steps └── mystep ├── mystep.controller.js ├── mystep.help.html └── mystep.html This is the entry point into our plugin. We hook into an existing module via the run block which is executed after the module has been initialized. All we need to do is inject it as a dependency and then use the methods provided in the extensible service to override or modify steps. In this example, we are going to prepend our custom step so that it will show up as the first step in the wizard. (function () { 'use strict'; angular .module('horizon.app.core.images') .run(myPlugin); myPlugin.$inject = [ 'horizon.app.core.images.basePath', 'horizon.app.core.images.workflows.create-volume.service' ]; function myPlugin(basePath, workflow) { var customStep = { id: 'mypluginstep', title: gettext('My Step'), templateUrl: basePath + 'steps/mystep/mystep.html', helpUrl: basePath + 'steps/mystep/mystep.help.html', formName: 'myStepForm' }; workflow.prepend(customStep); } })(); Note Replace horizon.app.core.images.workflows.create-volume.service with the workflow you intend to augment. It is important to note that the scope is the glue between our controllers, this is how we are propagating events from one controller to another. We can propagate events upward using the $emit method and propagate events downward using the $broadcast method. Using the $on method, we can listen to events generated within the scope. In this manner, actions we completed in the wizard are visually reflected in the table even though they are two completely different widgets. Similarly, you can share data between steps in your workflow as long as they share the same parent scope. In this example, we are listening for events generated by the wizard and the user panel. We also emit a custom event that other controllers can register to when favorite color changes. (function() { 'use strict'; angular .module('horizon.app.core.images') .controller('horizon.app.core.images.steps.myStepController', myStepController); myStepController.$inject = [ '$scope', 'horizon.framework.widgets.wizard.events', 'horizon.app.core.images.events' ]; function myStepController($scope, wizardEvents, imageEvents) { var ctrl = this; ctrl.favoriteColor = 'red'; /////////////////////////// $scope.$on(wizardEvents.ON_SWITCH, function(e, args) { console.info('Wizard is switching step!'); console.info(args); }); $scope.$on(wizardEvents.BEFORE_SUBMIT, function() { console.info('About to submit!'); }); $scope.$on(imageEvents.VOLUME_CHANGED, function(event, newVolume) { console.info(newVolume); }); /////////////////////////// $scope.$watchCollection(getFavoriteColor, watchFavoriteColor); function getFavoriteColor() { return ctrl.favoriteColor; } function watchFavoriteColor(newColor, oldColor) { if (newColor != oldColor) { $scope.$emit('mystep.favoriteColor', newColor); } } } })(); In this tutorial, we will leave this file blank. Include additional information here if your step requires it. Otherwise, remove the file and the helpUrl property from your step. This file contains contents you want to display to the user. We will provide a simple example of a step that asks for your favorite color. The most important thing to note here is the reference to our controller via the ng-controller directive. This is essentially the link to our controller. <div ng- <h1 translate>Blue Plugin</h1> <div class="content"> <div class="subtitle" translate>My custom step</div> <div translate Place your custom content here! </div> <div class="selected-source clearfix"> <div class="row"> <div class="col-xs-12 col-sm-8"> <div class="form-group required"> <label class="control-label" translate>Favorite color</label> <input type="text" class="form-control" ng- </div> </div> </div><!-- row --> </div><!-- clearfix --> </div><!-- content --> </div><!-- controller --> Now that we have completed our plugin, lets package it and test that it works. If you need a refresher, take a look at the installation section in Tutorial: Creating an Horizon Plugin. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/horizon/queens/contributor/tutorials/workflow_extend.html
2018-06-18T03:28:14
CC-MAIN-2018-26
1529267860041.64
[]
docs.openstack.org
New in version 3.2. Content creation wizards¶ See the How-to section on wizards for an introduction to creating wizards. Wizard classes are sub-classes of cms.wizards.wizard_base.Wizard. They need to be registered with the cms.wizards.wizard_pool.wizard_pool: wizard_pool.register(my_app_wizard) Finally, a wizard needs to be instantiated, for example: my_app_wizard = MyAppWizard( title="New MyApp", weight=200, form=MyAppWizardForm, description="Create a new MyApp instance", ) When instantiating a Wizard object, use the keywords: Base Wizard¶ All wizard classes should inherit from cms.wizards.wizard_base.Wizard. This class implements a number of methods that may be overridden as required. Base Wizard methods¶ get_description¶ Simply returns the description property assigned during instantiation or one derived from the model if description is not provided during instantiation. Override this method if this needs to be determined programmatically. get_title¶ Simply returns the title property assigned during instantiation. Override this method if this needs to be determined programmatically. get_success_url¶ Once the wizard has completed, the user will be redirected to the URL of the new object that was created. By default, this is done by return the result of calling the get_absolute_url method on the object. This may then be modified to force the user into edit mode if the wizard property edit_mode_on_success is True. In some cases, the created content will not implement get_absolute_url or that redirecting the user is undesirable. In these cases, simply override this method. If get_success_url returns None, the CMS will just redirect to the current page after the object is created. This method is called by the CMS with the parameter: get_weight¶ Simply returns the weight property assigned during instantiation. Override this method if this needs to be determined programmatically. wizard_pool¶ wizard_pool includes a read-only property discovered which returns the Boolean True if wizard-discovery has already occurred and False otherwise. Wizard pool methods¶ is_registered¶ Sometimes, it may be necessary to check to see if a specific wizard has been registered. To do this, simply call: value = wizard_pool.is_registered(«wizard») You may notice from the example above that the last line in the sample code is: wizard_pool.register(my_app_wizard) This sort of thing should look very familiar, as a similar approach is used for cms_apps, template tags and even Django’s admin. Calling the wizard pool’s register method will register the provided wizard into the pool, unless there is already a wizard of the same module and class name. In this case, the register method will raise a cms.wizards.wizard_pool.AlreadyRegisteredException. unregister¶ It may be useful to unregister wizards that have already been registered with the pool. To do this, simply call: value = wizard_pool.unregister(«wizard») The value returned will be a Boolean: True if a wizard was successfully unregistered or False otherwise. get_entry¶ If you would like to get a reference to a specific wizard in the pool, just call get_entry() as follows: wizard = wizard_pool.get_entry(my_app_wizard) get_entries¶ get_entries() is useful if it is required to have a list of all registered wizards. Typically, this is used to iterate over them all. Note that they will be returned in the order of their weight: smallest numbers for weight are returned first.: for wizard in wizard_pool.get_entries(): # do something with a wizard...
http://django-cms.readthedocs.io/en/release-3.4.x/reference/wizards.html
2018-06-18T03:44:41
CC-MAIN-2018-26
1529267860041.64
[]
django-cms.readthedocs.io
Monitoring and collecting data from Kafka Kafka is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies. More information on: How it works This plugin analyzes the performance of your Kafka messaging system. The plugin can get statistics (the number of connections, the number of partitions, etc.) using the JMX interface of the Kafka service and consumers. The minimal supported version of Kafka is 0.8.0. Installation The plugin needs to be installed together with a CoScale agent, instructions on how to install the CoScale agent can be found here. If you want to monitor Kafka inside Docker containers using CoScale, check out the instructions here. Configuration JMX Connection To gather statistics Kafka should be configured to expose JMX. This can be done by setting the JMX_PORT environment variable before running the kafka-server-start command: JMX_PORT=9997 ${KAFKA}/bin/kafka-server-start.sh ${KAFKA}/config/server.properties Restart Kafka to apply these changes. In order to monitor the consumer lags, the agent needs to be installed on the servers running the Kafka consumer and needs to have access to the JMX interface of the consumer. Multiple JMX ports can be provided below in case the Kafka server and consumers are running on the same host. Active checks This plugin can be configured to insert and retrieve a message into your Kafka. This active monitoring allows us to calculate the uptime of the service and the response time of the insert and retrieval of the message. An existing topic should be provided. This topic should be dedicated for this check, since we will insert and retrieve messages from the topic.
http://docs.coscale.com/agent/plugins/kafka/
2018-06-18T04:00:09
CC-MAIN-2018-26
1529267860041.64
[]
docs.coscale.com
Work Manager WIP (Work in Progress) Enhancement With Copado v12, the Work Manager page is capable of Work in Progress concept with the new style. When you open the page, if you click on the down arrow icon in the top left corner of the page, you will see the Enable WIP option at the bottom of the drop-down list. If it is enabled, then you will see min and max settings available in the top right corner of each table. If you click on min or max texts, you will be focusing on the input boxes. Let's see how it works: - If the min value is not reached on the table, then the table header will be shown as yellow. - If the max value is passed on the table, then the table header will be shown as red. - If the user story count on the table is between the max and min values' interval, then the table header will be transparent as before.
https://docs.copa.do/agile-entities/work-manager-wip-work-in-progress-enhancement
2019-09-15T16:56:00
CC-MAIN-2019-39
1568514571651.9
[array(['https://storage.googleapis.com/helpdocs-assets/U8pXPShac2/articles/yxytkyku69/1549629220375/work-manager.png', None], dtype=object) array(['https://storage.googleapis.com/helpdocs-assets/U8pXPShrR0/articles/erpvxp6uhx/1535403266918/screen-shot-2018-08-27-at-22-53-11.png', None], dtype=object) array(['https://storage.googleapis.com/helpdocs-assets/U8pXPShrR0/articles/erpvxp6uhx/1535403590199/screen-shot-2018-08-27-at-22-57-55.png', None], dtype=object) array(['https://storage.googleapis.com/helpdocs-assets/U8pXPShrR0/articles/erpvxp6uhx/1535403657777/screen-shot-2018-08-27-at-23-00-39.png', None], dtype=object) array(['https://storage.googleapis.com/helpdocs-assets/U8pXPShrR0/articles/erpvxp6uhx/1535403760654/screen-shot-2018-08-27-at-23-02-17.png', None], dtype=object) ]
docs.copa.do
Follow the performance and scalability recommendations for your type of deployment. You can deploy Infrastructure Management to collect performance data, collect external events, manage service models, or any combination of these functions. For performance guidelines and tuning recommendations, consult the following topics:. The performance of a BMC TrueSight Infrastructure Management Server depends on several variables, and the performance and scalability guidelines and recommendations address these. Substantial changes in any of the following variables can change the expected scaling and performance numbers: Note An increase in the CPU and memory utilization is seen when you perform resource-intensive operations like PATROL Agent upgrade, server restart and reconciliation, and CRUD operations on groups. This is temporary and the resource utilization returns to normal. for Infrastructure Management.
https://docs.bmc.com/docs/display/TSOMD107/Performance+benchmarks+and+tuning+for+Infrastructure+Management
2019-09-15T17:16:11
CC-MAIN-2019-39
1568514571651.9
[]
docs.bmc.com
All content with label 2lcache+amazon+dist+getting_started+import+infinispan+listener+out_of_memory+transaction+tutorial. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, release, query, deadlock, archetype, jbossas, nexus, guide, schema, cache, httpd, s3, grid, ha, test, jcache, api, xsd, ehcache, wildfly, maven, documentation, jboss, userguide, write_behind, ec2, eap, 缓存, eap6, s, hibernate, aws, interface, clustering, setup, eviction, gridfs, mod_jk, concurrency, jboss_cache, index, events, configuration, hash_function, batch, buddy_replication, loader, xa, cloud, mvcc, notification, jbosscache3x, read_committed, xml, distribution, 2012, meeting, cachestore, data_grid, cacheloader, hibernate_search, cluster, development, permission, async, interactive, xaresource, build, domain, searchable, demo, installation, scala, ispn, mod_cluster, client, non-blocking, migration, as7, jpa, filesystem, tx, user_guide, gui_demo, eventing, client_server, infinispan_user_guide, standalone, webdav, hotrod, snapshot, repeatable_read, docs, jgroup, consistent_hash, batching, store, jta, faq, as5, protocol, docbook, lucene, jgroups, locking, favourite, rest, hot_rod more » ( - 2lcache, - amazon, - dist, - getting_started, - import, - infinispan, - listener, - out_of_memory, - transaction, - tutorial ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/2lcache+amazon+dist+getting_started+import+infinispan+listener+out_of_memory+transaction+tutorial
2019-09-15T17:11:25
CC-MAIN-2019-39
1568514571651.9
[]
docs.jboss.org
All content with label amazon+concurrency+eviction+faq+import+infinispan+jta+listener+scala+xa. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, recovery,, gridfs, out_of_memory, jboss_cache, index, events, configuration, hash_function, batch, buddy_replication, loader, write_through, cloud, mvcc, notification, tutorial, read_committed, jbosscache3x, distribution, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, permission, websocket, transaction, interactive, xaresource, build, searchable, demo, installation, client, migration, non-blocking, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, repeatable_read, snapshot, webdav, docs, batching, consistent_hash, store, as5, 2lcache, jsr-107, docbook, jgroups, lucene, locking, rest, hot_rod more » ( - amazon, - concurrency, - eviction, - faq, - import, - infinispan, - jta, - listener, - scala, - xa ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/amazon+concurrency+eviction+faq+import+infinispan+jta+listener+scala+xa
2019-09-15T16:49:49
CC-MAIN-2019-39
1568514571651.9
[]
docs.jboss.org
All content with label as5+batch+cache+grid+hot_rod+infinispan+interface+listener+xaresource. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release,, jboss_cache, import, index, events, configuration, hash_function, buddy_replication, loader, xa, pojo, write_through, jsr352, cloud, mvcc, notification, tutorial, presentation, jbosscache3x, xml, read_committed, distribution, jira, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, br, websocket, transaction, async, interactive, build, searchable, demo, scala, cache_server, installation, client, jberet, migration, more » ( - as5, - batch, - cache, - grid, - hot_rod, - infinispan, - interface, - listener, - xaresource ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+batch+cache+grid+hot_rod+infinispan+interface+listener+xaresource
2019-09-15T17:09:21
CC-MAIN-2019-39
1568514571651.9
[]
docs.jboss.org
New experience for report consumption Important Some of the functionality described in these release notes has not been released. Delivery timelines may change and projected functionality may not be released (see Microsoft policy). Learn more: What's new and planned Feature details As a report consumer, you need a simple, consistent, and easy-to-use experience when viewing items in Power BI. This feature updates the item consumption experience for reports and dashboards, and gives a face-lift to other navigation UI elements to match the consistent fluent design language. Most Power BI users use one or two reports/dashboards. So optimizing the experience of using a single item is vital to driving improved experiences. See also Power BI The 'new look' of the Power BI service
https://docs.microsoft.com/en-us/power-platform-release-plan/2019wave2/business-intelligence/new-experience-report-consumption
2019-09-15T16:39:52
CC-MAIN-2019-39
1568514571651.9
[]
docs.microsoft.com
Message-ID: <706619593.895.1568564132106.JavaMail.confluence@lga-techdocs02.pulse.prod> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_894_1753883528.1568564132106" ------=_Part_894_1753883528.1568564132106 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: The following pages are intended for organizations that place bids via t= he PulsePoint exchange. Documentation regarding PulsePoint's Demand-Side Platform (DSP) integrat= ion. This section includes implementation guides, specs, and other useful i= nformation. Documentation regarding the PulsePoint Portal for Media Buyers and Adver= tisers. Documentation regarding PulsePoint's Buyer Reporting API, which al= lows buyers to run key reports of advertising metrics for autonomous downlo= ading. Documentation regarding legacy PulsePoint applications that are being ph= ased out in favor of more current methods. This information is provided mai= nly for reference and is not intended for use going forward. The PulsePoint exchange supports two APIs: PulsePoint RTB, = a custom specification created by our in-house development team, and = OpenRTB, the standard specification created by the Interactive Adver= tising Bureau (IAB).
https://docs.pulsepoint.com/exportword?pageId=5309327
2019-09-15T16:15:32
CC-MAIN-2019-39
1568514571651.9
[]
docs.pulsepoint.com
You are viewing the RapidMiner Studio documentation for version 8.0 - Check here for latest version Join Paths (RapidMiner Studio Core) SynopsisThis operator delivers the first non-null input to its output. Description The Join Paths operator can have multiple inputs but it has only one output. This operator returns the first non-null input that it receives. This operator can be useful when some parts of the process are susceptible of producing null results which can halt the entire process. In such a scenario the Join Paths operator can be used to filter out this possibility. Input input (IOObject) This operator can have multiple inputs. When one input is connected, another input port becomes available which is ready to accept another input (if any). Multiple inputs can be provided but only the first non-null object will be returned by this operator. Output output (IOObject) The first non-null object that this operator receives is returned through this port. Tutorial Processes Returning the first non-null object This Example Process starts with the Subprocess operator. Two outputs of the Subprocess operator are attached to the first two input ports of the Join Paths operator. But both these inputs are null because the Subprocess operator has no inner operators. The 'Golf' and 'Polynomial' data sets are loaded using the Retrieve operator. The Join Paths operator has four inputs but it returns only the 'Golf' data set because it is the first non-null input that it received.
https://docs.rapidminer.com/8.0/studio/operators/utility/misc/join_paths.html
2019-09-15T16:11:37
CC-MAIN-2019-39
1568514571651.9
[]
docs.rapidminer.com
Can I receive Academic license? When using the Brekeke SIP Server in a qualified educational and/or academic institutions, licenses are free to individual students or staff members. If you or your university would like to get more information or need mulitple licenses, please send details of your request from here. Please refer to the Brekeke Software End User License Agreement (EULA) to see if you, or your institution qualifies for an “Academic Use” license. See also: Which edition is right for me?
https://docs.brekeke.com/sales/can-i-receive-academic-license
2019-09-15T16:06:27
CC-MAIN-2019-39
1568514571651.9
[]
docs.brekeke.com
Taking a Snapshot A Snapshot is a full backup of the selected metadata types into a repository/branch. To take a snapshot, follow the steps below: - Navigate to the Git Snapshots tab and select an existing Git Snapshot record. - Click on Take Snapshot Now (Create Snapshot Now if you are in v11 or under). - Optionally, enter a commit message. With the frequency set, Copado will create a schedule job and will automatically back up your Salesforce org.
https://docs.copa.do/git-snapshot/taking-a-snapshot
2019-09-15T16:55:36
CC-MAIN-2019-39
1568514571651.9
[array(['https://storage.googleapis.com/helpdocs-assets/U8pXPShac2/articles/3xlfjez9fw/1553168914246/take-snapshot-now.png', None], dtype=object) ]
docs.copa.do
Contents IT Business Management Previous Topic Next Topic Demand workbench Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Demand workbench The demand workbench provides a central location for viewing and assessing business demands. The demand workbench makes it easy to manage demands by presenting multiple interactive views of demand information on one page. The workbench is split into two panes: the top pane presents an interactive bubble chart for assessing demands and the bottom pane displays the demand details in a list view. The demand workbench provides real-time interaction between the two panes. Modifying a demand in the bubble chart automatically updates the values in the demand record. Similarly, changes made to a demand record are automatically reflected in the bubble chart. By default, the workbench displays demands screened by stakeholders or qualified by the demand manager. With the demand manager role, you can use the workbench to: View, evaluate, and update demands Create demands Create artifacts from demands, including projects, enhancements, changes, and defects With the demand manager role, you can view and evaluate demands.Figure 1. Demand Workbench The demand workbench includes the following components: The top pane displays demands in a bubble chart. The bottom pane displays demands in a list view. The header includes a back button () that opens the Demands list. Demand workbench bubble chartThe interactive bubble chart on the demand workbench is a dynamically updated graph that plots metrics for multiple demand records. Demand workbench list viewThe lower pane of the demand workbench displays a list of the demands shown in the bubble chart. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/kingston-it-business-management/page/product/planning-and-policy/concept/c_DemandWorkbench.html
2019-09-15T16:51:47
CC-MAIN-2019-39
1568514571651.9
[]
docs.servicenow.com
Discovery cloud provider requests discovery.cloudProvider.request(target, parameters) A set of discovery functions which query the cloud provider API, for information on the target discovered using the specified DiscoveryAccess and optional parameters. They each return a list depending on the request made. The target must be a DiscoveryAccess node. The following code example is taken from the AmazonWebServices.ELBv2 pattern module: // Get target health for this group. We rely on Reasoning to optimize these // calls if we request the same TargetGroup multiple times target_health_results := discovery.AWS.ELBv2.DescribeTargetHealth (da, TargetGroupArn := group.TargetGroupArn); Discovery functions are provided to query the cloud provider API. The supported requests, their parameters, and returned data are described on the Administration >page. Each request has a popup dialog which explains its usage, provides a code example, and links to the cloud vendor's API documentation for that request. For example: The following requests are supported and available to patterns supported in the BMC Discovery 11.2 release. Additional requests may be provided as part of a monthly TKU update.
https://docs.bmc.com/docs/discovery/112/developing/the-pattern-language-tpl/pattern-overview/body/functions/discovery-functions/discovery-action-functions/discovery-cloud-provider-requests
2019-09-15T17:05:20
CC-MAIN-2019-39
1568514571651.9
[]
docs.bmc.com
Live Forms Licensing and Pricing How can I signup for a free evaluation? What is the cost of a license? How can I request a quote? Where can I find the software license agreement? - What is a concurrent user? - How much does it cost to upgrade to a bigger license? - Am I entitled to upgrades? - What are the Confluence prices? - Does the Confluence plugin requires a Live Forms server license? Cloud Hosted Subscriptions How can I signup for a free evaluation? You can signup for a 30 day free cloud hosted tenant. See Evaluating Live Forms Online. How much does a subscription cost? Cloud hosted tenant subscriptions are priced based on monthly usage. See the monthly subscription price calculator on the pricing page. What are the billing terms? Customers are billed $300 upon cloud tenant subscription signup for provisioning. The $300 includes 2 hours of client services that may be used for anything e.g. tenant configuration, training, quick start form or workflow implementation etc. The first month billing cycle is a flat rate of $75. Second and subsequent month billing cycles are calculated based on metered usage of production form submissions. Read the fine print. What is a submission? See the answer on frevvo's terminology page. What is a production form/flow? See the answer on frevvo's terminology page. What are the terms of service? See frevvo's terms of service. What is the privacy policy? See frevvo's privacy policy. On-Premise Software Licenses How can I signup for a free evaluation? You can download the Live Forms software and a receive a 30 day free trial license key. See Evaluating Live Forms In-house. What is the cost of a license? See the One-Time License section on the pricing page. How can I request a quote? Contact one of our account representatives via a simple contact us form or call 203.208.3117. Where can I find the software license agreement? See frevvo's software license agreement. What is a concurrent user? See the answer on frevvo's terminology page. How much does it cost to upgrade to a bigger license? If you want to increase your license to a larger license you will just pay the cost of the new license size minus the cost you paid for your original smaller license. Your support & maintenance will be 20% of the new larger license cost. Am I entitled to upgrades? All customers with current support & maintenance contracts are entitled to free upgrades to new releases and patches. If your support & maintenance contract has expired you can become current again by purchasing a new annual contract plus paying for the months you skipped. What are the Confluence prices? We sell an add-on to Atlassian's Confluence wiki software on the Atlassian Marketplace. The frevvo Confluence plugin pricing is for Live Forms In-house software that is restricted to use with the frevvo-confluence add-on. Does the Confluence plugin requires a Live Forms server license? Yes, and this is included in the price. Fine Print Cloud Billing Fine Print Cloud hosted customers are billed by the 7th of each following month for the usage calculated for the prior month. If metered usage for any given month falls below $50, the tenant will be accessed a flat minimum $50 charge. Invoices are payable within 15 days of receipt. Cloud tenants will be disabled for non payment after 60 days.
https://docs.frevvo.com/d/display/frevvo90/Purchasing+FAQ
2019-09-15T17:02:36
CC-MAIN-2019-39
1568514571651.9
[]
docs.frevvo.com
First of all, log into your Pushwoosh Control Panel and click on the application you would like to send a push to. We've split all features into two sets of tabs – the first set applies to all platforms: these are General, Scheduling, Additional Data and Filters tabs. Tabs in the second set are platform-specific and let you customize notifications for each platform. Sending multi-language notifications. Multi-language lets you create push notifications in different languages with one click, and they will be delivered depending on device locale settings. Talk to your users in their language! Use Language tabs to create language-speicific pushes for devices with corresponding locales. The default language in Pushwoosh is English; if no content for English is specified, Pushwoosh randomly chooses a default language from those that are set. For example, if you input a message only for English (“en” tab), all other language subscribers will receive the content in English, as it is the default language. If you input the message only for “nl” and “fr”, the message will be sent to all devices, but English and Turkish subscribers will receive a randomly chosen message – intended for either Dutch (“nl”) or French (“fr”) subscribers, since “en”-message is empty In order to send language-specific pushes to language-specific devices only, you should use Language filters. Please refer to Filters tab paragraph below. Use Scheduling tab to set time you want your notifications to be sent. By default, all notifications are set to be sent now, immediately after you press Woosh! In case the "Send according to user's timezone" is set to OFF, scheduled messages are sent according to the date and time settings of your browser. If you would like to send notifications at a certain time in the future, click “Send on” and choose the time and date; scheduling date limit is 30 days. Checking the Select timezone checkbox, you can specify the timezone according to which the scheduling applies (different from your browser settings). Both scheduled and non-scheduled pushes might be cancelled in the Message History section. This is possible only when the status of a push message is Pending, Waiting, or Processing. The status Done indicates that we've already sent the notification, in which case it's not possible to cancel it. Another significant scheduling feature is “Send according to user’s timezone”. Set the toggle to ON, and your users will receive the message at the specified local time. Please note that in this case the push status will be changing from Pending to Processing back and forth for 24 hours, until all timezones are covered.. To enable Frequency Capping for your account, please contact our Customer Support Team or your Customer Success Manager. given above means that in case a user has received 10 push messages from that app within the last 30 days, the current message won't be sent to that user. Please note that frequency capping takes into account push notifications of all sources, including Triggered Messages, Autopushes, Geozone pushes, etc. URL. You can send a URL along with a push notification. When a user opens this notification, the application will be launched first, and then the URL will be opened in the default browser, for example, Safari on iOS. Rich Media. Send a deeply customized Rich Media page along with your push notification. Rich Media pages can contain formatted text, links, images, embed videos and other data. It allows to send colorful flyers, pictures and ads directly to your app. Rich Media pages are displayed in the app's webview, so it’s easy for a user to return to the app after closing the page. Custom data. Pass any additional JSON parameters in a {“key”:”value”} format to your app. Deep Link. Use Deep Links to drive user directly to a specific product or piece of content within your app instead of the home page. Deep Links allow displaying the same image across all platforms with no need to specify the image URL for each platform separately. To add the image to your message containing a Deep Link, enter the URL into deep link’s params field. The system looks at the available Open Graph meta tags for that URL. If there are any, og:image, og:title, og:description meta tags are parsed to the URL’s preview. By clicking Populate image settings, you set the og:image content as a value of the corresponding platform-specific params: for iOS — iOS 10+ Media attachment field; for Android — Banner field; for Chrome — Large image field. For example, if the following meta tag is specified on the entered URL: <meta property="og:image" content="" /> Then the image URL will be populated with: As a result, the push notification is sent with the deep link you selected and the image specified in URL’s og:image meta tag. If there is no og:image meta tag or it’s set improperly, the push message will contain no image unless you specify the image URL for each platform manually. In this tab you can apply a filter which you should create in the Filters section first. Filters are used to send push notifications to a specific segment of devices only. For example, in order to send push to devices with French locale only, you have to create a #Language(fr) filter first, and apply this filter to your push. Thus, the message will be sent only to devices subscribed with the French language. If you do not wish to use a chosen filter, click the “Clear filter” button. Title. Specify the custom title of a push notification different from the app name. To boost open rates, personalize the message's title by using the Dynamic Content. Subtitle. Specify the subtitle for the iOS push notification. It'll be displayed between the title and the text of push message. Subtitles can be personalized with the Dynamic Content placeholders. Badges. Set the iOS badge number to be sent with your push. Use +n / -n to increment / decrement the current badge value. Sending 0 clears badge from your app’s icon. Sound. Specify the custom sound from the main bundle of your application. iOS8 Category. Select a Category with the set of buttons for iOS8. iOS Thread ID. Identifier to group related notifications by threads. Messages with the same thread ID are grouped on the lock screen and in the Notification Center. To create a thread ID, press Edit: Enter the name and ID in the window opened, then press Save: Select the thread ID from the drop-down list: Take a look at how grouped push notifications with two different thread IDs look on a device: iOS Root Params. Root level parameters to the APS dictionary. iOS10+ Media attachment. URL to any video, audio, picture or GIF for iOS rich notification. See this guide for more details on iOS 10 Rich Notifications. Send silent notification. Allows to send a silent push with content-available property. When a silent push arrives, iOS wakes up your app in the background, so you can get new data from your server or do background information processing. Critical Push. Stands for iOS critical alerts playing a sound even if Do Not Disturb is on or iPhone is muted. Critical alerts are allowed only for apps entitled by Apple. To enable critical alerts for your app, submit the entitlement request at Apple Developer Portal. Newsstand notification. Allows to send a push to your iOS newsstand application. Expiration time. Sets the period after which the push won't be delivered if the device was offline. Badges. Specify the badge value; use +n to increment. Header. Specify your Android notification header here. Personalize the message's header with the Dynamic Content placeholders. Sound. Specify the custom sound filename in the “res/raw” folder of your application. Omit the file extension. LED. Choose LED color, the device will do its best approximation. Image Background Color. Icon background color on Android Lollipop. Force Vibration. Vibrate on arrival; use for urgent messages only. Icon. Path to the notification icon. Insert the Dynamic Content placeholders to personalize the icon. Banner. Enter image URL here. Image must be ≤ 450px wide, ~2:1 aspect, and it will be center-cropped. Insert the Dynamic Content placeholders to personalize the banner. Android root params. Root level parameters for the Android payload, custom key-value object. Delivery priority. Enables notification’s delivery when the device is in the power saving mode. Notifications with high delivery priority will be delivered nevertheless, while normal delivery priority means that the notification will be delivered after the power saving mode is turned off. Importance level. Sets the “importance” parameter for devices with Android 8.0 and higher, as well as the “priority” parameter for devices with Android 7.1 and lower. This parameter with valid values from -2 to 2 establishes the interruption level of a notification channel or a particular notification. Urgent importance level (1-2) - the notification makes a sound and appears as a heads-up notification High importance level (0) - the notification makes a sound and appears in the status bar Medium importance level (-1) - the notification makes no sound but still appears in the status bar Low importance level (-2) - the notification makes no sound and does not appear in the status bar Expiration time. Set the period after which the push won't be delivered if the device was offline. Notifications Channels. Starting from Android 8.0, you can create Notification Channels. To create a channel, there are two steps you need to do: Set up the channel’s configuration. Specify all required parameters, such as sound, vibration, LED, and priority; Specify channel's name by adding the following key-value pair to Android root params: {“pw_channel”:“NAME OF CHANNEL”}. To send a notification to an existing channel, you need to specify the very same key-value pair in Android root params. It is not possible to change the channel’s parameters after it’s created on the device. First, choose the Windows Phone notification type: Toast or Tile. Customize your Tile type push with the following parameters: Count. The number displayed on the tile front. Front background image. Full path to the image to be used as a background for tile front. Back content. Single line of text at the top of tile backside. Back background image. Full path to the image to be used as tile backside background. Back title. Single line of text at the bottom of tile backside. Windows 8 provides over 60 toast, tile, raw and badge templates, so we added only toast templates to our GUI since they're the most popular. Tile, raw and badge templates are available via Remote API only.. Header. Specify your Amazon notification header here. Sound. Specify custom sound filename in the “res/raw” folder of your application. Omit the file extension. Icon. Path to notification icon. Banner. Full path to notification banner. Expiration time. Period after which the push won't be delivered if the device was offline. Title. Specify your Safari notification title here. This field is required, otherwise the push will not be sent. Personalizing the Safari push title with Dynamic Content, you boost open rates and increase audience loyalty levels. Action button label (optional). Specify custom action button label here. If not set, “Show” will be displayed as a default. URL field. Replace placeholder with part of URL you specified in the app’s Safari configuration. Users will be redirected to this URL in Safari upon opening your notification. Expiration time. Set the period after which the push won't be delivered if the device was offline. Icon. Specify icon name from your extension resources or full path URL. Personalizable with Dynamic Content placeholders. Title. Specify Chrome notification title. Personalizable with Dynamic Content placeholders. Large image. Add a large image to your notification by specifying the full path URL to the image. Chrome root params. Set parameters specific for pushes sent to Chrome. For example, to send a regular link to Chrome platform in parallel with a Deep Link sent to mobile devices, enter the link here as follows: {"l": ""} Chrome root params are prioritized over general push parameters for push notifications sent to Chrome. So Chrome subscribers will get the link you specify here while users with mobile devices will get the Deep Link. Buttons. Create custom buttons for your pushes. Specify the Title (required) and the URL (optional) for buttons if needed. Duration. Specify time for push to be displayed. Set to infinity, if you would like to display notification until user interacts with it. Expiration time. Set the period after which the push won't be delivered if the device was offline. Icon. Specify the icon name in the resources of your extension, or the full path URL. Personalizable with Dynamic Content placeholders. Title. Specify Firefox notification title. Personalizable with Dynamic Content placeholders. Firefox root params. Set parameters specific for pushes sent to Firefox. For example, to send a regular link to Firefox platform in parallel with a Deep Link sent to mobile devices, enter the link here as follows: {l: “”} Firefox root params are prioritized over general push parameters for push notifications sent to Firefox. So Firefox subscribers will get the link you specify here while users with mobile devices will get the Deep Link. Easy, isn’t it?
https://docs.pushwoosh.com/platform-docs/getting-started/control-panel-overview
2019-09-15T15:57:00
CC-MAIN-2019-39
1568514571651.9
[]
docs.pushwoosh.com
obabel and babel - Convert, Filter and Manipulate Chemical Data¶ obabel and babel are cross-platform programs designed to interconvert between many file formats used in molecular modeling and computational chemistry and related areas. They can also be used for filtering molecules and for simple manipulation of chemical data. Synopsis¶ obabel is recommended over babel (see Differences between babel and obabel). Options¶ Information and help obabel [-H <help-options>] babel [-H <help-options>] Conversion options obabel [-i <input-ID>] infile [-o <output-ID>] [-O outfile] [OPTIONS] obabel -:"<SMILES string>" [-o <output-ID>] [-O outfile] [OPTIONS] babel [-i <input-ID>] infile [-o <output-ID>] [outfile] [OPTIONS] Note If only input and output files are given, Open Babel will guess the file type from the filename extension. For information on the file formats supported by Open Babel, please see Supported File Formats and Options. Examples¶ The examples below assume the files are in the current directory. Otherwise you may need to include the full path to the files e.g. /Users/username/Desktop/mymols.sdf and you may need to put quotes around the filenames (especially on Windows, where Differences between babel and obabel¶option. This is closer to the normal Unix convention for commandline programs, and prevents users accidentally overwriting the input file. obabel is more flexible when the user needs to specify parameter values on options. For instance, the --uniqueoption. Format Options¶: obabel mymol.cml out.svg -a2 -xb Append property values to the title¶: obabel. (Note that the related option --addtotitle simply adds the same text to every title.) The append option only takes one parameter, which means that with babel all of the descriptor IDs or property names must be enclosed together in a single set of quotes. With obabel this is usually unnecessary. punctuation character other than ‘_’, it is used as the separator instead. If the list starts with “t”, a tab character is used as a separator. Generating conformers for structures¶ The command line option --conformer allows performing conformer searches using a range of different algorithms and options: --log- output a log of the energies (default = no log) --nconf #- number of conformers to generate Forcefield-based methods for finding stable conformers: --systematic- systematically (exhaustively) generate all conformers --random- randomly generate conformers --weighted- weighted rotor search for lowest energy conformer --ff <name>- select a forcefield (default = MMFF94) Genetic algorithm based methods (default): --children #- number of children to generate for each parent (default = 5) --mutability #- mutation frequency (default = 5) --converge #- number of identical generations before convergence is reached --score #- scoring function [rmsd|energy] (default = rmsd) You can use them like this (to generate 50 conformers, scoring with MMFF94 energies but default genetic algorithm options): obabel EtOT5D.cml -O EtOT5D0.xyz --conformer --nconf 50 --score energy or if you also wish to generate 3D coordinates, followed by conformer searching try something like this: obabel ligand.babel.smi -O ligand.babel.sdf --gen3d --conformer --nconf 20 --weighted Filtering molecules from a multimolecule file¶ Six of the options above can be used to filter molecules: -s- convert molecules that match a SMARTS string -v- convert molecules that don’t match a SMARTS string -fand -l- convert molecules in a certain range --unique- only convert unique molecules (that is, remove duplicates) --filter- convert molecules that meet specified chemical (and other) criteria This section focuses on the --filter option, which is very versatile and can select a subset of molecules based either on properties imported with the molecule (as from a SDF file) or from calculations made by Open Babel on the molecule. The aim has been to make the option flexible and intuitive to use; don’t be put off by the long description. You use it like this: obabel filterset.sdf -osmi --filter "MW<130 ROTATABLE_BOND > 2" It takes one parameter which probably needs to be enclosed in double quotes to avoid confusing the shell or operating system. (You don’t need the quotes with the Windows GUI.): obabel in the class OBPairData) in preference to a descriptor if one exists in the molecule. So with the example file, which can be found here: obabel: obabel filterset.sdf -osmi --filter "ROTATABLE_BOND MW<130" converts only those molecules with a ROTATABLE_BOND property and a molecular weight less than 130. If you wanted to also include all the molecules without ROTATABLE_BOND defined, use: obabel filterset.sdf -osmi --filter "!ROTATABLE_BOND || (ROTATABLE_BOND & MW<130)" The ! means negate. AND can be & or &&, OR can be | or ||. The brackets are not strictly necessary here because & has precedent in the following sections. String descriptors¶ obabel filterset.sdf -osmi --filter "title='Ethanol'" The descriptor title, when followed by a string (here enclosed by single quotes), does a case-sensitive string comparison. (‘ethanol’ wouldn’t match anything in the example file.) The comparison does not have to be just equality: obabel: obabel filterset.sdf -osmi --filter "title<129" will convert the molecules with titles 56 123 and 126, which is probably what you wanted. obabel filterset.sdf -osmi --filter "title<'129'" converts only 123 and 126 because a string comparison is being made. String comparisons can use * as a wildcard if used as the first or last character of the string (anywhere else a * is a normal character). So --filter "title='*ol'" will match molecules with titles ‘methanol’, ‘ethanol’ etc. and --filter "title='eth*' will match ‘ethanol’, ‘ethyl acetate’, ‘ethical solution’ etc. Use a * at both the first and last characters to test for the occurrence of a string, so --filter "title='*ol*'" will match ‘oleum’, ‘polonium’ and ‘ethanol’. SMARTS descriptor¶ This descriptor will do a SMARTS test (substructure and more) on the molecules. The smarts ID can be abbreviated to s and the = is optional. More than one SMARTS test can be done: obabel filterset.sdf -osmi --filter "s='CN' s!='[N+]'" This provides a more flexible alternative to the existing -s and -v options, since the SMARTS descriptor test can be combined with other tests. InChI descriptor¶ obabel): obabel filterset.sdf -osmi --filter "inchi='1: obabel filterset.sdf -osmi --filter "inchi=C2H6O" will convert both Ethanol and Dimethyl Ether. Substructure and similarity searching¶ For information on using babel for substructure searching and similarity searching, see Molecular fingerprints and similarity searching. Sorting molecules¶ The --sort option is used to output molecules ordered by the value of a descriptor: obabel infile.xxx outfile.xxx --sort desc If the descriptor desc provides a numerical value, the molecule with the smallest value is output first. For descriptors that.: obabel infile.xxx outfile.yyy --sort ~logP As a shortcut, the value of the descriptor can be appended to the molecule name by adding a + to the descriptor, e.g.: obabel aromatics.smi -osmi --sort ~MW+ c1ccccc1C=C styrene 104.149 c1ccccc1C toluene 92.1384 c1ccccc1 benzene 78.1118 Remove duplicate molecules¶ The --unique option is used to remove, i.e. not output, any chemically identical molecules during conversion: obabel: obabel infile.xxx -onul --unique Truncated InChI¶ It is possible to relax the criterion by which molecules are regarded as “chemically identical” by using a truncated InChI specification as param. This takes advantage of the layered structure of InChI. So to remove duplicates, treating stereoisomers as the same molecule: obabel¶ The input molecules do not have to be in a single file. So to collect all the unique molecules from a set of MOL files: obabel *.mol uniquemols.sdf --unique If you want the unique molecules to remain in individual files: obabel *.mol U.mol -m --unique On the GUI use the form: obabel *.mol U*.mol --unique Either form is acceptable on the Windows command line. The unique molecules will be in files with the original name prefixed by ‘U’. Duplicate molecules will be in similar files but with zero length, which you will have to delete yourself. Aliases for chemical groups¶ Forcefield energy and minimization¶ Open Babel supports a number of forcefields which can be used for energy evaluation as well as energy minimization. The available forcefields as listed as follows: C:\>obabel -L forcefields GAFF General Amber Force Field (GAFF). Ghemical Ghemical force field. MMFF94 MMFF94 force field. MMFF94s MMFF94s force field. UFF Universal Force Field. To evaluate a molecule’s energy using a forcefield, use the --energy option. The energy is put in an OBPairData object “Energy” which is accessible via an SDF or CML property or --append (to title). Use --ff <forcefield_id> to select a forcefield (default is Ghemical) and --log for a log of the energy calculation. The simplest way to output the energy is as follows: obabel infile.xxx -otxt --energy --append "Energy" To perform forcefield minimization, the --minimize option is used. The following shows typical usage: obabel infile.xxx -O outfile.yyy --minimize --steps 1500 --sd The available options are as follows: --log output a log of the minimization process (default= no log) --crit <converge> set convergence criteria (default=1e-6) --sd use steepest descent algorithm (default = conjugate gradient) --newton use Newton2Num linesearch (default = Simple) --ff <forcefield-id> select a forcefield (default = Ghemical) --steps <number> specify the maximum number of steps (default = 2500) --cut use cut-off (default = don't use cut-off) --rvdw <cutoff> specify the VDW cut-off distance (default = 6.0) --rele <cutoff> specify the Electrostatic cut-off distance (default = 10.0) --freq <steps> specify the frequency to update the non-bonded pairs (default = 10) Note that for both --energy and --minimize, hydrogens are made explicit before energy evaluation. Aligning molecules or substructures¶ The --align option aligns molecules to the first molecule provided. It is typically used with the -s option to specify an alignment based on a substructure: obabel pattern.www dataset.xxx -O outset.yyy -s SMARTS --align Here, only molecules matching the specified SMARTS pattern are converted and are aligned by having all their atom coordinates modified. The atoms that are used in the alignment are those matched by SMARTS in the first output molecule. The subsequent molecules are aligned so that the coordinates of atoms equivalent to these are as nearly as possible the same as those of the pattern atoms. The atoms in the various molecules can be in any order. Tha alignment ignores hydrogen atoms but includes symmetry. Note that the standalone program obfit has similar functionality. The first input molecule could also be part of the data set: obabel dataset.xxx -O outset.yyy -s SMARTS --align This form is useful for ensuring that a particular substructure always has the same orientation in a 2D display of a set of molecules. 0D molecules, for example from SMILES, are given 2D coordinates before alignment. See documentation for the -s option for its other possible parameters. For example, the matching atoms could be those of a molecule in a specified file. If the -s option is not used, all of the atoms in the first molecule are used as pattern atoms. The order of the atoms must be the same in all the molecules. The output molecules have a property (represented internally as OBPairData) called rmsd, which is a measure of the quality of the fit. To attach it to the title of each molecule use --append rmsd. To output the two conformers closest to the first conformer in a dataset: obabel dataset.xxx -O outset.yyy --align --smallest 2 rmsd Specifying the speed of 3D coordinate generation¶ When you use the --gen3d option, you can specify the speed and quality. The following shows typical usage: obabel infile.smi -O out.sdf --gen3d fastest The available options are as follows: You can also specify the speed by an integer from 1 (slowest) to 5 (fastest).
https://open-babel.readthedocs.io/en/latest/Command-line_tools/babel.html
2019-09-15T15:54:09
CC-MAIN-2019-39
1568514571651.9
[]
open-babel.readthedocs.io
cannes Cannes theme. This will open the Theme Details popup box. Click on the Delete button in the bottom right corner to remove all theme files. Upload the new version cannes cannes: Cannes Core - Cannes Core Plugin expands the functionality of the theme. Adds shortcodes and much more. WPBakery Page Builder - WPBakery Page Builder for WordPress will save you tons of time working on the site content. Now you'll be able to create complex layouts within minutes! It's built on top of the modern technologies – get the best for your lovely. WP Instagram Widget - WP Instagram widget is a no fuss WordPress widget to showcase your latest Instagram pics. It does not require you to provide your login details or sign in via oAuth. WooCommerce - WooCommerce is a flexible, open-source eCommerce solution built on WordPress. Whether you’re launching a business, taking an existing brick and mortar store online, or designing sites for clients you can get started quickly and build exactly the store you want. Meta Box - Meta Box helps you add custom fields and details on your website such as pages, posts, forms and anywhere you want using over 40 different field types such as text, images, file upload, checkboxes, and more. - Navigate to the Pages → Add New page in the WordPress admin. - Switch to WPBakery Page Builder mode - Enter a name for Page - Click button Add Element - Chosen shortcode Contact Form 7 - Select contact form then click button Save Changles - Click the Pulsish button. How to setup MailchimpForm Code: <div class="cannes-newsletter"> <label>Subscribe to the Cannes newsletter to receive timely updates from newsletter to receive timely updates from texts would hardly be possible even now.<_5<< Install Demo Content Well, it's really simple. If you're using the theme to build a new website which doesn't have content yet, I strongly recommend you to use the demo content files. - Go toTheme Options in the WordPress Dashboard - Click the IMPORT DEMO DATA tab - Click to select the demo you want to import GENERAL Configs You can manage theme options by navigating to Theme Options. - Click the GENERAL - Choose 1 of 3 themes styles Cannes - Default/Cannes - Travel Magazine/Cannes - Food Blog - Configuration default color for site - Configuration fonts body - Configuration fonts heading - Configuration fonts main menu - Click the Save Changes button. In this tab you can select theme style, default color, font body, font heading, mainmenu font Header Configs You can manage theme options by navigating to Theme Options. - Click the Header - Choose Logo for Site - Choose 1 of 3 header styles - Configuration width for header conatainer or container full - Click the Save Changes button. 3 Header Style Style default Style 1 Style 2 You can manage theme options by navigating to Theme Options. - Click the Blog - Choose a layout blog style for page blog - Configuration hide/shows sidebar - Configure the number of posts on 1 page - Click the Save Changes button. Featured Post Slide You can manage theme options by navigating to Theme Options. - Click the tab Featured Posts Slide - Choose Enable/Disable Featured Posts Slider - Choose 1 of 4 featured posts styles - Configure the number posts of Slide - Select a Category or Select Posts - Click the Save Changes button. 4 Featured Posts Style Style 1 Style 2 Style 3 Style 4 Promo Boxes You can manage theme options by navigating to Theme Options. - Click the tabPromo Boxes - Choose Enable/Disable Promo Boxes - Click Add Slide button, each slide is a promo box - Enter the information for promo box : Title, link, image - After creating the Promo box click Save Changes button Only support from 1 to 4 boxes Promo Boxes Style Blog Layout Shortcode Blog Layout helps you drag and drop blog layout style available on the page There are 7 types of blog layouts You can look at this page - Zigzag - List - Standard List - Grid 2 Cols - Grid 3 Cols - Masonry 2 Cols - Masonry 3 Cols - Mix Blog Layouts Style Zigzag List Standard List Grid 2 Cols Grid 3 Cols Masonry 2 Cols Masonry 3 Cols Mix Blog Posts Short code posts block allows you to display article blocks by category order by Popular Posts or Latest Posts There are 6 types of blog layouts You can look at this page 7 Block Posts Style Source and Credits Along the project, I've been using the following assets, even if they were images, icons or other files, as listed. Social Network< You can manage theme options by navigating to Theme Options.
http://docs.theme-xoda.com/cannes/
2019-09-15T16:53:43
CC-MAIN-2019-39
1568514571651.9
[array(['images/update-theme.jpg', None], dtype=object) array(['images/contact1.jpg', None], dtype=object) array(['images/contact2.jpg', None], dtype=object) array(['images/contact3.jpg', None], dtype=object) array(['images/contact4.jpg', None], dtype=object) array(['images/mailchimp1.jpg', None], dtype=object) array(['images/data_import_1.jpg', None], dtype=object) array(['images/themeop1.jpg', None], dtype=object) array(['images/themeop2.jpg', None], dtype=object) array(['images/header1.jpg', None], dtype=object) array(['images/header2.jpg', None], dtype=object) array(['images/header3.jpg', None], dtype=object) array(['images/themeop3.jpg', None], dtype=object) array(['images/themeop4.jpg', None], dtype=object) array(['images/slide1.jpg', None], dtype=object) array(['images/slide2.jpg', None], dtype=object) array(['images/slide3.jpg', None], dtype=object) array(['images/slide4.jpg', None], dtype=object) array(['images/themeop6.jpg', None], dtype=object) array(['images/promobox.jpg', None], dtype=object) array(['images/shortcode1.jpg', None], dtype=object) array(['images/zigzag.jpg', None], dtype=object) array(['images/list.jpg', None], dtype=object) array(['images/stanrdardlist.jpg', None], dtype=object) array(['images/grid2.jpg', None], dtype=object) array(['images/grid3.jpg', None], dtype=object) array(['images/mas1.jpg', None], dtype=object) array(['images/mas3.jpg', None], dtype=object) array(['images/mix.jpg', None], dtype=object) array(['images/shortcode2.jpg', None], dtype=object) array(['images/style1.jpg', None], dtype=object) array(['images/style2.jpg', None], dtype=object) array(['images/style3.jpg', None], dtype=object) array(['images/style4.jpg', None], dtype=object) array(['images/style5.jpg', None], dtype=object) array(['images/style6.jpg', None], dtype=object) array(['images/style7.jpg', None], dtype=object)]
docs.theme-xoda.com
Try it now and let us know what you think. Switch to the new look >> You can return to the original look by selecting English in the language selector above. StopDBCluster Stops an Amazon Aurora DB cluster. When you stop a DB cluster, Aurora retains the DB cluster's metadata, including its endpoints and DB parameter groups. Aurora also retains the transaction logs so you can do a point-in-time restore if necessary. For more information, see Stopping and Starting an Aurora Cluster in the Amazon Aurora User Guide. Note This action only applies to Aurora DB clusters. Request Parameters For information about the parameters that are common to all actions, see Common Parameters. - DBClusterIdentifier The DB cluster identifier of the Amazon Aurora DB cluster to be stopped. This parameter is stored as a lowercase string. Type: String Required: YeserNotFoundFault DBClusterIdentifier doesn't refer to an existing DB cluster. HTTP Status Code: 404 - InvalidDBClusterStateFault The requested operation can't be performed while the cluster is in this state. HTTP Status Code: 400 - InvalidDBInstanceState The DB instance isn't in a valid state. HTTP Status Code: 400 Example Sample Request ?Action=StopDBCluster &DBClusterIdentifier=mydbcluster &SignatureMethod=HmacSHA256 &SignatureVersion=4 &Version=2014-09-01 5f99e81575f23e73757ffc6a1e42d7d2b30b9cc0be988cff97 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/ja_jp/AmazonRDS/latest/APIReference/API_StopDBCluster.html
2019-09-15T16:54:28
CC-MAIN-2019-39
1568514571651.9
[]
docs.aws.amazon.com
Data Transfer API Name: AccountingJob__c Label API Name Type Description Accounting Job Name Name AutoNumber Accounting Month AccountingPeriod__c Text(7) This field contains the accounting year and month of the ledger data (accounting record proposal). Client Id ClientId__c Text The combination of consultant and client number. Client Name ClientName__c Text(255) The Client Name as reported by the DATEV API. Client Number ClientNumber__c Text(5) The client number for this job. Consultant Number ConsultantNumber__c Text(7) The consultant number for this job. DATEV Status DatevStatus__c Picklist The status of the job in the DATEV system. The status can be refreshed with the button 'Refresh Status'. Folder Name FolderName__c Text(255) The name of the target folder in the DATEV system. Import Type ImportType__c Picklist Defines the type of accounting details, which are part of this job. Job Id JobId__c Text(255) The Id of the Job in the DATV system. Last Error LastError__c Text(255) Holds the last error, which occured while processing this job. Local Status LocalStatus__c Picklist The local status of the job. The job will be processed by DATEV, when it has been transferred and is in status 'Closed'.
https://docs.juston.com/dco_objects/DataTransfer/
2019-09-15T16:29:53
CC-MAIN-2019-39
1568514571651.9
[]
docs.juston.com
Novelty - How to Display Footer Widgets on the Home Page By default, the footer widgets do not display on the homepage of the Novelty theme. If you'd like to display them on the home page, navigate to Appearance > Editor > front-page.php. Look for this section of code: Add // before the remove_action, so that it looks like this:
https://docs.restored316.com/article/792-novelty-how-to-display-footer-widgets-on-the-home-page
2019-09-15T16:38:21
CC-MAIN-2019-39
1568514571651.9
[]
docs.restored316.com
YITH WooCommerce Wishlist is a plugin that allows your customers to arrange the products they are interested in, in wish lists. This process allows them to find these products again easily at a later moment and eventually proceed to purchase them. Combining these two plugins you will have the chance to edit all the information in the “My Wishlist” tab (title, icon, content and position) in the My Account page, directly from the admin panel of Customize My Account Page. By simply activating both plugins, the endpoint will be added automatically. The customer will be able to see the ‘My Wishlist’ tab in their my account page.
https://docs.yithemes.com/yith-woocommerce-customize-myaccount-page/premium-version-settings/yith-woocommerce-wishlist/
2019-09-15T16:35:38
CC-MAIN-2019-39
1568514571651.9
[]
docs.yithemes.com
A list of Western Australian jobactive Provider Work for the Dole Contacts. English / Australian English If, Skills, Small and Family Business’s business hours are between 8.00 am and 6.00 pm nationally, Monday to Friday. Document List Online Induction 'How to use and access the induction.' Report detailing the Ernst & Young on-site work health and safety audits of 200 Work for the Dole activities between January and May 2016 as part of the then Department of Employment (now Department of Jobs and Small Business) program assurance and to inform practice improvement. The department's Work Health and Safety Policy. The primary aim of a Work Readiness Assessment is to measure Work Readiness Participant’s state of work readiness and open up a discussion with the Work Readiness Participant of how to address any barriers to becoming ready for work. A person who is ‘work ready’ is defined in the ParentsNext 2018-2021 Deed (the Deed) as a person... In the 2017–18 Budget the Australian Government announced the introduction of the Targeted Compliance Framework (TCF), commencing from 1 July 2018. The framework is designed to ensure only those job seekers who are persistently and wilfully non-compliant incur financial penalties while providing protections for the most vulnerable. It is... Portfolio Budget Statements for the Workplace Gender Equality Agency Portfolio Budget Statements for the Workplace Gender Equality Agency Workplace Gender Equality Agency Portfolio Budget Statements This document outlines the Minister’s expectations on the Workplace Gender Equality Agency regarding the objectives and priorities of the deregulation agenda. Portfolio Budget Statements for the Workplace Gender Equality Agency The department's Site Induction Flyer for Contractors. Presentation given on December 9 2014 on the labour market outcomes in the Yarrabah and broader Cairns region following the Survey conducted in July 2014 Pages
https://docs.employment.gov.au/language/english?page=98
2019-09-15T16:24:49
CC-MAIN-2019-39
1568514571651.9
[array(['https://docs.employment.gov.au/misc/feed.png', 'Subscribe to English / Australian English'], dtype=object)]
docs.employment.gov.au
You should follow these recommendations when implementing the S3 REST API for use with StorageGRID Webscale. If your application routinely checks to see if an object exists at a path where you do not expect the object to actually exist, you should use the "Available" consistency control. For example, you should use the "Available" consistency control if your application HEADs a location before PUT-ing to it. Otherwise, if the HEAD operation does not find the object, you might receive a high number of 500 Internal Server errors if one or more Storage Nodes are unavailable. You can set the "Available" consistency control for each bucket using the PUT Bucket consistency request, or you can specify the consistency control in the request header for an individual API operation. You should not use random values as the first four characters of object keys. This is in contrast to AWS recommendations for key prefixes. Instead, you should use non-random, non-unique prefixes, such as image. If you do follow the AWS recommendation to use random and unique characters in key prefixes, you should prefix the object keys with a directory name. That is, use this format: mybucket/mydir/f8e3-image3132.jpg Instead of this format: mybucket/f8e3-image3132.jpg If the Stored Object Compression grid option is enabled for StorageGRID Webscale, S3 client applications should avoid performing GET Object operations that specify a range of bytes be returned. These "range read" operations are inefficient because StorageGRID Webscale.
https://docs.netapp.com/sgws-111/topic/com.netapp.doc.sg-s3/GUID-6A36FE68-35EA-44A7-8F0E-6CF6F2DD6671.html
2021-02-25T00:10:34
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
Enable Users to Opt Out of SSL Decryption Allow users to choose whether they want to continue to a site for which traffic is decrypted or opt out and allow the firewall to terminate the session, preserving the user’s privacy but preventing the connection to the site. In privacy-sensitive situations, you may want to alert your users that the firewall is decrypting certain web traffic and allow them either to continue to the site with the understanding that their traffic is decrypted or to terminate the session and be block from going to the site. (There is no option to go to the site and also avoid decryption.) The first time a user attempts to browse to an HTTPS site or application that matches the decryption policy, the firewall displays a response page notifying users that it will decrypt the session. Users can either click Yesto allow decryption and continue to the site or click Noto opt out of decryption and terminate the session. The choice to allow decryption applies to all HTTPS sites that users try to access for the next 24 hours, after which the firewall redisplays the response page. Users who opt out of SSL decryption cannot access the requested web page, or any other HTTPS site, for the next minute. After the minute elapses, the firewall redisplays the response page the next time the users attempt to access an HTTPS site. The firewall includes a predefined SSL Decryption Opt-out Page that you can enable. You can optionally customize the page with your own text and/or images. However, the best practice is to not allow users to opt out of decryption.. - (Optional) Customize the SSL Decryption Opt-out Page. - Select.DeviceResponse Pages - Select theSSL Decryption Opt-out Pagelink. - Select thePredefinedpage and clickExport. - Using the HTML text editor of your choice, edit the page. - If you want to add an image, host the image on a web server that is accessible from your end user systems. - Add a line to the HTML to point to the image. For example:<img src=" Acme-logo-96x96.jpg?1382722588"/> - Save the edited page with a new filename. Make sure that the page retains its UTF-8 encoding. - Back on the firewall, select.DeviceResponse Pages - Select theSSL Decryption Opt-out Pagelink. - ClickImportand then enter the path and filename in theImport Filefield orBrowseto locate the file. - (Optional) Select the virtual system on which this login page will be used from theDestinationdrop-down or select shared to make it available to all virtual systems. - ClickOKto import the file. - Select the response page you just imported and clickClose. - Enable SSL Decryption Opt Out. - On thepage, click theDeviceResponse PagesDisabledlink. - Select theEnable SSL Opt-out Pageand clickOK. - Committhe changes. - Verify that the Opt Out page displays when you attempt to browse to a site.From a browser, go to an encrypted site that matches your decryption policy.Verify that the SSL Decryption Opt-out response page displays. Recommended For You Recommended Videos Recommended videos not found.
https://docs.paloaltonetworks.com/pan-os/9-1/pan-os-admin/decryption/enable-users-to-opt-out-of-ssl-decryption.html
2021-02-24T23:29:13
CC-MAIN-2021-10
1614178349708.2
[]
docs.paloaltonetworks.com
Copyright 2017 OpenStack Foundation This work is licensed under a Creative Commons Attribution 3.0 Unported License. Ethercalc¶ Include the URL of your StoryBoard story: The PTG will have unconference like scheduled rooms/activities. It has been requested that we run an Ethercalc server to help facilitate the adhoc scheduling of these rooms. The spreadsheet setup makes it easy to track a matrix or rooms against times and Ethercalc’s distributed editing feature makes it a good choice for the PTG. Problem Description¶ Run an Ethercalc server to facilitate distributed scheduling of shared (real world) resources. Proposed Change¶ We will deploy a new server, ethercalc01.openstack.org, using the puppet-nodejs and puppet-redis modules to install nodejs+npm from which we can install ethercalc. Ethercalc unlike etherpad does have a hard dependency on Redis so puppet-redis will be used to deploy a colocated Redis server. Alternatives¶ We could use some on site physical scheduling medium. A common choice at unconferences is post it notes on a grid. White boards would also work. The trouble with these setups is that they are centralized to a physical location making it difficult for people not on site or in different areas to keep up with the current state. Implementation¶ Gerrit Topic¶ Use Gerrit topic “ethercalc” for all patches related to this spec. git-review -t ethercalc Work Items¶ Write puppet module for ethercalc Write operational documentation in system-config Deploy new server using new puppet module Update DNS when service is ready Announce service availability Repositories¶ Will need to create puppet-ethercalc. An alternative would be to tack this on to the existing puppet-etherpad module as they are similar. DNS Entries¶ Need to create: ethercalc01.openstack.org A $IPFROMCLOUD ethercalc01.openstack.org AAAA $IPFROMCLOUD ethercalc.openstack.org CNAME ethercalc01.openstack.org
https://docs.opendev.org/opendev/infra-specs/latest/specs/ethercalc.html
2021-02-24T23:11:20
CC-MAIN-2021-10
1614178349708.2
[]
docs.opendev.org
Installing Key Trustee Server Using Cloudera Manager If you are installing Key Trustee Server for use with HDFS Transparent Encryption, the Set up HDFS Data At Rest Encryption wizard installs and configures Key Trustee Server. - . - Download the latest Key Trustee Server parcel from the Cloudera.com/downloads page. - Follow the steps in Using a Local Parcel Repository to register the local parcel with Cloudera Manager. - On the Key Trustee Server cluster home page, click the More Options (ellipsis) icon, then click Add Service. - Select Key Trustee Server, then click Continue. - Use the Add Key Trustee Server Service wizard to install Key Trustee Server. - Key Trustee Server appears in the cluster components list.
https://docs.cloudera.com/cloudera-manager/7.1.1/installation/topics/cdpdc-installing-key-trustee-using-cm.html
2022-06-25T11:29:13
CC-MAIN-2022-27
1656103034930.3
[]
docs.cloudera.com
Download URLs Download the appropriate release for your New Relic .NET agent: Fixes - Fixes Issue #224 where leading "SET" commands will be ignored when parsing compound SQL statements. (#370) - Fixes Issue #226 where the profiler ignores drive letter in HOME_EXPANDEDwhen detecting running in Azure Web Apps. (#373) - Fixes Issue #93: when the parent methods are blocked by their asynchronous child methods, the agent deducts the child methods' duration from the parent methods' exclusive duration.(#374) - Fixes Issue #9 where the agent failed to read settings from appsettings.{environment}.jsonfiles. (#372) - Fixes Issue #116 where the agent failed to read settings from appsettings.jsonin certain hosting scenarios. (#375) - Fixes Issue #234 by reducing the likelihood of a Fatal CLR Error. (#376) - Fixes Issue #377 when using the AddCustomAttributeAPI with Microsoft.Extensions.Primitives.StringValuestype causes unsupported type exception. (378) Checksums Upgrading - Follow standard procedures to update the .NET agent. - If you are upgrading from a particularly old agent, review the list of major changes and procedures to upgrade legacy .NET agents.
https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/net-release-notes/net-agent-83600/?q=
2022-06-25T11:22:16
CC-MAIN-2022-27
1656103034930.3
[]
docs.newrelic.com
Payara Enterprise 5.24.0 Release Notes New Features [FISH-656] - CLI Upgrade Tool MVP for Payara Server Enterprise Bug Fixes [FISH-505] - Server instance tries to load Application not assigned to instance 66] - [Community - sgflt] Improper synchronization of session map Component Upgrades [FISH-184] - Backport Yasson 1.0.6 to Payara Enterprise
https://docs.payara.fish/enterprise/docs/Release%20Notes/Release%20Notes%205.24.0.html
2022-06-25T10:35:11
CC-MAIN-2022-27
1656103034930.3
[]
docs.payara.fish
Menus are composed of menu items. Menu items can contain other menu items within certain limits. Menu bars can have two levels, the navigation tree three levels and the simple menu bar only one. General Properties Caption The caption is the text that will appear in the menu widget. This is a translatable text. See Translatable Texts. Icon The glyph icon or image will appear next to or above the caption in the menu widget. Target The target of the menu item is the page or microflow that will be opened when the item is clicked. A menu item that has subitems cannot have a target itself. You can open a page that contains a data view from a menu item by setting as target a microflow that first retrieves an object for the data view and then opens the page.
https://docs.mendix.com/refguide5/menu-item
2019-05-19T10:44:09
CC-MAIN-2019-22
1558232254751.58
[]
docs.mendix.com
Security playbook¶ Sqreen provides you with built-in playbooks to help you get started as fast as possible. You can also create your own, based on custom events (tracked via our SDK) or the ones Sqreen automatically tracks based on your apps' traffic. Visit your Sqreen dashboard to get started. What's a security playbook?¶ A playbook is made of 3 elements: A trigger. Security response(s). Notifications. Trigger¶ The playbook's trigger represents the conditions for the plugin to raise an alert. The trigger is made of: An event (built-in or custom) filtered by conditions (optional) to monitor. A detection method (threshold only for now) to apply. A period of time. A type of actor (IP / user account). Tracking events¶ Refer to your technology guide to learn how-to track your first custom events: Ruby Python Node.js PHP Java Go Finding the right threshold¶ When using the threshold based detection, it's often tricky to set the threshold to the right value. Using the Event Explorer, you can quickly visualise the event trend and determine what an usual volume of activity represents for your use-case. Security Response¶ Sqreen libraries contains code to dynamically change your app behavior for supicious actors (IP and/or user accounts). Security responses can be applied for a pre-defined duration (5 minutes to 24 hours). You can always remove any live security response from your Sqreen dashboard. What blocked IP or user will see¶ Blocked IP or user visiting your application will see this page. If you're willing to display a custom page instead, we recommand you to use the redirect security response. Interested in customising this page? Contact us! Notifications¶ Whenever a live playbook triggers, Sqreen can notify you immediately by email or through Slack. See how to setup Slack in your account.
https://docs.sqreen.com/security-automation/introduction-playbooks/
2019-05-19T11:50:38
CC-MAIN-2019-22
1558232254751.58
[]
docs.sqreen.com
Deploying PySparkling Pipeline Models¶ This tutorial demonstrates how we can import PySparkling pipeline models for scoring. Let’s first create and export the model as: from pyspark.ml import Pipeline from pysparkling import * from pysparkling.ml import * hc = H2OContext.getOrCreate(spark) # Helper method to locate the data file def locate(file_name): return "examples/smalldata/smsData.txt" # Prepare the data def load(): row_rdd = spark.sparkContext.textFile(locate("smsData.txt")).map(lambda x: x.split("\t", 1)).filter(lambda r: r[0].strip()) return spark.createDataFrame(row_rdd, ["label", "text"]) # load the data data = load() # Create the H2O GBM pipeline stage gbm = H2OGBM(ratio=0.8, seed=1, predictionCol="label") # Create a pipeline with a single GBM step pipeline = Pipeline(stages=[gbm]) # Fit and export the pipeline model = pipeline.fit(data) model.save("exported_model") Once we have exported the model, let’s start a new ./pysparkling shell as we want to demonstrate that for scoring, H2OContext does not need to be created as the H2OGBM step is internally using MOJO which does not require run-time of H2O. First, we need to ensure that all Java classes internally stored in the PySparkling distribution are distributed in the Spark cluster. For that, we use the following code: from pysparkling.initializer import Initializer Initializer.load_sparkling_jar(spark) Once we initialized PySparkling, we can load the model as: from pyspark.ml import PipelineModel model = PipelineModel.load("exported_model") And we can run the predictions on the model as: df_for_predictions = .. model.transform(df_for_predictions) If we don’t initialize the PySparkling using the Initializer, we would get class not found exception during loading the model as Spark would not now about the required classes. But as we can see, we do not need to initialize H2OContext for scoring tasks.
http://docs.h2o.ai/sparkling-water/2.3/latest-stable/doc/deployment/pysparkling_pipeline.html
2019-05-19T11:11:18
CC-MAIN-2019-22
1558232254751.58
[]
docs.h2o.ai
AttachRolePolicy Attaches the - RoleName The name (friendly name, not ARN) of the role Example AWS SDKs, see the following:
https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html
2019-05-19T11:23:31
CC-MAIN-2019-22
1558232254751.58
[]
docs.aws.amazon.com
Transmitting component updates unreliably (“quality of service”) In order to prevent a worker becoming overloaded, you can choose to transmit some component updates unreliably. This means they may be dropped if the network is congested, giving more bandwidth to messages that can’t be dropped. You can configure which component updates are transmitted reliably and unreliably in the bridge settings of a worker configuration file. When will an update be dropped? A component update will be dropped (not sent to the worker) if the worker is not reading messages from the network quickly enough. This can happen if the worker is overloaded (it has too much work to do and is not able to keep up with processing all the updates it’s receiving) or if the network is overloaded. may not be able to keep up with the rate of component updates being sent by SpatialOS. You might see this causing latency to increase in the system, leading to an unplayable game or (in extreme cases) memory leaks on a server which could cause the entire deployment to crash. Which components should I enable unreliable transmission for? In light of the above trade-offs, you should consider enabling unreliable transmission for components that are updated frequently, since these updates will be making up a good chunk of the network traffic and losing an update to one of these components should not cause the component to become very stale. You can find out which components are updated most often by looking at the Entities Grafana Dashboard.
https://docs.improbable.io/reference/13.2/shared/worker-configuration/qos
2019-05-19T10:39:21
CC-MAIN-2019-22
1558232254751.58
[]
docs.improbable.io
Integrating with Slack Each user on MayaOnline will have instantaneous notification from MuleBot to a configured slack channel to help deliver the important alerts. Users can also interact with the MuleBot to query the status of their OpenEBS clusters. So the data management operations are made easy with MayaOnline. ChatOps IntegrationChatOps Integration Integrating ChatOps enables interaction between clusters present in the organization on MayaOnline and user through Slack. The user will be able to receive alerts and query applications present health status in MayaOnline. What is Mulebot?What is Mulebot? Mulebot application covers the storage operational support of Kubernetes enabled OpenEBS clusters. DevOps developers and administrators receive alerts, analytics of their OpenEBS volumes deployed across multi-cloud Kubernetes clusters into their Slack channels, and also provides a way to query any configuration and status from Slack. The Mulebot functionality also includes interacting with DevOps developers and administrators to manage the YAML configurations files in their CI/CD system. The Mulebot enhances your experience with MayaOnline by allowing you to query the MayaOnline cluster configuration and MayaOnline applications using the following slash commands. - /maya get clusters - Lists all the clusters imported in MayaOnline whether active or inactive. - /maya get cluster cluster-name - Fetches all details of that particular cluster when cluster-name is provided. - /maya help - Displays a list of all the available slash commands and their functionality that can be used to query the Mulebot.. Mulebot also keep you informed about the current status of clusters that you have imported in MayaOnline by sending alerts as required. Once you integrate Slack, you will receive alerts related to the clusters imported in MayaOnline to the specified Slack channel. Following are the various types of alerts that you will receive. - Cluster Up - Cluster Down - Volume Up - Volume Down - Volume Write Latency - Volume Read Latency and so on. Adding Slack ConfigurationAdding Slack Configuration You can either create a new workspace or configure an existing slack workspace. The following is the procedure to configure slack integration to a OpenEBS cluster. Click on Slack on the left panel in the project level page and then click on Connect a new Slack card. Configure a new Slack workspace if you do not have a Slack workspace, then enter the workspace Slack URL and click Continue. The following screen is displayed if you already have a Slack workspace, for example, Kingdom. Enter your email address and password and then click on Sign in. The following screen is displayed once you logged in with your slack credentials. Select a channel from the Post to drop-down list and Click on Authorize to proceed further. Select the required cluster from the list and click Done. You have configured a Slack integration of a particular cluster. The details of slack integrated clusters will be viewed from the Slack page. Note: Repeat the above procedure to configure more clusters to multiple slack channels.
https://docs.mayaonline.io/docs/slackint.html
2019-05-19T10:24:01
CC-MAIN-2019-22
1558232254751.58
[]
docs.mayaonline.io
Using WebVR with Microsoft Edge The Microsoft Edge browser (build 15002+ with the Windows 10 Creators Update or later) supports immersive 3D Virtual Reality (VR) applications on the web using the WebVR 1.1 JavaScript API. Viewing WebVR content requires a Windows Mixed Reality headset, or the Windows Mixed Reality Portal Simulator (accessible via Developer Mode). Note To check which version of Microsoft Edge you currently have installed: - Open Microsoft Edge. - In the top-right corner of the browser, select … to open the menu. - Select Settings. - Scroll down to find your version number under the About this app heading. Updates to Edge are automatically installed when Windows 10 is updated. To keep Edge up to date, you need to keep Windows 10 up to date. To see which version of Windows 10 your device is currently running, select the Start button, then Settings > System > About. To get Microsoft Edge build 15002+, you must be running the Windows 10 Creators Update or later. Go to the Microsoft software download website, and select Update now to install the latest version of Windows 10. While technically support for WebVR was added to Edge in the Creators Update, we highly recommend that you run at least the Fall Creators Update, as there were many bug fixes and important changes added. The WebVR API surface area is present at all times within Microsoft Edge. However, a call to getVRDisplays will only return a headset if the operating system has been placed in Developer Mode, and either a headset is plugged in or the simulator has been turned on. What's new in the Windows 10 April 2018 Update With the Windows 10 April 2018 Update, you can now run WebVR inside JavaScript/HTML Windows 10 applications, including PWAs (Progressive Web Apps). See WebVR in Progressive Web Apps for more information. Additionally, you can now run WebVR inside WebView controls in Windows 10 apps. See WebVR in WebView for more information. You can do a lot with WebVR on Microsoft Edge - Create 3D virtual objects. - Create immersive 3D virtual worlds. - Display 360º panoramic images. - Engage users with 3D interfaces and game controllers. - Play Babylon.js 3D games right in your browser. Setting up your Mixed Reality headset If you have a Windows Mixed Reality headset, follow the Immersive headset setup guide to get started. If you do not have a physical device, you can instead use the Windows Mixed Reality simulator to develop and test your WebVR experience. - Ensure you have a 64-bit version of Windows 10—Check your compatibility - Ensure your Windows 10 installation is up-to-date - How to launch the Mixed Reality Portal - Connect your Headset or turn on Simulation Additional resources - Where can I buy a Windows Mixed Reality headset? + other FAQs - How to set up Motion Controllers, Room Boundary, Speech - Troubleshooting Running WebVR on your Mixed Reality headset To experience WebVR content on a Windows Mixed Reality headset (using hardware or simulation) you must, with your headset connected: - Launch Microsoft Edge either on the desktop, or within Mixed Reality. - Navigate to a WebVR-enabled page. - Click the Enter VR button within the page (the location and visual representation of this button may vary per website). - The first time you try to enter VR on a specific domain, the browser will ask for consent to use immersive view. Click Yes. - Your headset will begin presenting. *Press the Windows button or escape key to exit the immersive view. Test WebVR support with your headset The code samples below enable you to test support for your VR headset with Microsoft Edge. Just click the headset icon to enter WebVR mode. Rendering and animating a simple shape using Babylon.js: Rendering and animating shapes using A-Frame: Viewing a 360-degree photograph using A-frame: Add WebVR support to your 3D Babylon.js game in the Microsoft Edge browser If you've created a 3D game with Babylon.js and thought that it might look great in easily-accessible virtual reality on the web, follow the steps in this tutorial to add WebVR support to your Babylon.js game. Feedback Send feedback about:
https://docs.microsoft.com/en-us/microsoft-edge/webvr/webvr-with-edge
2019-05-19T11:26:23
CC-MAIN-2019-22
1558232254751.58
[]
docs.microsoft.com
:hover Pseudo-class Sets the style of an element when the user hovers the mouse pointer over the element. Syntax Possible Values. Windows Internet Explorer 7 and later, in standards-compliant mode (strict !DOCTYPE), The following example sets the hover style of an anchor. When the user hovers the mouse pointer over a link, the text appears in bold red, over a beige background. <style> a:hover { color:red; background-color:beige; font-weight:bolder; } </style> <a href="#below">Click here to move to the bottom of this page.</a> <br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/> <a name="below"></a> The following example demonstrates the type of effects you can achieve without script by using the :hover pseudo-class in Internet Explorer 7. <> Standards Information This pseudo-class is defined in CSS, Level 2 Revision 1 (CSS2.1) .. Applies To
https://docs.microsoft.com/en-us/previous-versions/ms530766(v=vs.85)
2019-05-19T10:25:42
CC-MAIN-2019-22
1558232254751.58
[]
docs.microsoft.com
Based on the type of beacons you have you will have to follow different steps to open them up and then replace their specific batteries. Indoor / Outdoor beacons Remove the back cover using something sharp (e.g. a guitar pick) to find 4 AA batteries. Battery type - AA Keychain beacons Open up the beacon from the bottom hinge near the keychain hanger using a pick or something sharp. Pocket beacons Open up the beacon by pulling it up from near the power button. Long range beacons Remove the grey rubber lining present on the back-side of the beacon using a pair of forceps. Remove the screw on the four corners and then lift the cover to find 4 AA batteries. Battery type - AA
https://docs.beaconstac.com/beacon-hardware/replacing-batteries-in-your-beacon
2019-05-19T11:29:27
CC-MAIN-2019-22
1558232254751.58
[array(['https://downloads.intercomcdn.com/i/o/60297077/1b85fd7825c04e21131c39a0/Screen+Shot+2018-05-22+at+11.25.16+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/60298393/bc21ff857514ca1613ce0946/OpeningKeychain.jpg', None], dtype=object) ]
docs.beaconstac.com
If you are not using a sensitive AWS account and do not have a lot of experience with IAM configuration, attaching the existing policy AdministratorAccess to your IAM user will make getting started much easier. See the eksctl documentation. This is the most extensive set of permissions and are only required for spinning up EKS. The operator requires read permissions for any data sources, read and write permissions for the Cortex S3 bucket, and read and write permissions for the Cortex CloudWatch log group. The pre-defined AmazonS3FullAccess and CloudWatchLogsFullAccess policies cover these permissions, but you can create more limited policies manually. If you don't already have a Cortex S3 bucket and/or Cortex CloudWatch log group, you will need to add create permissions during installation. In order to connect to the operator via the CLI, you must provide valid AWS credentials for any user with access to the account. No special permissions are required. The CLI can be configured using the command cortex configure. By default, your Cortex APIs will be accessible to all traffic. You can restrict access using AWS security groups. Specifically, you will need to edit the security group with the description: "Security group for Kubernetes ELB (cortex/nginx-controller-apis)".
https://docs.cortex.dev/v/0.3/operator/security
2019-05-19T11:26:05
CC-MAIN-2019-22
1558232254751.58
[]
docs.cortex.dev
Getting Started¶ To install with pip: $ pip install swiftseq Or to get the latest development version: $ pip install git+ Caution The development version may not be stable Once SwiftSeq is install, software dependencies can be installed with Bioconda: $ swiftseq install-env This will produce an executables.config file that the user can pass directly into a SwiftSeq run. Note The above command will only work if the user has Anaconda/Miniconda installed. It’s provided as a convenience; if the user would rather install software dependencies manually, Swiftseq only needs an executables.config at runtime and is indifferent to where it comes from. The user can then run Swiftseq: swiftseq run --exe-config /path/to/executables.config [options]
https://swiftseq.readthedocs.io/en/latest/getting_started.html
2019-05-19T11:27:25
CC-MAIN-2019-22
1558232254751.58
[]
swiftseq.readthedocs.io
CanvasLayer¶ Inherited By: ParallaxBackground Category: Core Description¶). Tutorials¶ Property Descriptions¶ The custom Viewport node assigned to the CanvasLayer. If null, uses the default viewport instead. Layer index for draw order. Lower values are drawn first. Default value: 1. The layer's base offset. The layer's rotation in radians. The layer's rotation in degrees. The layer's scale. - Transform2D transform The layer's transform.
http://docs.godotengine.org/ko/latest/classes/class_canvaslayer.html
2019-05-19T10:17:34
CC-MAIN-2019-22
1558232254751.58
[]
docs.godotengine.org
All Files Creating a Textured Brush T-LAY-001-004.
https://docs.toonboom.com/help/harmony-15/premium/drawing/create-textured-brush.html
2019-05-19T11:04:26
CC-MAIN-2019-22
1558232254751.58
[]
docs.toonboom.com
Optional AltAnalyze Dependencies to be Installed The compiled versions of AltAnalyze (Windows, Mac OSX and Ubuntu) should run immediately after extraction, however, the source code implementations may require additional Python modules installed. These are: - Required: Python version 2.3 or greater - Graphical User Interface requirement: Tkinter - WikiPathways API visualization: lxml - Faster Fisher Exact Test: Scipy - Visualization output files: Matplotlib, Numpy, Scipy - PNG viewing in the GUI: Python Imaging Library (PIL) ImageTk - Statistical libraries for Combat: Patsy and Pandas - Network visualization in igraph: igraph and cairo These libraries can be installed from source code or from installers provided by the open-source project websites. To determine if a pre-compiled version of AltAnalyze is compatible with your operating system, download the program and double-click the executable file name "AltAnalyze". The program can also be initiated from a terminal command-line (e.g., "./AltAnalyze in Linux). Feel free to contact us about any problems. Instructions for Windows/Linux/Mac OS X Mac OS X By default, Mac OS X has Python installed along with Tkiner. Python version 2X should be default and used instead of python 3. Windows Install Python 2.7 from. Tkinter is installed by default. Linux (see Ubuntu below) If python or tkinter is not installed, install Python 2.7 from. Tkinter is installed by default. Test by typing python at the terminal and then import tkinter. Cross-Platform Installation Options Python and Tkinter are all that are needed for AltAnalyze to run, but other dependencies are required if you wish to visualize WikiPathways (suds, PIL), cluster or QC plots (Matplotlib, Numpy, Scipy) or speed up the Fisher Exact Test analysis (scipy). To install these, see the below instructions. - Install setuptools: - Install suds: sudo easy_install suds - Install matplotlib: sudo easy_install matplotlib - Install numpy: sudo easy_install numpy - Install scipy: sudo easy_install scipy - Install Imagetk: easy_install --find-links Imaging - Install patsy: sudo easy_install patsy - Install pandas: sudo easy_install pandas - Install fastcluster: recommend install from source - Install igraph: recommend installer - Install cairo: recommend install from source (see INSTALL file or here for Windows users) - Install ordereddict (Python 2.6 or below) : easy_install ordereddict Instructions for Ubuntu Adding support for python applications call Tk is particularly challenging on Ubuntu, since Python is installed but Tkinter is not. We recommend: - Ensure all Ubuntu updates have been installed - Install Tkinter: apt-get install python-tk(restart after) - Install setuptools: sudo apt-get install python-setuptools - Install suds: sudo easy_install suds - Install matplotlib: sudo apt-get install python-matplotlib - Install numpy: sudo apt-get install python-numpy - Install scipy: sudo apt-get install python-scipy - Install Imagetk: sudo apt-get install python-imaging python-imaging-tk - Install patsy: sudo easy_install patsy - Install pandas: sudo easy_install pandas - Install fastcluster: sudo easy_install fastcluster Note: Installing from source is necessary when apt-get or easy_install does not properly obtain all dependent libraries. To install from source use these commands: python setup.py build sudo python setup.py install For igraph, install as described here - This will install to the default system python (default or python installed by apt-get). If installed via other means (e.g., from source to /usr/local/bin), you will need to sym link the igraph folder from the equivalent system python version to your local installed and use the command export LD_LIBRARY_PATH=/usr/lib/or export LD_LIBRARY_PATH=/usr/local/lib/if igraph gives an ImportError. Developers Only Both PyInstaller and cx_Freeze have been used to build AltAnalyze binary distributions. In general, PyInstaller works the best with the described patch for Ubuntu, whereas cx_Freeze will work for some but not all Ubuntu releases and configurations (e.g., compatible with 10.04). - Install cx_Freeze from source (problems with apt-get) (may require libssl-dev be installed) - OR download and extract PyInstaller 1.6 (requires this patch) - In the AltAnalyze main program directory (ensure .py files are in this root and not just Source_code), run: python setup.py build - Paste the files in the build/exe.linux directory into the AltAnalyze main program folder. - Test the "AltAnalyze" executable file in a copy of the AltAnalyze directory prior to distribution. Creating Compilers (advanced) If you wish to create your own compiled version of AltAnalyze for distribution to your users (e.g., custom versions or unsupported operating systems), install the above and be mindful of the following: - Customization of the setup.pyfile may be required for inclusion or exclusion of OS specific libraries (dll files). - See the setup.pyscript for run instructions on different operating systems. - The suds package folder must exist in the Python site-packages folder (not just the egg - zip must be extracted). - Issues with matplotlib external dependencies may occur (see existing setup.pyfiles for details). - Issues with conflicting dependencies may occur on some OSs resulting in strange errors during binary creations. These include: the python library six may need to be called as site-packages/six/six.py rather than the egg file. Potential Issues - Pandas installs it's own dateutil which may cause errors to be displayed in the console and possibly other issues. - Eggs are not well supported by py2exe
http://altanalyze.readthedocs.io/en/latest/StandAloneDependencies/
2018-03-17T16:21:46
CC-MAIN-2018-13
1521257645248.22
[]
altanalyze.readthedocs.io
Proxying Web Services with CXF Normally when building CXF web services, you can databind the XML to POJOs. A CXF component might receive an OrderRequest object, or you might send an OrderRequest object via a CXF outbound router. However, it is often useful to work with the XML directly when building web services or consuming other web services. The CXF module provides the ability to do this. Deciding How to Proxy Your Web Service While many times you can proxy web services without using CXF (see Proxying Web Services), You would use wsdl-cxf instead of proxies when you want to invoke a service and one or more of the following applies: You don’t want to generate a client from WSDL You don’t have the raw XML The service takes simple arguments such as string, int, long, or date CXF proxies support working with the SOAP body or the entire SOAP envelope. By default only the SOAP body is sent as payload, but the payload mode can be set via the "payload" attribute to envelope if needed. Server-Side Proxying To proxy a web service so you can work with the raw XML, you can create a CXF proxy message processor: <cxf:proxy-service /> This will make the SOAP body available in the Mule message payload as an XMLStreamReader. To service a WSDL using a CXF proxy, you must specify the WSDL namespace as a property: Client-Side Proxying Similarly, you can create an outbound endpoint to send raw XML payloads: <cxf:proxy-client/>
https://docs.mulesoft.com/mule-user-guide/v/3.9/proxying-web-services-with-cxf
2018-03-17T16:29:29
CC-MAIN-2018-13
1521257645248.22
[]
docs.mulesoft.com
APIkit for SOAP 1.0.2 Release Notes This version of APIkit for SOAP includes a fix to the runtime. In Studio, if a SOAP action is not defined, an attempt to generate flows based on a WSDL (scaffold a WSDL) now throws an error message: SOAP action is required. The scaffolding process is aborted. No flows are generated. For Studio, this version is distributed as Anypoint APIkit SOAP Extension 1.1.3.
https://docs.mulesoft.com/release-notes/apikit-for-soap-1.0.2
2018-03-17T16:08:06
CC-MAIN-2018-13
1521257645248.22
[]
docs.mulesoft.com
. - In the Node Library, select a Peg node and drag it to the Node view. - Press Ctrl + P (Windows/Linux) or ⌘ + P (Mac OS X)._2<< -_3<< -_4<< - You can unparent layers by holding down Shift and dragging the selected parents away from the child layer. Drop your selected between other layers. - In the Node Library view, select the Move tab. - Select a Peg node and drag it to the Node view. You can also press Ctrl + P (Windows/Linux) or ⌘ + P (Mac OS X). - In the Node view, select the Peg node's output port and connect it to a Drawing or Camera node. The advanced connections in the Node view are shown in the Timeline view, unless they cannot be reproduced in a timeline layout.
https://docs.toonboom.com/help/harmony-14/premium/motion-path/add-peg.html
2018-03-17T16:25:43
CC-MAIN-2018-13
1521257645248.22
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR11/HAR11_animationPaths_addPeg1.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR11/HAR11_animationPaths_addPeg2.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR11/HAR11_animationPaths_addPeg3.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR11/HAR11_animationPaths_addPeg4.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR11/HAR11_animationPaths_pegNetwork.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/anp_addpegnetwork002.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR11/HAR11_animationPaths_networkView.png', None], dtype=object) ]
docs.toonboom.com
Legacy: Puppet activity: Install Module The Install Module activity installs a module to a Puppet Master or puppet node. This activity implements the behavior detailed by the puppet-module Install action. Table 1. Input variables Field Description Puppet master The IP address of the Puppet Master or node you want to install the module to. A Puppet Master must have a module before it can push that module to individual puppet nodes. Module name The name of the module you want to install. Enter module names using the format provider/module name, such as puppetlabs/apache. You can find a list of module names by viewing your module repository. Version The version of the module you want to install, in the format #.#.#, such as 0.0.3. Leave this field blank to install the latest available version of the module. Module path The directory path you want to install the module to. This path can be any location on the puppet node. Module repository The repository that contains the module you want to install. For example, you can access the Puppet Forge repository at forge.puppetlabs.com. Ignore dependencies Option to have this activity not attempt to install dependencies along with the specified module. For example, when installing the apache package, if you select Ignore Dependencies, ServiceNow does not install the stdlib or firewall package dependencies. Force Option for overwriting any existing module with the same Module name.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/product/configuration-automation/reference/r_InstallModule.html
2018-03-17T16:34:04
CC-MAIN-2018-13
1521257645248.22
[]
docs.servicenow.com
Use of Minecarts¶ This masterclass on Minecarts, containing significant original research, was written by Larix. See the original here, or check out the wiki on minecarts. The wiki article contains a lot of varied information, but i’ve delved quite a bit into minecart pathing; i.e. where a minecart goes when you let it run free, how its paths change and so on. There’s almost no in-depth information on all this stuff on the wikI and it seems to me that much of it isn’t well understood. I encourage all readers to replicate my designs and experiments and offer corrections or alternative interpretations. In order to properly trace what’s going on, you will need to look at events closely and that means (unless you have an infallible hack script to do it for you) you’ll need to pause the game, advance by single steps and count the steps exactly. You can’t just eyeball the speed as “pretty fast” or “sort of sluggish”, you’ll have to e.g. count out a hundred steps and look how far a cart travels in that time, so you can definitely tell whether a cart moves at 45000 or 55000 speed. Contents - Use of Minecarts - Lesson One: Track on flat floor - Lesson Two: Ramps, basics - Lesson Three: rollers and guided carts - Lesson Four: Flight - Lesson Five: diagonal movement and how to fix it - Lesson Six: False ramps - Lesson Seven: Pathing across levels - Lesson Eight: Meet the checkpoint bug - Lesson Nine: Practical implications of the checkpoint bug - Lesson Ten: Corner ramps Let’s start simple. Lesson One: Track on flat floor¶ In short, the only type of track that matters to a free-running cart are corners. All other types are irrelevant. A sweeping statement, sure, but I found it to be perfectly true. The basic rule is that a minecart will move in a straight line. The only exception is when it encounters a track corner (the two-connection type, not T-junctions) that’s connected to the direction the cart is coming from. Let’s take an example: a═══════╗ b═════╝ Cart gets pushed east from a/b, moves east until the corner and turns south/north there. a║╚╔╣╩╦╠╗ b╬╥╨╞╡╝ Cart gets pushed and behaves exactly the same as above. The cart incidentally also behaves like that when the route before the corner is entirely non-tracked floor, it’ll path just the same, it will only slow down more thanks to higher friction. Why is this so? As far as I can tell, the game doesn’t calculate any sort of “heading” for the cart, it just keeps track of the velocity, probably split between x- and y- axis. When the cart moves over flat floor, all that’ll happen on “direction-neutral” track is deceleration. When the cart is re-pathed by a legal corner, the whole input speed is taken and turned into speed in the exit direction of the corner. “Straight” tracks have no pathing power over minecarts, they don’t keep them “on track”, because carts don’t consider themselves “on track” in the first place. They’re in contact with the floor and react to corners/constructions, or they’re in flight and don’t. That’s it. Lesson Two: Ramps, basics¶ Of course, everyone who works with carts for a while will probably get to love ramps. They allow carts to climb levels, they can provide speed, they even allow perpetual motion. First of all, what makes a ramp tick? A ramp is only fully traversible for carts and can only provide acceleration if it’s properly connected by track. There are two arguments that get checked, and they concern track connections and nothing else. It doesn’t matter where the cart comes from, whether it changes level or not, it’s all about how the ramp is built. - Requirement one: the ramp must have track connection to wall. One, two or three connections are all acceptable. - Requirement two: the ramp must have exactly one track connection to a non-wall tile. This can be an adjacent ramp on the same level, flat floor or a hole (e.g. containing a down ramp). Examples of functional track ramps: # # # # # # ║ ║ ║ ╚+ #╩+ #╬# + ▲ ▼ + All shown track engraved on up ramp. Examples of non-functional track ramps: ### ### ## +═+ +╩+ #╝ - first example: no connection to wall - second: more than one connection to floor - third: no connection to floor When a ramp is properly connected, it provides acceleration towards its “down” direction; ~5000 speed units for every step a cart moves across it. Ten steps of acceleration give as much speed as a highest-speed roller, but you’ll need multiple ramps for that. The important part is that the game only checks if the ramp is properly connected, it doesn’t check where the cart’s coming from. This is the foundation of the fabled impulse ramp - a cart entering this ramp: ### +╚+ from the west will be accelerated towards the east, the same as a cart going down a level down such a ramp: #═+ (cart’s coming from track or a ramp on the level above). Impulse ramps thus grant speed without needing to sacrifice height, no more. They do not provide more or a different acceleration, just the exact same amount (which is quite a lot considering it means perpetual motion at practically any speed up to ~250.000 that you desire). If a cart moves onto a ramp from the ramp’s down direction, it’ll be accelerated in the direction it was coming from, i.e. it decelerates (at the normal ramp rate). Excepting a rather powerful bug that’ll come up later, this deceleration will stop carts that are moving at the speed of a medium-speed roller or less before they reach a ramp’s top, whereupon they’ll roll back down from the place they’ve reached. The resulting speed when leaving the ramp again will be less than the speed the cart entered with, a cart bouncing between two ramps separated by one tile of level floor. ##### ##### ##### #▲═▲# #═══# #╚═╝# ramps track variants will after about a dozen bounces stop on the flat middle tile. There’s no observable difference between the two ramp layouts. I built a fifteen-level straight ramp slope to measure the speeds different numbers of ramps will give to a cart. While the speeds found were just as expected and only of minor practical use (as reference to “generate” carts of specific speeds), the experiment provided a few valuable pointers, stuff that has been worked out by others before but doesn’t seem to be widely known: - a dropped cart (I always dropped it off a hatch) will land in the middle of the tile below and only roll down _half_ the ramp it lands on. - the speed rises with the number of turns the cart spends on the ramps (just under 5000 speed for every turn, ~130 000 for a cart sent down a fifteen-level ramp), but one turn is subtracted from the count; i.e. the cart is charged ~5000 speed for leaving the ramps in the end. - the “length” of a ramp is bigger than that of a flat tile. Since I only had full steps to calculate with, the numbers aren’t super-precise, but it appears to be sqrt 2 times the length of a flat tile (the “lost” acceleration step mentioned above actually is needed for the length calculation to best fit the results). Out of curiosity, I checked “catching” a cart falling down a vertical shaft with a track ramp since i’ve seen reports that paying attention to the exact level and different designs were required. I drilled down a 40-z shaft and built a ramp at the bottom of it (ordinary EW ramp with wall to the west and floor to the east). The cart fell down, landed half-way up the ramp and rolled off at the usual half-ramp speed of ~20.000. I tried it at different adjacent levels, and the result was always the same: none of the vertical speed was preserved (over 1 zlevel per step), the cart never failed to be accelerated. If you got different results, i’d like to hear how you got them. I dropped the cart via hatch, which afaik is the easiest way to guarantee a clean drop without colliding with the shaft’s walls. Enough for now, more to follow. As a sum of lesson two, i’d offer: Ramps’ main parameter is the direction they accelerate to. The exact track engraved on them and where a cart enters the ramp is secondary. If there’s a corner engraved on the ramp and the cart actually moves following that corner, the corner is respected (and things get weird), otherwise exact track is just as irrelevant as on level floor. Lesson Three: rollers and guided carts¶ Rollers are the powered means of providing speed to a cart. As has been widely observed - for practical purposes, it’s easiest to assume that rollers simply set the cart’s speed to a fixed value. see below, “Late P.S.” - I found that rollers really provide acceleration, it’s just a very large amount and the acceleration gets capped at the roller’s set speed. The effect rarely materialises, only when working with high-speed carts and only when speed and attitude are just “right”. - rollers will not slow down a cart moving faster than the roller’s set speed; they will “brake” and turn around a cart moving in the opposite direction - rollers working laterally to a cart’s current movement direction result in diagonal movement - rollers only affect free-running carts (not guided carts) and only when they are on top of track. Rollers on ordinary floor are ignored. - rollers which are not powered are completely ignored, you just get the effects of the tile underneath. The “braking” power of rollers is impressive: each step spent on an opposing roller slows down a cart by 100 000 speed units. The strongest track stops slow a cart by 50.000 per step. You need a very speedy cart to keep moving past a single-tile roller. A cart moving more than one tile per step will, however, not be affected by the friction of tiles it “skips” over during its turns; only the tiles it’s in at the end of a step do count. Two rollers (of any speed) in the correct spots are enough to stop and turn around a cart moving at maximum ramp speed (~270.000, that’s eight tiles in three steps on average). Since non-corner track doesn’t matter, it also doesn’t matter much what kind of track you build a roller on. A west-pushing roller on an E-W track tile works the same way as a west-pushing roller on N-S track, an E “track end” or a NSE t-junction. Corners do affect the way rollers work, however: a) a roller pushing “from” a connection of a track corner results in movement towards the corner’s exit direction, not towards the roller’s push direction. b) a roller on a track corner pushing from a direction the corner does not connect to pushes to its normal direction (of necessity one of the track corner’s connections) and may cause diagonal movement. a): .║ ║ ═╗═ ═╢═ ║ ║ No matter where the cart comes from, it exits to the south as long as the roller is powered (and the cart isn’t super-fast). The most evidently useful application is with a cart coming from the east, because that allows a simple powered switch: when the roller is off, the cart moves off to the west, if the roller is on, the cart goes south. Still, it’s interesting to see that carts coming from the north or south are not thrown into a diagonal, although the roller’s nominal push direction is lateral to their movement. It looks like the corner sort of turns the roller’s effect around. b): .║ ║ ═╗═ ═╟═ ║ ║ Results: when the cart comes from the east or west, it moves west. Carts coming from the north go on a diagonal heading southwest. Carts coming from the south go on a diagonal heading northwest. Since the roller’s “push from” direction is not in line with the corner, it keeps its “push to” direction and causes the cart coming from the south to ignore the corner track. The latter isn’t some kind of cumulative speed derailing, it happens when combining two low-speed effects which even theoretically can’t add up to more than 40.000 (50.000+ is the derail threshold). The only rationale I can find for it is that the roller indeed “overrules” the corner when active. More likely, however, the laterally-working roller just “adds” its movement speed to the cart’s velocity and leaves it to the corner to sort things out. The observation remains valid that a roller pushing “into” a corner is less likely to cause wild diagonal movement even when working laterally, while a roller working towards a corner’s exit often causes trouble. On the whole, however, cart motion is most predictable and controllable when working with rollers in line or opposed to cart direction, not with lateral rollers. Corners appear to apply at the end of a turn, after all speed changes on the tile are done with, so in the example above, the “bend the cart to the south” effect of the corner happens after the last “set speed towards east” effect of the roller and the leaving cart goes off with southward-only speed and no eastward component. A cart encountering a laterally-working roller which does not sit on a corner will generally be thrown off onto a diagonal trajectory. Diagonally-moving carts are great fun, because (Lesson One) only corners matter, so the carts’ll merrily barrel all across your carefully laid-out track, smacking into walls and stopping or going places you don’t want them (most cases of unexplicably-stopping carts are due to diagonal movement and wall collisions). Unless you manage to thread them through track corners, that is, because a cart properly taking a corner will move precisely in its exit direction and will not retain any diagonal movement component. More on that later. Late PS: rather confusing results of a recent roller-based device show that rollers indeed accelerate carts, at 100.000 subtiles/step² in their given direction, but capped at roller’s set speed. When a cart moves from roller to roller, this won’t matter: since the highest speed that can be imparted by a roller is 50.000, the 100.000 acceleration is enough to neutralise the speed of a cart incoming at max. roller speed and impart max speed, all in a single step. However, if the cart moves at higher speeds, one step of acceleration may only change it from, say, -70.000 to +30.000 and when the cart leaves the roller’s tile on the following turn, it will move at the received 30.000 speed, even if it was affected by a highest-speed roller with a set speed of 50.000. In addition, the cart actually calculates the distances it moves on the roller’s tile, so the right combination of cart speed and “offset” can result in very irregular speeds. A rather bare-bones test allowed achieving different non-max speeds from a highest-speed roller by slightly varying input speed - from 12.000 to 22.000, both from a “highest” roller. I’ll make guided carts short, because there’s not much about them: guided carts ignore special track buildings like rollers or track stops, the pushing dwarf just moves them at their walking speed, much like a wheelbarrow. Track must be “connected” for dwarfs to actually guide a cart. If they find no connection, they’ll lug the cart by hand, which ranges from much slower to abysmally slow. “Connectivity” is quite lenient, however - in most cases, a tile only needs one track connection in the correct direction, and bridges are accepted as track, too. It’s best to just engrave/build an identifiable unbroken track, though. Guided track can go up/down ramps without trouble, all at the normal dwarven walking speed. Lesson Four: Flight¶ Carts can be sent over ramps or over the lips of cliffs, and the game will trace a ballistic trajectory. Carts in flight are not subject to air friction (according to hack scripts), but they are subject to gravity. Someone did the calculation, I forget. Anyway, observation tells us that a free-falling cart takes as long to reach the bottom of a shaft as one rolling down a flight of ramps. Thus, the acceleration is the same - corrected for the greater length of ramps (sqrt 2 times length of a flat tile), we get something just under 0,035 zlevels/step². (Which shows that dwarven physics are screwy, acceleration on a ramp should be lower than free-fall acceleration.) A cart released by a hatch takes six steps before it’s displayed on the next level down, which suggests - hm, that the cart is considered to start falling about 2/3 up the current level? There’s some tricky stuff going on with the decision whether a cart’s actually in contact with the floor (and thus subject to corners, rollers and track stops): carts can make small jumps in some cases which don’t move them to a different level, and in those cases it seems to take ~those six steps before they start registering as “on floor” again. A cart pushed off a cliff follows an ordinary downward curve. It keeps its horizontal velocity and will keep moving at the same speed when it lands, while vertical speed will build up during the fall and will completely disappear when it hits the ground. If a cart is sent over an upward ramp into the open sky, it can go up several levels, depending on its speed. A highest-speed roller will barely manage a hop, the cart won’t even reach the level above the ramp, but it’ll be in flight for a few steps. A cart accelerated by a long downward slope or an impulse ramp array can go over the ramp at much higher speeds and can reach heights of up to 26 z-levels (or more with added trickery). The “launch ramp” converts the horizontal speed of the incoming cart into ramped-upward velocity, and the upward component will grant height while gravity nibbles away at it. Counting steps and trying to calculate out the results, my best estimation for ramp launches is as follows: The baseline is the speed on horizontal track. This speed is converted into speed calculated for ramps. When released, the cart moves vertically at 1/2 the original speed and horizontally at ~70% of the original speed. Assuming this is all ramp stuff, it’s likely sqrt 1/2 the original speed horizontally. As per usual, vertical speed disappears upon landing and if the cart is launched off a ramp again, its horizontal speed will be 1/2 the original, vertical speed sqrt 1/8 (ca. 35%) and the height reached will only be about one half of what the first jump achieved (a bit less because of ramping speed costs). Standard design for a launch ramp: . ____/# Fast cart comes from the west, goes over the ramp, flight happens. It must be a proper track ramp :P Carts that fail to enter a hole in the floor “jump” over it, and this also seems to count as flight: speedy carts will not follow a track going down a ramp when coming from level track, and they will ignore corners directly behind the hole because they haven’t touched floor again. The peculiar feature of speed supercharging still exists in 0.40.11: if two carts of similar speed collide frontally and the “pushing” cart is between 1 and 100% heavier than the “pushed” cart, momentum of the pusher will be conserved. That’s to say, the pushed cart will move off at a speed higher than what the pushing cart brought to the collision. This allows breaking the speed limit on ramp and gravity acceleration (270.000 reportedly). Carts moving that fast are subject to an exceptional friction of 10.000 per step, all the time, thus only very short bursts of extreme speed are possible and since high-speed collisions are required, no cargo can be transported. In a quick-and-dirty test for 40.11, I just smashed two hazel wood carts together, one loaded to double weight, and right enough, the pushed cart moved 29 tiles in six steps. In .34.11, I managed burst speeds of up to 17 tiles/step through tiered collisions and ramped jumps of 45 z-levels. The latter was what I meant with “added trickery” above. Bodycount: 6 dogs (+1 since last update), one mangled dwarf (survived and is fine, but keeps cleaning himself). Lesson Five: diagonal movement and how to fix it¶ Diagonal movement, on the face of it just means that a cart is not moving in a cardinal direction and will eventually move off the “straight line” or bump into a wall, stopping dead. I admit that this is just interpretation, but i’m reasonably certain that diagonal movement is not handled as a “heading” like “fifteen marks east off north” but rather as a combination of movement on the two flat axes. Laborious example: A cart pushed north by one highest-speed roller, then east by a lowest-speed roller doesn’t move “north by northeast” but rather “50.000 north and 10.000 east” and each of these components is separately subject to floor friction. Letting the cart roll over higher-friction floor (like non-track floor) shows that the cart will only take five steps (and three tiles) to move the first step to the east (since its eastward movement started in the middle of the tile, it only needs to move half a tile to switch over to the next), twelve steps and six tiles for the next, 22 steps and nine tiles for the third, and it won’t make a fourth step to the east: after fifty steps, the eastward component of the cart’s movement should be entirely gone. (It would take a rather unfeasible 1000 steps on track-engraved floor.) Admittedly, accepting the sideways aberration and trying to remove it by floor friction is rarely an option. Diagonal movement commonly occurs when a cart moves up a corner ramp. Since minecarts don’t care about flat-floor track apart from corners, a long straight track line will do nothing to rule in a diagonally-moving cart, it’ll just move along and take its sideways step when it’s time. And if there’s a wall next to the track (e.g. because you’re trying to keep accelerating the cart via impulse ramps) it’ll just hit the wall and stop, at least temporarily. If it stops on flat track, it’ll stop for good, if it stops on a ramp, it’ll start moving again, but it may lose its load. As far as I can tell, that was the problem encountered in this water gun design. Thanks to uncorrected truetype font turning all text into garbage, I can only guess (and you better ramp speed up to 1000+ and “step” the thing yourself by hitting forward/pause repeatedly). Note: in my experience, a cart always gets one ramp-step’s speed (i.e. about 5000, 1/20 tile/step) to the “outside” of the curve on the corner ramp. It will step off the straight path on the eleventh step after the corner, i.e. after this lateral speed component has accumulated half a tile of distance. This holds both for a cart propelled by a highest-speed roller (50.000 speed) and a maximum-speed cyclotron (265.000); both will stop/go off the straight path after ten steps. I’ve re-built WanderingKid’s impulse/something elevator and found the problem he faced (reported here) was also nothing fancier than diagonal movement: sending the output of a corner ramp onto a straight (i.e. inconsequential) track. In my re-build, the cart would move off the straight line on the eleventh step after the corner. So how to avoid diagonal-movement troubles? The easiest option is not to generate diagonal movement in the first place: don’t use corner ramps to move carts up levels. For moving carts up levels, straight ramps work just as well as corner ramps; better in fact, since they don’t cause the added 1000 speed loss from the corner (and don’t cause diagonal movement). There are some special cases of upward movement over multiple levels which require corner ramps, but if you only want to go up a single level, just use a straight ramp. The other option, when corner ramps are used, is to use the one track type carts care about: corners. If a cart tries to leave a corner tile, the game checks whether the border the cart tries to leave over is “blocked” by the corner: on a NW corner, those will be the E and S borders. If a cart tries to leave to the south, it’s treated as coming from the north, and it leaves towards the west. This rule appears to only care for the tile border the cart tries to leave over. A diagonally-moving cart is also subject to these checks: let’s assume a cart moving from the northwest towards the southeast: if the tile the cart’d leave to would be the one directly south of the corner, the cart will turn around to the west and will move west only. Notably, the resulting speed is the cart’s previous N->S velocity, the W->E velocity will disappear. If the cart would have left to the eastern tile, it’ll turn north (moving at the previous W->E velocity). If the cart’s go-to tile is the exact southeastern one, the corner will not affect it. Which of the two axial speeds is higher doesn’t matter. A cart moving from northeast to southwest will only be affected by the corner if its go-to tile is the southern one. If it tries to leave to the western (or southwestern) tile, it’ll stay on its diagonal course, because the border over which it attempts to leave isn’t blocked. My standard approach to the output of corner ramps is to just put a corner on the tile immediately behind the ramp, like this: z+0 z+1 #### ══╗ ══▲# ++▼ #### ══╝# track on ramp I’ve yet to see a case where this doesn’t work (if necessary propped up by a wall behind the corner above when working with fast carts). PS: my best interpretation is that a corner “sets” the cart’s speed in the exit direction to its previous value in the “input” direction. Since the diagonal component is actually velocity on the corner’s exit axis, that part of the cart’s movement speed just gets overwritten. Result in any case: successfully rounded corners fix diagonal movement. Example of weird behaviour: ╔═╧# ╚═╝ Upon first being pushed, the cart goes around the circuit normally. But when it then reaches the roller again, it will move south into the corner after two steps, then north after one to two steps, then south again and then once more through the loop. Interpretation: the cart is pushed into a southeasternish course, which is recognised as coming from the west by the corner, so it gets bent around to the north, reflected by the roller and then goes through the corner normally, entering from the north and leaving to the west this time. Lesson Six: False ramps¶ In the ramps section, I mentioned ramps which don’t accelerate carts. Those may seem kind of pointless for building tracks, but the lack of acceleration can actually be a benefit. If a ramp connecting levels doesn’t cause friction, you can change level without losing/gaining speed (apart from ordinary floor friction). It’s decidedly weird - my constructions only work when the cart enters at very low speed - around that of a dwarven push - but a single push can move a cart up/down 40+ levels without notably changing the cart’s speed. (example (o hey, it was 47 z. You can safely speed past the end, I just showed that each ramp was a non-functional E-only one.) It’s of course also possible to do this without dwarven labour, you just need sufficiently regulated cart speeds from proper ramps or rollers, if needed combined with a few track stops. A super-low-tech and low-risk way of lifting a cart up a huge number of levels. Another application of false ramps is to make the loading of liquids into carts easier pioneered by flameaway. I found it to be an impressively fast, fully-automatable loading mechanism for waterguns allowing cadences of up to one shot per ten steps (using multiple carts in one barrel). It works so well because it doesn’t accelerate/decelerate the carts. The loader simply consists of a single channelled-out tile containing a track ramp with no actual down direction. Its track connections only go to wall, therefore it is treated as ordinary flat floor by the game. The cart is never at the “bottom” of the “ramp”, because as far as the minecart engine is concerned, there’s no ramp here. Thus, the cart also doesn’t need to “climb” out of the hole, it just needs enough forward motion to roll to the next tile. A cart moving slowly enough will pick up water/magma from a 7/7 tile; the speed imparted by a high-speed roller is just low enough. Dwarven pushes have the advantage that they “teleport” the cart to the middle of the first pushed-to tile, which makes them the fastest loading event. They’re decidedly less automatable, though. There’s no need to engrave a corner into the pond tile, a straight fake ramp works better. Bodycount: nothing new! Well, one diagonal vs. roller test ended up giving a dog a bruised stomach. Big deal, I don’t really count dogs if they don’t end up in multiple parts, like the puppy that during the last round teleported its torso through a wall while leaving all its limbs on the other side. The highly irresponsible flying minecart test, however, didn’t cause any harm at all. Lesson Seven: Pathing across levels¶ Pathing on flat floor is easy enough: only corners matter. It’s not quite so easy when minecart paths go to different z-levels, either up or down. Getting a cart to move upwards is easy enough - just offer it a track ramp. Carts will not go up ramps without engraved track, and they will not reliably go up “false” ramps (i.e. ramps which don’t accelerate/decelerate carts). You’ll eventually want the cart to stop going up, and there things can go awry. A cart moving up a ramp with no closed ceiling (or building) immediately above the exit tile may get airborne. The speed from a highest-speed roller is enough for this, but high-speed rollers or equivalent speeds like the acceleration from a single down ramp can suffice, too. An airborne cart will not be in contact with the floor underneath it and will thus not care about track corners, rollers or track stops on that tile. A closed ceiling or building (bridge, hatch cover etc.) above the exit tile will make the cart behave and stick to the floor, regardless of its speed - a high-speed roller cart will be reined in by a ceiling just the same as a highest-ramped-speed cart or a supercharged cart. If there’s open ceiling above the exit tile, a cart can still be ruled in by a functional ramp on the exit tile. z+0 z+1, a) b) c) d) ###### # # # # ▲▲▲▲▲▲▲#▲══ ▼═▼ ▼▲▼ ▼▲▼ ▼▲▼ ###### # # # # ╚╚╚╚╚╚═#═══ ▼═▼ ▼╚▼ ▼║▼ ▼╝▼ Cart comes from the west, accelerated by a series of impulse ramps, then goes over an up ramp. a) - no ramp (can be smoothed floor instead of straight track): cart goes into flight, several z-levels up. b), c), d): cart goes down the ramp to the east and follows the track. Notably, the orientation of the ramp on the top tile doesn’t matter, it just needs to be a legal ramp. Carts can be made to “level out” via ramp, but as seen here, they can also be forced down an adjacent ramp this way. So, if you send a cart up several levels to the surface and don’t want it to go flying, put a ramp on the exit tile. When you want a cart to enter a downward path, there are a few issues and solutions, as well: A cart coming upon a hole in the ground will by default just jump across it. If the cart moves at a speed of at least 1/5th of a tile per step, it can jump over one tile of open space and continue moving on flat floor on the other side. A dwarven push or low-speed roller are enough for this purpose. A peculiar issue was found with dwarven pushes: a dwarf pushing a cart from right next to a hole in the floor cannot move the cart across. It will collide with the hole’s edge and fall down into the pit. This seems to happen because the push “teleports” the cart to the middle of the adjacent tile, without giving it the “lift” gained by a jump. If there’s one tile of “buffer” between the dwarf and the hole, the cart jumps just fine. If there is a ramp in a hole (ordinary floor ramp or track ramp, both are recognised), a cart will treat the hole as an appropriate pathing destination and will directly move into it (i.e. without spending time in the “open space” above the hole) as though it were rounding a “downward” track corner. Carts moving at derail-capable speeds will not enter a downward ramp, they’ll jump over the tile and continue beyond it. In addition, the tile before the ramp must be a “track” tile - either engraved track or a bridge. Carts coming from ordinary floor will jump, regardless of their speed. As noted above, however, a cart coming from a legal track ramp (any orientation!) will enter a downward track ramp just fine. This allows sending very fast carts down ramps simply by putting an impulse ramp before the actual ramp entrance: . # # ══▲▼ ══╚▼ Other ramp orientations seem to work just the same, as long as they’re legal and don’t open a diverging path. Ramps will not send a cart into a hole that doesn’t contain a ramp. Lesson Eight: Meet the checkpoint bug¶ Let’s face the possibly most powerful feature/bug of minecarting. Nope, not impulse ramps. For demonstration purpose, let’s take two sets of opposed ramps: a) b) #▲═▲# #▲▲# #═══# #══# Offer open floor above and to the sides. Drop a cart onto one of the ramps via hatch. In each case, the cart will start out by rolling along a ramp for five steps. In a), the cart will then pass over the flat tile in a single step, spends eight steps on the opposing ramp, rolls across the middle tile in a single step again, spends seven steps on the first-touched ramp, then across in a single step etc., until after a few iterations it sits still in the middle tile. In b), the cart goes onto the opposing ramp, passes over it in a single step, goes to the tile above and to the side, passes over that tile in a single step again and then moves off at about 1/5 tile per step (~19 000 speed). If you offer no exit, the cart will bounce between the two ramps forever, spending eight steps on each. You can temporarily stop it by blocking the opposite ramp with another minecart, but as soon as one cart is removed, the remaining cart starts bouncing again. What we’re seeing is an artefact of the game having to switch distance calculations as soon as ramps get involved. The upshot is that a) if track changes from flat track to a ramp, the cart must step onto the new ramp tile. No matter how fast the cart is, the tile cannot be skipped. I’ll call this a “half checkpoint”. b) if track changes from a type of ramp to anything else, the “changed” tile cannot be skipped and the cart will spend exactly one step on it, regardless of its speed (as long as speed is above zero). Finally, the last speed increment the cart received on the ramp is erased, presumably by applying equivalent acceleration in the opposite direction. I’ll call this a “full checkpoint”. “Anything else” notably means that checkpoints happen whenever the cart passes from a ramp to a different ramp, i.e. a ramp with a different slant (accelerate-to direction), and when passing to a non-ramp tile, preferably flat track. The biggest effect here is that checkpoints effectively divorce the rate of movement from internal speed of the cart. Cart propelled by a single ramp (about 1/3 tile per step) going over checkpoint? Spends exactly one step there. Cart propelled by maximum number of ramps (about 2,5 tiles per step) crossing checkpoint? Spends exactly one step there. In fact, if a cart is moving along a ramp- and corner-heavy track and crosses one tile each step, it’s almost a given that you’re dealing with chained-up checkpoints. Simple example: ########## ########## ═▲═▲═▲═▲═▲ ═╚═╚═╚═╚═╚ A cart going in at sufficient speed (must be ~72 000+) will cross this track spending one step on each tile and will come out on the east at almost exactly the speed it went in. This holds both for a 72 000 speed and a 265 000 speed cart, they’ll move at the same rate through this track, they’ll only lose the speed for normal track friction but the slower cart will also not accelerate. Their actual internal speeds will only again assert themselves after the cart left this track section. This happens because each impulse ramp is a half and each flat tile a full checkpoint. The slower cart is just fast enough to make it off the ramp in a single step (apparently a cart moves its full movement rate “into” a half-checkpoint (but not past it when moving faster than one full tile per step): a fast-enough cart makes it to just past the half-way point of the ramp upon entering, and just past the tile’s “exit” on the very next turn). PS: I haven’t checked this exact design, but as long as incoming speed is at least 80.000, this thing should work the same way in both directions - carts going “with” the impulse ramps won’t accelerate, and those going “against” them won’t slow down. Let’s look at the first example with the double-ramp again and see what happens by checkpoint rules, dropping the cart onto the western ramp: -cart goes “down” ramp to the east, picks up 25 000 speed. -cart enters ramp slanting west - checkpoint: accelerate 5000 to the west (compensating for last step of acceleration), go to end of tile -cart “accelerates” west by 5000 on the west-slanting ramp, has 15 000 speed left to cross the threshold to the next tile, thus reaches flat tile above and to the east - checkpoint: accelerate 5000 east (compensating for westward acceleration), go to end of tile - cart keeps moving on flat track to the east, now with normal distance calculations so it takes five steps per tile again. Why the weird “accelerate backwards on the checkpoint” thing? Because in example a), the cart actually stops. It also explains why the highest speed i’ve got through ramps (measuring actual track covered) is not 270.000 but 265.000. For a clearer example: #▲+ ═ #═══ Station a cart on the ramp, then open the door. The cart instantly rolls onto the flat tile and stops there. This is, it picked up speed from the ramp, used that speed to pass over to the flat ground, but had no speed left thereafter (or it’d have moved to the next tile east on the next step). I interpret this so that the cart actually loses its speed after taking the move. Other evidence supports the interpretation. This bug allows deriving speed from pits in the floor and moving carts up levels with ease. It’s the actual power behind the “impulse elevator” shown on the wiki. WanderingKid’s elevator uses impulse ramps to gain speed, but checkpoints to go up levels. I’ll leave you with this for now. More to come. Bodycount: kitty! Someone’s pet cat wandered into the cyclotron. It’s the only contraption that has caused any real damage so far, and the only dwarf who was hurt remains the spinner/leatherworker who tried to “clean” puppy blood out of it while it was spinning. Lesson Nine: Practical implications of the checkpoint bug¶ The checkpoint bug affects all manner of minecart constructions, as soon as ramps get involved. For a start, let’s look at the lowly single-ramp cyclotron: ##### ##### #╔═╗# #╔═╗# #╚▲╝# #╚╔╝# ##### ##### Cart cycles counter-clockwise and its speed oscillates somewhere between 70,000 and 80,000. It won’t go any faster, ever, although one step of ramp acceleration gives 4900 speed while four corners and, say, seven steps of movement cost no more than 4070. Evidently, if the cart spends only one step on the ramp, this acceleration is eaten up by the checkpoint compensation when moving off the ramp to level floor. It’ll only really pick up speed when it spends at least two steps on the ramp and it must be slower than ~72.000 for this to happen. Indeed, the cart cycles at an oscillating speed: it goes five rounds at eight steps each (spending two steps on the ramp each time) and seven steps in the sixth round (spending only one step on the ramp). For speed to keep building up, you need an unbroken stretch of three impulse ramps: due to the greater length of ramp tiles, the maximum speed available through ramps (270.000) is just less than two ramp tiles per step, so a cart will always spend at least two consecutive steps on the three-ramp stretch. Such a three-ramp cyclotron is enough to achieve maximum ramp speed. When moving a cart laterally onto an impulse ramp track, the checkpoint effect can be used to prevent diagonal movement. Throwing a cart directly into a sideways impulse ramp: a) b) #### #### #### #### ▲▲▲▲ ╝╝╝╝ ▲▲▲▲ ╝╝╝╝ ║ ║ ▲# ╚# from the south like in a) will have the cart accelerate to the west on top of a pre-existing and lingering northward speed. It’ll either bump into the wall and temporarily stop or exit the impulse stretch on a diagonal trajectory. Sending it through an immediately adjacent impulse ramp lets it pass right through the first ramp of the acceleration stretch via checkpoint effect, stopping it against the wall and cancelling the northward speed instantly, so that it can accelerate west on a straight course. Of course, others have, often unknowingly, used checkpoint effects in their constructions. Take the “impulse elevator” on the wiki: #### ##╗# #### ▼╔╝# ##╚# ╔╝▼# ▼### #▼▼# ##▼# All track on ramps, going up from left to right. Looking at the thing in action, we’ll see that the cart moves at a rate of exactly one tile every step until after five levels or so it stops, rolls back from an “up” ramp in eight turns, spends another eight steps on the ramp behind, then starts going at the previous rate for another five levels. Clearly, this means that the cart moves at one ramp-length per step, i.e. 140.000 speed, right? Haha, of course not. It’s checkpoints all the way up. The cart hiccups and stops not because it’s too fast, but because it ran all out of speed and had to checkpoint-cheat itself some new steam. Observe the ramp slants in the example above: E, W, N, S, W, E. Slant changes every tile, thus every tile is a full checkpoint. The checkpoint bug runs the cart up at a rate of one ramp every step, until speed falls to zero. At that point, the cart makes it onto the next tile (and technically all the way “up” on it) but has no more speed to make it to the next tile (up), so it stays on the ramp and accelerates there for the full eight steps. This moves it back to the last (opposing) ramp, which it again fully crosses, but here it bumps against a wall and accelerates all the way forward again. With the shiny new 35.000 speed, it can take the up checkpoint and have speed leftover to keep moving. It’s peculiar that this thing loses speed so quickly - it appears to burn through its store of ~35.000 speed points in five levels, although it should only lose 1.000 speed per level for the corner. It’s almost as if there’s something fishy with corner ramps that enforces a higher speed loss. Another ramp spiral was invented by WanderingKid and has the advantage of doing without the annoying back-and-forth every few levels. The cart in that design just keeps going. Let’s check it out: z+0 z+0, track z+1 z+1, track z+2 (z+0 mirrored) #### #### #### #### ▲▲╗# ▼### ▼### ##▲# ##║# ##▼# ╚▲▲# ╚╔╝# ##▼# ##▼# #### #### #### #### #### #### This one surprised me at first, because it “somehow” manages to send a cart up two levels, seemingly with a single checkpoint. Spoiler: of course it’s two checkpoints. The east-pointing ramp on z+0 works as a proper speed-granting impulse ramp here, because the cart enters it from flat floor, not from another ramp. When I tried it out, the cart spent two or three steps (repeating pattern of different rates, like in the cyclotron above) on the ramp each time, so there was always speed gained here. The corner up ramp is, unsurprisingly, a checkpoint, the cart passes it in a single step. What I hadn’t fully understood yet - the next, straight, ramp is also a checkpoint, because the slant of ramps changed, from west to south. The flat corner is yet another full checkpoint, which doesn’t really matter in and of itself, but the fact that it’s normal floor and not a ramp saves the following impulse ramp from being a full checkpoint, so it can actually do its impulse work. Let’s crack an old puzzle next: the 2x2 ramp spiral. It’s a notoriously ill-behaved contraption, carts keep stopping on it for no discernible reason. At the same time, it looks so simple: #### #### #### #### #### #╔╗# #▲▼# #▼## #### ##▲# #╚╝# #### #▲## #▼▲# ##▼# #### #### #### #### #### Spread over four levels, one corner on each level, each leading into the next. Throwing a cart down such a spiral lets the cart start going at one ramp per step, but after five, it stops, starts again, goes another five, stops again etc. Ho hum. Is it picking up too much speed? I put a few stone blocks into a cart and sent it down there. The blocks stayed in the cart. Well, it was moving at one ramp per step, so it was probably checkpoint-hopping again. Makes sense, of course, since ramp slant changes on every tile. So it probably stopped simply because its speed dropped to zero. Still, a cart going down a ramp spiral and losing speed? I revved up a cart in the trusty cyclotron and sent it down a nice long spiral. It kept going and emerged 21 z-levels below - at 130.000 speed. The cart was definitely losing ~6.000 speed on every ramp, a few more tests confirmed this. In fact, a downward spiral slows down a cart exactly as much as an upward spiral. Inspired by rhesusmacabre’s long table, I built a few simple test spirals, and yes, I was getting checkpoint-movement up the spirals, over nice large numbers of levels, and my eyeballed speed loss of 6.000 per level seemed to work out. I definitely needed to crack the puzzle of corner ramps. But first, some light entertainment. Since different-slant ramps work as checkpoints for each other and the compensating speed effects cancel out their acceleration, shouldn’t it be possible to send reallllly slow carts along a line of impulse ramps, bouncing one ramp per step until ramps stopped and the actual speed reasserted itself? I built a line of 24 impulse ramps stretching from east to west and with wall to the south, alternating between NS and SW every step, hatch-dropped a minecart on the easternmost (SW) ramp and watched it. Yep, cart rolled down the usual five steps, then went forward at a rate of one ramp every step over the whole line, and once it emerged from the ramp line, it crawled along at the actual ~19.000 speed (five to six steps used for every tile). But shouldn’t the northward acceleration, although it’s cancelled instantly, result in a minor northward displacement on every NS ramp that should eventually push the cart past the northern border? I expanded the row to ~40 ramps, and sure enough, after the thirtieth ramp (15th NS ramp) the cart moved off the ramp-line to the north. To make sure it’s really displacement and not northward velocity, I covered ten ramps with a bridge so that north-pointing ramps #15 to #19 were obscured. The cart moved over this stretch without diverting, went over the SW ramp directly behind the bridge - and made its step to the north when it checkpoint-passed the NS ramp behind it, the twentieth northward ramp in the line, but this time, the fifteenth touched by the cart. Fifteen pushes of presumably 4900 distance units give 73500 distance units, just over half the assumed length of a ramp (140.000 or so - I don’t know the exact number Toady uses). Enough to move over the border to the next tile when starting in the middle of a tile. Seems that it works out. Of course, northward displacement can simply be compensated by southward displacement. I dug out a track all across the embark (normal embark, so just 190ish tiles) and carved out a nice stretch of 160ish alternating track ramps. First ten “forward” ramps interspersed with 10 North-slanted ramps, then (changing the adjacent wall) 20 forward with 20 south-slanted, then another 20/20 stretch forward/north etc.., finally a bit of flat track leading into a little loop at the far end. The cart was dropped in via hatch as usual and moved all across the embark without falling off the row, passing one tile per step as long as it was bouncing over ramps, while the flat track at the end demonstrated its internal speed remained at the original 19.000. The loop itself contained a nice juicy acceleration rail, increasing speed on the route back to ~120.000, and the cart went back all the way, once again at 1 tile/step externally, unfazed by the 80 “opposing” impulse ramps. Lesson Ten: Corner ramps¶ Corner ramps had been bugging me for a while now, so I built a simple test rig: above below #▼═════ ▲# ║ With a SE ( ╔) track ramp. First of all, send a cart up the ramp: no matter what I do, when given straight track, the cart will move diagonally and the first step aside happens after 11 steps, adequate for a lateral component of just under 5000 speed, i.e. the acceleration gained by a single step on a ramp. Curiosly, while the corner should convert all south-to-north velocity of the cart into west-to-east velocity and the ramp slants to the south, the aberration was to the north. Unsurprisingly, the culprit is the checkpoint bug: almost always, a corner ramp passed upward leads to a checkpoint - the ramp slants south and the most sensible connections above are flat track or a west-slanting ramp. Thus, the checkpoint effect is applied: a) the next tile is crossed in a single step. b) compensative acceleration is applied which is opposed to the ramp’s slant. That’s it - the corner outputs the cart on a pure-eastward path but then the “compensating” speed is applied and gives the much-abhorred diagonality to the cart. So, putting it in numbers: when a cart checkpoint-hops up a corner ramp, it loses 5000 from its original incoming speed to ramp acceleration, loses another 1000 for the corner, and the checkpoint doesn’t “refund” the 5000 speed but rather (since it’s applied after the corner turn) applies it as lateral/diagonal speed towards the “outside” of the corner. A cart going up a corner ramp at any speed loses 5000x(time on ramp)+1000(corner penalty) speed, and gains 5000 lateral. That was the easy part. Let’s send a cart down the ramp now. If the cart is fast enough (about 45.000 minimum), it takes the corner and continues perfectly straight in the corner’s exit direction, with a speed loss of ca. 6000. I tried it with a highest-speed roller, and the cart going through a corner ramp would emerge at 44.000 speed, while a cart going down a straight ramp would gain ca. 5000 and emerge at 55.000. Once again, we’re dealing with checkpoints and a corner, so let’s step through it: On the corner ramp, all acceleration goes to the side, it doesn’t accelerate the cart in its original travel direction. Here, we have a cart going west, which is accelerated south. Unsurprisingly, the westward speed isn’t increased by this event. At the end of the turn on which the cart wants to leave the tile, the corner comes into play, converts all westward to southward motion overwriting the extant southern vector, the acceleration gained is therefore lost. On the next step, the cart reaches a checkpoint and to compensate, it is “accelerated” 5000 units to the north. Summa: all southward acceleration was ignored because of the corner, but the compensative deceleration still applies, so the cart loses 5000 speed, plus 1000 for the corner. 6000 in total. What’s that about a 45.000 minimum speed? Ah well, losing speed on a down ramp is not the weirdest thing here. A cart moving at lower speeds than that is liable to malfunction even more blatantly. A cart propelled by a dwarven push emerges at a mostly-south-and-slightly-west trajectory, going off the straight line after two tiles. A cart entering the ramp at between 30.000 and 40.000 speed leaves at an almost-45° angle, a very sharp diagonal. It took me quite a while to think up a solution for that one, but I think it works out: Corners are only checked when a cart tries to leave a tile, and they only check whether the side opposed to the “border” over which the cart is trying to leave is connected. In understandable: if a cart on a southwest heading is trying to leave a tile going over the western border of the tile, the pathing algorithm checks if the tile underneath is a track corner with an eastern connection. If yes, the cart is turned around towards the corner’s other connection. If the cart tries to leave over the southern border, the algorithm checks whether the tile is a north-connected corner. If the checked border is not connected or if the tile isn’t a corner, the cart leaves normally and its speed(s) is (are) unchanged. So what happens with these slower carts is this: they move so slowly to the west and thus pick up so much southward speed on the ramp, that the cart’s “exit” direction from the tile is south (or SW (??) in the case of the somewhat-slow cart), and thus the corner has no power over them. Consequently, they move off on their screwy diagonal course. A cart going down a corner ramp, properly taking the corner, loses 5000(checkpoint compensation)+1000 (for the corner)=6000 speed, independent of lingering time on the ramp. If time on the ramp is too long, the corner starts checking the wrong (unconnected) side of the tile when the cart tries to leave and no longer applies. In that case, the output trajectory is purely diagonal, presumably incoming speed in the incoming direction + 5000x(lingering time minus one) lateral (towards ramp slant), no corner penalty. Bodycount: nothing new, no new tests required. I just wrote up what I had worked out previously. This concludes our course on Minecarts. Annotations, corrections, claims of priority will be gracefully accepted and carefully considered. Possibly. I’ve tried to link to sources and earlier findings. I owe a large debt to other players for their research and inspiring inventions.
http://df-walkthrough.readthedocs.io/en/latest/masterclass/minecart-education.html
2018-03-17T16:39:54
CC-MAIN-2018-13
1521257645248.22
[]
df-walkthrough.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Initiates the asynchronous execution of the DescribeScalingPolicies operation. This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginDescribeScalingPolicies and EndDescribeScalingPolicies. Namespace: Amazon.GameLift Assembly: AWSSDK.GameLift.dll Version: 3.x.y.z Container for the necessary parameters to execute the DescribeScalingPolic
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/GameLift/MIGameLiftDescribeScalingPoliciesAsyncDescribeScalingPoliciesRequestCancellationToken.html
2018-03-17T16:48:19
CC-MAIN-2018-13
1521257645248.22
[]
docs.aws.amazon.com
You can add variables to your in-app messages and emails to make them far more engaging and effective. Just click the icon on the right of the text block on the composer. Here are some useful examples: You can greet each user by their first name. We'll automatically add this at the beginning of each message for you, because it's best practice. It's easy to remove it if you don't need it. Company name Or you could add the company name. If you’re looking for feedback about your product, for example, we recommend addressing the company by their first name. They’re more likely to respond to a personal message sent to a select number of companies, than an impersonal message sent to everyone. Note: If a user is a member of multiple companies, messages using company name will be sent to them once for each company. Add custom attributes and events You can even include custom attributes or events in your message, so that the information displayed is specific to each user. Maybe that’s the number of songs a user has created, the number of teammates they’ve added or the number of times they’ve logged into your product. This is a great way to let customers know that they’re about to reach a limit. Or you can display their achievements and celebrate their progress. For example, displaying the number of projects a user has created might spur them on to achieve even more success. Fallbacks A lot of the time it's a good idea to replace the fallback variable with another word, in case the information is not available for a particular user. For example, if you add "Hey <first name>.." at the beginning of your message and use the fall back as "there", it will say "Hey John" if we know John's name but it will say "Hey there" if we don't. Just click any variable you’ve added, and you’ll be able to include a fallback. You can also add variables to email messages you send to customers.
https://docs.intercom.com/intercom-s-key-features-explained/sending-messages/personalizing-messages-using-variables
2018-03-17T16:34:58
CC-MAIN-2018-13
1521257645248.22
[array(['https://uploads.intercomcdn.com/i/o/9664577/773769ad4e7d3b860869ee75/first%2520name%2520variable.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/9664560/e0351bd8c997705bff805a10/company%2520variable.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/9664596/45cb611b932494e49e0c0eac/custom%2520attributes%2520and%2520events.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/9664616/0b0348511801874babb9a069/fallback%2520.png', None], dtype=object) ]
docs.intercom.com
In all the above functions, you will see some common arguments as given below:). angle is the angle of rotation of ellipse in anti-clockwise direction. startAngle and endAngle denotes the starting and ending of ellipse arc measured in clockwise direction from major axis. i.e. giving values 0 and 360 gives the full ellipse. For more details, check the documentation of cv2.ellipse(). Below example draws a half ellipse at the center of the image. To draw a polygon, first you need coordinates of vertices. Make those points into an array of shape ROWSx1x2 where ROWS are number of vertices and it should be of type int32. Here we draw a small polygon of with four vertices in yellow color. To put texts in images, you need specify following things. We will write OpenCV on our image in white color. So it is time to see the final result of our drawing. As you studied in previous articles, display the image to see it.
https://docs.opencv.org/3.2.0/dc/da5/tutorial_py_drawing_functions.html
2018-03-17T16:14:25
CC-MAIN-2018-13
1521257645248.22
[]
docs.opencv.org
Bathymetry Notebooks and Tools¶ Full Salish Sea Domain Bathymetry¶ - SalishSeaBathy.ipynb: Documents the full domain bathymetry used for the Salish Sea NEMO runs. The notebook includes: - Conversion of the bathymetry data from the 2-Oct-2013 WC3_PREPtarball to a netCDF4 dataset with zlib compression enabled for all variables and least_significant_digit=1 set for the depths (Bathymetry) variable. - Clipping of the depths such that depths between 0 and 4m are set to 4m and depths greater than 428m (the deepest value in the Strait of Georgia) are set to 428m. - Algorithmic smoothing Initial Sub-domain Test Bathymetry¶ - SalishSeaSubdomainBathy.ipynb: Documents the bathymetry used for the initial Salish Sea NEMO runs on a sub-set of the whole region domain. The sub-domain bathymetry was used for the runs known as JPP and WCSD_RUN_tide_M2_OW_ON_file_DAMP_ANALY. The notebook includes 2 approaches to smoothing to bathymetry to get a successful 72 hour NEMO-3.4 run with M2 tidal forcing: - Manual smoothing based on depth adjustments at the locations where test runs failed - Algorithmic smoothing applied to the entire sub-domain - netCDF4bathy.ipynb: Documents the creation of a netCDF4 bathymetry file from the algorithmic smoothed bathymetry with zlib compression enabled for all variables. The resulting file is about 1/6 the size (227 kb in contrast to 1.6 Mb)
http://salishsea-meopar-tools.readthedocs.io/en/latest/bathymetry/index.html
2018-03-17T16:12:21
CC-MAIN-2018-13
1521257645248.22
[]
salishsea-meopar-tools.readthedocs.io
How does a pair-wise alternative exon analysis differ from multiple group comparisons? Answer: Statistics for a pair-wise comparison are calculated by comparing the expression values for two biological groups. For exon array analyses (splicing-index or FIRMA), these means comparison of the gene expression corrected intensities (or residuals) in the control and experimental groups to derive a alternative exon fold change and p-value based on those two groups. For a junction array analysis, this also relies on two groups, control and experimental, but with different algorithm options (ASPIRE and linear regression), involving multiple probesets. When more than two biological sample groups are analyzed for alternative exon expression, AltAnalyze compares the two groups with the most extreme expression values. In the case of an exon array, these are the two groups with the smallest and lowest normalized intensities (mean) to derive a score (splicing-index or FIRMA). To derive a p-value, the variance in all groups are examined by MiDAS and/or by a f-test of the normalized intensities all of group samples.
http://altanalyze.readthedocs.io/en/latest/MultipleComparisons/
2018-03-17T16:15:05
CC-MAIN-2018-13
1521257645248.22
[]
altanalyze.readthedocs.io
Introduction¶ What is ezdxf¶ ezdxf is a Python package which allows developers to read existing DXF drawings or create new DXF drawings. The main objective in the development of ezdxf was to hide complex DXF details from the programmer but still support all the possibilities of the DXF format. Nevertheless, a basic understanding of the DXF format is an advantage (but not necessary), also to understand what is possible with the DXF file format and what is not. ezdxf is still in its infancy, therefore not all DXF features supported yet, but additional features will be added in the future gradually. What ezdxf is NOT¶ - ezdxf is not a DXF converter: ezdxf can not convert between different DXF versions, if you are looking for an appropriate program, use DWG TrueView from Autodesk, but the latest version can only convert to the DWG format, for converting between DXF versions you need at least AutoCAD LT. - ezdxf is not a CAD file format converter: ezdxf can not convert DXF files to ANY other format, like SVG, PDF or DWG - ezdxf is not a DXF renderer (see above) - ezdxf is not a CAD kernel, ezdxf does not provide any functionality for construction work, it is just an interface to the DXF file format. Supported Python Versions¶ ezdxf requires at least Python 2.7 and it’s Python 3 compatible. I run unit tests with the latest stable CPython 3 version and the latest stable release of pypy during development. ezdxf is written in pure Python and requires only pyparser as additional library beside the Python Standard Library, hence it should run with IronPython and Jython also. pytest is required to run the provided unit and integration tests. Data to run the stress and audit test can not be provided, because I don’t have the publishing rights for this DXF files. Supported Operating Systems¶ ezdxf is OS independent and runs on all platforms which provide an appropriate Python interpreter (>=2.7). Embedded DXF Information of 3rd Party Applications¶ The DXF format allows third-party applications to embed application-specific information. ezdxf manages DXF data in a structure-preserving form, but for the price of large memory requirement. Because of this, processing of DXF information of third-party applications is possible and will retained on rewriting. License¶ ezdxf is licensed under the very liberal MIT-License.
http://ezdxf.readthedocs.io/en/latest/introduction.html
2018-03-17T16:28:43
CC-MAIN-2018-13
1521257645248.22
[]
ezdxf.readthedocs.io
Geo Hash Grid Aggregation More info about geo hash grid aggregation is in the official elasticsearch docs A multi-bucket aggregation that works on geo_point fields and groups points into buckets that represent cells in a grid. Simple example { "aggregations" : { "GrainGeoHashGrid" : { "geohash_grid" : { "field" : "location", "precision" : 3 } } } } And now the query via DSL: $geoHashGridAggregation = new GeoHashGridAggregation( 'GrainGeoHashGrid', 'location', 3 ); $search = new Search(); $search->addAggregation($geoHashGridAggregation); $queryArray = $search->toArray();
http://docs.ongr.io/ElasticsearchDSL/Aggregation/GeoHashGrid
2018-03-17T16:30:23
CC-MAIN-2018-13
1521257645248.22
[]
docs.ongr.io
DXF File Encoding¶ DXF Version R2004 and prior¶ Drawing files of DXF versions R2004 (AC1018) and prior are saved as ASCII files with the encoding set by the header variable $DWGCODEPAGE, which is ANSI_1252 by default if $DWGCODEPAGE is not set. Characters used in the drawing which do not exist in the chosen ASCII encoding are encoded as unicode characters with the schema \U+nnnn. see Unicode table DXF Version R2007 and later¶ Starting with DXF version R2007 (AC1021) the drawing file is encoded by UTF-8, the header variable $DWGCODEPAGE is still in use, but I don’t know, if the setting still has any meaning. Encoding characters in the unicode schema \U+nnnn is still functional. See also
http://ezdxf.readthedocs.io/en/latest/dxfinternals/fileencoding.html
2018-03-17T16:27:35
CC-MAIN-2018-13
1521257645248.22
[]
ezdxf.readthedocs.io
Changes related to "Glossary" ← Glossary This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. 21 August 2015 15:42(Page translation log) MATsxm (Talk | contribs) marked Package for translation m 15:36Package (diff; hist; 0) Haydenyoung
https://docs.joomla.org/Special:RecentChangesLinked/Glossary
2015-08-27T22:20:45
CC-MAIN-2015-35
1440644059993.5
[array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/showuserlinks.png', 'Show user links Show user links'], dtype=object) ]
docs.joomla.org
Difference between revisions of "Users User Notes" From Joomla! Documentation Revision as of 09:36, 4 April 2013 Adds a specific note or comment about a user. How to Access From the administrator area, select Users → User Notes from the drop-down menu of the Administration screen, or click on the User Manager icon in the Control Panel and select the User Notes tab. user notes. - Edit. Opens the editing screen for the selected user notes. If more than one user notes is selected (where applicable), only the first user notes will be opened. The editing screen can also be opened by clicking on the Title or Name of the user notes. - Publish. Makes the selected user notes available to visitors to your website. - Unpublish. Makes the selected user notes unavailable to visitors can be edited. <translate> - Help. Opens this help screen.</translate>.
https://docs.joomla.org/index.php?title=Help32:Users_User_Notes&diff=next&oldid=79631
2015-08-27T22:55:11
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Location of template language definition files From Joomla! Documentation This page is tagged because it NEEDS VERSION UPDATING. You can help the Joomla! Documentation Wiki by contributing to it. More pages that need help similar to this one are here. NOTE-If you feel the need is satistified, please remove this notice. Language definition files for front-end templates are stored in [path-to-Joomla]/language/[ln-LN] where [ln-LN] is the language code. Language codes are defined in RFC3066[1] The file must be named [ln-LN].tpl_[template-name].ini where [template-name] is the name of the template (in lowercase). For example, the British English language file for the Beez template is [path-to-Joomla]/language/en-GB/en-GB.tpl_beez.ini You should also create a separate language file for translating the Administrator back-end of your template. This will be stored in [path-to-Joomla]/administrator/language/[ln-LN] but the file naming convention is the same. For administrator templates, as distinct from front-end templates, the second of these files is the only one required. For example, the British English language file for the Khepri administrator template is located in [path-to-Joomla]/administrator/language/en-GB/en-GB.tpl_khepri.ini - ↑ RFC3066: Tags for the Identification of Languages
https://docs.joomla.org/J2.5:Location_of_template_language_definition_files
2015-08-27T21:31:13
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Changes related to "How to install Joomla 1.6" ← How to install Joomla 1.6 This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130127192500&target=How_to_install_Joomla_1.6
2015-08-27T22:58:22
CC-MAIN-2015-35
1440644059993.5
[]
docs.joomla.org
Building and installing NumPy¶ Binary installers¶ In most use cases the best way to install NumPy on your system is by using an installable binary package for your operating system. Windows¶’. Linux¶ Most of the major distributions provide packages for NumPy, but these can lag behind the most recent NumPy release. Pre-built binary packages for Ubuntu are available on the scipy ppa. Redhat binaries are available in the EPD. Mac OS X¶ A universal binary installer for NumPy is available from the download site. The EPD provides NumPy binaries. Building from source¶ A general overview of building NumPy from source is given here, with detailed instructions for specific platforms given seperately. Prerequisites¶. FORTRAN ABI mismatch¶ The two most popular open source fortran compilers are g77 and gfortran. Unfortunately, they are not ABI compatible, which means that concretely you should avoid mixing libraries built with one with another. In particular, if your blas/lapack/atlas is built with g77, you must use g77 when building numpy and scipy; on the contrary, if your atlas is built with gfortran, you must build numpy/scipy with gfortran. One relatively simple and reliable way to check for the compiler used to build a library is to use ldd on the library. If libg2c.so is a dependency, this means that g77 has been used. If libgfortran.so is a a dependency, gfortran has been used. If both are dependencies, this means both have been used, which is almost always a very bad idea.. Ubuntu 8.04 and lower¶
http://docs.scipy.org/doc/numpy-1.8.0/user/install.html
2015-08-27T21:29:18
CC-MAIN-2015-35
1440644059993.5
[]
docs.scipy.org
public interface ThrowsAdvice extends AfterAdvice. Note: If a throws-advice method throws an exception itself, it will override the original exception (i.e. change the exception thrown to the user). The overriding exception will typically be a RuntimeException; this is compatible with any method signature. However, if a throws-advice method throws a checked exception, it will have to match the declared exceptions of the target method and is hence to some degree coupled to specific target method signatures. Do not throw an undeclared checked exception that is incompatible with the target method's signature! AfterReturningAdvice, MethodBeforeAdvice
http://docs.spring.io/spring/docs/3.2.0.BUILD-SNAPSHOT/api/org/springframework/aop/ThrowsAdvice.html
2015-08-27T22:00:36
CC-MAIN-2015-35
1440644059993.5
[]
docs.spring.io
Access the Layout Theme Options via Appearance > Theme Options > Layout 1200px. Site Margin Set a Margin for main site area on Desktops.. Body Width Set the main Body width ( does not include the Header or Footer ). Choose between Site Width which is the width set within the Maximum Site Width option or Browser Width which is the full width of the browser. Default Page Layout This option sets the default Page Layout for each page created. You can override this setting within the Options of each Post or Page. Maximum Page Width The the Default Page Layout is set to full width ( Layout One ), you can set the maximum content width. Sticky Sidebar When the Default Page Layout is set to an option with a Sidebar, you can use this option to make the Sidebar Sticky. i.e. The Sidebar follows as the user scrolls down the page. Number of Widget Areas If you wish to create more Sidebars or Widget Areas, enter the number you wish to create here. Breadcrumbs This Breadcrumbs option is a global option that controls the display of Breadcrumbs acrsoss the site. You can override this setting within the Options of each Post or Page. Widget Title HTML Tag This option allows you to set the HTML Tag for Widgets. This is useful for SEO purposes. Search Placeholder Use this option to change the text that appears within the Search Field. Advanced Layout Sidebar Width Use this option in conjunction with a Page / Post layout with a single Sidebar to set the Sidebar Width. Enter any valid CSS unit, e.g. 25% Dual Sidebar Width Use this option in conjunction with a Page / Post layout with a Dual Sidebar to set the Sidebar Widths. Enter any valid CSS unit, e.g. 25% Content Padding Use this option to add padding to the Page content area. This is useful to create extra space between the Page content and the Header, Footer & Sidebar areas. Sidebar Padding Use this option to add padding to the Sidebar area.
http://docs.acoda.com/dynamix/tag/page-width/
2018-04-19T13:22:55
CC-MAIN-2018-17
1524125936969.10
[array(['http://docs.acoda.com/you/wp-content/uploads/2016/03/media_1455954795996.png', 'media_1455954795996.png'], dtype=object) ]
docs.acoda.com
Before starting your first creation, don’t forget to think about it first, even if only for a couple of minutes. A good preparation and understanding of your concept will help you later in the sculpting process. Sometimes, a simple 2D sketch can be very helpful and perhaps save time later. For this purpose, ZBrush offers two plugins: Quick Sketch which as its name says is a quick solution to sketch out your ideas. It uses a few brushes located in the Brush palette, starting with the “Pen” name. Just click on the Quick Sketch button located on the top left of the ZBrush interface and start drawing. You will notice from the first stroke that symmetry is enabled. To disable symmetry just press on the X key or go to the Transform palette and disable the Symmetry mode. When you are done with your drawing, you can save it as a Tool or a Project. From there you can load a new project to start your sculpt in 3D or reset ZBrush by going to the Preferences palette and clicking the Init ZBrush button. PaintStop is a plugin that will temporarily replace the ZBrush default interface and transform it into a full painting software. As mentioned in the introduction of this Starting Guide, ZBrush is also a 2D program, capable of being used to paint beautiful illustrations! PaintStop is designed to mimic real-world media. Draw with different pencils, continue with oil painting, crayon, pastels, watercolors and more to perhaps create more than just simple 2D concepts! To launch PaintStop, go to the Document palette and click on the PaintStop button. You can find the PaintStop documentation in the Documentation folder in your ZBrush directory.
http://docs.pixologic.com/getting-started/basic-concepts/create-concepts-in-2d/
2018-04-19T13:36:03
CC-MAIN-2018-17
1524125936969.10
[]
docs.pixologic.com
. --if ( event.phase == "ended" ) then print ( "Upload complete!" ) end end network.upload( "", "POST", networkListener, "object.json", system.DocumentsDirectory, "application/json" )
http://docs.coronalabs.com.s3-website-us-east-1.amazonaws.com/api/library/network/upload.html
2018-04-19T13:19:08
CC-MAIN-2018-17
1524125936969.10
[]
docs.coronalabs.com.s3-website-us-east-1.amazonaws.com
DescribeKeyPairs Describes one or more of your key pairs. For more information about key pairs, see Key Pairs. fingerprint- The fingerprint of the key pair. key-name- The name of the key pair. Type: Array of Filter objects Required: No - KeyName.N One or more key pair names. Default: Describes all your key pairs. Type: Array of strings Required: No Response Elements The following elements are returned by the service. - keySet Information about one or more key pairs. Type: Array of KeyPairInfo objects - requestId The ID of the request. Type: String Errors For information about the errors that are common to all actions, see Common Client Errors.:
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeKeyPairs.html
2018-04-19T14:01:13
CC-MAIN-2018-17
1524125936969.10
[]
docs.aws.amazon.com
Relog Applies To: Windows Vista, Windows Server 2008, Windows Server 2012, Windows 8 Extracts performance counters from performance counter logs into other formats, such as text-TSV (for tab-delimited text), text-CSV (for comma-delimited text), binary-BIN, or SQL. For examples of how this command can be used, see Examples.. Counter files: - Counter files are text files that list one or more of the performance counters in the existing log. Copy the full counter name from the log or the /q output in [\\<Computer>\<Object> [<Instance>] \<Counter>] format. List one counter path on each line. Copying counters: - When executed, relog copies specified counters from every record in the input file, converting the format if necessary. Wildcard paths are allowed in the counter file. Saving input file subsets: - Use the /t parameter to specify that input files are inserted into output files at intervals of every <n. Examples To resample existing trace logs at fixed intervals of 30, list counter paths, output files and formats: relog c:\perflogs\daily_trace_log.blg /cf counter_file.txt /o c:\perflogs\reduced_log.csv /t 30 /f csv To resample existing trace logs at fixed intervals of 30, list counter paths and output file: Relog c:\perflogs\daily_trace_log.blg /cf counter_file.txt /o c:\perflogs\reduced_log.blg /t 30
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc771669(v=ws.11)
2018-04-19T14:44:59
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
The matrices in this chapter outline what is supported for HDP 2.6.4. for IBM Power. The following operating systems are supported for HDP 2.6.4 for IBM Power: The Ambari Install Wizard runs as a browser-based Web application. You must have a machine capable of running a graphical browser to use this tool. The minimum required browser versions are: On any platform, we recommend updating your browser to the latest, stable version. On each of your hosts: yumand rpm(RHEL 7) curl, jsch, scp, tar, unzip, and wget OpenSSL (v1.01, build 16 or later) Python 2.7 The following Java Development Kits (JDKs) are supported for HDP 2.6.4 for IBM Power: You must install the following OpenJDK 1.8 for PPC packages as a prerequisite on all machines in the cluster: java-1.8.0-openjdk java-1.8.0-openjdk-devel java-1.8.0-openjdk-headless Set your JAVA_HOME: export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk Ambari requires a relational database to store information about the cluster configuration and topology. If you install HDP Stack with Hive or Oozie, they also require a relational database. The following table outlines these database requirements: * Ambari 2.5 installs MySQL 5.6 with the Inno DB engine. ** To use a MySQL as the Ambari database, you must set up the mysql connector, create auser and grant user permissions. More Information Using Existing Databases - Ambari Using Existing Databases - Hive Using Existing Databases - Oozie
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_support-matrices/content/ch_matrices-ibm.html
2018-04-19T14:18:55
CC-MAIN-2018-17
1524125936969.10
[]
docs.hortonworks.com
Using Service Map solution in Azure. This article describes the details of using Service Map. For information about configuring Service Map and onboarding agents, see Configuring Service Map solution in Azure. Use cases: Make your IT processes dependency aware Discovery Service Map outage, and network issues. Incident management Service Map helps eliminate the guesswork of problem isolation by showing you how systems are connected and affecting each other. In addition to identifying failed connections, it helps identify misconfigured load balancers, surprising or excessive load on critical services, and rogue clients, such as developer machines talking to production systems. By using integrated workflows with Change Tracking, you can also see whether a change event on a back-end machine or service explains the root cause of an incident. Migration assurance By using Service Map, you can effectively plan, accelerate, and validate Azure migrations, which helps ensure that nothing is left behind and surprise outages do not occur. You can discover all interdependent systems that need to migrate together, assess system configuration and capacity, and identify whether a running system is still serving users or is a candidate for decommissioning instead of migration. After the move is complete, you can check on client load and identity to verify that test systems and customers are connecting. If your subnet planning and firewall definitions have issues, failed connections in Service Map maps point you to the systems that need connectivity. Business continuity If you are using Azure Site Recovery and need help defining the recovery sequence for your application environment, Service Map can automatically show you how systems rely on each other to ensure that your recovery plan is reliable. By choosing a critical server or group and viewing its clients, you can identify which front-end systems to recover after the server is restored and available. Conversely, by looking at critical servers’ back-end dependencies, you can identify which systems to recover before your focus systems are restored. Patch management Service Map enhances your use of the System Update Assessment by showing you which other teams and servers depend on your service, so you can notify them in advance before you take down your systems for patching. Service Map also enhances patch management by showing you whether your services are available and properly connected after they are patched and restarted. Mapping overview Service Map agents gather information about all TCP-connected processes on the server where they’re installed and details about the inbound and outbound connections for each process. In the list in the left pane, you can select machines or groups that have Service Map agents to visualize their dependencies over a specified time range. Machine dependency maps focus on a specific machine, and they show all the machines that are direct TCP clients or servers of that machine. Machine Group maps show sets of servers and their dependencies. Machines can be expanded in the map to show the running process groups and processes with active network connections during the selected time range. When a remote machine with a Service Map agent is expanded to show process details, only those processes that communicate with the focus machine are shown. The count of agentless front-end machines that connect into the focus machine is indicated on the left side of the processes they connect to. If the focus machine is making a connection to a back-end machine that has no agent, the back-end server is included in a Server Port Group, along with other connections to the same port number. By default, Service Map maps show the last 30 minutes of dependency information. By using the time controls at the upper left, you can query maps for historical time ranges of up to one hour to show how dependencies looked in the past (for example, during an incident or before a change occurred). Service Map data is stored for 30 days in paid workspaces, and for 7 days in free workspaces. Status badges and border coloring At the bottom of each server in the map can be a list of status badges conveying status information about the server. The badges indicate that there is some relevant information for the server from one of the solution integrations. Clicking a badge takes you directly to the details of the status in the right pane. The currently available status badges include Alerts, Service Desk, Changes, Security, and Updates. Depending on the severity of the status badges, machine node borders can be colored red (critical), yellow (warning), or blue (informational). The color represents the most severe status of any of the status badges. A gray border indicates a node that has no status indicators. Process Groups Process Groups combine processes that are associated with a common product or service into a process group. When a machine node is expanded it will display standalone processes along with process groups. If any inbound and outbound connections to a process within a process group has failed then the connection is shown as failed for the entire process group. Machine Groups Machine Groups allow you to see maps centered around a set of servers, not just one so you can see all the members of a multi-tier application or server cluster in one map. Users select which servers belong in a group together and choose a name for the group. You can then choose to view the group with all of its processes and connections, or view it with only the processes and connections that directly relate to the other members of the group. Creating a Machine Group To create a group, select the machine or machines you want in the Machines list and click Add to group. There, you can choose Create new and give the group a name. Note Machine groups are currently limited to 10 servers, but we plan to increase this limit soon. Viewing a Group Once you’ve created some groups, you can view them by choosing the Groups tab. Then select the Group name to view the map for that Machine Group. The machines that belong to the group are outlined in white in the map. Expanding the Group will list the machines that make up the Machine Group. Filter by processes You can toggle the map view between showing all processes and connections in the Group and only the ones that directly relate to the Machine Group. The default view is to show all processes. You can change the view by clicking the filter icon above the map. When All processes is selected, the map will include all processes and connections on each of the machines in the Group. If you change the view to show only group-connected processes, the map will be narrowed down to only those processes and connections that are directly connected to other machines in the group, creating a simplified view. Adding machines to a group To add machines to an existing group, check the boxes next to the machines you want and then click Add to group. Then, choose the group you want to add the machines to. Removing machines from a group In the Groups List, expand the group name to list the machines in the Machine Group. Then, click on the ellipsis menu next to the machine you want to remove and choose Remove. Removing or renaming a group Click on the ellipsis menu next to the group name in the Group List. Role icons Certain processes serve particular roles on machines: web servers, application servers, database, and so on. Service Map annotates process and machine boxes with role icons to help identify at a glance the role a process or server plays. Failed connections Failed connections are shown in Service Map maps for processes and computers, with a dashed red line indicating that a client system is failing to reach a process or port. Failed connections are reported from any system with a deployed Service Map agent if that system is the one attempting the failed connection. Service Map measures this process by observing TCP sockets that fail to establish a connection. This failure could result from a firewall, a misconfiguration in the client or server, or a remote service being unavailable. Understanding failed connections can help with troubleshooting, migration validation, security analysis, and overall architectural understanding. Failed connections are sometimes harmless, but they often point directly to a problem, such as a failover environment suddenly becoming unreachable, or two application tiers being unable to talk after a cloud migration. Client Groups Client Groups are boxes on the map that represent client machines that do not have Dependency Agents. A single Client Group represents the clients for an individual process or machine. To see the IP addresses of the servers in a Client Group, select the group. The contents of the group are listed in the Client Group Properties pane. Server Port Groups Server Port Groups are boxes that represent server ports on servers that do not have Dependency Agents. The box contains the server port and a count of the number of servers with connections to that port. Expand the box to see the individual servers and connections. If there is only one server in the box, the name or IP address is listed. Context menu Clicking the ellipsis (...) at the top right of any server displays the context menu for that server. Load server map Clicking Load Server Map takes you to a new map with the selected server as the new focus machine. Show self-links Clicking Show Self-Links redraws the server node, including any self-links, which are TCP connections that start and end on processes within the server. If self-links are shown, the menu command changes to Hide Self-Links, so that you can turn them off. Computer summary The Machine Summary pane includes an overview of a server's operating system, dependency counts, and data from other solutions. Such data includes performance metrics, service desk tickets, change tracking, security, and updates. Computer and process properties When you navigate a Service Map map, you can select machines and processes to gain additional context about their properties. Machines provide information about DNS name, IPv4 addresses, CPU and memory capacity, VM type, operating system and version, last reboot time, and the IDs of their OMS and Service Map agents. You can gather process details from operating-system metadata about running processes, including process name, process description, user name and domain (on Windows), company name, product name, product version, working directory, command line, and process start time. The Process Summary pane provides additional information about the process’s connectivity, including its bound ports, inbound and outbound connections, and failed connections. Alerts integration Service Map integrates with Alerts in Log Analytics to show fired alerts for the selected server in the selected time range. The server displays an icon if there are current alerts, and the Machine Alerts pane lists the alerts. To enable Service Map to display relevant alerts, create an alert rule that fires for a specific computer. To create proper alerts: - Include a clause to group by computer (for example, by Computer interval 1minute). - Choose to alert based on metric measurement. Log events integration Service Map integrates with Log Search to show a count of all available log events for the selected server during the selected time range. You can click any row in the list of event counts to jump to Log Search and see the individual log events. Service Desk integration Service Map integration with the IT Service Management Connector is automatic when both solutions are enabled and configured in your Log Analytics workspace. The integration in Service Map is labeled "Service Desk." For more information, see Centrally manage ITSM work items using IT Service Management Connector. The Machine Service Desk pane lists all IT Service Management events for the selected server in the selected time range. The server displays an icon if there are current items and the Machine Service Desk pane lists them. To open the item in your connected ITSM solution, click View Work Item. To view the details of the item in Log Search, click Show in Log Search. Change Tracking integration Service Map integration with Change Tracking is automatic when both solutions are enabled and configured in your Log Analytics workspace. The Machine Change Tracking pane lists all changes, with the most recent first, along with a link to drill down to Log Search for additional details. The following image is a detailed view of a ConfigurationChange event that you might see after you select Show in Log Analytics. Performance integration The Machine Performance pane displays standard performance metrics for the selected server. The metrics include CPU utilization, memory utilization, network bytes sent and received, and a list of the top processes by network bytes sent and received. To see performance data, you may need to enable the appropriate Log Analytics performance counters. The counters you will want to enable: Windows: - Processor(*)\% Processor Time - Memory\% Committed Bytes In Use - Network Adapter(*)\Bytes Sent/sec - Network Adapter(*)\Bytes Received/sec Linux: - Processor(*)\% Processor Time - Memory(*)\% Used Memory - Network Adapter(*)\Bytes Sent/sec - Network Adapter(*)\Bytes Received/sec To get the network performance data, you must also have enabled the Wire Data 2.0 solution in your workspace. Security integration Service Map integration with Security and Audit is automatic when both solutions are enabled and configured in your Log Analytics workspace. The Machine Security pane shows data from the Security and Audit solution for the selected server. The pane lists a summary of any outstanding security issues for the server during the selected time range. Clicking any of the security issues drills down into a Log Search for details about them. Updates integration Service Map integration with Update Management is automatic when both solutions are enabled and configured in your Log Anlaytics workspace. The Machine Updates pane displays data from the Update Management solution for the selected server. The pane lists a summary of any missing updates for the server during the selected time range. Log Analytics records Service Map computer and process inventory data is available for search in Log Analytics. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is on-boarded to Service Map. These records have the properties in the following tables. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource. Note As Service Map features grow, these fields are subject to change. There are internally generated properties you can use to identify unique processes and computers: - Computer: Use ResourceId or ResourceName_s to uniquely identify a computer within a Log Analytics workspace. - Process: Use ResourceId to uniquely identify a process within a Log Analytics workspace. ResourceName_s is unique within the context of the machine on which the process is running (MachineResourceName_s) Because multiple records can exist for a specified process and computer in a specified time range, queries can return more than one record for the same computer or process. To include only the most recent record, add "| dedup ResourceId" to the query. ServiceMapComputer_CL records Records with a type of ServiceMapComputer_CL have inventory data for servers with Service Map agents. These records have the properties in the following table: ServiceMapProcess_CL Type records Records with a type of ServiceMapProcess_CL have inventory data for TCP-connected processes on servers with Service Map agents. These records have the properties in the following table: Sample log searches List all known machines ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId List the physical memory capacity of all managed computers. ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId | project PhysicalMemory_d, ComputerName_s List computer name, DNS, IP, and OS. ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId | project ComputerName_s, OperatingSystemFullName_s, DnsNames_s, Ipv4Addresses_s Find all processes with "sql" in the command line ServiceMapProcess_CL | where CommandLine_s contains_cs "sql" | summarize arg_max(TimeGenerated, *) by ResourceId Find a machine (most recent record) by resource name search in (ServiceMapComputer_CL) "m-4b9c93f9-bc37-46df-b43c-899ba829e07b" | summarize arg_max(TimeGenerated, *) by ResourceId Find a machine (most recent record) by IP address search in (ServiceMapComputer_CL) "10.229.243.232" | summarize arg_max(TimeGenerated, *) by ResourceId List all known processes on a specified machine ServiceMapProcess_CL | where MachineResourceName_s == "m-559dbcd8-3130-454d-8d1d-f624e57961bc" | summarize arg_max(TimeGenerated, *) by ResourceId List all computers running SQL ServiceMapComputer_CL | where ResourceName_s in ((search in (ServiceMapProcess_CL) "*sql*" | distinct MachineResourceName_s)) | distinct ComputerName_s List all unique product versions of curl in my datacenter ServiceMapProcess_CL | where ExecutableName_s == "curl" | distinct ProductVersion_s Create a computer group of all computers running CentOS ServiceMapComputer_CL | where OperatingSystemFullName_s contains_cs "CentOS" | distinct ComputerName_s REST API All the server, process, and dependency data in Service Map is available via the Service Map REST API. Diagnostic and usage data Microsoft automatically collects usage and performance data through your use of the Service Map service. Microsoft uses this data to provide and improve the quality, security, and integrity of the Service Map service. To provide accurate and efficient troubleshooting capabilities, the data includes information about the configuration of your software, such as operating system and version, IP address, DNS name, and workstation name. Microsoft does not collect names, addresses, or other contact information. For more information about data collection and usage, see the Microsoft Online Services Privacy Statement. Next steps Learn more about log searches in Log Analytics to retrieve data that's collected by Service Map. Troubleshooting See the Troubleshooting section of the Configuring Service Map document. Feedback Do you have any feedback for us about Service Map or this documentation? Visit our User Voice page, where you can suggest features or vote up existing suggestions.
https://docs.microsoft.com/en-us/azure/operations-management-suite/operations-management-suite-service-map
2018-04-19T13:57:35
CC-MAIN-2018-17
1524125936969.10
[array(['media/oms-service-map/service-map-overview.png', 'Service Map overview'], dtype=object) array(['media/oms-service-map/status-badges.png', 'Status badges'], dtype=object) array(['media/oms-service-map/machine-group.png', 'Machine Group'], dtype=object) array(['media/oms-service-map/machine-groups-create.png', 'Create Group'], dtype=object) array(['media/oms-service-map/machine-groups-name.png', 'Name Group'], dtype=object) array(['media/oms-service-map/machine-groups-tab.png', 'Groups tab'], dtype=object) array(['media/oms-service-map/machine-groups-machines.png', 'Machine Group machines'], dtype=object) array(['media/oms-service-map/machine-groups-filter.png', 'Filter Group'], dtype=object) array(['media/oms-service-map/machine-groups-all.png', 'Machine Group all processes'], dtype=object) array(['media/oms-service-map/machine-groups-filtered.png', 'Machine Group filtered processes'], dtype=object) array(['media/oms-service-map/machine-groups-remove.png', 'Remove machine from group'], dtype=object) array(['media/oms-service-map/machine-groups-menu.png', 'Machine group menu'], dtype=object) array(['media/oms-service-map/role-icons.png', 'Role icons'], dtype=object) array(['media/oms-service-map/failed-connections.png', 'Failed connections'], dtype=object) array(['media/oms-service-map/client-groups.png', 'Client Groups'], dtype=object) array(['media/oms-service-map/client-group-properties.png', 'Client Group properties'], dtype=object) array(['media/oms-service-map/server-port-groups.png', 'Server Port Groups'], dtype=object) array(['media/oms-service-map/context-menu.png', 'Failed connections'], dtype=object) array(['media/oms-service-map/machine-summary.png', 'Machine Summary pane'], dtype=object) array(['media/oms-service-map/machine-properties.png', 'Machine Properties pane'], dtype=object) array(['media/oms-service-map/process-properties.png', 'Process Properties pane'], dtype=object) array(['media/oms-service-map/process-summary.png', 'Process Summary pane'], dtype=object) array(['media/oms-service-map/machine-alerts.png', 'Machine Alerts pane'], dtype=object) array(['media/oms-service-map/alert-configuration.png', 'Alert configuration'], dtype=object) array(['media/oms-service-map/log-events.png', 'Machine Log Events pane'], dtype=object) array(['media/oms-service-map/service-desk.png', 'Machine Service Desk pane'], dtype=object) array(['media/oms-service-map/change-tracking.png', 'Machine Change Tracking pane'], dtype=object) array(['media/oms-service-map/configuration-change-event.png', 'ConfigurationChange event'], dtype=object) array(['media/oms-service-map/machine-performance.png', 'Machine Performance pane'], dtype=object) array(['media/oms-service-map/machine-security.png', 'Machine Security pane'], dtype=object) array(['media/oms-service-map/machine-updates.png', 'Machine Change Tracking pane'], dtype=object)]
docs.microsoft.com
Remeshing SubTools To remesh one or more SubTools, go to the Tool > SubTool menu and make visible all SubTools which need to be remeshed. Invisible/hidden SubTools won’t be used for this operation. In the Remesh All section, change the options according to your needs and press the Remesh All button to generate a new SubTool. This new SubTool will be appended to your existing model. Combining different SubTools with operators To create a large amount of variation for your remeshing of models and create the base mesh that you need, you will by default combine all SubTools to create the new mesh. You can also subtract the Subtools of your choice, or request the computing of an intersection. ZBrush includes these three Boolean operators for use when generating a remesh. The generated model will be computed from the top SubTool to the bottom one, as listed in the SubTool menu. To activate or change an operator, click on one of the three operators icons in the SubTool selector: Add (default), Subtract or Intersection. These operators can be mixed with the symmetry option of the activated SubTool before launching the Remeshing function.
http://docs.pixologic.com/user-guide/3d-modeling/modeling-basics/creating-meshes/remeshing/remesh-subtools/
2018-04-19T13:16:23
CC-MAIN-2018-17
1524125936969.10
[]
docs.pixologic.com
In virtual inline mode, unlike WCCP mode, the appliance provides no virtual inline-specific monitoring. To troubleshoot a virtual inline deployment, log into the appliance and use the Dashboard page to verify that traffic is flowing into and out of the appliance. Traffic forwarding failures are typically caused by errors in router configuration. If the Monitoring: Usage or Monitoring: Connections pages show that traffic is being forwarded but no acceleration is taking place (assuming that an appliance is already installed on the other end of the WAN link), check to make sure that both incoming WAN traffic and outgoing WAN traffic are being forwarded to the appliance. If only one direction is forwarded, acceleration cannot take place. To test health-checking, power down the appliance. The router should stop forwarding traffic after the health-checking algorithm times out.
https://docs.citrix.com/en-us/netscaler-sd-wan-hardware-platforms/enterprise-edition/1000-2000-enterprise-edition-appliance/cb-deployment-modes-con/br-adv-virt-inline-mode-con/br-adv-monit-trouble-con.html
2018-04-19T13:44:52
CC-MAIN-2018-17
1524125936969.10
[]
docs.citrix.com
. Note In Protocol Guidelines for tile and icon assets. It is recommended that apps create a new XAML Frame for each activation event that opens a new page. This way, the navigation backstack for the new XAML Frame will not contain any previous content that the app might have on the current window when suspended. Apps that decide to use a single XAML Frame for Launch and File Contracts should clear the pages on the Frame. Related topics Complete example Concepts Tasks Guidelines Reference
https://docs.microsoft.com/en-us/windows/uwp/launch-resume/handle-uri-activation
2018-04-19T13:59:11
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
the Mesh topology that is used (see SetIndices). The most common case of this is Meshes being composed of triangle lists, in which case there are three indices per triangle. (whenVertexBufferPtr, SetIndices. Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/ScriptReference/Mesh.GetNativeIndexBufferPtr.html
2018-04-19T13:24:41
CC-MAIN-2018-17
1524125936969.10
[]
docs.unity3d.com
Revision history of "JDatabaseQuery:: toString/11.1" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 16:16, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JDatabaseQuery:: toString/11.1 to API17:JDatabaseQuery:: toString without leaving a redirect (Robot: Moved page)
https://docs.joomla.org/index.php?title=JDatabaseQuery::_toString/11.1&action=history
2015-08-28T03:02:10
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
Document Bank Main Page > The working materials in the NRDC Document Bank are listed in reverse chronological order. For additional policy materials including reports and issue papers, see the Issues section of the main NRDC site. “Under the Rug: How Governments and International Institutions Are Hiding Billions in Support for the Coal Industry,” by NRDC, Oil Change International and World Wildlife Fund.. UNEP Intergovernmental Negotiating Committee to Prepare a Global Legally Binding Instrument on Mercury Documents relating to work to prepare for the entry into force of the Minamata Convention on Mercury and for the first meeting of the Conference of the Parties. Results of the sixth session of the Intergovernmental Negotiating Committee to prepare a global legally binding instrument on mercury, Bangkok, 3-7 November 2014. - int_14120401a.pdf Mercury Trade Consent Forms Finalized - int_14120401b.pdf Article 6 Exemption Registration Formats Finalized - int_14120401c.pdf Key Decisions Related to Financial Assistance NRDC Letter to the National Energy Board on IORVL/Chevron advance ruling decision NRDC letter to the National Energy Board regarding a recent decision by the NEB to grant IORVL and Chevron an advance ruling on each developer’s proposed SSRW “equivalency” strategy (July 17, 2014)., September 2013.", April 2013. Continuous Emission Monitoring Systems for Mercury This fact sheet from the Zero Mercury Working Group explains the technology and costs associated with continuous emission monitoring systems for mercury (Hg CEMS). Bangladesh: Best Practices for Textile Mills The world textile industry is a large part of our daily lives, from the clothes we wear to the napkins at eateries. Dyeing and finishing that textile can be an environmentally taxing process: One ton of fabric can lead to the pollution of up to 300 metric tons1 of water with a suite of harmful chemicals, and consume vast amounts of energy for steam and hot water. Now that the industry is centered in countries with still-developing environmental regulatory systems, such as China and Bangladesh, textile dyeing and finishing has a huge environmental footprint. To address this issue, NRDC and a group of apparel retailer and brand partners started the Responsible Sourcing Initiative (RSI) to curb pollution in the sector while saving the industry money. This report summarizes the practices that we found to be the top money-saving, environment-protecting opportunities in Bangladesh, which largely overlap with practices we previously identified in China.. ¿Se necesitan represas en la Patagonia? Un análisis del futuro energético chileno El desarrollo de mega-represas en la Patagonia puede tener altos impactos ambientales, económicos y culturales. Este estudio demuestra con datos precisos que es posible reemplazar el eventual aporte a la matriz energética de proyecto hidroeléctricos como Hidroaysén empleando energías renovables no convencionales y un uso eficiente de la energía. Aporte potencial de Energías Renovables No Convencionales y Eficiencia Energética a la Matriz Eléctrica, 2008 - 2025 Este estudio hace una estimación del potencial técnico y económicamente factible de energías renovables que se podrían incorpora al sistema interconectado central (SIC) hacia el año 2025 bajo tres escenarios posible. El estudio identifica las principales barreras que obstaculizan el aprovechamiento de estas fuentes de energía, y propone recomendaciones de política pública para la plena incorporación de las ERNC en la matriz eléctrica. Breaking Down the Myths: The True Cost of Energy and the Future of Renewable Energy in Chile A white paper summarizing NRDC’s findings about demand growth and the true cost of energy in Chile. Letter from The US Marine Mammal Commission supporting US to list polar bears as an "Appendix I" species under CITES Letter from the United States Marine Mammal Commission supporting a United States to propose listing polar bears as an “Appendix I” species under the Convention on International Trade in Endangered Species. Letter From United States Representatives urging the U.S. to list polar bears as an “Appendix I” species under CITES Letter on June 11, 2012 from United States Representatives urging the United States to propose listing polar bears as an “Appendix I” species under the Convention on International Trade in Endangered Species. Letter From United States Senators urging the U.S. to list polar bears as an “Appendix I” species under CITES Letter on June 13, 2012 from United States Senators urging the United States to propose listing polar bears as an “Appendix I” species under the Convention on International Trade in Endangered Species. Letter in Support of Banning the International Trade in Polar Bear Parts NRDC and other conservation organizations sent a letter to the U.S. Fish and Wildlife Service urging it to propose a ban on the international trade in polar bear parts at the next meeting of the Convention on International Trade in Endangered Species (CITES). The letter argues that we must do everything we can to strengthen polar bear populations, like stopping the killing of polar bears for the international market, to give them a greater chance to survive their primary threat – climate change. Simply put, the world no longer has any polar bears to spare, and certainly not to end up as a rug in front of someone’s fireplace. Building. Letter from 22 Environmental CEOs to President Obama Urging U.S. Leadership at the June 2012 Rio+20 Earth Summit The chief executive officers of 22 American environmental organizations wrote to President Obama to encourage him to lead the U.S. Delegation to the Rio+20 Earth Summit in Rio de Janeiro, Brazil June 20-22, 2012. They stated that the President’ participation was essential to demonstrating our nation’s concern about global challenges and our determination to be a leader in the transition to a low-carbon green economy. The letter also called upon the President to take specific actions at home and abroad, including reducing fossil fuel subsidies, creating an international network for monitoring ocean acidification, and securing a major new commitment of World Bank funding for off-grid clean energy. The Sustainable Cities Working Group’s Submission to the Secretariat of the UN for the “Rio +20 Earth Summit” The Rio+20 Earth Summit Sustainable Cities Working Group is pleased to provide this input “for inclusion in a compilation document to serve as basis for the preparation of zero draft of the outcome document” for the June 2012 UN Conference “Rio +20 Earth Summit”. The Working Group consists of a diverse group of leading civil society organizations and experts. The Working Group believes that it is essential for Rio+20 to give high priority to the challenges and opportunities presented by urban development worldwide. NRDC's Submission to the Secretariat of the UN for the "Rio+20 Earth Summit" NRDC, in consultative status with UN Economic and Social Council, is pleased to submit our views as a contribution for “inclusion in a compilation document to serve as basis for the preparation of zero draft of the outcome document” for the June 2012 UN Conference “Rio+20 Earth Summit”. Here, we set our vision for a different kind of summit, provide a list of potential deliverables from Rio+20, and describe NRDC’s international activities and experience with international sustainability summitry. El costo nivelado de energía y el futuro de la energía renovable no convencional en Chile Una presentación dada por el NRDC a los Comités Congresionales de Chile, sobre el amplio espectro de fuentes locales de energía renovable existentes en Chile, su viabilidad económica actual y las oportunidades disponibles al gobierno para aumentar su uso de energía limpia. Tar Sands Pipeline Safety Risks Report Map This map details the Lakehead pipeline system and proposed Keystone XL pathway, highlighting areas particularly vulnerable to damage from pipelines weakened by diluted bitumen tar sands oil. Peru: Vida y muerte en una tierra seca Desde Lima hacia Los Angeles, la sobrevivencia de las grandes ciudades por todo el mundo depende de fuentes de agua que se están agotando a una velocidad alarmante. Un informe especial desde los Andes en Perú, sobre el derretimiento de los glaciares y otros efectos del calentamiento global. Identifying Near-Term Opportunities For Carbon Capture and Sequestration (CCS) in China After three decades of rapid industrialization fueled by coal, China is now the world’s biggest emitter of carbon dioxide (CO2). China is well positioned to be a global leader in the development and deployment of CCS technologies with broad support and engagement from the international community. NGO Letter to Secretary Clinton re: International Climate Financing, 12/7/10 This letter to U.S. Secretary of State Hillary Clinton was signed by 19 non-governmental organizations, urging the United States to uphold its Copenhagen commitments to finance international climate efforts. Such financing is essential for combating climate change and its impacts, building the U.S. green economy, and protecting our national security. The letter was sent during the second week of international climate negotiations in Cancun, Mexico. Key Outcomes of Climate Negotiations in Cancun, Mexico This position paper contains NRDC's recommendations for the United Nations Climate Conference COP16 talks taking place in Cancun, Mexico from Nov. 29 to Dec. 10, 2010. - int_10120201a.pdf In English - int_10120201b.pdf In Spanish - int_10120201c.pdf In Chinese 2010. Letter to President Obama and Prime Minister Manmohan Singh from TERI and NRDC on Climate Change and Clean Energy Letter to President Obama and Prime Minister Manmohan Singh from The Energy and Resources Institute (TERI) and the Natural Resources Defense Council (NRDC) calling for strengthening cooperation between the United States and India on climate change and clean energy.. NRDC Comments to Ministry of Environment and Forests on Its National Environmental Protection Authority Proposal -- December 2009 NRDC provides perspectives on effective environmental compliance and governance based on its experience in the US and elsewhere. NRDC requested that the Ministry conduct further analysis on its proposal for a National Environmental Protection Authority and did not provide specific comments on the new agency structure. We urged that the Ministry allow for a broadly inclusionary process for discussing the creation of the National Environmental Protection Authority, if such an agency is created. Robust public participation -- especially by civil society -- is warranted to ensure that an effective structure is established. Open letter to President Obama in support of the proposed 2012 Earth Summit An open letter to President Obama from civil society organizations representing over a million Americans calling on his Administration to support the proposed 2012 Earth Summit. ¿Se necesitan represas en la Patagonia?: Un análisis del futuro energético chileno En julio de 2008, Chile Sustentable publicó el informe “Potencial de Energía Renovable y Eficiencia Energética en el Sistema Interconectado Central en Chile, periodo 2008 – 2025”, basándose en los análisis de eficiencia energética realizados por el Programa de Estudios e Investigaciones en Energía (PRIEN) de la Universidad de Chile y el análisis de energías renovables realizado por la Universidad Técnica Federico Santa María (UTFSM) de Valparaíso. NRDC's Recommendations for Strengthening US-China Climate Change and Energy Engagement The United States of America and the People's Republic of China are both key players in international efforts to address global warming and global energy security. Indeed, they are by far the two largest emitters of greenhouse gases (GHGs) in the world, together accounting for over 40% of global CO2 emissions from fossil fuel use. Efforts by these two players over the coming decades to cut greenhouse gas emissions and energy consumption will play a large role in determining the ultimate outcome of efforts to combat global warming. They are, of course, not alone in this effort, but they are the critical actors, jointly holding the key to either sustainability or catastrophe. Building upon NRDC's experience in China and the international global warming negotiations, this paper recommends nine key steps for the incoming Obama administration, US Congress, and leaders in China to strengthen US-China Climate Change and Energy Engagement. NRDC @ IUCN World Conservation Congress Listing all of NRDC events, motions and the names of the delegation members. Climate Change and Sustainable Energy Policies in Europe and the United States A report from the Transatlantic Platform for Action on the Global Environment, a joint project of the Institute For European Environmental Policy and the Natural Resources Defense Council. Letter to the Governors of the Western Governors' Association on Tar Sands Letter to the Governors of the Western Governors' Association about the greenhouse gas and wildlife corridor impacts of oil extraction from Canadian tar sands located in the Boreal Forest of Canada - the largest intact forest ecosystem remaining on the planet. Don't Buy It; Tar Sands Oil is Still Dirty On April 29, 2008, the Deputy Premier of Alberta, Ron Stevens, visited DC to meet with U.S. officials and promote dirty tar sands oil. The visit aimed to assuage the environmental concerns raised by NRDC, while also seeking to ensure tar sands oil's future in the U.S. marketplace. Senator Paul Simon Water for the Poor Act: Responses to U.S. State Department 2007 Report to Congress Joint response from NRDC and WaterAid America, June 26, 2007. Comments Submitted to the Multi-Stakeholder Committee in the Oil Sands Consultation Visioning Process Melanie Nakagawa (Attorney, NRDC International Program) presented at the Oil Sands Consultation held in Bonnyville, Alberta, September 13, 2006 and comments were submitted to the committee on October 3, 2006. Trade in Bigleaf Mahogany: The Need for Strict Implementation of CITES As a species on the brink of Appendix I, full compliance with CITES is essential to protect mahogany populations and the human victimsof illegal logging operations. Under current conditions, this can only be achieved through a suspension of imports. Letter to CITES Mahogany Working Group, Plants Committee June 26, 2006 letter from NRDC and Defenders of Wildlife concerning Peru's continued export of bigleaf mahogany in violation of the provisions of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Native Groups Request a Halt to Trade in Illegal Amazon Mahogany A letter sent to the U.S. government and U.S. wood importers on April 3, 2006 by the Federacion Nativa del Río Madre de Dios y Afluentes, a group representing 27 native Peruvian communities and indigenous people living in voluntary isolation. - int_06041001a.pdf English - int_06041001b.pdf Español
http://docs.nrdc.org/international/
2015-08-28T02:09:05
CC-MAIN-2015-35
1440644060173.6
[]
docs.nrdc.org
Looking for Something? We're sorry. The web address you entered is not a functioning page on our site. Try one of the following links to find the information you're looking for: - Amazon Web Services Home Page: Product information for all our web services. - Resource Center: Documentation, code samples, and other resources for building applications on Amazon Web Services. -The Amazon Web Services Team
http://docs.aws.amazon.com/index.html
2015-08-28T02:12:19
CC-MAIN-2015-35
1440644060173.6
[]
docs.aws.amazon.com
Design the content: Sections and Categories: Joomla! 1.5 From Joomla! Documentation GSheader - 6 Where next? - 7 Further information - 8 Index to other documents in this series Background to creating a new Joomla! heirachy of Sections, Categories and Articles Joomla! has a heir:- -.. For a lot more detailed information about what you can do using the Section Workspace page - click the Icon at the top of the screen.. They do not limit themselves to one level in the heirachy but set up the design to allow for multiple levels of content and also some blog and list layouts.. Example of part of a heirachy for a club web site The example below takes part of a design for a sailing club web site showing how the basic information about the club could be designed in Section and Categories. Where next? Further information - on sections etc - Joomla! Administrator's Manual - on-line - Quick start guide Index to other documents in this series --Lorna Scammell January 2011
https://docs.joomla.org/index.php?title=Design_the_content:_Sections_and_Categories:_Joomla!_1.5&oldid=37327
2015-08-28T03:44:01
CC-MAIN-2015-35
1440644060173.6
[array(['/images/2/22/GSiconHelp.png', 'GSiconHelp.png'], dtype=object)]
docs.joomla.org
Switch between your personal space and work space When BlackBerry Balance technology is set up on your BlackBerry device, you can quickly switch between your personal space and work space. Tip: To differentiate between your personal space and work space, you can set a different wallpaper for your personal space. - From the home screen, to switch between spaces, swipe down from the top of the screen. Tap Switch to Personal or Switch to Work. - From your personal space or work space, do any of the following: When you switch between personal files and work files, the app opens a second instance of the application in the personal space or work space you are currently in. - To switch between your personal pictures and work pictures, in the Pictures app, tap . Tap Open Personal Pictures or Open Work Pictures. - To switch between your personal files and work files, in Adobe Reader, Documents To Go, or File Manager, tap . Tap Personal Space or Work Space. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/61781/als1383599708496.jsp
2015-08-28T02:44:02
CC-MAIN-2015-35
1440644060173.6
[]
docs.blackberry.com
What does a simple Joomla! installation include? From Joomla! Documentation Revision as of 20:39,.
https://docs.joomla.org/index.php?title=J1.5:What_does_a_simple_Joomla!_installation_include%3F&direction=prev&oldid=86317
2015-08-28T02:49:27
CC-MAIN-2015-35
1440644060173.6
[array(['/images/c/c8/Compat_icon_1_5.png', 'Joomla 1.5'], dtype=object)]
docs.joomla.org
GetFeedback Get feedback for an anomaly group. Request Syntax POST /GetFeedback HTTP/1.1 Content-type: application/json { "AnomalyDetectorArn": " string", "AnomalyGroupTimeSeriesFeedback": { "AnomalyGroupId": " string", "TimeSeriesId": " string" }, "MaxResults": number, "NextToken": " - AnomalyGroupTimeSeriesFeedback The anomalous metric and group ID. Type: AnomalyGroupTimeSeries object Required: Yes - MaxResults The maximum number of results to return. Response Syntax HTTP/1.1 200 Content-type: application/json { "AnomalyGroupTimeSeriesFeedback": [ { "IsAnomaly": boolean, "TimeSeriesId": "string" } ], "NextToken": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - AnomalyGroupTimeSeriesFeedback Feedback for an anomalous metric. Type: Array of TimeSeriesFeedback:
https://docs.aws.amazon.com/lookoutmetrics/latest/api/API_GetFeedback.html
2022-01-17T02:16:14
CC-MAIN-2022-05
1642320300253.51
[]
docs.aws.amazon.com
Quoted identifiers in the names of table columns are supported in Hive 0.13 and later. An identifier in SQL is a sequence of alphanumeric and underscore (_) characters surrounded by backtick (`) characters. Quoted identifiers in Hive are case-insensitive. In the following example, `x+y` and `a?b` are valid column names for a new table. CREATE TABLE test (`x+y` String, `a?b` String); Quoted identifiers can be used anywhere a column name is expected, including table partitions and buckets: CREATE TABLE partition_date-1 (key string, value string) PARTITIONED BY (`dt+x` date, region int); CREATE TABLE bucket_test(`key?1` string, value string) CLUSTERED BY (`key?1`) into 5 buckets; Enabling Quoted Identifiers Set the hive.support.quoted.identifiers configuration parameter to column in hive-site.xml to enable quoted identifiers in SQL column names. For Hive 0.13, the valid values are none and column. hive.support.quoted.identifiers = column
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.3.2/bk_dataintegration/content/hive-013-feature-quoted-identifiers.html
2022-01-17T00:24:55
CC-MAIN-2022-05
1642320300253.51
[]
docs.cloudera.com
A view used to display static text with or without a caption/title describing the content. On this page: Labels allow for static text to be displayed in order to do things like communicate instructions or indicate the current state of a particular property to users. Implement this on a page using HomeSeer.Jui.Views.LabelView.
https://docs.homeseer.com/plugins/viewsource/viewpagesrc.action?pageId=7443464
2022-01-17T00:34:14
CC-MAIN-2022-05
1642320300253.51
[]
docs.homeseer.com
Tutorial: Changing the logo and favicon This tutorial will describe how to change the logo and favicon for your implementation of Open SDG. This is intended to be a continuation of the quick start tutorial. We will replace the default logo and the default favicon. Topics covered¶ - Choosing images - Uploading files in Github - Creating favicons Level of difficulty¶ This tutorial does not require any technical expertise. Find a logo¶ First, find (or create) a replacement logo image. An obvious choice might be your country's flag (such as those available in this repository of flag images) but this is up to your preference. Here are a few guidelines to keep in mind: - The image must be a PNG file named SDG_logo.png. This is case-sensitive, so make sure the name is exactly right. - We recommend the image's width should be at least 600 pixels. - We recommend the image's file size should be 50KB or less. Upload the logo¶ Next we will upload the logo to your site repository. - In a browser go to github.com and log in, then go to your site repository. - In the list of files, navigate to the assets/imgfolder. - Click Add a fileand then Upload files. - Drag in your new SDG_logo.pngfile or click to browse for it. - At the bottom select Create a new branch for this commit and start a pull request. - Click Propose changes. - Click Create pull request. - Wait for the tests to complete, and then click Merge pull request. Find a favicon¶ Next we will replace the "favicon" (the small image that appears in browser tabs). Again, the actual image is up to your preference. A common choice, again, is to your your country's flag. For this tutorial we will use the logo you uploaded above. - In a browser visit favicon.io. - Drag in your new SDG_logo.pngfile or click to browse for it. - Click Download. - Unzip the zip file somewhere on the computer. It should contain several versions of your logo image. Upload the favicon¶ Finally we will upload the favicon to your site repository. - In a browser go to your site repository. - In the list of files, navigate to the assets/img/faviconsfolder. - Click Add a fileand then Upload files. - From the unzipped folder of images, drag in all of the files. Or click to browse for them and select all the files from that unzipped folder. - At the bottom select Create a new branch for this commit and start a pull request. - Click Propose changes. - Click Create pull request. - Wait for the tests to complete, and then click Merge pull request. View your results¶ Your site will now begin rebuilding. After about 5 minutes, if you visit your site you should see the updated logo and favicon. Note that browsers tend to cache favicons aggressively. You may need to refresh the page a few times before you see the new favicon. Troubleshooting¶ If this did not appear to work, here are a few areas to check on: - Have you waited long enough? It can take about five minutes for the site to rebuild. - Is your browser holding onto the old files? Sometimes a browser can aggressively cache outdated image files. Doing a "hard refresh" can help. For most browsers this is done by pressing CTRL and F5, or SHIFT and F5. - Did you name the logo file exactly SDG_logo.png? Even a small difference will prevent Open SDG from "seeing" the file. For example, sdg_logo.pngwill not work, because the sdgis not capitalized. Similarly, SDG-logo.pngwill not work, because the -should be a _(underscore). - Did the files get uploaded to the right place? After completing this tutorial, your new logo should be in the site repository at assets/img/SDG_logo.png. And your new favicon images should be in the site repository at assets/img/favicons/. For example, if your favicons got uploaded to assets/img/(without the faviconssubfolder) then it will not work.
https://open-sdg.readthedocs.io/en/latest/tutorials/change-logo/
2022-01-17T00:27:32
CC-MAIN-2022-05
1642320300253.51
[]
open-sdg.readthedocs.io
Communicate to Your Organization Wether you're a small business or a global enterprise, passwordless is going to impact your users in a positive way. Most people are not aware they can log into systems without a password. Many administrators are not even aware that passwordless authentication exists, let alone works on their system. Having a solid foundation for educating your team about passwordless, the benefits, and how to use it will make for the best possible deployment experience. Email Templates you can use to communicate the rollout to your team End-User Resources for educating people about HYPR, with videos and etc. End User FAQ contains frequently asked questions from users setting up HYPR Onboard Your Help Desk and train them in supporting HYPR users throughout the deployment. Do's and Dont's DO's Do send emails from Security and IT Managers, HR or Operations personnel, or administrators. Do send emails during business hours to ensure visibility. Do inform your users about the benefits of passwordless and how their experience will improve. Dont's Don't deploy HYPR without first communicating to users what they can expect. Don't assume IT folks all know what passwordless authentication means. Take time to educate and answer questions. Don't make it a requirement to use HYPR on day 1. Some users need time to grow accustomed to their passwordless login experience. Next Steps Updated over 1 year ago
https://docs.hypr.com/installinghypr/docs/communicate-to-your-organization
2022-01-17T02:06:09
CC-MAIN-2022-05
1642320300253.51
[array(['https://files.readme.io/914f857-Step_4.png', 'Step 4.png'], dtype=object) array(['https://files.readme.io/914f857-Step_4.png', 'Click to close...'], dtype=object) ]
docs.hypr.com
Looking to sign up for Loome Integrate? Learn more here. The Lo. Visit our website to learn more about our other products in the Loome software suite. You may have a number of ELT processes that run at various times to integrate your data from disparate sources into a single warehouse. To manage a few EL dependent: With over 100 native connectors, Loome Integrate brings any data together quickly and easily. With these. Loome Integrate makes it simple to define dependencies between jobs and tasks across different types of processing. Detailed logging also provides transparency on typical failures and long running processes to help focus on where to improve. In summary, you can use a central console to manage and audit your data integration processes. The process flow below provides a high level overview of a simple Loome Integrate job. We have listed the core features of Loome Integrate below: The key benefits of Loome Integrate include the following.
https://docs.loomesoftware.com/integrate/online/
2022-01-17T00:31:11
CC-MAIN-2022-05
1642320300253.51
[]
docs.loomesoftware.com
This article explains how to upgrade a Preseem system that has or has had RedHat's tuned service installed. Tuned is RedHat's system tuning daemon. Prior to Preseem 1.8, tuned was often used to configure the performance related features of the underlying hardware. Unfortunately, when tuned is installed, the Grub configuration can become corrupted causing the system to become unbootable. Determining if Tuned is Installed To determine if tuned is installed run the following command: dnf list installed | egrep tuned If this command returns no output, then tuned is not installed and this guide does not apply. Removing Tuned Important Note: Perform these steps before updating the system (specifically the kernel) as the updates will cause the Grub configuration to be updated. Execute the following steps to remove tuned: systemctl stop tuned systemctl disable tuned dnf remove tuned Edit /etc/default/grub to make sure tuned variables are no longer present Check that tuned is no longer present in /etc/grub.d/* Edit /boot/grub2/grub.cfg to make sure tuned parameters are not corrupted Execute this step to ensure that the server reboots without an issue: grubby --update-kernel=ALL Reboot: systemctl reboot Upgrading the System With tuned removed, you can now update the system, specifically the kernel package which will automatically generate a new Grub configuration. dnf upgrade --refresh Validating that Grub is not Broken To validate that the Grub configuration is not broken, open up the Grub configuration file (/etc/default/grub and /boot/grub2/grub.cfg, typically) in your favorite editor and search for "tuned". If "tuned" is not found, then your Grub configuration is not affected. Recovering From a Broken Grub Configuration If your system is unbootable and fails with a grub error then it is necessary to boot using external media to fix the problem. Step 1: Attach the serial cable to the appliance. If the hardware is not a Preseem appliance, then attach a monitor and keyboard. Step 2: Download the latest Fedora Live USB image and create a bootable drive with the Fedora Media Writer program (available for Windows and Linux). The Fedora Live USB image is roughly 2GB. Step 3: Insert the Live USB into the hardware running Preseem. Step 4: Reboot. For those using the Preseem appliance, the boot output will appear on the serial console. For those with a keyboard and monitor setup, the output will appear there. Step 5: When the Live USB menu shows, press escape. Step 6: Boot the system from the Live USB image by typing "linux console=ttyS0,115200n8 3" and pressing enter. Step 7: Mount the /boot partition so the Grub configuration file can be edited. For Preseem appliances the commands to run are: mkdir /mnt/boot mount /dev/sda1 /mnt/boot Step 8: Edit the Grub configuration to fix the problem: vi /mnt/boot/grub2/grub.cfg Search for "tuned" and look for "skew_tick" and strings like "=1"=1". Remove all the extraneous =1" from the line and then save the file. Step 9: Power off the hardware Step 10: Remove the Live USB Step 11: Power on to boot into the main OS Now that the OS is booted, follow the steps at the start of this guide to remove tuned.
https://docs.preseem.com/tuned
2022-01-17T02:11:25
CC-MAIN-2022-05
1642320300253.51
[]
docs.preseem.com
Flow 5.3 Add methods csrfToken, isAuthenticated and hasAccess to Security EelHelper csrfTokenreturns CSRF token which is required for “unsafe” requests (e.g. POST, PUT, DELETE, …) isAuthenticatedreturns true, if an account is currently authenticated hasAccessreturns true, if access to the given privilege-target is granted The methods add features that previously were available as view helpers to Eel so they can be used in Fusion directly. Add format method to String EEL Helper This method pretty much just redirects its arguments to the PHP-native vsprintf and allows to format strings without '' + '' interpolation. Make recursion limit for the debugger configurable via settings With the default recursion Limit of 50 in PHP often runs into memory-limits when debugging larger data structures. This change allows defining the recursionLimit via settings. In addition, the default recursionLimit is set to 5. Potentially breaking changes (unplanned extensibility) Introduce ActionResponse in preparation for clean PSR-7 This is the continuation of a clear separation between MVC and HTTP stacks. The introduced ActionResponse offers a very limited interface to work with on the MVC level, with the following methods: setContent() setContentType() setRedirectUri() setStatusCode() setComponentParameter() Everything in the currently used HTTP\Response is still available but deprecated and will be removed in the next major, so make sure to adapt to the above API. This change is marked breaking due to the deprecations, it should not break any existing code. setContent() accepts also PSR-7 StreamInterface implementations and will likely only accept those in the next major. A detailed blog post with the next steps will follow and go into more detail about the final separation and usage. Special note regarding the setComponentParameter, for now this is your extensible portal towards HTTP. You can use it to set component parameters for your own HTTP components to set additional headers. We are likely to extend the interface slightly for the major release as we are aware that this implementation is very limiting, but we need a clean separation between MVC and HTTP to start with. Related discuss post: Deprecate PackageManagerInterface As the package manager cannot be overwritten the interface is purely cosmetic and IF you actually use the package manager in your codebase (which you probably should not in the first place) you can just inject the PackageManager directly instead of the interface. Upgraded our internal testing suite to latest neos/behat version In case you have Behat tests in place but did not set your own Behat version in the dev dependencies in your own, there might be some changes that could break your tests within the Behat version that is now acquired by Flow / Neos.
https://flowframework.readthedocs.io/en/stable/TheDefinitiveGuide/PartV/ReleaseNotes/530.html
2022-01-17T01:06:19
CC-MAIN-2022-05
1642320300253.51
[]
flowframework.readthedocs.io
Jekyll and Open SDG Jekyll is a popular, free and open source static website generator which works like a content management system such as Drupal and WordPress. However, unlike those systems, Jekyll generates at build-time rather than at request-time. Jekyll creates static websites with some templating abilities. In this way, Jekyll is similar to other static site generators, like Next.js and Hugo. Advantages of static sites¶ The Open SDG platform is built with Jekyll and therefore gets all the advantages of using a static site generator. Simple to maintain¶ Given that static websites are just a set of files, the capability requirements of the server are lowered compared to that of a dynamic website, and therefore its simpler create and maintain. The web server need only be capable of serving those files. Speed¶ As the server is simply returning/sending files when they are requested and nothing has to be dynamically generated, there is less processing. Jekyll-based websites load much faster compared to dynamically generated pages, which are generated at request-time and require interaction with the backend database. Stability¶ The dynamic website server involves several software components working together, and if any one component fails the website will fail to be served properly. By contrast a web server, such as one provided by Jekyll, serves only static files so is less likely to fail. Security¶ Security risks such as SQL injection attacks are impossible on a static website since there is no dynamic database that an attacker can exploit. Version-controlled¶ Static sites are a collection of files, which makes them easy to maintain using open-source version-control software like Git. Free services like Github.com are a perfect tool for any static site. Disadvantages of static sites¶ No Graphical Interface¶ Jekyll is a command-line tool, which could be difficult to use for non-technical users. Although graphical user interfaces for the management of Jekyll do exist, one is not used for Open SDG. Long build time¶ Open SDG platforms can take several minutes to build. Slow build times can slow down development (especially when debugging) because the developer has to wait for the build to complete before they can see the result.
https://open-sdg.readthedocs.io/en/latest/jekyll-and-open-sdg/
2022-01-17T00:21:58
CC-MAIN-2022-05
1642320300253.51
[]
open-sdg.readthedocs.io
Dependencies¶ Internal Dependencies¶ Internal dependencies, such as Go libraries that ToDD uses to communicate with a message queue, or a database for example, are vendored in the vendor directory of the repository. There is no additional step to download such dependencies, in order to install ToDD from source and run it. External Dependencies¶ There are a number of external services that ToDD needs in order to run. Please refer to the specific pages linked above for each to see what specific integrations have been built into ToDD in each area.
https://todd.readthedocs.io/en/latest/dependencies/dependencies.html
2022-01-17T02:00:59
CC-MAIN-2022-05
1642320300253.51
[]
todd.readthedocs.io
The document revision process is separate from the publishing process, making it possible to revise a document locally and save it to the database without re-publishing it. The Revise command is available on the right-click menu for drawings, reports, and 3D Model Data documents. In an integrated environment, all revisions are handled by SmartPlant Foundation. Revising and publishing are two separate actions. You specify the document revision using the Revise Command, which creates a revision for the document with Major and Minor set, depending on the revision schema selected. If you are working in an integrated environment, you can modify the other revision information on the document. After setting the revision number, right-click the document and select Properties Command. Select the Revision tab and edit the Revision fields. You should update documents to include any new title block information. You can now re-publish the document with the new revision use the Revise Command only if your model has been registered using the SmartPlant Registration Wizard. See Model registration in the Project Management Help. If the drawing document that perform edit operations on the drawing, including update, revise, and publish.
https://docs.hexagonppm.com/r/en-US/Intergraph-Smart-3D-Reports/13/82928
2022-01-17T01:20:56
CC-MAIN-2022-05
1642320300253.51
[]
docs.hexagonppm.com
Overview The Zebra L10 is a ruggedized tablet PC available with either Windows or Android OS. The version we tested has a built-in Sierra Wireless EM7511 modem. Configuration - Windows 10 When we first received this device in early 2020 there were no special configurations required. However, recent updates to the Windows 10 OS (Sept/Oct 2021) mean you will need to enter an APN for the tablet to conenct. Go to Windows Settings, select Network & Internet, and then Cellular from the left-hand menu. Select the Advanced Options link on the cellular page, and under APN Settings click the button to Add an APN. Enter the following settings; Profile Name: Celona APN: default User name: blank Password: blank Type of sign-in info: None IP Type: IPv4 APN Type: Internet & Attach Apply to this Profile: Checked Click save, and your new APN should be activated. Disable and re-enable the cellular modem to complete your connection. Here is a recording of our early experience with the Zebra L10 tablet, installed with Windows 10 operating system. It is the ideal ruggedized tablet for supply chain operations in logistics and warehousing. Unique to Windows 10, administrators can define which applications on the L10 have to take advantage of private LTE wireless connectivity - enabling predictable performance for critical business apps and use cases supported by L10 when connected to an enterprise wireless network. To see Zebra L10 in action on a Celona private mobile network, enabled with critical apps that you care the most about, feel free to request a custom demo here.
https://docs.celona.io/en/articles/5134019-zebra-l10-tablet-on-celona
2022-01-17T00:11:13
CC-MAIN-2022-05
1642320300253.51
[array(['https://celona-df2d49dbca1b.intercom-attachments-1.com/i/o/410706967/7fad8fb2db222fd6fa48a346/image.jpg', '<a href='], dtype=object) array(['https://downloads.intercomcdn.com/i/o/410706149/a34dd1d7254772f55977632a/apnconfigInternetandAttach.png', None], dtype=object) ]
docs.celona.io
The procedure for recovering a failed Storage Node depends on the type of failure and the type of Storage Node that has failed. Use this table to select the recovery procedure for a failed Storage Node. Technical support will assess your situation and develop a recovery plan. How site recovery is performed by technical support Technical support can determine when it is safe to begin recovery of a second Storage Node. (This includes the case where a Storage Node fails while recovery of another Storage Node is still in progress.) Recovering a StorageGRID appliance Storage Node Recovering from storage volume failure where the system drive is intact Recovering from system drive failure
https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-maint/GUID-D919E1D6-A48B-464B-824C-217714F4200C.html
2022-01-17T00:42:37
CC-MAIN-2022-05
1642320300253.51
[]
docs.netapp.com
If you are using a proxy network, you can configure the proxy for the Cloud Link Service under the NetWitness Platform, System > HTTP Proxy Settings page. This allows the Cloud Link Service to connect using a proxy and transfers data to the Detect AI. To configure proxy for Cloud Link Service: Log in to the NetWitness Platform. Go to Admin > System. In the options panel, select HTTP Proxy Settings. The HTTP Proxy Settings panel is displayed. Click the Enable checkbox. The fields where you configure the proxy settings are activated. Type the hostname for the proxy server and the port used for communications on the proxy server. (Optional) Type the username and password that serve as credentials to access the proxy server if authentication is required. (Optional) Enable Use NTLM Authentication and type the NTLM domain name. (Optional) Enable Use SSL if communications use Secure Socket Layer. To save and apply the configuration, click Apply. The proxy is immediately available for use for the Cloud Link Service.
https://docs.netwitness.rsa.com/admin/proxy/
2022-01-17T00:18:42
CC-MAIN-2022-05
1642320300253.51
[]
docs.netwitness.rsa.com