content: string (lengths 0 – 557k)
url: string (lengths 16 – 1.78k)
timestamp: timestamp[ms]
dump: string (lengths 9 – 15)
segment: string (lengths 13 – 17)
image_urls: string (lengths 2 – 55.5k)
netloc: string (lengths 7 – 77)
Related Concepts: Monitor Amazon Web Service. Related Tasks: Configure the Web Query Client Policy for AWS; Review AWS Events in SecurityCenter CV. Additional Reference: Web Query Client Policy Configuration Items.
https://docs.tenable.com/lce/4_8/Content/LCE_WebQueryClient/WQC_AWS_Prerequisites.htm
2017-08-16T21:50:00
CC-MAIN-2017-34
1502886102663.36
[]
docs.tenable.com
Voxa¶ - class Voxa(config)¶ If appIds is present then the framework will check every alexa event and enforce the application id to match one of the specified application ids. const skill = new Voxa({ Model, variables, views, appIds }); Voxa.execute(event)¶ The main entry point for the skill execution. skill.execute(event, context) .then(result => callback(null, result)) .catch(callback); Voxa.onState(stateName, handler)¶ Maps a handler to a state. skill.onState('entry', { LaunchIntent: 'launch', 'AMAZON.HelpIntent': 'help', }); skill.onState('launch', (alexaEvent) => { return { reply: 'LaunchIntent.OpenResponse', to: 'die' }; }); Voxa.onIntent(intentName, handler)¶ A shortcut for defining state controllers that map directly to an intent. skill.onIntent('HelpIntent', (alexaEvent) => { return { reply: 'HelpIntent.HelpAboutSkill' }; }); Voxa.onIntentRequest(callback[, atLast])¶ This is executed for all IntentRequest events; the default behavior is to execute the State Machine machinery, so you generally don't need to override this. Voxa.onLaunchRequest(callback[, atLast])¶ Adds a callback to be executed when processing a LaunchRequest; the default behavior is to fake the alexa event as an IntentRequest with a LaunchIntent and just defer to the onIntentRequest handlers. You generally don't need to override this. Voxa.onBeforeStateChanged(callback[, atLast])¶ This is executed before entering every state; it can be used to track state changes or make changes to the alexa event object. Voxa.onBeforeReplySent(callback[, atLast])¶ Adds a callback to be executed just before sending the reply; internally this is used to add the serialized model and next state to the session. It can be used to alter the reply, or, for example, to track the final response sent to a user in analytics. skill.onBeforeReplySent((alexaEvent, reply) => { const rendered = reply.write(); analytics.track(alexaEvent, rendered) }); Voxa.onAfterStateChanged(callback[, atLast])¶ Adds callbacks to be executed on the result of a state transition; these are called after every transition, and internally they are used to render the transition reply using the views and variables. The callbacks get alexaEvent, reply and transition params, and they should return the transition object. skill.onAfterStateChanged((alexaEvent, reply, transition) => { if (transition.reply === 'LaunchIntent.PlayTodayLesson') { transition.reply = _.sample(['LaunchIntent.PlayTodayLesson1', 'LaunchIntent.PlayTodayLesson2']); } return transition; }); Voxa.onUnhandledState(callback[, atLast])¶ Adds a callback to be executed when a state transition fails to generate a result; this usually happens when redirecting to a missing state or on an entry call for a non-configured intent. The handlers get an alexa event parameter and should return a transition, the same as a state controller would. Voxa.onSessionStarted(callback[, atLast])¶ Adds a callback to the onSessionStarted event; this executes for all events where alexaEvent.session.new === true. This can be useful to track analytics. skill.onSessionStarted((alexaEvent, reply) => { analytics.trackSessionStarted(alexaEvent); }); Voxa.onRequestStarted(callback[, atLast])¶ Adds a callback to be executed whenever there's a LaunchRequest, IntentRequest or a SessionEndedRequest; this can be used to initialize your analytics or get your account linking user data. Internally it's used to initialize the model based on the event session. skill.onRequestStarted((alexaEvent, reply) => { alexaEvent.model = this.config.Model.fromEvent(alexaEvent); });
Voxa.onSessionEnded(callback[, atLast])¶ Adds a callback to the onSessionEnded event; this is called for every SessionEndedRequest or when the skill returns a transition to a state where isTerminal === true, normally a transition to the die state. You would normally use this to track analytics. Voxa.onSystem.ExceptionEncountered(callback[, atLast])¶ This handles System.ExceptionEncountered events that are sent to your skill when a response to an AudioPlayer event causes an error. return Promise.reduce(errorHandlers, (result, errorHandler) => { if (result) { return result; } return Promise.resolve(errorHandler(alexaEvent, error)); }, null); Error handlers¶ You can register many error handlers to be used for the different kinds of errors the application could generate. They all follow the same logic: if the more specific error type is not handled, the error is deferred to the more general error handler, which ultimately just returns a default error reply. They're executed sequentially and will stop when the first handler returns a reply. Voxa.onStateMachineError(callback[, atLast])¶ This handler will catch all errors generated when trying to make transitions in the stateMachine; this could include errors in the state machine controllers. The handlers get (alexaEvent, reply, error) parameters. skill.onStateMachineError((alexaEvent, reply, error) => { // it gets the current reply, which could be incomplete due to an error. return new Reply(alexaEvent, { tell: 'An error in the controllers code' }) .write(); }); Voxa.onError(callback[, atLast])¶ This is the more general handler and will catch all unhandled errors in the framework; it gets (alexaEvent, error) parameters as arguments. skill.onError((alexaEvent, error) => { return new Reply(alexaEvent, { tell: 'An unrecoverable error occurred.' }) .write(); }); Playback Controller handlers¶ Handle events from the AudioPlayer interface. audioPlayerCallback(alexaEvent, reply)¶ All audio player middleware callbacks get an alexa event and a reply object. In the following example the alexa event handler returns a REPLACE_ENQUEUED directive in response to a PlaybackNearlyFinished event. skill['onAudioPlayer.PlaybackNearlyFinished']((alexaEvent, reply) => { const directives = { type: 'AudioPlayer.Play', playBehavior: 'REPLACE_ENQUEUED', token: "", url: '', offsetInMilliseconds: 0, }; return reply.append({ directives }); });
http://voxa.readthedocs.io/en/stable/statemachine-skill.html
2017-08-16T21:29:56
CC-MAIN-2017-34
1502886102663.36
[]
voxa.readthedocs.io
CR 16-052 Rule Text Department of Safety and Professional Services (SPS) Administrative Code Chapter Groups Affected: Chs. SPS 301- ; Safety, Buildings, and Environment Chs. SPS 326-360; General, Part II Administrative Code Chapter Affected: Ch. SPS 360 (Revised) Related to: Erosion control, sediment control and storm water Date Rule Approved by Governor: February 9, 2017 Date Rule Filed with LRB: June 14, 2017 Published Date: July 31, 2017 Effective Date: August 1, 2017
http://docs.legis.wisconsin.gov/code/register/2017/739B/register/final/cr_16_052_rule_text/cr_16_052_rule_text
2017-08-16T21:52:21
CC-MAIN-2017-34
1502886102663.36
[]
docs.legis.wisconsin.gov
MVVM Support The assembly in which RadDataServiceDataSource is located contains a class named QueryableDataServiceCollectionView<T>. This is the collection view that the control uses internally. The only functionality that the control adds over this collection view is a XAML-friendly API. In case you are strictly following the MVVM pattern and you cannot embed a UI control inside your view model, you should use the QueryableDataServiceCollectionView<T> class. Like RadDataServiceDataSource, the QueryableDataServiceCollectionView<T> needs a DataServiceContext and a DataServiceQuery<T> to be constructed. Do not modify the context or the query after you have supplied them to the collection view constructor: C# MyNorthwindEntities ordersContext = new MyNorthwindEntities(); DataServiceQuery<Order> ordersQuery = ordersContext.Orders; QueryableDataServiceCollectionView<Order> ordersView = new QueryableDataServiceCollectionView<Order>(ordersContext, ordersQuery); VB.NET Dim ordersContext As New MyNorthwindEntities() Dim ordersQuery As DataServiceQuery(Of Order) = ordersContext.Orders Dim ordersView As New QueryableDataServiceCollectionView(Of Order)(ordersContext, ordersQuery) The QueryableDataServiceCollectionView<T> class has the same API as the RadDataServiceDataSource control, so all operations are performed in the same way as with the RadDataServiceDataSource control. In fact, the public API of the RadDataServiceDataSource control simply exposes the public API of its inner collection view, which is the one that really does the job.
http://docs.telerik.com/devtools/wpf/controls/raddataservicedatasource/mvvm/mvvm
2017-08-16T21:48:40
CC-MAIN-2017-34
1502886102663.36
[]
docs.telerik.com
A purchase order (PO) is a commercial document issued by a buyer to a seller. A PO indicates the types, quantities, and agreed prices for products or services the seller will provide to the buyer. There are several different ways of purchasing goods, but for business-to-business (B2B) and professional purchasing, it is very important that the purchase order is in writing and contains the information that could be needed, to avoid disputes later on. Once the seller accepts the purchase order, there is a contractual agreement between the buyer and the seller. That is also why it is so important that the acceptance is confirmed in writing. Basically, a purchase order should contain the following: - The names of the seller and the purchaser - Delivery address - Invoice address - A clear definition of the goods or services - Price and quantity of the products or services - Delivery date - Terms of payment - Terms of delivery Here are some Purchase Order examples:
http://www.all-docs.net/purchase-order/how-to-write-purchase-order
2015-06-30T08:17:51
CC-MAIN-2015-27
1435375091925.14
[array(['http://www.all-docs.net/wp-content/uploads/2011/09/purchase-order-2.png', 'Purchase Order Purchase Order'], dtype=object) array(['http://www.all-docs.net/wp-content/uploads/2011/09/purchase-order.png', 'Purchase Order Purchase Order'], dtype=object) ]
www.all-docs.net
Revision history of "JForm::findGroup/1.6" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 03:28, 5 May 2013 Wilsonge (Talk | contribs) deleted page JForm::findGroup/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JForm::findGroup== ===Description=== Method to get a form field group represented as an XML element object. {{Description:JForm::findGr..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=JForm::findGroup/1.6&action=history
2015-06-30T09:03:36
CC-MAIN-2015-27
1435375091925.14
[]
docs.joomla.org
UntagResource Untags the specified tags from the specified Amazon Chime SDK meeting resource. - ResourceARN The resource ARN. Length Constraints: Maximum length of 1024. Pattern: ^arn[\/\:\-\_\.a-zA-Z0-9]+$ Required: Yes - TagKeys The tag keys. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 50 items. Length Constraints: Minimum length of 1. Maximum length of 128. Required: Yes. For example, when a user tries to create an account from an unsupported Region. - UnauthorizedClientException The client is not currently authorized to make the request. HTTP Status Code: 401 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
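A minimal sketch (not part of the original reference) of calling this operation from Python with boto3, which exposes UntagResource as untag_resource() with the same ResourceARN and TagKeys members documented above. The ARN and tag keys below are placeholders.

```python
# Hypothetical sketch: untag an Amazon Chime SDK meeting resource via boto3.
# Replace the placeholder ARN and tag keys with your own values.
import boto3

chime = boto3.client("chime")

response = chime.untag_resource(
    ResourceARN="arn:aws:chime:us-east-1:111122223333:meeting/example",  # placeholder ARN
    TagKeys=["Department", "CostCenter"],  # 1-50 keys, each 1-128 characters
)
print(response["ResponseMetadata"]["HTTPStatusCode"])
```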
https://docs.aws.amazon.com/chime/latest/APIReference/API_UntagResource.html
2020-10-19T22:26:24
CC-MAIN-2020-45
1603107866404.1
[]
docs.aws.amazon.com
System requirements Citrix Hypervisor requires at least two separate physical x86 computers: one to be the Citrix Hypervisor server and the other to run the XenCenter application or the Citrix Hypervisor Command-Line Interface (CLI). The Citrix Hypervisor server computer is dedicated entirely to the task of running Citrix Hypervisor and hosting VMs, and is not used for other applications. Warning: Installing third-party software directly in the control domain of the Citrix Hypervisor is not supported. The exception is for software supplied as a supplemental pack and explicitly endorsed by Citrix. To run XenCenter, use any general-purpose Windows system that satisfies the hardware requirements. This Windows system can be used to run other applications. When you install XenCenter on this system, the Citrix Hypervisor CLI is also installed. A standalone remote Citrix Hypervisor CLI can be installed on any RPM-based Linux distribution. For more information, see Command-line interface. Citrix Hypervisor server system requirements Although Citrix Hypervisor is usually deployed on server-class hardware, Citrix Hypervisor is also compatible with many models of workstations and laptops. For more information, see the Hardware Compatibility List (HCL). The following section describes the recommended Citrix Hypervisor hardware specifications. The Citrix Hypervisor server must be a 64-bit x86 server-class machine devoted to hosting VMs. Citrix Hypervisor creates an optimized and hardened Linux partition with a Xen-enabled kernel. This kernel controls the interaction between the virtualized devices seen by VMs and the physical hardware. Citrix Hypervisor can use: Up to 6 TB of RAM Up to 16 physical NICs Up to 448 logical processors per host. Note: The maximum number of logical processors supported differs by CPU. For more information, see the Hardware Compatibility List (HCL). The system requirements for the Citrix Hypervisor server are: CPUs One or more 64-bit x86 CPUs, 1.5 GHz minimum, 2 GHz or faster multicore CPU recommended. To support VMs running Windows or more recent versions of Linux, you require an Intel VT or AMD-V 64-bit x86-based system with one or more CPUs. Note: To run Windows VMs or more recent versions of Linux, enable hardware support for virtualization on the Citrix Hypervisor server. Virtualization support is an option in the BIOS. It is possible that your BIOS might have virtualization support disabled. For more information, see your BIOS documentation. To support VMs running supported paravirtualized Linux, you require a standard 64-bit x86-based system with one or more CPUs. RAM 2 GB minimum, 4 GB or more recommended Disk space - Locally attached storage (PATA, SATA, SCSI) with 46 GB of disk space minimum, 70 GB of disk space recommended - SAN via HBA (not through software) when installing with multipath boot from SAN. For a detailed list of compatible storage solutions, see the Hardware Compatibility List (HCL). Network 100 Mbit/s or faster NIC. One or more 1 Gb or 10 Gb NICs are recommended for faster export/import data transfers and VM live migration. We recommend that you use multiple NICs for redundancy. The configuration of NICs differs depending on the storage type. For more information, see the vendor documentation. Citrix Hypervisor requires an IPv4 network for management and storage traffic. Notes: - Ensure that the time setting in the BIOS of your server is set to the current time in UTC.
- In some support cases, serial console access is required for debug purposes. When setting up the Citrix Hypervisor configuration, we recommend that you configure serial console access. For hosts that do not have a physical serial port or where suitable physical infrastructure is not available, investigate whether you can configure an embedded management device. For example, Dell DRAC or HP iLO. For more information about setting up serial console access, see CTX228930 - How to Configure Serial Console Access on XenServer and later. XenCenter system requirements XenCenter has the following system requirements: - Operating System: - Windows 10 - Windows 8.1 - Windows Server 2012 R2 - Windows Server 2012 - Windows Server 2016 - Windows Server 2019 - .NET Framework: Version 4.8 - CPU Speed: 750 MHz minimum, 1 GHz or faster recommended - RAM: 1 GB minimum, 2 GB or more recommended - Disk Space: 100 MB minimum - Network: 100 Mbit/s or faster NIC - Screen Resolution: 1024x768 pixels, minimum XenCenter is compatible with all supported versions of Citrix Hypervisor. Supported guest operating systems For a list of supported VM operating systems, see Guest operating system support. Pool requirements A resource pool is a homogeneous or heterogeneous aggregate of one or more servers, up to a maximum of 64. Before you create a pool or join a server to an existing pool, ensure that all servers in the pool meet the following requirements. Hardware requirements All of the servers in a Citrix Hypervisor pool must meet the hardware requirements described previously. In addition, there are some other configuration prerequisites for a server joining a pool: It must have a consistent IP address (a static IP address on the server or a static DHCP lease). This requirement also applies to the servers providing shared NFS or iSCSI storage. Its system clock must be synchronized to the pool master (for example, through NTP). It cannot be a member of an existing resource pool. It cannot have any running or suspended VMs or any active operations in progress on its VMs, such as shutting down or exporting. Shut down all VMs on the server before adding it to a pool. It cannot have any shared storage already configured. It cannot have a bonded management interface. Reconfigure the management interface and move it on to a physical NIC before adding the server to the pool. After the server has joined the pool, you can reconfigure the management interface again. It must be running the same version of Citrix Hypervisor, at the same patch level, as servers already in the pool. It must be configured with the same supplemental packs as the servers already in the pool. Supplemental packs are used to install add-on software into the Citrix Hypervisor control domain, dom0. To prevent an inconsistent user experience across a pool, all servers in the pool must have the same supplemental packs at the same revision installed. It must have the same Citrix Hypervisor license as the servers already in the pool. You can change the license of any pool members after joining the pool. The server with the lowest license determines the features available to all members in the pool. Citrix Hypervisor servers in resource pools can contain different numbers of physical network interfaces and have local storage repositories of varying size. Note: Servers providing shared NFS or iSCSI storage for the pool must have a static IP address or be DNS addressable.
Homogeneous pools A homogeneous resource pool is an aggregate of servers with identical CPUs. CPUs on a server joining a homogeneous resource pool must have the same vendor, model, and features as the CPUs on servers already in the pool. Heterogeneous pools Heterogeneous pool creation is made possible by using technologies in Intel (FlexMigration) and AMD (Extended Migration) CPUs that provide CPU masking or leveling. These features allow a CPU to be configured to appear as providing a different make, model, or feature set than it actually does. These capabilities enable you to create pools of hosts with different CPUs but still safely support live migrations. For information about creating heterogeneous pools, see Hosts and resource pools.
https://docs.citrix.com/en-us/citrix-hypervisor/system-requirements.html
2020-10-19T20:44:54
CC-MAIN-2020-45
1603107866404.1
[]
docs.citrix.com
You can trigger the conversion event from a modification, from shared code, or directly from your site's source code. If you use a modification, you trigger the conversion event either from custom content or, if you're using a template, from the template content. To trigger a conversion event: ## Testing the conversion tracking ##
https://docs.frosmo.com/pages/viewpage.action?pageId=42798574
2020-10-19T20:39:31
CC-MAIN-2020-45
1603107866404.1
[]
docs.frosmo.com
art.estimators.generation¶ Generator API. Mixin Base Class Generator¶ TensorFlow Generator¶ - class art.estimators.generation.TensorFlowGenerator This class implements a GAN with the TensorFlow framework. __init__ Initialization specific to TensorFlow generator implementations. - input_ph¶ Returns the encoding seed input of the generator of shape (batch_size, encoding_length). - Returns The encoding seed input of the generator. - loss_gradient(x, y, **kwargs) → np.ndarray¶ Compute the gradient of the loss function w.r.t. x. - Parameters x (Format as expected by the model) – Samples. y (Format as expected by the model) – Target values. - Returns Loss gradients w.r.t. x in the same format as x. - Return type Format as expected by the model predict(x: np.ndarray, batch_size: int = 128, **kwargs) → np.ndarray¶ Perform projections over a batch of encodings. - Parameters x – Encodings. batch_size (int) – Batch size. - Returns Array of prediction projections of shape (num_inputs, nb_classes).
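A brief usage sketch (not taken from the page above) of the predict() call on a TensorFlowGenerator. The constructor keyword arguments shown (input_ph, model, sess) and the stand-in dense-layer "generator" are assumptions for illustration only; check the class signature in your ART version before relying on them.

```python
# Hypothetical sketch: wrap a trivial TF1-style graph as an ART generator and
# project a batch of encoding seeds with predict(). The constructor keyword
# names (input_ph, model, sess) are assumptions based on the input_ph property
# documented above, not a verified signature.
import numpy as np
import tensorflow.compat.v1 as tf

from art.estimators.generation import TensorFlowGenerator

tf.disable_eager_execution()

encoding_length = 100
input_ph = tf.placeholder(tf.float32, shape=[None, encoding_length])
# Stand-in "generator": a single dense layer instead of a trained GAN generator.
model = tf.layers.dense(input_ph, units=28 * 28, activation=tf.nn.tanh)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

generator = TensorFlowGenerator(input_ph=input_ph, model=model, sess=sess)

seeds = np.random.randn(16, encoding_length).astype(np.float32)
projections = generator.predict(seeds, batch_size=8)  # projections over the encodings
print(projections.shape)
```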
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/modules/estimators/generation.html
2020-10-19T21:51:02
CC-MAIN-2020-45
1603107866404.1
[]
adversarial-robustness-toolbox.readthedocs.io
Postman is a popular and capable platform for working with and testing APIs. While you are in the exploratory stage with the Shipwell API, we recommend Postman as a platform for familiarizing yourself with the various endpoints exposed through the API. Installation Install Postman on your computer. Postman is available as a Chrome extension and as a native application for Windows and Mac OS X; pick the installation that makes the most sense for your development environment and processes. Visit the Postman website, download the appropriate installation package, and install as instructed here. Import Postman collection Collections are a group of API endpoints contained inside Postman. All collections save the latest response and header settings from each API call. Shipwell allows users to download the latest API endpoints, and all endpoints are backward compatible with the current version of your API. - Download the Swagger specification for the latest version of Shipwell's API. - Import the swagger file to Postman as a collection. Click Import > Import file, and select the swagger file downloaded in step one. - Verify that the collection imported by selecting the folder "Shipwell API." Set up environment variables Postman environments allow you to define environment-specific variables for both the Production and Sandbox environments. Creating these two environments is ideal for quick testing. To set up a production and sandbox environment: - In Postman, click the Cog icon at the top-right of the screen. The Manage Environments pop-up appears. - Click Add. - Name your environment. We recommend Shipwell - prod for production and Shipwell - sandbox for Sandbox. - Enter the following environment variables for baseURL and authorization_token, based on the environment you are creating. Substitute the values with your user token and API token, which you can find by authenticating. - Click Add. Repeat these steps for the environment you did not add. Test Your Postman Environment After you have set up an environment, you will want to test an endpoint. - Navigate to the auth directory and find the auth/me endpoint. - Navigate to the headers tab and use the authorization_token variable defined in your environment variables. - After entering your environment variables, hit Send. Postman returns all information involved with your user, including id, name, permissions, and other relevant information.
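For reference, here is a rough Python equivalent of the auth/me check described in the steps above (this is not from the Shipwell docs). The base URL and the Authorization header scheme are assumptions/placeholders; substitute the baseURL and authorization_token values from your own environment.

```python
# Hypothetical sketch of the auth/me test outside Postman. BASE_URL and the
# "Token" authorization scheme are assumptions; use the values and header
# format from your own Shipwell environment variables.
import requests

BASE_URL = "https://sandbox-api.shipwell.com"  # placeholder; use your baseURL value
AUTH_TOKEN = "<your authorization_token>"      # placeholder token

response = requests.get(
    f"{BASE_URL}/auth/me",
    headers={"Authorization": f"Token {AUTH_TOKEN}"},  # header scheme is an assumption
    timeout=30,
)
response.raise_for_status()
print(response.json())  # id, name, permissions, and other user details
```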
https://docs.shipwell.com/docs/postman-collections
2020-10-19T21:54:06
CC-MAIN-2020-45
1603107866404.1
[array(['https://files.readme.io/2068c5c-Screen_Shot_2020-03-13_at_12.25.58_PM.png', 'Screen Shot 2020-03-13 at 12.25.58 PM.png'], dtype=object) array(['https://files.readme.io/2068c5c-Screen_Shot_2020-03-13_at_12.25.58_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/e7340e5-Screen_Shot_2020-03-13_at_3.34.57_PM.png', 'Screen Shot 2020-03-13 at 3.34.57 PM.png'], dtype=object) array(['https://files.readme.io/e7340e5-Screen_Shot_2020-03-13_at_3.34.57_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/fde0153-Screen_Shot_2020-03-13_at_4.15.22_PM.png', 'Screen Shot 2020-03-13 at 4.15.22 PM.png'], dtype=object) array(['https://files.readme.io/fde0153-Screen_Shot_2020-03-13_at_4.15.22_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/aa4777c-Screen_Shot_2020-03-13_at_7.33.00_PM.png', 'Screen Shot 2020-03-13 at 7.33.00 PM.png'], dtype=object) array(['https://files.readme.io/aa4777c-Screen_Shot_2020-03-13_at_7.33.00_PM.png', 'Click to close...'], dtype=object) ]
docs.shipwell.com
Video display window. Must be ViEAndroidGLES20 or ViETextureView created by RtcEngine (CreateRendererView or CreateTextureView). The rendering mode of the video view: The unique channel name for the AgoraRTC session in the string format. The string length must be less than 64 bytes. Supported character scopes are: The video mirror mode: User ID.
https://docs.agora.io/en/Voice/API%20Reference/java/classio_1_1agora_1_1rtc_1_1video_1_1_video_canvas.html
2020-10-19T21:08:31
CC-MAIN-2020-45
1603107866404.1
[]
docs.agora.io
This topic explains how to use C++test to calculate metrics. Sections include: ... C++test's metrics analysis calculates various code metrics for your code. Calculating Metrics To calculate metrics: - Review the Built-in> Metrics Test Configuration. If you want to customize the metrics that are checked or their acceptable ranges, create a Test Configuration with your preferred metrics settings. - For general procedures related to configuring and sharing Test Configurations, see Configuring Test Configurations and Rules for Policies. - For details on customizing metrics settings, see Customizing Metrics Settings. - To see the description of a specific metric, right-click it and choose View Documentation from the shortcut menu. - Start the test using the preferred Test Configuration. - For details on testing from the GUI, see Testing from the GUI. - For details on testing from the command line, see Testing from the Command Line Interface. - Review and respond to the metrics calculations. - Project-level metrics will be reported in the UI and in reports. For more details, see Generating an XML Report with Metrics Data. - If you configured C++test to report tasks for out of range metrics (by enabling the Test Configuration's Static> Metrics tab Report tasks for metrics values out of acceptable ranges option), any out of range metrics will be reported in the Fix Static Analysis Violations category of the Quality Tasks view. - For details, see Reviewing and Responding to Metrics Measurements. ... C++test Server edition can generate an XML report with metrics summary information, as well as individual class and method detail data where applicable. From the GUI To configure XML metrics reporting from the GUI: - In the report preferences dialog, enable Add metric details to XML data. From the Command Line To configure XML metrics reporting from the command line: - Modify your local settings file to use the option report.metrics_details=true. - For details on local settings files, see Local Settings (Options) Files. ... This topic covers how to analyze C++test's metrics calculations. Sections include: - Accessing Metric Measurements - Identifying Metrics Out of the Acceptable Range - Reviewing an Extreme Element - Learning About the Available Metrics ... For tests run in the GUI, metric measurements are reported in the Metrics view. If this is not displayed, choose Parasoft> Show View> Metrics. ... In the Metrics view, you can use the project tree to determine what metrics are shown in the Metrics view (project, file, method, etc.). When applicable (e.g., at the file level), you can expand the tree to view children. Shortcut menu items vary based on the level of metric you are viewing. Customizing the Metrics View You can customize the Metrics view in terms of: - Items shown (all items, items out of recommended ranges, extreme items, items out of standard deviation). - Resources shown (any resource, any resource in the same project, selected resources only, selected resource and its children). - Columns displayed (sum, number of items, mean, standard deviation, maximum, extreme, extreme item name). To customize these options, click the Filters button in the top left of the Metrics view, then customize the options available in the dialog that opens. ... ...
If you configured C++test to report tasks for out of range metrics (by enabling the Test Configuration's Static> Metrics tab Report tasks for metrics values out of acceptable ranges option), out of range metrics will be reported in the Fix Static Analysis Violations category of the Quality Tasks view. Additionally, any metric that is outside of the recommended range will be marked in red in the Metrics view. For tests run from the command line interface, out of range metrics. ... For results in the Metrics view, you can focus on the class or method that is reported as an "extreme element" by right-clicking a metric name, then choosing Show extreme items from the shortcut menu. You can then open the related file by right-clicking the element and choosing Open File. ... To see the description of a specific metric, right-click it in the Test Configuration's Static> Metrics tab, then choose View Documentation from the shortcut menu. ... This topic explains how to control metrics calculation and reporting. Metrics settings can be set in the Test Configuration's Static> Metrics tab. You can customize options such as: - The types of metrics checked. - The acceptable range for metrics. - Whether metrics results are included in reports and uploaded to Team Server. - Whether tasks are reported when metrics are found to be outside of the acceptable range. For details on how to change these settings, see Metrics tab.
https://docs.parasoft.com/pages/diffpages.action?originalId=8309421&pageId=9055122
2020-10-19T21:24:27
CC-MAIN-2020-45
1603107866404.1
[]
docs.parasoft.com
Prerequisites: - You need to have the Dashboard Management User Right How to deactivate a Dashboard in your project - Use the menu in the top right corner and select Edit Project - In the Dashboards section, select which Dashboard you want to (de)activate - The Dashboard will (dis)appear in the side menu on the left How to delete a Dashboard - Navigate to the Dashboard you wish to delete - Select the Action menu in the top right corner and select Edit Dashboard - In the top right corner of the editor, select the hamburger menu - Select delete - Confirm
http://docs.calculus.group/knowledge-base/how-to-deactivate-or-delete-a-dashboard/
2020-10-19T20:37:47
CC-MAIN-2020-45
1603107866404.1
[]
docs.calculus.group
Kiite will only provide public information to your co-workers, such as your work email or work phone number, and will not respond with sensitive or private information. We take privacy and security very seriously at Kiite; you can read more on our website here. What if a co-worker asks Kiite my personal information? Written by Donna Litt Updated over a week ago
http://docs.kiite.ai/en/articles/2379094-what-if-a-co-worker-asks-kiite-my-personal-information
2020-10-19T20:45:36
CC-MAIN-2020-45
1603107866404.1
[array(['https://static.intercomassets.com/avatars/2698656/square_128/new-1563795633.png?1563795633', 'Donna Litt avatar'], dtype=object) ]
docs.kiite.ai
When you install FlexSim on a computer, you are installing the full software. However, the features that are available in FlexSim will depend on the type of license that is activated on that specific computer. For example, if you installed the free trial version of FlexSim, you installed the full version of FlexSim on your computer, but certain features are limited because you have not activated a license. To learn about the available license types and purchase licenses, you can contact a FlexSim Sales representative near you at. When you first install FlexSim, your computer will run FlexSim Express until you activate your license. FlexSim Express is the trial version of the software and is intended to be used by new users for evaluation purposes. The FlexSim Express license has the following limitations: The FlexSim Express license does have some advantages. Using the FlexSim Express version, you can: Before you can activate a license that you have purchased, you will need: In order to use FlexSim, at some point you will need to set up an account. You can use your FlexSim account to edit your account information, download software, view your license information, and access other FlexSim features such as the community forum. First, you will need your login information for your FlexSim Account. There are two possible ways to set up a FlexSim Account: If you have forgotten your FlexSim Account login information or need to set up your account for the first time, please visit. If you don't know your FlexSim Account login information, or have forgotten it, please contact FlexSim Customer Support. Using your FlexSim Account, you can obtain a FlexSim license code, which is called an Activation ID. An Activation ID is a product key that is used to activate your FlexSim license on a PC or LAN license server. Each Activation ID has a defined number of seats associated with it. The seat count determines the maximum number of computers (or instances of FlexSim for network licenses) that can be authorized to run the full version of FlexSim using that particular Activation ID at any given time. This will take you through the steps necessary to activate a standard FlexSim license through the FlexSim License Server. If your company is using a concurrent license server, see Setting Up a License Server. Make sure you have your FlexSim Account and your Activation ID before you start. (See Before Activating Your License for more information.) Once you have your FlexSim account information: If you need to transfer a standalone license from one computer to another, you will need to return the license on the old computer and activate it on a new computer. To return a license: Before attempting to connect the clients to a concurrent server, you must activate your licenses on the server and start a License Server Manager program on the server. To download the instructions, go to. If you have an active license and a valid maintenance agreement, you will be notified about any new FlexSim software updates as soon as they become available. Typically, you will see a pop-up box when you first open FlexSim that notifies you about the update and asks whether you want to update the software. If you click Yes, FlexSim will update the software. After updating the software, you will also need to upgrade the Activation ID associated with your license to the latest version of FlexSim. After installing the software update, FlexSim will automatically prompt you to upgrade your license. 
When you click Yes, FlexSim will open the License Activation dialog box to the Upgrade Licenses tab. Click the Request Upgrades button to upgrade your license. The status box below this button will notify you whether the upgrade was successful. Certain information about the licenses is stored in various places on your hard disk and in the registry. Tampering with these locations may break the license trust flags, which disables the license. For instance, some registry cleaners, operating system upgrades, computer hardware upgrades, or Windows restore points can break the trust flags. To repair a license that has been disabled, begin by attempting to re-activate your software license. Follow the same steps listed in Activating a License. If that doesn't work, you might need to submit a repair request to FlexSim Customer Support: When you activate a license, the status box might display one of the following common errors: When you return a license, the status box might display the following error:
https://docs.flexsim.com/en/19.2/Introduction/ActivatingManagingLicense/ActivatingManagingLicense.html
2020-10-19T20:43:01
CC-MAIN-2020-45
1603107866404.1
[]
docs.flexsim.com
How to assign a new layout to your categories? Category pages are dynamic pages created by WordPress. You can assign a global layout to all categories or to specific categories within the theme panel under the Archives Settings. Global Settings - Individual Settings - From there, you can assign a layout, activate a sidebar, and make many other adjustments. The same applies to Search Results, Tag Results & Author Posts.
https://docs.xplodedthemes.com/article/25-how-to-assign-a-new-layout-to-your-categories
2020-10-19T21:30:51
CC-MAIN-2020-45
1603107866404.1
[]
docs.xplodedthemes.com
The Evolution of Provisioning Standards¶ Most enterprise solutions adopt products and services from multiple cloud providers to accomplish various business requirements. This makes it insufficient to maintain user identities only in a corporate LDAP. Identity provisioning plays a key role in propagating user identities across different SaaS providers. The challenge that the SCIM (System for Cross-domain Identity Management) specification intends to address is how to propagate these user identities in an unconventional manner. SPML concepts¶ Service Provisioning Markup Language (SPML) is an XML-based framework developed by OASIS for exchanging user, resource and service provisioning information between cooperating organizations. The Service Provisioning Markup Language can lead to automation of user or system access and entitlement rights to electronic services across diverse IT infrastructures, so that customers are not locked into proprietary solutions. SCIM concepts¶ The System for Cross-domain Identity Management (SCIM) specification is designed to make managing user identities in cloud-based applications and services easier. Brief history of identity provisioning¶ The following diagram illustrates the progressive development that has taken place in the history of identity provisioning. The OASIS Technical Committee for Service Provisioning was formed in 2001 to define an XML-based framework for exchanging user, resource, and service provisioning information. As a result, SPML (Service Provisioning Markup Language) emerged in 2003 and was based on three proprietary provisioning standards of that time. IBM and Microsoft played a major role in building SPML 1.0. - Information Technology Markup Language (ITML) - Active Digital Profile (ADPr) - Extensible Resource Provisioning Management (XRPM) SPML 1.0 defined a Request/Response protocol as well as a couple of bindings. Requests/Responses are all based on XML and each operation has its own schema. One of the bindings defined in SPML 1.0 is the SOAP binding. It specifies how to transfer SPML requests and responses wrapped in a SOAP message. All the SPML operations supported by the provisioning entity should be declared in the WSDL file itself. The other is the file binding. This binding refers to using SPML elements in a file, typically for the purposes of bulk processing provisioning data and provisioning schema documentation. In the closing stages of SPML 1.0, IBM and Microsoft felt strongly that support for complex XML objects needed to be done differently. The OASIS TC voted to postpone this effort until 2.0. As a result, IBM unofficially stated that they wouldn't be implementing 1.0 and would wait on the conclusion of the 2.0 process. IBM and Microsoft, who were part of the initial SPML specification, went ahead and started building their own standard for provisioning via SOAP-based services. This is called WS-Provisioning. WS-Provisioning describes the APIs and schemas necessary to facilitate interoperability between provisioning systems in a consistent manner using Web services. It includes operations for adding, modifying, deleting, and querying provisioning data. It also specifies a notification interface for subscribing to provisioning events. Provisioning data is described using XML and other types of schema. This facilitates the translation of data between different provisioning systems.
WS-Provisioning is part of the Service Oriented Architecture and has been submitted to the Organization for the Advancement of Structured Information Standards (OASIS) Provisioning Service Technical Committee. OASIS PSTC took both SPML 1.0 and the WS-Provisioning specification as inputs and developed SPML 2.0 in 2006. SPML 1.0 has been called a slightly improved Directory Services Markup Language (DSML). SPML 2.0 defines an extensible protocol (through capabilities) with support for a DSML profile (SPMLv2 DSMLv2), as well as XML schema profiles. SPML 2.0 differentiates between the protocol and the data it carries. SPML 1.0 defined file bindings and SOAP bindings that assumed the SPML 1.0 Schema for DSML. The SPMLv2 DSMLv2 Profile provides a degree of backward compatibility with SPML 1.0. The DSMLv2 profile supports a schema model similar to that of SPML 1.0. The DSMLv2 Profile may be more convenient for applications that mainly access targets that are LDAP or X500 directory services. The XSD Profile may be more convenient for applications that mainly access targets that are Web services. The SPML 2.0 protocol enables better interoperability between vendors, especially for the core capabilities (those found in 1.0). You can "extend" SPML 1.0 using ExtendedRequest, but there is no guidance about what those requests can be. SPML 2.0 defines a set of "standard capabilities" that allow you to add support in well-defined ways. SPML definitely addressed the key objective of forming the OASIS PSTC in 2001. It solved the interoperability issues; however, it was too complex to implement. It was SOAP-biased and addressed more concerns in provisioning than were actually needed. It was around 2009 - 2010 that people started to talk about the death of SPML. In parallel to the criticisms against SPML, another standard known as SCIM (Simple Cloud Identity Management) started to emerge. This was around mid-2010, initiated by Salesforce, Ping Identity, Google and others. WSO2 joined the effort sometime in early 2011 and took part in all the interop events that have happened so far. The definition of SCIM was later changed to System for Cross-domain Identity Management, and it now supports only JSON and not XML. As a result of the increasing pressure on OASIS PSTC, they started working on a REST binding for SPML, known as RESTPML, around 2011. This is still based on XML and has not become active so far.
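To make the SCIM side of this history concrete, here is a small sketch (not from the article above) of what a SCIM 2.0 provisioning call looks like: creating a user with a plain HTTP request. The endpoint URL and credentials are placeholders (WSO2 Identity Server exposes SCIM 2.0 under /scim2/Users); adjust them for your own deployment.

```python
# Hypothetical sketch of a SCIM 2.0 user-creation request. The host, port and
# credentials below are placeholders; the schema URN and attribute names come
# from the SCIM 2.0 core schema.
import requests

SCIM_USERS_ENDPOINT = "https://localhost:9443/scim2/Users"  # placeholder host

payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "kim",
    "name": {"givenName": "Kim", "familyName": "Berry"},
    "emails": [{"value": "kim@example.com", "primary": True}],
}

response = requests.post(
    SCIM_USERS_ENDPOINT,
    json=payload,
    auth=("admin", "admin"),  # placeholder credentials
    verify=False,             # local demo certificate only; do not disable verification in production
)
print(response.status_code, response.json().get("id"))
```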
https://is.docs.wso2.com/en/5.9.0/learn/the-evolution-of-provisioning-standards/
2020-10-19T22:49:46
CC-MAIN-2020-45
1603107866404.1
[array(['../../assets/img/using-wso2-identity-server/history-of-idp.png', 'history-of-idp'], dtype=object) ]
is.docs.wso2.com
and 29.533, Stats. A commercial fisher shall carry each license with him or her at all times while engaged in any part of commercial fishing and shall exhibit the license to the department or its wardens on demand. Each commercial fishing licensee must be a resident of Wisconsin. Commercial fish helpers and crew members are not required to hold a license but a commercial fisher using helpers or crew members who do not hold a Wisconsin commercial fishing license shall submit to the department a list of all unlicensed helpers' or crew members' names and addresses along with the fisher's application for a license or before the helper or crew member begins to assist the licensee. Commercial fishers whose commercial fishing approvals or privileges have been suspended or revoked may not act as a helper or crew member for another licensee during the period of suspension or revocation. NR may be submitted along with a licensee's monthly catch report. NR 21.11(1)(a) (a) All game fish that are not commercial fish and all fish that are listed in ch. NR 27 as endangered or threatened species that are taken with any net or setline paragraph. NR 21.11(1)(b) (b) Each person required to hold a commercial fishing license shall be present at all times when any of his or her nets or setlines 21.11(1)(c) (c) Measurement of mesh size is by stretch measure. Such measurements apply to net meshes when in use and no allowance will be made for the shrinkage due to any cause. (29.041, 29.516, 29.522) NR 21.11(1)(cf) (cf) 21.11(1)(cm) (cm) No person may set, place, tend or operate any net or setline that is marked or tagged with the license number or metal tag of another person, except for crew members acting under the direction of the licensee while the licensee is present. NR 21.11(1)(d) (d) There are no bag limits for any commercial species other than are expressly provided in this chapter. (29.041) NR 21.11(1)(e) (e) Improperly placed or tagged commercial fishing gear is a public nuisance and will be seized and held by the department subject to order of the court. NR 21.11(1)(g) (g) No licensed commercial fisher or any member of his or her crew or any person with the commercial fisher or crew may possess any game fish that are not commercial fish while operating commercial gear on the ice or in the open waters of the state or when traveling to or from the operation of the gear. NR 21.11(1)(h) (h) No person may possess or control commercial fishing gear not authorized for use in the Wisconsin-Minnesota boundary waters by this chapter while on the ice or the open waters of the state or while engaged in a fishing operation involving those waters. NR 21.11(1)(i) (i) All fishers required to be licensed under the provisions of ss. 29.523 and 29.533, Stats., shall complete and submit monthly reports on forms available from the department. All reports shall be submitted to the department by the 10th of the month following each month the commercial fisher is required to be licensed. Each monthly report shall be signed by the commercial fisher. Each commercial fisher shall report all fish sold or kept whether these fish were legally or illegally taken or obtained, the buyer's name, address and phone number, and all other information requested on the report form. NR 21.11(1)(k) (k) Commercial fish taken by commercial gear pursuant to dealers license.
NR 21.11(1)(L) (L) No person may remove roe from a commercial fish while on the water, ice, bank or shore. Commercial fish shall remain intact until the fish reaches the final processing facility or place of business of the commercial fisher. NR 21.11(1)(p) (p) The department or its agents may require any operator of any commercial fishing gear to cease the fishing operations when the department finds these operations are destructive to game fish or they will endanger any other species of wild animal. (29.041) NR 21.11(1)(q) (q) The department by its agents, employees or wardens may in the absence of the licensee, at any time, raise any commercial fishing gear with as little damage as may be for inspection. (23.11, 29.516) NR 21.11(1)(r) (r) The use or operation of all commercial gear except those authorized in this chapter is prohibited in the Wisconsin-Minnesota boundary waters with the exception of such nets as may be authorized by the department. The specifications for the nets operated under contract with the department must be agreed upon between the states of Wisconsin and Minnesota. NR 21.11(2) (2) Gill nets; seasons, closed areas, gill net restrictions. In the Wisconsin-Minnesota boundary waters, no person may conduct commercial fishing operations with the use of gill nets, drive netting, deadset gill nets or drive set gill nets except in the manner prescribed as follows: NR 21.11(2)(a) (a) Gill nets may be used year round on the Mississippi river, except for the areas listed in pars. (b) and (c) . No person may engage in drive netting of fish from the closed areas listed in par. (b) . NR 21.11(2)(b) (b) No person may set or use gill nets in the following areas: NR 21.11(2)(b)1. 1. Within 900 feet below any U.S. corps of engineers lock or dam on the Mississippi river. NR 21.11(2)(b)2. 2. Goose lake lying in Pierce county. NR 21.11(2)(b)3. 3. All of upper pool 8. NR 21.11(2)(b)4. 4. Trempealeau lakes known as Second, Third and Round lakes lying in Trempealeau county. NR 21.11(2)(b)5. 5. In the Mississippi river within 300 feet of the mouth of any stream tributary to the Mississippi river. NR 21.11(2)(b)6. 6. In lower pool 8, Bluff Slough from 7th Street downstream to where Bluff Slough enters Running Slough. NR 21.11(2)(c) (c) No person may set or use the types of gill nets listed in this paragraph in the following areas, during the periods indicated: NR 21.11(2)(c)1. 1. Open water gill net sets are prohibited in Lake St. Croix from the highway 64 bridge at Houlton, downstream to its confluence with the Mississippi river at all times. NR 21.11(2)(c)2. 2. Dead set gill nets and drive set gill nets may not be used in pool 4, between river mile 780 and 797 from March 1 through May 31. NR 21.11(2)(c)3. 3. Dead set gill nets may not be used in all of lower pool 7 at any time. NR 21.11(2)(d) (d) Additional gill net restrictions. NR 21.11(2)(d)1. 1. Only gill nets with a mesh of 7 inches stretch measure or larger may be used. NR 21.11(2)(d)2. 2. Gill nets may not be used as a drift net and may not be used as or in place of a seine. NR 21.11(2)(d)3. 3. At each end of every gill net set in open water there shall be a buoy on each end of the gang. The buoys shall have a staff extending at least 3 feet above the surface of the water. Upon the upper end of the staff there shall be a flag at least 10 inches square. Upon the bowl of the buoys there shall be maintained in plain figures the license number of the licensee. 
On gill nets set through the ice there shall be maintained on each end of the gang a board or similar material which shall bear the license number authorizing the use of the net. NR 21.11(2)(d)4. 4. Each gill net set in open water shall be lifted and emptied of all fish at least once each day following the day set. Each gill net set under the ice shall be lifted and emptied of fish at least once every 2 days following the day set. NR 21.11(2)(d)5. 5. Gill nets may not be set, drawn, lifted or operated in any manner between one-half hour after sunset and one hour before sunrise without prior permission of the department. NR 21.11 Note Note: Contact a local department conservation warden or fish manager to request prior permission. NR 21.11(2)(d)6. 6. Gill nets may not be set in a manner that will shut-off more than one-half the width of any channel, bay or slough. NR 21.11(2)(d)7. 7. A gill net may not be set within 1,000 feet of any other commercial fisher's gill net or frame net. NR 21.11(2)(d)8. 8. Shovelnose (hackleback) sturgeon may not be taken in gill nets. NR 21.11(2)(d)9. 9. Game fish that are not commercial fish and other fish that are listed in ch. NR 27 as endangered or threatened species which are taken by gill nets shall be immediately returned, carefully and with as little injury as possible to the water from which they were taken. NR 21.11(2)(d)10. 10. No licensee may join his or her net to that of any other licensee. NR 21.11(3) (3) Seines; seasons, closed areas, seine restrictions. In the Wisconsin-Minnesota boundary waters, no person may conduct commercial fishing operations with the use of seines except in the manner prescribed as follows: NR 21.11(3)(a) (a) Seines of any size may be used year round in the waters of the Mississippi river, Lake St. Croix and the St. Croix river downstream from the U.S. highway 8 bridge located in St. Croix Falls, except for the closed areas listed in pars. (b) and (c) . No person may use drive netting techniques including the use of airboats, boats, plungers or other sound producing devices to drive fish from the closed ares listed in pars. (b) and (c) . NR 21.11(3)(b) (b) Permanently closed areas. NR 21.11(3)(b)1. 1. Within 900 feet below any U.S. corps of engineers lock and dam on the Mississippi river. NR 21.11(3)(b)2. 2. Goose lake lying in Pierce county. NR 21.11(3)(b)3. 3. Trempealeau lakes known as Second, Third and Round lakes lying in Trempealeau county. NR 21.11(3)(b)4. 4. Lagoon, which is located on Barron island within the corporate limits of the city of LaCrosse, as well as the connecting waterway south to state highway 16. NR 21.11(3)(c) (c) Seasonally closed areas. NR 21.11(3)(c)1. 1. Pool 4, between river mile 780 and 797, closed from March 1 through May 31. NR 21.11(3)(c)2. 2. In upper pool 8, the Black river from the Onalaska 9-foot spillway dam downstream to the Soo Line railroad bridge, closed from April 15 through October 30. NR 21.11(3)(c)3. 3. In lower pool 8, Bluff Slough from 7th Street downstream to where Bluff Slough enters Running Slough, closed from April 15 through October 30. NR 21.11(3)(d) (d) A commercial fisher and his or her crew members may not remove more than 100 pounds of catfish per seine haul from the Saturday nearest October 1 to April 30. NR 21.11(3)(e) (e) A commercial fisher and his or her crew members may not take more than 100 pounds of catfish per day in seines regardless of the number of seine hauls done in one day, from the Saturday nearest October 1 through April 30. 
NR 21.11(3)(f) (f) A commercial fisher and his or her crew members may only take 100 pounds of catfish from a seine haul even if the seine haul takes more than one day to complete. NR 21.11(3)(g) (g) Shovelnose (hackleback) sturgeon may not be taken in seines.
http://docs.legis.wisconsin.gov/code/admin_code/nr/001/21/III/10
2013-05-18T21:56:05
CC-MAIN-2013-20
1368696382892
[]
docs.legis.wisconsin.gov
JDOC:Local wiki templates Revision as of 21:20, 1 February 2013: Please note that the content on this page is currently incomplete. Please treat it as a work in progress. - This page was last edited by.
http://docs.joomla.org/index.php?title=JDOC:Local_wiki_templates&diff=80340&oldid=6801
2013-05-18T21:20:28
CC-MAIN-2013-20
1368696382892
[array(['/images/c/c8/Compat_icon_1_5.png', 'Only available in Joomla 1.5'], dtype=object) ]
docs.joomla.org
API17:JApplication::_createSession From Joomla! Documentation (Difference between revisions) Revision as of 20:08, 27 April 2011
http://docs.joomla.org/index.php?title=JApplication::_createSession/11.1&diff=prev&oldid=55845
2013-05-18T21:56:41
CC-MAIN-2013-20
1368696382892
[]
docs.joomla.org
Submissions from 2013 Cohousing and the Greater Community: Re-establishing Identity in Taunton’s Weir Village, Andrew Kremzier Layered Transparency: the Performance of Exposure, the Exposure of Performance, Colin Gadoury Nature and Architecture: a Holistic Response, Jarrod Martin Community Reclamation: the Hybrid Building, Laura Maynard New Urban Living: High-Rise Vertical Farming in a Mixed Use Building, Boston, MA, Zachary Silvia LAM: Laughing My Architecture Of, Elizabeth Straub Union Wadding Artist Complex: Pawtucket, Rhode Island, Jennifer Turcotte Submissions from 2011 Invention Center: a Building of Inventions, Jonathan T. Archbald Reusable Building Systems, Daniel Boyle Working with Nature, Carolyn Brown An Architecture Of Connection, Jessie Renee Davey-Mallo Graffiti Gallery, Mike Delvalle Waterfront Revitalization: Bridgeport Aquarium and Waterfront Promenade, Ryan Devenney Center for the Creation and Performance of the Arts, Dennis P. McGowan Closer: Designing a Manufacturing Facility for the Zuni Pueblo Solar Energy Reinvestment Initiative, Seth Van Nostrand Adaptive Reuse of the Big Box Store, Mark C. Roderick Re-conceptualizing Performance and Event in the Public Realm: a Multicultural Funeral Home, Ashley Rodrigues Community Wellness Center: Providence, Rhode Islan, Eva Marie Mercurio Reconnecting Schools and Neighborhoods: A proposal for School Centered Community Revitalization in Baltimore Maryland, Cody Miller A Model School in Massachusetts: Preschool, Kindergarten, First Grade, Robin Nichols Submissions from 2007 Envoking the emotions through the experience of space; integration of an outreach community center and the First United Methodist Church of Hightstown, Elizabeth Dicecco Reconnecting society: a home for elderly living, Cheryl Downie A puzzle piece epidemic, Nicole Gerard Taunton Weir renovation project, Jessica Lynn Harwood Living in transition, Dustin Lombardi The fittingness of fitness: the movement of architecture at a human scale: a reinvention of the typical workplace, Emily Parris The rising cemetery project, Gregory Ralph Finding leisure within choas: the Atlanta Highway Resort, David Strumski
http://docs.rwu.edu/archthese/
2013-05-18T21:07:16
CC-MAIN-2013-20
1368696382892
[]
docs.rwu.edu
editors guide < WPD:Projects(Redirected from WPD:Proposals/editors guide) Editor's Guide [DONE] Summary This is a proposed outline for consolidating dozens of pages under a guide to editing WPD. This will be a procedural guide, beginning with getting up and running on the site and communications channels, to reviewing content, to creating new content. All content currently available for assisting editors will be pulled into this guide, either by merging that content into this guide, or by reference. Comments welcome on this page or on the [email protected] email list. SEE FINAL Step 1: Register for a WebPlatform.org wiki account Create an account with your email address, user name, and password. -* Special:UserLogin&type* signup Make sure to verify your account. - The verification link is sent via email. If you don't see the message, check your spam folder. Log into the WebPlatform.org site. Step 2. Get ready to communicate with the online community Join IRC channel. - The WebPlatform.org online community uses IRC extensively. Log in and ask questions. - - Or download and install an IRC client that enables you to chat in the #webplatform channel. Here are some reliable IRC clients you can use: - mIRC (Windows) - Colloquy (Mac) - [COMMENT:] [PULL CONTENT FROM:] - In the IRC client, connect to the WebPlatform channel. - [COMMENT:] [PULL CONTENT FROM:] - [COMMENT:] Recommend merging the information of these two pages. - [COMMENT:] [PULL CONTENT FROM:] - Log in anytime you want to share ideas or ask questions - [COMMENT:] No change to the Meetings pages. - Log in during meetings. Here are the meeting calendars and the archives of meeting notes: Join the [email protected]. - We announce initiatives and work out proposals on the W3.org mailing list. Subscribe to the w3.org mailing list. - [email protected] - Ask questions or help others in the forum. - - [COMMENT:] REALLY wish this was called "Forums" not Q&A - which sounds like an FAQ list to me. Is it possible to set up a forum channel for contributors to ask questions about editing or authoring content to the wiki? (Rather than just general web questions). Access the bug-tracking system. - (external site) - [COMMENT:] Recommend merging the information of these two pages. - - [COMMENT:] [PULL CONTENT FROM:] - [COMMENT:] [PULL CONTENT FROM:] Step 3: Become familiar with the wiki If you can't wait, start contributing. - - Make sure to use the reference resources in Step 5, below, to ensure you're using the correct markup and styles. - [COMMENT:] The Getting Started page is the primary resource that contributors can use to edit the site. Make sure the Getting Started page links to the wiki syntax page. Make sure contributors understand Step 4, how to add comments and flag pages. Be sure to read the wiki philosophy to understand the mission. - Watch the project video to learn about the site's mission and goals. - (29 min.) - [COMMENT:] Rename Getting Started to Getting Started Contributing Content. Shorten the Getting Started page to include less overlap - more links to relevant pages. - Or watch the shorter version: - [COMMENT:] Is one video preferred over the other? Looking forward to additional videos specific to getting started. -* player_embedded&v* Ug6XAw6hzaw (2 min.) - Read the FAQ and site policies and look at the direction we're going in. - - - [COMMENT:] All three of these pages should be edited. The Task Roadmap page will soon be replaced by a project management system, like Bug Genie. 
- - [COMMENT:] Recommend updating the Policy page to begin with content in Pillars page, above the links. Explore the web development docs. Step 4. Review existing content Add comments to sections. - Add a comment by hovering your mouse over the relevant section heading and click Add Comment. Leaving comments on articles helps others see exactly what needs to be fixed. - [COMMENT:] Provide a screenshot? Flag issues: broken links, spelling, product bias, and more. - Click the Edit button in the top right corner of the article and choose Edit. Mark articles that need revisions by checking the corresponding checkboxes. - - - Notify other editors about pages that require revisions. - Communicate with the WebPlatform.org online community to ask for clarification and help resolving issues. You can raise issues on IRC, send messages to the [email protected] mail list, or post to forums. - [COMMENT:] This is a duplicate of Step 3, but I think it is OK to reiterate this. Step 5. Update existing content Become familiar with MediaWiki syntax conventions. - [COMMENT:] NEW Page/Section NEEDED that explains how we use the MediaWiki syntax, and lists all valide tags and macros for WPD pages. - [COMMENT:] [PULL OR POINT TO CONTENT FROM:] - [COMMENT:] Remove syntax and conventions from the Style Guide page. - [COMMENT:] [PULL OR POINT TO CONTENT FROM:] - [COMMENT:] Create a new page using above content: Getting started with MediaWiki syntax conventions. - [COMMENT:] [PULL CONTENT FROM: Use accepted wikitext syntax conventions] Read the guidelines and best practices for editing the text. - - [COMMENT:] Delete Manual of Style and Style Manual, after resolving all links to Style_Guide page. and. Move out sections: syntax, common terms, images, etc. Read gotchas. Conform content to one style when editing. - WebPlatform.org uses the Yahoo Style Guide: Find content that needs your review. - Review recently edited articles. - - [COMMENT:] Need to clean up older content on this page. Delete Things already done, Pages being worked on, and Completed pages, unless someone plans to keep updating these sections. - Visit the Most Wanted Tasks page to find articles to fix. - - [COMMENT:] Used for DocSprints, and to point contributors to a specific task, if they don't have one in mind. Step 6. Author or upload new content Refer to the Style_Guide to determine if your content is appropriate for the wiki. Let the team know that you are adding new content. - See Step 2 above. Before creating or moving pages, identify pages linked to existing docs. -* Special%3AWhatLinksHere&target* Template%3ACompat+Unknown&namespace Select where to create the new page. - - Check features for cross browser compatibility. - Here are some good cross-compatibility resources: - - [COMMENT:] [PULL CONTENT FROM:] - Visit the New Page center, choose a type (Tutorial) and click Create. - - To author reference docs, create reference articles. - To author tutorials and concept articles follow the guidelines. Step 7. Prepare and upload assets for articles Optimize files. - Optimize PNG image files and resize to a maximum width of 650px. Some popular optimizers are: - - [COMMENT:] This will require a new page. - Name image file names descriptively. - Right: chrome_prefs.png, drop_shadow.png, box_model_diagram.png - Wrong: image 04.png, screenshot.png, figure10.png, code.png Use the Upload File page to upload the images. Add the link to an uploaded image in the article draft. 
- Enter the syntax to link the uploaded image file in the article: [[File:File.jpg]]
http://docs.webplatform.org/wiki/WPD:Proposals/editors_guide
2013-05-18T21:36:48
CC-MAIN-2013-20
1368696382892
[]
docs.webplatform.org
This New Mexico Agreement to Sell Works of Art is intended for a party that is seeking to sell, arrange, or exhibit works of art, or for an artist seeking to have his or her works of art sold.
http://premium.docstoc.com/docs/107491547/New-Mexico-Agreement-to-Sell-Works-of-Art
2013-05-18T21:17:54
CC-MAIN-2013-20
1368696382892
[]
premium.docstoc.com
The audit.enabled property (which is set to true by default) provides a way to globally enable or disable the auditing framework. However, enabling this property alone does not necessarily result in the generation of audit data. To generate audit data that you can view in the Alfresco server, the individual audit application you are interested in (for example, audit.alfresco-access.enabled) must also be enabled for auditing to be fully enabled.
https://docs.alfresco.com/community/concepts/audit-enable.html
2019-07-16T04:57:36
CC-MAIN-2019-30
1563195524502.23
[]
docs.alfresco.com
Create a change request template You can create a template that can be used to create change requests with pre-defined supporting tasks. Templates simplify the process of submitting new records by populating fields automatically. Before you begin: The administrator must configure the form layout to add these fields: Next Related Template, Next Related Child Template, Link element. Role required: admin. About this task: There are two change request template configuration items. Change_request: This object does not have a link element, because it is at root level. Change_task: This task object is one level below root level, so it uses the parent table as a link element. Procedure Navigate to System Definition > Templates. Click New. Complete the form as described in Create a template using the Template form. Complete the remaining fields, as appropriate. Field Description Next Related Template A template at the same hierarchical level as the current template (sibling). Use this field on a child template to specify an extra child template under the same parent template. For example, you can use child templates to create multiple change tasks for a change request template and specify sibling child templates. This field is not supported on top-level templates. Next Related Child Template A template at the hierarchical level below the current template (child). You can assign a child template to a child template. Link element Specifies a link to a record created from a child template to the record created from the parent template. The template script chooses the first valid reference field that can link to the parent record when this field is left blank. Click Submit.
https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/change-management/task/create-a-change-request-template.html
2019-07-16T04:44:03
CC-MAIN-2019-30
1563195524502.23
[]
docs.servicenow.com
What is the AspectJ Standard Library project? The AspectJ Standard Library project (Ajlib) is a library of reusable aspects. Below is a class diagram showing their relationships: Aspect Components TODO Basic information TODO - fix links Source Repository CVS details + CVSWeb links Download - Download the latest distribution Mailing lists - For users and developers Issue tracking - Issue tracking Examples - How to implement a project which uses Ajlib
http://docs.codehaus.org/pages/diffpages.action?pageId=29456&originalId=228166126
2015-02-27T06:15:24
CC-MAIN-2015-11
1424936460576.24
[array(['/download/attachments/29456/ajlib.png?version=1&modificationDate=1160416826949&api=v2', None], dtype=object) ]
docs.codehaus.org
:title: Static Service
:description: dotCloud's Static Service is a simple web server that can be used to host static content.
:keywords: dotCloud documentation, dotCloud service, static, web server

Static
======

.. include:: ../dotcloud2note.inc

The Static service is a simple web server that can be used to host static content.

.. include:: service-boilerplate.inc

.. code-block:: yaml

   www:
     type: static
     approot: hellostatic

Our static content will be in the "hellostatic" directory::

   mkdir ramen-on-dotcloud/hellostatic

And we should create a little "index.html" file here:

.. code-block:: html
http://docs.dotcloud.com/0.9/_sources/services/static.txt
2015-02-27T05:59:46
CC-MAIN-2015-11
1424936460576.24
[]
docs.dotcloud.com
DefinitionConfigures your container in a specific directory Explanation The standalone configuration allows configuring your container so that it is setup to start in a directory you choose (see the configuration page for more general explanations). Whenever you configure or start a container which uses a standalone configuration, Cargo will: -: - By directly instantiating the configuration matching your container. For example: - By using the DefaultConfigurationFactorywhich automatically maps the right implementation for the container you're using. For example: Ant Task Maven2 Plugin Note that the standalone configuration is the default for the Maven 2 plugin so specifying only the following would also work:
http://docs.codehaus.org/display/CARGO/Standalone+Local+Configuration?showChildren=false
2015-02-27T06:21:26
CC-MAIN-2015-11
1424936460576.24
[]
docs.codehaus.org
You can instruct AWS Import/Export not to import some of the directories and files on your storage device. This is a convenient feature that allows you to keep a directory structure intact, but avoid uploading unwanted files and directories. Use the ignore option to specify directories, files, or file types on your storage device to ignore. Use standard Java regular expressions to specify naming patterns. For information about Java regular expressions, see the Java regular expressions documentation. The following examples show Java regular expressions that are commonly used in a manifest. The following example uses the ignore option with two Java regular expressions to exclude files whose names end with a tilde or with .swp.
ignore:
- .*~$
- .*\.swp$
The following ignore option excludes all the files on the storage device with the .psd extension.
ignore:
- \.psd$
- \.PSD$
The log report includes all ignored files, including the SIGNATURE file you added at the root of your storage device. The following ignore option specifies that the backup directory at the root of your storage device will be excluded from the import.
ignore:
- ^backup/
Important When specifying a path that includes a file separator, for example, images/myImages/sampleImage.jpg, make sure to use a forward slash, "/", and not a back slash. The following ignore option ignores all the content in the images/myImages directory.
ignore:
- ^images/myImages/
Many storage devices include recycle bins. You may not want to upload the recycle bin in the import process. To skip the recycle bin on Windows computers, specify the following ignore option. The regular expression in the first line applies to NTFS file systems formatted for Windows Vista and Windows 7. The regular expression in the second line applies to NTFS file systems on Windows 2000, Windows XP and Windows NT. And the regular expression in the third line applies to the FAT file system.
ignore:
- ^\$Recycle\.bin/
- ^RECYCLER/
- ^RECYCLED/
The Java regular expression in the following ignore statement prevents the lost+found directory from being uploaded.
ignore:
- ^lost\+found/
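The service matches these patterns using Java regular expressions, but for simple anchored patterns like the ones above the behavior is the same in most regex dialects. As a purely illustrative, hedged check (not part of the AWS tooling; the sample paths are made up), you could preview which device paths a set of ignore patterns would exclude using Python's re module with find-anywhere semantics:

import re

# Sample ignore patterns taken from the manifest examples above
patterns = [r'.*~$', r'.*\.swp$', r'^backup/', r'^images/myImages/']

# Hypothetical paths relative to the root of the storage device
paths = [
    'notes.txt~',             # excluded: name ends with a tilde
    'draft.swp',              # excluded: .swp suffix
    'backup/2014/dump.tar',   # excluded: under backup/ at the device root
    'images/myImages/a.jpg',  # excluded: under images/myImages/
    'images/other/b.jpg',     # imported: matches none of the patterns
]

for path in paths:
    ignored = any(re.search(pattern, path) for pattern in patterns)
    print(path, '->', 'ignored' if ignored else 'imported')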
http://docs.aws.amazon.com/AWSImportExport/latest/DG/IgnoreTips.html
2015-02-27T05:58:49
CC-MAIN-2015-11
1424936460576.24
[]
docs.aws.amazon.com
uang Prabang Province [53] Ministry of Agriculture [54] Ministry of Education [55] Ministry of Energy and Mines [56] National Authority of Science and Technology [58] Main government website [60] Ministry of Education [61] Ministère des Finances et du Budget [62] Bicentenary Celebrations [63] 200+ Government Websites [64] Ministry of Education [65] National Parks Singapore [66] University Grants Commission [68] City of Johannesburg [69] Ministry of Science and Technology [70] Mae Hong Son Province [71] National Science and Technology Development Agency [72] National Electronics and Computer Technology Center [73] The National Telecommunications Commission [74] The Secondary Education Service Area 20 [76] National Research Counil of Thailand [77] Department of Medical Sciences [78] Chiangmai Provincial Labour Protection and Welfare Office [79] Main government website [80] Ministry of Commerce [81] Main country website [82] President of Tunisia [83]] Governorship of Kocaeli - EU Projects Coordination Center [84] Governorship of Ordu [85] Serious Organised Crime Agency [88] UK Foreign Office Awards ceremony [89] UK Parliamentary Press Gallery [90] AFPIMS Resource Center [91] Interagency Resources Management Conference [92] NASA Laser Vegetation Imaging Sensor [93] NASA Virtual Magnetospheric Observatory [94] Standard Labor Data Collection and Distribution Application [95] US Commission on International Religious Freedom [96] NOAA Large Marine Ecosystems of the World [97] US Group on Earth Observations [98] US Department of Agriculture - Coastal Commission [99] US Bankruptcy Court District of Kansas [100] US Army Architecture [101] Commission on the Status of Women [103] Orange County Comptroller [107] City of Atlanta Geographic Information System (GIS) [109] Counsil of State Court Judges [111] Department of Labor [112] City and County of Honolulu's Department of Transportation Services (DTS) [113] Iowa Department of Education [114] Iowa Insurance Division - Flood Awareness [115] Iowa College Student Aid Commission [116] Burns Paiute Reservation [124] Hualapai Tribal Nation [125] Department of Health and Human Services [126] Division of Mental Health and Development Services [127] Oklahoma Department of Commerce [142] Borough of Greencastle [143] Texas Commission on the Arts [144] Department of Human Services News [146] Washington State Criminal Justice Training Commission [148] State of Wyoming Department of Agriculture [151]
https://docs.joomla.org/index.php?title=Government_Websites_Using_Joomla&oldid=38223
2015-02-27T06:49:53
CC-MAIN-2015-11
1424936460576.24
[]
docs.joomla.org
Multipart form data (usually written multipart/form-data) is an encoding format devised as a means of encoding an HTTP form for posting to a server. It is often used to encode files for upload to web servers. The format is formally defined in RFC-7578: Returning Values from Forms: multipart/form-data. An HTTP POST message can include an HTTP content-type header of type multipart/form-data to indicate that the body is encoded in this way, and the header must contain a directive to define the 'boundary' string used to separate the 'parts' of the body.
content-type: multipart/form-data; boundary="9871z7t355g08e925"
The boundary is any string of ASCII characters which is unlikely to appear in the data carried in the body. The quotes around the boundary string are only required if it contains special characters, such as colon (:). Each part of the body is introduced by the boundary separator. The sequence is: CRLF, -- (two hyphens), boundary string e.g.
--9871z7t355g08e925
Each part has one or more headers which indicate the encoding of that part and which follow immediately after the boundary separator. The content-disposition header is mandatory and must have a value of form-data; it may also include a directive to specify the name of the field on the form e.g.
--9871z7t355g08e925
content-disposition: form-data; name="user_name"
When the form field is for a file to upload, it's also common practice to include a filename directive, which suggests the name to use to store the file data on the receiving system.
--9871z7t355g08e925
content-disposition: form-data; name="profile_picture"; filename="my_passport_photo.jpg"
Another common (though optional) part header is content-type which is used to indicate the media-type of that part. e.g. for text content
--9871z7t355g08e925
content-disposition: form-data; name="first_name"
content-type: text/plain
e.g. for binary content
--9871z7t355g08e925
content-disposition: form-data; name="details"; filename="config01.txt"
content-type: application/octet-stream
Binary data carried within multipart/form-data formatted messages is not encoded in any way, it is the raw sequence of binary bytes; it is only the presence of the boundary separator which terminates the sequence. The end of a part is indicated by the occurrence of the next boundary separator. The final part of the body is terminated with a boundary separator which is immediately followed by two hyphens. The sequence is: CRLF, -- (two hyphens), boundary string, -- e.g.
--9871z7t355g08e925--
To implement Multipart Form Data support in Unifi we perform the following steps:
- Set up a scripted REST API which takes text and attachments and generates a response that looks like the multipart body that we wish to ultimately send
- We call that API (as a loopback REST call to the same instance) with parameters which define the text and attachments (and boundary string) that we want to work with
- The API writes the combination of text and attachment data as a stream into its response
- We save the response from this loopback API into a temporary attachment (which will contain the exact multipart body that we wanted)
- We take the temporary attachment and stream that to the ultimate destination (we will need to know the boundary string that was embedded in the multipart body and include that in the content-type header of the request that we are sending)
The logic to support this implementation is in UnifiMultipartHelper which contains helper methods for both the client code which wants to generate multipart data and for the REST API server side which generates the body as a response. If you need the UnifiMultipartHelper script, please request it via our contact page.
var mp = new UnifiMultipartHelper();
// We need a record to link the temporary attachment to.
// (Default is the sys_user record of the calling user)
// For Unifi this is likely to be the HTTPRequest record
//
// Let's hang it off a record we created earlier and stored in 'grHost'
mp.setHostRecord(grHost);
// Alternatively, we could call setHostDetails( table_name, sys_id )

// When adding an attachment we need to specify two or three things
// - the name of the form field that the target is expecting
// - the sys_id of the attachment to add
// - (optional) the file name; if not supplied it will be taken from the attachment
mp.addAttachment('file', '0e329292398101808125852108518');
// We could add more text or attachment parts here

// Generate the temporary attachment
mp.createBody();

// We can now use the generated attachment along with the boundary string
// to send the multipart body to the other system
var req = new sn_ws.RESTMessageV2();
req.setHttpMethod('POST');
// getContentType returns the content-type string including the boundary
// e.g. multipart/form-data; boundary="xxxxxxxxxxxx"
req.setRequestHeader('Content-Type', mp.getContentType());
// getBodyId returns the sys_id of the multipart attachment
req.setRequestBodyFromAttachment(mp.getBodyId());
req.setEndpoint( /* the url */ );
req.setAuthenticationProfile('basic', /* the auth profile */);
var resp = req.execute();
// Once we have sent the body we can delete the temporary attachment
mp.deleteBody();
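To make the raw body layout described above concrete, here is a short, purely illustrative Python sketch (independent of Unifi and ServiceNow; the field names, file name, field values, and boundary string are made up for the example) that assembles a multipart/form-data body by hand:

# Illustrative only: assemble a multipart/form-data body string by hand
boundary = '9871z7t355g08e925'
CRLF = '\r\n'

lines = []
# A plain text field
lines.append('--' + boundary)
lines.append('content-disposition: form-data; name="user_name"')
lines.append('')                        # blank line separates part headers from part data
lines.append('Jane Example')
# A (pretend) file part
lines.append('--' + boundary)
lines.append('content-disposition: form-data; name="file"; filename="config01.txt"')
lines.append('content-type: application/octet-stream')
lines.append('')
lines.append('raw file bytes would go here')
# Closing boundary: two hyphens, boundary string, two more hyphens
lines.append('--' + boundary + '--')

body = CRLF.join(lines) + CRLF
content_type = 'multipart/form-data; boundary="%s"' % boundary
print(content_type)
print(body)

In a real request the body would be sent as bytes with the content-type header (including the boundary) attached, which is exactly what the UnifiMultipartHelper flow above achieves on the ServiceNow side.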
https://docs.sharelogic.com/unifi/attachments/multipart-form-data
2021-07-24T01:47:28
CC-MAIN-2021-31
1627046150067.87
[]
docs.sharelogic.com
Llyong Topa Country: - Taiwan - History - Colonization - Indigenous people Introduction In the Dabao River basin, where the Llyong Topa indigenous community used to be, stands a "ghost temple" where hundreds of Atayal people were buried. They fell victim to the Japanese government's first wave of indigenous genocide during its colonial rule of Taiwan. The massacre occurred much earlier than the Wushe Incident (1930), yet it is still rarely known to the world. The filmmaker visited the mountains over a hundred times, seeking clues to fill in the blanks of history. Director Statement The Atayal term “Llyong Topa” stands for “the Dabao river commonwealth” in Chinese. A hundred years ago, by the Dabao river in Sanxia District, New Taipei City, there lived an Atayal tribe named Dabao. In the years between 1903 and 1907, Japan, via its power as an empire, invaded Llyong Topa by strategic advancement of the “defense frontier”. Historical records marked the Dabao tribe as “wiped out”. Awards 2021 Taiwan International Documentary Festival - Taiwan Competition Team - Director - Executive Production
https://docs.tfi.org.tw/en/film/6644
2021-07-24T00:35:05
CC-MAIN-2021-31
1627046150067.87
[array(['https://docs.tfi.org.tw/sites/default/files/styles/film_banner/public/image/film/%E6%8B%89%E6%B5%81%E6%96%97%E9%9C%B803.png?itok=TCAk7x02', None], dtype=object) array(['https://docs.tfi.org.tw/sites/default/files/styles/film_poster/public/film/img2/%E6%8B%89%E6%B5%81%E6%96%97%E9%9C%B801.png?itok=l8Ovbsqs', None], dtype=object) ]
docs.tfi.org.tw
Viewing and exporting exceptions You can view the list of exceptions in the system and details of an exception on the Exception Management page. You can use various filtering options to limit the data to be displayed on the page. In addition, you can export the exceptions in comma-separated value (CSV) format. By default, this page displays the exceptions created by the currently logged-in user as well as the exceptions created by other users that are applicable to the currently logged-in user. When you upgrade to version 3.0.01 or later of TrueSight Vulnerability Management, the vulnerabilities that are excluded in previous versions are converted to exceptions in version 3.0.01 or later. Converted exceptions appear with the following name, upgraded_<vulnerabilityId>. If an exception with the same name exists in the version 3.0.01 or later, the converted exception appears with the following name, upgraded_<vulnerabilityId>_1, if only one security group has excluded that vulnerability. If another security group has also excluded that vulnerability, the exception appears with the following name, upgraded_<vulnerabilityId>_2. The count in the exception name suffix (_<n>) increases by 1 as the number of security groups increase which have excluded that vulnerability. Each vulnerability is converted into one exception and that exception is applied to all the assets in TrueSight Vulnerability Management. A converted exception has the following other attributes: - Start Date appears as the date on which you upgrade to version 3.0.01 or later. - End date appears as 01/01/9999. - Justification appears as Upgraded exception This topic contains the following sections: To view the list of exceptions - Select TrueSight Vulnerability Management > Exception Management. The Exception Management page appears. (optional) Perform the following actions on the exceptions list: (optional) Perform the following actions on an exception: Filtering exceptions Filters let you limit the data displayed on this page using different criteria, as described in the following sections. By default, all the exceptions in the system are status You can filter the exceptions displayed on the Exception Management page with the Status filter at the top of the page. You can select one of the following statuses: - ACTIVE - ENABLED - DISABLED - EXPIRED Filtering by CVE IDs You can filter the exceptions displayed on the Exception Management page with the CVE filter at the top of the page. This filter is populated only if the exceptions are defined in the system for CVE IDs. You can select multiple CVE IDs. Filtering by tags You can filter the exceptions displayed on the Exception Management page with the Tags filter at the top of the page. The Tags filter is populated only if you have tags defined in the system. You can select multiple tags. Filtering by assets You can filter the exceptions displayed on the Exception Management page with the Assets filter at the top of the page. This filter is populated only if the exceptions are defined in the system for assets. You can select one of the following values: - With All Assets: When you choose this option, the exceptions for which you had selected the All option under Assets are shown. - With Selected Asset Names: When you choose this option, the exceptions for which you had selected the Selected option under Assets are shown. When you choose the With Selected Asset Names option, another filter appears where you can further select assets for which you want to see the exceptions. 
Exception status and operations The operations that you can perform on an exception depend on the status of the exception, as shown in the following table: Exporting exceptions You can export the exceptions in your system to a CSV formatted file. Exported data is stored in a ZIP file. You can also limit the data to be displayed on the pages by using various filters and then export the limited data. After exporting, you can open the file in a spreadsheet and then manipulate the data in any way you want. To export the contents of the dashboard Click Export, at top right. Using your browser, you can open the file or save it locally. Data in the Start Date and End Date columns is exported according to the browser timezone.
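Purely as an illustration of working with the exported data outside the product (the export file name and the column name used here are assumptions, not documented values), the exported ZIP/CSV can be loaded for further analysis with pandas:

import pandas as pd

# Assumes the export is a ZIP archive containing a single CSV file
df = pd.read_csv('exceptions_export.zip')

# Example: count exceptions per status, assuming the export has a 'Status' column
print(df['Status'].value_counts())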
https://docs.bmc.com/docs/vulnerabilitymanagement/31/viewing-and-exporting-exceptions-846042593.html
2021-07-24T00:19:24
CC-MAIN-2021-31
1627046150067.87
[]
docs.bmc.com
AddContact The AddContact method creates a new contact. It creates a new row in the data set containing the information displayed in the Edit Contact Information GUI panel. It then propagates the data set changes to Caché. The event handler for Create button clicks invokes AddContact. The event handler is already coded for you. Here are some more details about the AddContact functionality: It creates a new row for the Contacts table in the data set. It assigns the values from the GUI elements to the new row's fields. It adds the new row to the data set. It invokes Update on conAdapter. This is the CacheDataAdapter object that connects the data set to the Provider.Contact table. Update propagates the data set changes to Caché. It also reloads the Caché data into the data set. It retrieves the value of the ID field from the new row and displays it on the GUI. After Update is invoked, this value is available in the new row. It invokes DisplayTreeView to redisplay the data in the GUI's tree. Add the method body to the AddContact stub in PhoneForm.cs.
private void AddContact()
{
    DataRow newRow = ds.Tables["Contacts"].NewRow();
    newRow["Name"] = txtConName.Text;
    newRow["ContactType"] = comboBox1.SelectedItem.ToString();
    ds.Tables["Contacts"].Rows.Add(newRow);
    conAdapter.Update(ds, "Contacts");
    txtConId.Text = newRow["ID"].ToString();
    DisplayTreeView();
}
https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=TCMP_AddingAContactRel
2021-07-24T02:23:54
CC-MAIN-2021-31
1627046150067.87
[]
docs.intersystems.com
You need to be aware of certain SMB server and volume requirements when creating SQL Server over SMB configurations for nondisruptive operations. This is enabled by default. The application servers use the machine account when creating an SMB connection. Because all SMB access requires that the Windows user successfully map to a UNIX user account or to the default UNIX user account, ONTAP must be able to map the application server's machine account to the default UNIX user account. Additionally, SQL Server uses a domain user as the SQL Server service account. The service account must also map to the default UNIX user. If you want to use automatic node referrals for access to data other than SQL server database files, you must create a separate SVM for that data. This privilege is assigned to the SMB server local BUILTIN\Administrators group. To provide NDOs for application servers using continuously available SMB connections, the volume containing the share must be an NTFS volume. Moreover, it must always have been an NTFS volume. You cannot change a mixed security-style volume or UNIX security-style volume to an NTFS security-style volume and directly use it for NDOs over SMB shares. If you change a mixed security-style volume to an NTFS security style volume and intend to use it for NDOs over SMB shares, you must manually place an ACL at the top of the volume and propagate that ACL to all contained files and folders. Otherwise, virtual machine migrations or database file exports and imports where files are moved to another volume can fail if either the source or the destination volumes were initially created as mixed or UNIX security-style volumes and later changed to NTFS security style. Although the volume containing the database files can contain junctions, SQL Server does not cross junctions when creating the database directory structure. The volume on which the SQL Server database files reside must be large enough to hold the database directory structure and all contained files residing within the share.
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cifs-hypv-sql/GUID-D11D79DF-8018-40F1-A43E-A673BC7C5891.html?lang=en
2021-07-24T00:44:31
CC-MAIN-2021-31
1627046150067.87
[]
docs.netapp.com
Problem A user opens up Insert ServiceNow Record, Update ServiceNow Record, Get ServiceNow Record, or Insert ServiceNow Record and while they can choose an object, the property list is blank. Solution Below is the list of API calls the activity pack makes: Design Time vs. Run Time The first two permissions are required only for RPA developers working with the design-time experience. The accounts used by robots to run the process do not need them. Detailed Explanation As an example, /api/now/table/incident?sysparm_action=getRecords&sysparm_limit=1 is the API we call for the incident table. You will want to check with your ServiceNow system administrator that you have the correct permission/access to retrieve data from the table. Your ServiceNow administrator should also be able to check the underlying/more granular permissions; these may be different but the overall idea is the same. As for configuring roles, this can vary; for example, ServiceNow administrators may opt to adjust the default role settings. For example, if we want to read metadata on the Incident table, we'll need access to both the sys_db_object and the Incident table. Each table contains Access Controls and each access control contains Roles. These roles need to be configured for the user to read from both tables.
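As a quick way to confirm table access outside of UiPath (purely a troubleshooting aid; the instance URL and credentials below are placeholders, and the endpoint and query parameters are the ones quoted above), you can call the same Table API directly and inspect the HTTP status code:

import requests

# Placeholders: replace with your instance and the account used by the developer or robot
instance = 'https://your-instance.service-now.com'
user, password = 'integration.user', '********'

resp = requests.get(
    f'{instance}/api/now/table/incident',
    params={'sysparm_action': 'getRecords', 'sysparm_limit': 1},
    auth=(user, password),
    headers={'Accept': 'application/json'},
)
# 200 means the account can read the table; 401/403 usually points to missing roles or ACLs
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)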
https://docs.uipath.com/activities/lang-zh_CN/docs/servicenow-troubleshooting-permissions-issues
2021-07-24T01:32:20
CC-MAIN-2021-31
1627046150067.87
[]
docs.uipath.com
Configuring TrueSight Automation Console for high availability You can configure TrueSight Automation Console in the high availability (HA) cluster. HA ensures that your TrueSight Automation Console is always up and running on at least one node. Whenever a TrueSight Automation Console node fails, another node in the HA setup takes over and starts managing the TrueSight Automation Console tasks. Prerequisites To configure an HA setup for TrueSight Automation Console, make sure that the following prerequisites are met: - The following software components are provisioned externally: - An HA proxy load balancer to switch operations between the TrueSight Automation Console nodes. To configure the HA proxy, see Configuring application clusters. An external Redis Server (5.05 or later version) for user session management and distributed caching Important Make sure to use a non-TLS and non-cluster Redis Server. - An external PostgreSQL database. For the supported versions, see System requirements. You can also use the containerized PostgreSQL database provided by BMC. - TrueSight Automation Console is installed on all the nodes that you want to use for HA. For instructions, see Installing. Configuration steps Do the following: - Log in to one of the TrueSight Automation Console nodes used for HA. - Navigate to the following path where the configure_ha.sh script is available: <installation_dir>/utils - Run the script: ./configure_ha.sh - Provide the following inputs: - Installation directory of the TrueSight Automation Console Application Server - Host name (FQDN or IP address) of the external Redis Server - Port of the external Redis Server - Host name or IP address of the load balancer - Enter y to confirm the inputs or enter r to provide the inputs again. After confirmation, the script saves these configuration updates and restarts the Application Server. - Repeat these steps on the other TrueSight Automation Console nodes that are used for HA.
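Before running configure_ha.sh it can help to confirm that the external Redis server is reachable and accepts plain (non-TLS) connections, as required above. This is a hedged pre-flight sketch using the redis-py client, not part of the product; the host, port, and timeout are placeholders:

import redis

# Placeholders for your external Redis server (must be non-TLS and non-cluster)
r = redis.Redis(host='redis.example.com', port=6379, ssl=False, socket_timeout=5)
print(r.ping())  # True means the server is reachable and responding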
https://docs.bmc.com/docs/TruesightAutomationConsole/configuring-truesight-automation-console-for-high-availability-1007084211.html
2021-07-24T02:22:00
CC-MAIN-2021-31
1627046150067.87
[]
docs.bmc.com
Technical Debt At Unleash we care deeply about code quality. Technical debt creeps up over time and slowly builds to the point where it really starts to hurt. At that point it's too late. Feature toggles that have outlived their feature and are not cleaned up represent technical debt that should be cleaned up and removed from your code. In order to assist with removing unused feature toggles, Unleash provides a technical debt dashboard in the management-ui. You can find it by opening up the sidebar in the management ui and clicking on the reporting menu item. The dashboard includes a health report card, and a list of toggles that can be filtered by different parameters. #Report card The report card includes some statistics of your application. It lists the overall number of your active toggles, the overall number of stale toggles, and lastly, the toggles that Unleash believes should be stale. This calculation is performed on the basis of toggle types: - Release - Used to enable trunk-based development for teams practicing Continuous Delivery. Expected lifetime 40 days - Experiment - Used to perform multivariate or A/B testing. Expected lifetime 40 days - Operational - Used to control operational aspects of the system's behavior. Expected lifetime 7 days - Kill switch - Used to gracefully degrade system functionality. (permanent) - Permission - Used to change the features or product experience that certain users receive. (permanent) If your toggle exceeds the expected lifetime of its toggle type it will be marked as potentially stale (a small sketch of this rule is given at the end of this page). Your overall health rating is calculated based on the total number of toggles and how many stale and potentially stale toggles you have in your project. One thing to note is that the report card and corresponding list show stats related to the currently selected project. If you have more than one project, you will be provided with a project selector in order to swap between the projects. #Toggle list The toggle list gives an overview of all of your toggles and their status. In this list you can sort the toggles by their name, last seen, created, expired, status and report. This will allow you to quickly get an overview of which toggles may be worth deprecating and removing from the code.
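The "potentially stale" rule described above can be sketched as follows. This is only an illustration of the rule as stated on this page (a toggle older than the expected lifetime for its type), not Unleash's actual implementation, and the example toggle data is made up:

from datetime import datetime, timedelta

# Expected lifetimes per toggle type, as listed above (None = permanent)
EXPECTED_LIFETIME_DAYS = {
    'release': 40,
    'experiment': 40,
    'operational': 7,
    'kill-switch': None,
    'permission': None,
}

def potentially_stale(toggle_type, created_at, now=None):
    """Return True if the toggle has outlived the expected lifetime for its type."""
    now = now or datetime.utcnow()
    lifetime = EXPECTED_LIFETIME_DAYS.get(toggle_type)
    if lifetime is None:   # permanent toggle types are never flagged
        return False
    return now - created_at > timedelta(days=lifetime)

# Made-up example: a release toggle created on 1 January 2021
print(potentially_stale('release', datetime(2021, 1, 1)))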
https://docs.getunleash.io/user_guide/technical_debt/
2021-07-24T02:29:20
CC-MAIN-2021-31
1627046150067.87
[array(['/assets/images/reporting-8804e152f1110dc2d6cfc1a61f62fb61.png', 'Technical debt'], dtype=object) array(['/assets/images/reportcard-3462057c3adc9253f295419a19a466fb.png', 'Report card'], dtype=object) array(['/assets/images/togglelist-9fe2bfab60662e6706e9327fbcad1664.png', 'Toggle list'], dtype=object) ]
docs.getunleash.io
Use Traps Agent for Windows Use the Traps console to view the agent status, initiate a connection to the server, view and send logs, view security events that occurred on the endpoint, and change the display language of the Traps console. Traps™ agent 6.1 installs in the C:\Program Files (x86)\Palo Alto Networks\Traps folder or the C:\Program Files\Palo Alto Networks\Traps folder. - Initiate a connection to the server from the home page of the Traps console. If the agent successfully establishes a connection with the server, the Connection status changes to Connected. - View and send logs. - View logs—Open Log File to view logs generated by the Traps agent. The logs display in your default text editor in chronological order with the most recent logs at the bottom. - Send logs—Send Support File to collect Traps logs and send them to Traps management service. The logs help you to analyze any recent security events and Traps issues that you encounter. - View recent security events that occurred on your endpoint. - Click Advanced, if necessary, to display additional actions that you can perform from the Traps console. - Click Events. For each event, the Traps console displays the local Time that an event occurred, the name of the Process that exhibited malicious behavior, the Module that triggered the event, and the mode specified for that type of event (Termination or Notification). - Change the display language for the Traps console. The Traps console is localized in the following languages: English, German, French, Spanish, Chinese (traditional and simplified), and Japanese. - Click Advanced, if necessary, to display additional actions that you can perform from the Traps console. - Click Settings. - Select the display language for Traps (default is English). - Configure proxy communication. This topic describes how to use Traps with both user and system proxy configurations. You can also configure an application-specific proxy for Traps in Traps 6.1.2 and later releases using the netsh command. - <protocol> is either http (unsecure) or https (secure) depending on which protocol you use for proxy communication. - <proxyserver> is the IP address or FQDN for your proxy server. - <port> is the port number used for communication with the proxy server.
https://docs.paloaltonetworks.com/cortex/cortex-xdr/6-1/cortex-xdr-agent-admin/traps-agent-for-windows/use-the-traps-agent-for-windows.html
2021-07-24T02:50:09
CC-MAIN-2021-31
1627046150067.87
[]
docs.paloaltonetworks.com
Chen Uen Country: - Taiwan - Manga - Arts - Animation Introduction Chen Uen is the most celebrated artist in the field of comic art in Asia. His work presents a unique aesthetic style that blends the art form of Chinese ink painting with Western painting techniques. His agility and skill in using various materials have made him unparalleled, and his style is regarded as the “Chen Uen aesthetic.” This film focuses on Chen Uen's art career, depicting Chen's aesthetics of life by presenting his works and interviewing people related to him. Centered on Chen Uen and his artworks, this film gives a brief overview of the comic industries in greater China and Japan, trying to explore and carry forth Chen's legacy. Director Statement In March of 2017, Chen Uen passed away at his drawing desk. He had created a fantastic world of comics with his pen, and he brought the aesthetics of comic art to a new level. Starting from Taiwan in the 80s, he was the first foreign artist to win a comic award in Japan, the kingdom of manga. After 2000, he set foot in Hong Kong and then established a unique aesthetic of online game art in China in the following decade. Chen Uen was an artist who wandered in the wild of visual art across time and space. With Chen Uen gone, making this film is like doing a big jigsaw puzzle. From the memories of over 50 interviewees, from his works, from footage and pictures, we try to find the ‘truth,’ while gradually seeing his loneliness, his pride, his courage, his frustration, and his contradiction, which are all true faces of the artist. So, what's important after all? In the end, we realized we are making inquiries into life. It's a film of life. Awards 2020 Golden Horse Awards - Best Documentary Nomination Team - Director - Film Producer
https://docs.tfi.org.tw/en/film/6646
2021-07-24T00:22:28
CC-MAIN-2021-31
1627046150067.87
[array(['https://docs.tfi.org.tw/sites/default/files/styles/film_banner/public/image/film/%E5%8A%87%E7%85%A71%20%282%29.jpg?itok=WV96PI_5', None], dtype=object) array(['https://docs.tfi.org.tw/sites/default/files/styles/film_poster/public/film/img2/%E5%8D%83%E5%B9%B4%E4%B8%80%E5%95%8F_%E6%B5%B7%E5%A0%B1%20%E7%84%A1%E4%B8%8A%E6%98%A0%E6%97%A5%E6%9C%9F_S.jpg?itok=0h30sthI', None], dtype=object) ]
docs.tfi.org.tw
Contents: - Input: networkdaysintl(StartDate, EndDate) Output: Returns the number of working days between StartDate and EndDate. Syntax and Arguments: networkdaysintl(date1, date2[, str_workingdays][, array_holiday]) Usage Notes: array_holiday - An array containing the list of holidays, which are factored into the calculation of working days. Values in the array must be in either of the following formats: ['2020-12-24','2020-12-25'] ['2020/12/24','2020/12/25'] Tip: For additional examples, see Common Tasks. Examples Example - Date diffing functions.
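For intuition only, the same "working days between two dates, with a custom working week and a holiday list" calculation can be reproduced with NumPy's busday_count. This is not Trifacta's implementation, and the dates and holiday list below are made up:

import numpy as np

# Working days between two dates, excluding weekends and the listed holidays
start, end = np.datetime64('2020-12-01'), np.datetime64('2021-01-04')
holidays = ['2020-12-24', '2020-12-25']

# weekmask '1111100' means Monday through Friday are working days
print(np.busday_count(start, end, weekmask='1111100', holidays=holidays))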
https://docs.trifacta.com/display/r076/NETWORKDAYSINTL%20Function
2021-07-24T01:19:07
CC-MAIN-2021-31
1627046150067.87
[]
docs.trifacta.com
UiPath.Box.Activities.File.UnlockFile The Unlock File activity uses the Box UpdateFile API to unlock a locked file (File Id). After unlocking, the Unlock field reports the current file lock status as a Boolean value. This field supports only Boolean variables. The Boolean values are: - True - file is locked. - False - file is unlocked.
https://docs.uipath.com/activities/lang-zh_CN/docs/box-unlock-file
2021-07-24T00:55:29
CC-MAIN-2021-31
1627046150067.87
[array(['https://files.readme.io/52ea7c7-UnlockFile_MSC.png', 'UnlockFile_MSC.png'], dtype=object) array(['https://files.readme.io/52ea7c7-UnlockFile_MSC.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
Pre-requisites¶ The server will minimally need to have Java 8 or greater, Grails, git, ant, and a servlet container e.g. tomcat7+, jetty, or resin. An external database such as PostgreSQL (9 or 10 preferred) is generally used for production, but instructions for MySQL or the H2 Java database (which may also be run embedded) are also provided. To build the system natively, JDK8 is required (typically OpenJDK8). To run the war, Java 8 or greater should be fine. Important note: The default memory for Tomcat and Jetty is insufficient to run Apollo (and most other web apps). You should increase the memory according to these instructions. Other possible build settings for JBrowse: Ubuntu / Debian sudo apt-get install zlib1g zlib1g-dev libexpat1-dev libpng-dev libgd2-noxpm-dev build-essential git python-software-properties python make RedHat / CentOS sudo yum install zlib zlib-dev expat-dev libpng-dev libgd2-noxpm-dev build-essential git python-software-properties python make It is recommended to use the default version of JBrowse or better (though it does not work with JBrowse 2 yet). There are additional requirements if doing development with Apollo. Install node and yarn¶ Node versions 6-12 have been tested and work. Using nvm and `nvm install 8` is recommended. npm install -g yarn Install jdk¶ These are build settings for Apollo specifically. Recent versions of tomcat7 will work, though tomcat 8 and 9 should also work. Set JAVA_HOME in your shell startup file (.profile / .zshrc, etc.): export JAVA_HOME=`/usr/libexec/java_home -v 1.8` If you need to have multiple versions of java (note #2222), you will need to specify the version for tomcat. In tomcat8 on Ubuntu you'll need to set JAVA_HOME explicitly in the /etc/default/tomcat8 file: JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 Download Apollo from the latest release under source-code and unzip. Test the installation by running ./apollo run-local and see that the web-server starts up. To set up for production, continue on to the configuration below after install. Alternatively, you can also run via docker. Furthermore, the apollo-config.groovy has different groovy environments for test, development, and production modes. The environment will be selected automatically depending on how it is run, e.g.: apollo deploy.
https://genomearchitect.readthedocs.io/en/latest/Setup.html
2021-07-24T01:06:33
CC-MAIN-2021-31
1627046150067.87
[]
genomearchitect.readthedocs.io
Routing Table Warning: Make sure to read and understand how to Create Luos containers before reading this page. The routing table is a feature of Luos allowing every container (a software element run by Luos that can communicate with other containers; it can be a driver or an app, and was initially called a module) to own a "map" (or topology) of the entire network of your device. This map allows containers to know their physical position and to search and interact with other containers easily. This feature is particularly used by app containers to find the other containers they need to interact with. The routing table is shared with the other containers by the container that launches the detection, but only app containers store the routing table internally. Detection The routing table is automatically generated when a network detection is initiated by a container. It is then shared with other containers at the end of the detection. A detection can be initiated by any container, but driver containers should not run it; this kind of feature should only be used by app containers, by including routingTable.h and using this routing table API. To run a detection, type: RoutingTB_DetectContainers(app); where app is the container_t pointer running the detection. A non-detected container (not in the routing table) has a specific ID of 0. At the beginning of the detection, Luos erases each container's ID in the network, so all of them will have the ID 0 during this operation. You can use this in your container's code to react to a detection if you need to (for example, a container can monitor its ID to detect if a detection has been made and if it has to reconfigure its auto-update). Then the container running the detection will have the ID 1 and the other containers will have an ID between 2 and 4096, depending on their position from the detector container. The IDs are attributed to the containers according to their position from the detector container and to the branch they are in. The ID attribution begins with the PTPA port, then PTPB, etc. When each container in the network has an attributed ID, the detection algorithm proceeds to the creation of the routing table and shares it with every container (it is saved only once per node). Sometimes, multiple containers in the network can have the same alias, which is not allowed, to prevent container confusion. In this case, the detection algorithm will add a number after each instance of this alias in the routing table. Warning: Be careful that during a detection a container can change ID, depending on the container running this detection. Do not consider your container's ID fixed. Also, be aware that every container removes its auto-update configuration during the detection to prevent any ID movement. Modes As explained in this page, nodes (hardware elements, i.e. MCUs, hosting and running Luos; a node can host one or several containers) can host multiple containers. To get the topology of your device, the routing table references the physical connections between your nodes and lists all the containers in each one of them. The routing table is a table of routing_table_t structures containing node or container information. The maximum number of containers and nodes is managed by the precompilation constant MAX_CONTAINERS_NUMBER (set to 40 by default). routing_table_t routing_table[MAX_CONTAINERS_NUMBER]; The routing table structure has two modes: container entry mode and node entry mode.
typedef struct __attribute__((__packed__))
{
    entry_mode_t mode;
    union
    {
        struct __attribute__((__packed__)) // CONTAINER mode entry
        {
            uint16_t id;                // Container ID.
            uint16_t type;              // Container type.
            char alias[MAX_ALIAS_SIZE]; // Container alias.
        };
        struct __attribute__((__packed__)) // NODE mode entry
        {
            // Watch out, this structure has a lot of similarities with the node_t struct.
            // It is similar to allow copy of a node_t struct directly in this one
            // but there is potentially a port_table size difference so
            // do not replace it with node_t struct.
            struct __attribute__((__packed__))
            {
                uint16_t node_id : 12;  // Node id
                uint16_t certified : 4; // True if the node has a certificate
            };
            uint16_t port_table[(MAX_ALIAS_SIZE + 2 + 2 - 2) / 2]; // Node link table
        };
        uint8_t unmap_data[MAX_ALIAS_SIZE + 2 + 2];
    };
} routing_table_t;
Container entry mode This mode allows routing_table to contain: - id: container's unique id - type: container's type - alias: container's alias For more information, please refer to the containers page of this documentation. Node entry mode This mode gives physical information about your devices. The node_id is the unique number that you can use to identify each one of your nodes. At the beginning (or when a reset detection is performed), all node IDs are set to 0. When the RoutingTB_DetectContainers API is called, Luos assigns a unique ID to nodes and containers in your system topology. The certified field indicates that the Luos node can be certified for your system by including a Luos licensing number in your product (feature in progress). The port_table allows sharing of topological information of your network. Each element of this table corresponds to a physical Luos port of the node and indicates which node is connected to it by sharing that node's id. Here is an example: As shown on this image, elements of the port_table indicate the first or last container id of the connected node through a given port. Specific values taken by port_table: - 0: this port is waiting to discover which node it is connected to. You should never see this value. - 0x0FFF: this port is not connected to any other Node. Note: Routing tables can be easily displayed using Pyluos through a USB gate. Please refer to the Pyluos routing table section for more information. Search tools The routing table library provides the following search tools to find container and node information in a Luos network: Management tools Here are the management tools provided by the routing table library:
https://docs.luos.io/pages/embedded/containers/routing-table.html
2021-07-24T01:20:52
CC-MAIN-2021-31
1627046150067.87
[array(['../../../_assets/img/routing-table.png', 'Routing table'], dtype=object) ]
docs.luos.io
How To Generate Meeting Time Slots via Meeting Scheduler¶ Tip Also see this Revenue Grid blog article for special insights will add activities created this way immediately to your Salesforce. Tip See this article to learn how to use the Time Slots feature in RI Chrome Extension) which you want to initiate a meeting with or create a new email message by clicking the New email or the Reply button in MS Outlook ribbon and specify one or multiple recipients in the To, Cc, or BCC fields. 2. In Revenue Inbox sidebar, click the Time slots icon in the Smart actions bottom toolbar. 3. On the first page of Revenue Inbox’s Meeting Scheduler, fill in the details: - Meeting subject. This field is prefilled with the subject of related email message; you can modify it according to your preferences - Organizers. Specify users from your Org to be assigned the meeting’s organizer(s) - Location and duration - In the Location picklist you can select a location retrieved from your MS Exchange’s time zone in the Slots in drop-down list at the bottom of the dialog (by default the recipient’s time zone will be set). You can quickly find the needed time zone by its abbreviation, e.g. HST/HDT; AKDT; AKST; PDT; MST/MDT; CDT; CST; EDT; AST; SST; ChST; EST - the Exchange/Office 365 calendar and in Salesforce. Please note that it will not be included into the text of the invitation email generated by Meeting Scheduler For user friendliness considerations, some advanced meeting settings are hidden under the Advanced tab: selected in MS Outlook. You can add more attendees by entering their email addresses/Contact names in this field or remove attendees from the invitation by clicking the ( x ) next to their email addresses/Contact names. You can drag and drop items between the fields. Note that in the Attendees list external ones (not belonging to your Org) are listed on top, the internal (in-Org) ones are listed below them After populating all required fields, click Next. Tip You can hide the advanced Time slots dialog’s fields and controls which you do not use when creating meetings, just click the v (collapse) icon next to the Advanced options section title. 4. Next, Revenue Inbox will read your MS Outlook or Google calendar data and build daily tables of your unoccupied time slots. - Open your preferred date by clicking the Date selector at the top of the dialog; you can also shift between days by clicking the arrow icons on either side of the selector >>> Click to see a screenshot <<< \date-selector.png) - Pick meeting time spans that suit you best by clicking on a free slot and dragging the cursor down; selected time spans can be longer than the meeting Duration specified at the previous step but cannot be shorter. The Tip Note that you can select multiple time slots on different dates: after you have picked slots on one date and then switch to another date using the calendar control at the top, all previously selected slots are kept. Click Next after selecting the needed meeting slots to proceed Automatic Parsing of In-Org Mandatory Attendees’ Availability¶ For your convenience, if any of the meeting’s attendees are your colleagues (their email addresses recognized as internal), their parsed calendars will also be shown at this step, so you will be able to select meeting time slots also suiting their calendars: Note In the latest updates of Revenue Inbox if you select several adjacent time slots, they do not get merged and several different slot links are generated. 
Additionally, please note that time slots can no longer be selected within a day-long span reserved for all-day calendar events, including tentative, busy, and out-of-office ones. However, free and working-elsewhere (non-mandatory) all-day events do not impose this limitation and are no longer indicated in Meeting Scheduler's calendar. The same handling patterns are also applied for non-all-day meetings and appointments. Also note that slots marked as Tentative in your calendar are parsed as Available both on the slots selection page and the booking page of RI Meeting Scheduler.
5. Click Finish in the upper right corner of the dialog to insert the generated time slot links into your email message.
Tip: If you want to set a custom Time slots message template, refer to this article.
How to Select a Time Slot¶
If you toggle the New View switch in the upper right corner of the Booking window, you will see the old rendition of the table.
Important: The time slot links are valid for two weeks; they expire after the meeting time passes or 14 days after they are generated. A meeting can be scheduled for any date in the future, but the recipients must reserve their preferred time slots within 14 days. The message with the links you generated will be sent by the Add-In directly.
Calendar slots Status parsing by Meeting Scheduler¶
When RI parses occupied slots data from your calendar, slot statuses are handled as described above.
To add an MS Teams location to the meeting:
1. In the Time Slots creation dialog, click the Location field and enter MS Teams (or another custom name configured for your Org), or select this item in the recent rooms picklist.
2. Complete Time Slots links generation and send the Time Slots email.
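The validity rule stated above (a link stops working once the meeting time has passed, or at most 14 days after it was generated) can be made concrete with a short sketch. This is purely illustrative, not part of Revenue Grid's product, and the helper name and dates are made up:

    from datetime import datetime, timedelta

    def slot_link_is_valid(generated_at, meeting_time, now=None):
        """Hypothetical check mirroring the documented 14-day / meeting-time rule."""
        now = now or datetime.utcnow()
        not_expired = now <= generated_at + timedelta(days=14)
        meeting_not_passed = now <= meeting_time
        return not_expired and meeting_not_passed

    # A link generated on July 1 for a July 31 meeting is still usable on July 10,
    # but no longer usable on July 20, because the 14-day window has elapsed.
    print(slot_link_is_valid(datetime(2021, 7, 1), datetime(2021, 7, 31),
                             now=datetime(2021, 7, 10)))   # True
    print(slot_link_is_valid(datetime(2021, 7, 1), datetime(2021, 7, 31),
                             now=datetime(2021, 7, 20)))   # False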
https://docs.revenuegrid.com/ri/fast/articles/How-to-Send-Meeting-Time-Slots-%28Adaptive-view%29/
2021-07-24T02:25:26
CC-MAIN-2021-31
1627046150067.87
[array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5b55de9f0428631d7a89343a.png', None], dtype=object) array(['../../assets/images/Using-SmartCloud-Connect/How-To-s/How-to-Send-Meeting-Time-Slots-%28Adaptive-view%29/file-TQ6mfnlRuu.png', None], dtype=object) array(['..\\..\\assets\\images\\Using-SmartCloud-Connect\\How-To-s\\How-to-Send-Meeting-Time-Slots-(Adaptive-view', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5b5601a12c7d3a03f89ce202.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5b6c57550428631d7a89cf75.png', None], dtype=object) array(['..\\..\\assets\\images\\Using-SmartCloud-Connect\\How-To-s\\How-to-Send-Meeting-Time-Slots-(Adaptive-view', None], dtype=object) array(['..\\..\\assets\\images\\Using-SmartCloud-Connect\\How-To-s\\How-to-Send-Meeting-Time-Slots-(Adaptive-view', None], dtype=object) array(['..\\..\\assets\\images\\Using-SmartCloud-Connect\\How-To-s\\How-to-Send-Meeting-Time-Slots-(Adaptive-view', None], dtype=object) array(['..\\..\\assets\\images\\d33v4339jhl8k0cloudfrontnet\\docs\\assets\\57398d2e903360669faf1f0a\\images\\5b5719420428631d7a893e3d.png', None], dtype=object) array(['..\\..\\assets\\images\\d33v4339jhl8k0cloudfrontnet\\docs\\assets\\57398d2e903360669faf1f0a\\images\\5b561ae40428631d7a8936b5.png', None], dtype=object) array(['../../assets/images/d33v4339jhl8k0cloudfrontnet/docs/assets/57398d2e903360669faf1f0a/images/5b6c58880428631d7a89cf8d.png', None], dtype=object) array(['..\\..\\assets\\images\\Using-SmartCloud-Connect\\How-To-s\\How-to-Send-Meeting-Time-Slots-(Adaptive-view', None], dtype=object) array(['..\\..\\assets\\images\\Using-SmartCloud-Connect\\How-To-s\\How-to-Send-Meeting-Time-Slots-(Adaptive-view', None], dtype=object) array(['../../assets/images/faq/fb.png', None], dtype=object)]
docs.revenuegrid.com
WCAG, Section 508, WAI-ARIA
There are several standards, policies and principles that govern how accessible applications and components are created. This article offers an overview of them. For a list of the accessibility compliance levels supported by the Telerik UI for Blazor components, see the Telerik UI for Blazor Accessibility Compliance article.
In this article you will find information on the general topics of accessibility:
- Standards and Policies: the Rehabilitation Act of 1973, Section 508 (Latest Amendment), and the W3C Web Content Accessibility Guidelines (WCAG) 2.1
- Technical Specifications
- Telerik UI for Blazor Accessibility Compliance
See the Keyboard Support in Telerik UI for Blazor article for more details on using the Telerik components with the keyboard.
https://docs.telerik.com/blazor-ui/accessibility/wcag-section-508-wai-aria
2021-07-24T02:39:51
CC-MAIN-2021-31
1627046150067.87
[]
docs.telerik.com
Before setting up your local repository, you must have met certain requirements:
- Selected an existing server, in or accessible to the cluster, that runs a supported operating system.
- Enabled network access from all hosts in your cluster to the mirror server.
- Ensured that the mirror server has a package manager installed, such as yum (for RHEL, CentOS, Oracle, or Amazon Linux), zypper (for SLES), or apt-get (for Debian and Ubuntu).
- Optional: If your repository has temporary Internet access, and you are using RHEL, CentOS, Oracle, or Amazon Linux as your OS, installed yum utilities: yum install yum-utils createrepo
After meeting these requirements, you can take steps to prepare to set up your local repository.
Steps
- Create an HTTP server:
  - On the mirror server, install an HTTP server (such as Apache httpd) using the instructions provided on the Apache community website.
  - Activate the server.
  - Ensure that any firewall settings allow inbound HTTP access from your cluster nodes to your mirror server.
- On your mirror server, create a directory for your web server. For example, from a shell window, type:
  - For RHEL/CentOS/Oracle/Amazon Linux:
You next must set up your local repository, either with no Internet access or with temporary Internet access.
More Information
httpd.apache.org/download.cgi
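Once the HTTP server is activated, it can be useful to confirm from one of the cluster nodes that the mirror is actually reachable before pointing Ambari at it. The snippet below is only an illustrative check, not part of the Ambari tooling; the mirror URL is a placeholder you would replace with your own server's web root:

    #!/usr/bin/env python
    """Quick reachability check for a local repository mirror (illustrative only)."""
    from urllib.request import urlopen
    from urllib.error import URLError

    MIRROR_URL = "http://mirror.example.com/"   # placeholder: your mirror server's web root

    try:
        with urlopen(MIRROR_URL, timeout=10) as resp:
            # Any successful HTTP response means the cluster node can see the mirror.
            print("Mirror reachable, HTTP status:", resp.status)
    except URLError as exc:
        print("Mirror NOT reachable:", exc)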
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.4.0/bk_ambari-installation/content/getting_started_setting_up_a_local_repository.html
2021-07-24T01:27:19
CC-MAIN-2021-31
1627046150067.87
[]
docs.cloudera.com
Completed
Ballet in Tandem
Director:
Year: 2021
Country: Taiwan
Running Time: 143 min
Dance - Education
Introduction
Ballet in Tandem is a triptych of Taiwanese dancers' tireless pursuit of classical ballet despite overwhelming odds.
Director Statement
Ballet in Tandem is a feature-length documentary exploring the state of ballet in Taiwan. As we follow the interwoven journeys of the three Taiwanese dancers who have dedicated themselves to the art form, their joys and pathos, successes and failures, dreams and disillusions will compel the viewers to challenge their collective assumptions about ballet and rethink the relationship between "culture" and "body" with a more open attitude.
Team
- Director
https://docs.tfi.org.tw/en/film/6648
2021-07-24T02:31:46
CC-MAIN-2021-31
1627046150067.87
[array(['https://docs.tfi.org.tw/sites/default/files/styles/film_banner/public/image/film/MVI_1206.jpg?itok=uJqRUsca', None], dtype=object) array(['https://docs.tfi.org.tw/sites/default/files/styles/film_poster/public/film/img2/%E8%88%9E%E5%BE%91%E9%83%AD%E8%93%89%E5%AE%89457%20copy.jpg?itok=g3VeajqG&c=34f9c19c344c2ff9b5ad2f1603b20a96', None], dtype=object) ]
docs.tfi.org.tw
LifeKeeper can protect the pluggable database (“PDB”) as well as the Oracle database server provided that the Oracle database supports the Oracle Multitenant architecture and protects the container database (“CDB”), Checking CDB and PDB - Oracle resources must be created to protect the PDB. Also, the protected Oracle resource must be a CDB. You can check whether it is a CDB or not after connecting to the database using the following command. SQL> select CDB from V$DATABASE; - To protect the PDB, the PDB must be mounted inside the CDB. You can check whether the PDB is mounted by using the following command after connecting to the database. SQL> show pdbs; Creating Oracle PDB Resources - From the LifeKeeper GUI menu, select Edit, then select Server. From the drop-down menu, select Create Resource Hierarchy. A dialog box appears displaying all the recognized Recovery Kits installed in the cluster in a dropdown list. Select Oracle Pluggable Database from the dropdown list. Click Next to proceed to the next dialog box. Note: If the Back button is active in a dialog box, you can return to the previous dialog box by clicking it. This is especially useful when you encounter errors and need to correct the information you entered earlier. At any stage of the hierarchy creation process, clicking Cancel will cancel the entire creation process. - You will be prompted to enter the following information. If the Back button is active in a dialog box, you can return to the previous dialog box. This is useful when you encounter errors and need to correct the information you entered earlier. You can click Cancel at any time to cancel the entire creation process. - Click Next. Create Resource Wizard appears and the Oracle PDB resource hierarchy is created. LifeKeeper verifies the input data. If a problem is detected, an error message appears in the information box. - A message saying that the Oracle PDB resource hierarchy has been successfully created and that the hierarchy must be extended to another server in the cluster to provide failover protection is displayed. Click Next. - Click Continue. The Pre-extend Wizard is launched. See Step 2 in “Extending the Oracle PDB Resource Hierarchy” for details on extending the resource hierarchy to another server. Extending Oracle PDB Resources - Select Extend Resource Hierarchy from Resource in the Edit menu. The Pre-Extend Wizard is displayed. If you are not familiar with advanced operations, click Next. If you understand the default values for extending the LifeKeeper resource hierarchy and do not need to enter and confirm them, click Accept Defaults. - Enter the following details in the Pre-Extend Wizard. Note: The first two fields appear only when you start the operations from Extend in the Edit menu. - When the pre-extending checking is successful message is displayed, click Next. - Depending on the hierarchy to extend, a series of information boxes will be displayed showing the resource tags to be extended (some cannot be edited). - Confirm that the tag name is correct in Extend Wizard and click Extend. - When the message “Hierarchy extend operations completed” is displayed, click Next Server if you want to extend the hierarchy to another server, or click Finish. - When the message “Hierarchy Verification Finished” is displayed, click Done. Changing the PDB to Protect After creating the hierarchy, change the PDB to protect using the following steps. - From the LifeKeeper GUI, right-click the Oracle PDB resource hierarchy and select Change Protection PDB. 
- Select the PDB you want to protect (you can select more than one PDB).
- Click Next to change the settings.
- Click Done to finish.
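If you prefer to script the two checks shown earlier (confirming the database is a CDB and listing its PDBs) instead of running them interactively, a sketch along the following lines works. The connection credentials and DSN are placeholders, cx_Oracle is assumed to be installed, and the V$PDBS query is used as the programmatic equivalent of the SQL*Plus command "show pdbs":

    import cx_Oracle  # assumes the Oracle client libraries and cx_Oracle are installed

    # Placeholder credentials/DSN: replace with your own CDB connection details.
    conn = cx_Oracle.connect("system", "password", "dbhost:1521/ORCLCDB")
    cur = conn.cursor()

    # Same check as "select CDB from V$DATABASE;": YES means this database is a CDB.
    cur.execute("select CDB from V$DATABASE")
    print("Is CDB:", cur.fetchone()[0])

    # Equivalent information to "show pdbs;": the mounted PDBs and their open mode.
    cur.execute("select NAME, OPEN_MODE from V$PDBS")
    for name, open_mode in cur:
        print(name, open_mode)

    cur.close()
    conn.close()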
https://docs.us.sios.com/spslinux/9.5.0/en/topic/oracle-multitenant-pluggable-database
2021-07-24T02:28:15
CC-MAIN-2021-31
1627046150067.87
[]
docs.us.sios.com
Where are Ancestris users?
Ancestris users, who are we? Hard to say. Passionate people about genealogy. People who want to leave something useful and interesting for their descendants. Currently, the best option for you to make sure your genealogy will transfer safely to the next generations is to use Ancestris. We do not want to depend on any software. We do not want our lifelong efforts to be locked in a software or a computer that can no longer be accessed or that we no longer pay for. We are all promoters of free genealogy.
Where are we?
An Ancestris users map was a great idea. For us at Ancestris, this is also a great recognition of our efforts to bring you this software, and we feel very proud every time we look at this map! The map displays all users that wanted their name to appear on the map.
Worldwide Ancestris Users Map
How to make your name appear on the map?
The more we are referenced on the map, the better for all users. To make your name appear on the map, please click on the map where you live. It will automatically copy the coordinates into the request form. Then click on the button that says "Please add me to the map!". A form will appear. Just fill it in with the following information:
- Your first and last name
- Your geo coordinates. They should already be filled in from the previous click on the map.
- Your city and country
- Your email - necessary so we can reply to you.
- Your operating system (Linux, MacOS, Windows)
Press "Send request". It will send us an email that we will check and process. We will reply to you when your name has been added.
What can the user community bring to you?
All of us are volunteers and we benefit from each other's help. We help you. You help the others. We constitute a great community and welcome new users. We have a positive spirit and are happy like this. Feel free to post questions to the Forum or the Discussion list. We are usually very reactive. Once you become a user, it would be great for you to be able to share your experience with other users. Have a look at the forum from time to time and you will see how easy it is to just help others.
https://docs.ancestris.org/books/user-community/page/where-are-ancestris-users-
2021-07-24T00:40:51
CC-MAIN-2021-31
1627046150067.87
[]
docs.ancestris.org
This section provides an overview of how to perform mathematical operations between columns. Before you begin, you should verify that the data types of the two columns match. Check the icon in the upper left of each column to verify that they match. To change the data type, you can use the column menu.
You can express mathematical operations using numeric operators or function references. For example, you can create a third column that sums the first two either with numeric operators or with math functions; both approaches perform the same operation. For more information, see Numeric Operators and Math Functions.
To create a new column in which a math operation is performed on two other columns, use the New Formula transformation. For example, you can multiply Qty and UnitPrice to yield Cost.
If you need to work with more than two columns, numeric operators allow you to reference any number of columns and static values in a single expression. However, you should be careful to avoid making expressions that are too complex, as they can be difficult to parse and debug.
If you are concatenating string-based content between multiple columns, use the Merge Columns transformation. For example, it can create a third column with a dash between the values of the two source columns.
You can use aggregate functions to perform mathematical operations on sets of rows. Aggregated rows are collapsed and grouped based on the functions that you apply to them. See Aggregate Functions.
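Trifacta's own formulas are entered in its transformation builder; purely as an analogy (this is not Trifacta syntax), the same three ideas from above, deriving Cost from Qty and UnitPrice, merging two columns with a dash, and aggregating grouped rows, look like this in Python/pandas with made-up sample data:

    import pandas as pd

    df = pd.DataFrame({
        "Qty": [2, 5, 3],
        "UnitPrice": [9.99, 4.50, 12.00],
        "Region": ["East", "West", "East"],
    })

    # New column from a math operation on two columns (Qty * UnitPrice -> Cost).
    df["Cost"] = df["Qty"] * df["UnitPrice"]

    # Concatenate string content from two columns with a dash between the values.
    df["RegionQty"] = df["Region"] + "-" + df["Qty"].astype(str)

    # Aggregate: collapse rows into groups and sum a numeric column per group.
    totals = df.groupby("Region", as_index=False)["Cost"].sum()
    print(totals)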
https://docs.trifacta.com/plugins/viewsource/viewpagesrc.action?pageId=165275061
2021-07-24T01:32:37
CC-MAIN-2021-31
1627046150067.87
[]
docs.trifacta.com
Introduction This guide will walk you through creating a project and running a workflow from scratch. The goal is to automatically detect ships in the harbour of Leixões (Portugal) using SPOT satellite images. For more detailed explanations, please check out the UP42 video tutorials. Create a project and a workflow 1] In order to access the UP42 console, you first need to sign up by following the steps from the article Create an UP42 account. After signing up, create your first project by clicking on Start a Project. 2] Provide a name to your project and add a description (if applicable). Click on Save. 3] In order to take advantage of the UP42 geospatial data and algorithms, you need to build a workflow by clicking Create Workflow. For more information about projects, workflows and jobs, please check the page Core concepts. Add blocks 4] In the UP42 platform, a workflow consists of data blocks and processing blocks. The first block is always a data block. This data block can be followed by one or more processing blocks. In this example, a workflow based on the detection of ships from high-resolution SPOT images (1.5 m spatial resolution) will be shown. The first step is to select the data block from which ships will be extracted. Click on Add data. 5] Browse for the data block SPOT 6/7 Display (Streaming). This data block consumes 3 UP42 credits per tile, which is the equivalent of 0.03 Euro/Dollars. Click on this block and read its description, where additional details are provided. Click on Add Block. 6] The next block that follows the data block is a processing block. Click on Add processing. 7] Browse for the processing block Raster Tiling. This block consumes 0 UP42 credits per megabyte (MB). Click this block and read its description, where additional details are provided. Click on Add Block. 8] The final block that follows this processing block is another processing block. Click the plus sign after the previously added processing block. 9] Browse for the processing block Ship Detection. This block consumes 300 UP42 credits per square kilometers, which is the equivalent of 3 Euro/Dollars. Click this block and read its description, where additional details are provided. Click on Add Block. All the data and processing blocks are listed in our UP42 Marketplace. Congratulations, you successfully created a workflow! The next step is generating the outputs defined by the workflow via job configuration and job run. To proceed, please go to Running your first job.
https://docs.up42.com/getting-started/guides/first-workflow/
2021-07-24T02:06:36
CC-MAIN-2021-31
1627046150067.87
[array(['/static/ba1a3a71bafce10a4e1347f3b69502a1/1cfc2/spot_image_overlayed_ships.png', 'Overlayed ships and original SPOT image in Leixões, Portugal Overlayed ships and original SPOT image in Leixões, Portugal'], dtype=object) array(['/static/e2215d0972532d17d2e071eef5fbf260/82158/step03_welcome.png', 'step03 welcome step03 welcome'], dtype=object) array(['/static/9e575731edd9248c08bdf608dca8f65a/82158/step04_startProject.png', 'step04 startProject step04 startProject'], dtype=object) array(['/static/e56a9f14a696dfe9fa9aa8962d72602d/82158/step05_createWorkflow.png', 'step05 createWorkflow step05 createWorkflow'], dtype=object) array(['/static/45768bb7005d9a400c8c9b6844c1f1a2/82158/step06_addDataBlock.png', 'step06 addDataBlock step06 addDataBlock'], dtype=object) array(['/static/045b7f402ba630a4acfefbb53b99caf3/82158/step07_selectSPOTDataBlock.png', 'step07 selectSPOTDataBlock step07 selectSPOTDataBlock'], dtype=object) array(['/static/6f3b6229f2770b211f2118597010e267/8ce52/step08_clickAddBlock_SPOT.png', 'step08 clickAddBlock SPOT step08 clickAddBlock SPOT'], dtype=object) array(['/static/a4eca300606ff18a6e182bb0c7a03467/82158/step09_addProcessingBlock.png', 'step09 addProcessingBlock step09 addProcessingBlock'], dtype=object) array(['/static/704db734d2ca614bbb8adb92e4076caa/82158/step10_selectRasterTiling.png', 'step10 selectRasterTiling step10 selectRasterTiling'], dtype=object) array(['/static/abea7e2654d98c0c63ebbddd045eb992/8ce52/step11_clickAddBlock_RasterTiling.png', 'step11 clickAddBlock RasterTiling step11 clickAddBlock RasterTiling'], dtype=object) array(['/static/f399c93d157690b9316255e84a4d5a18/82158/step12_addProcessingBlockFinal.png', 'step12 addProcessingBlockFinal step12 addProcessingBlockFinal'], dtype=object) array(['/static/7821016ffc8ac0f51a7c825f4c4aa0d2/82158/step13_selectShipDetection.png', 'step13 selectShipDetection step13 selectShipDetection'], dtype=object) array(['/static/e81833a3e331a774602735a82a035cb0/8ce52/step14_clickAddBlock_ShipDetection.png', 'step14 clickAddBlock ShipDetection step14 clickAddBlock ShipDetection'], dtype=object) ]
docs.up42.com
Frequently Asked Questions
The satellites make nighttime observations during the descending orbits. Each satellite has its own time of observing an area, which is roughly the same time every day. For the satellites we use, these overpasses are either around 01:30 solar time or 06:00 solar time. The data acquired at that time can be regarded as a snapshot of the soil conditions at that time. Therefore, the measurements we provide are representative for the time of overpass.
What are the depths of the measurements?¶
The sensors we use measure the signal originating from the top layer of the soil. Typically, this is up to 10 cm deep, though the strongest contribution is received from the uppermost layer. A depth of roughly 5 centimeters is commonly assumed, but in reality the depth of this measurement varies slightly with moisture content. If the soil is drier, the sensor can see deeper into the soil.
Can you measure anything deeper than the top layer of the surface?¶
Not directly. A direct measurement is only possible of the top layer of the soil. However, there are ways of translating this surface measurement into something representative for deeper layers as well. This always requires some sort of model, which is inherently limited by its assumptions. A very simple, yet powerful way to do this is the Derived Root Zone Soil Moisture calculation that VanderSat delivers. Through this calculation, one can approximate the water content for the root zone up to 50 cm deep. See also Derived Root Zone Soil Moisture. See also Soil Moisture.
Where can I find how your soil moisture retrieval works?¶
VanderSat's retrieval algorithm is based on the Land Parameter Retrieval Model (LPRM), which has been extensively described in the scientific literature (see Scholar Search). VanderSat has taken this well-tested method and uses its patented algorithm to go to a much higher spatial resolution. See also High resolution satellite soil moisture.
What is your accuracy and precision?¶
The precision of the VanderSat soil moisture data is about 0.001 \(m^3/m^3\). Accuracy is lower, at approximately 0.03 \(m^3/m^3\). This is similar to properly installed in-situ sensors (see, e.g., this paper). This said, there are no independent measurements at the scale we are observing (e.g. no gravimetric measurements at 100x100 m), so the true accuracy remains largely unknown.
Is it a modelled product or an observation?¶
The brightness temperatures are direct observations by the satellite. These are used in combination with a dielectric mixing model (LPRM, see Where can I find how your soil moisture retrieval works?) to retrieve soil moisture. As such, the final product is based on measurements but includes some modeling. You can use the built-up area data flag to mask out cities and other urban areas if required. See How to retrieve the data flags?
Can you measure (illegal) irrigation using your Soil Moisture product?¶
In many cases this is not possible, as precise irrigation means that most irrigated water is used by the vegetation. Large-scale irrigation can be detected, but at that scale it is usually very obvious that irrigation is being applied.
Can you detect non-revenue water?¶
It does depend on a number of things, but in general we can only detect this when the leakage is very large and obvious. Furthermore, we need to manually tune our data for this application and need historical data of confirmed leakages to test. If you are interested, we can set up a proof of concept together.
Is vegetation on top of the soil an issue for your soil moisture retrievals?¶
In general this is not a problem. The LPRM model is able to separate the influences of the soil and vegetation components on the microwave signal. Very dense vegetation (e.g. tropical rain forest) is one of the cases in which the soil moisture estimate becomes impossible or less reliable. If that is the case we flag the affected pixels; see How to retrieve the data flags?
How come your pixel size is so much smaller compared to the other soil moisture products?¶
This is due to the fact that VanderSat is using a patented algorithm that uses each individual footprint to retrieve the soil moisture from the L2B data of the microwave satellites. Nobody else is doing this.
https://docs.vandersat.com/data_products/soil_moisture/faq_measurement.html
2021-07-24T01:21:41
CC-MAIN-2021-31
1627046150067.87
[]
docs.vandersat.com
duplicity.dup_threading module¶
Duplicity-specific but otherwise generic threading interfaces and utilities.
(Not called "threading" because we do not want to conflict with the standard threading module, and absolute imports require at least Python 2.5.)

class duplicity.dup_threading.Value(value=None)[source]¶
A thread-safe container of a reference to an object (but not the object itself).
In particular this means it is safe to:
    value.set(1)
But unsafe to:
    value.get()['key'] = value
Where the latter must be done using something like:
    def _setprop():
        value.get()['key'] = value
    with_lock(value, _setprop)
Operations such as increments are best done as:
    value.transform(lambda val: val + 1)

acquire()[source]¶
Acquire this Value for mutually exclusive access. Only ever needed when calling code must perform operations that cannot be done with get(), set() or transform().

transform(fn)[source]¶
Call fn with the current value as the parameter, and reset the value to the return value of fn. During the execution of fn, all other access to this Value is prevented. If fn raised an exception, the value is not reset. Returns the value returned by fn, or raises the exception raised by fn.

duplicity.dup_threading.async_split(fn)[source]¶
Splits the act of calling the given function into one front-end part for waiting on the result, and a back-end part for performing the work in another thread.
Returns (waiter, caller) where waiter is a function to be called in order to wait for the results of an asynchronous invocation of fn to complete, returning fn's result or propagating its exception. Caller is the function to call in a background thread in order to execute fn asynchronously. Caller will return (success, waiter) where success is a boolean indicating whether the function succeeded (did NOT raise an exception), and waiter is the waiter that was originally returned by the call to async_split().

duplicity.dup_threading.interruptably_wait(cv, waitFor)[source]¶
cv - the threading.Condition instance to wait on.
waitFor - a callable returning a boolean to indicate whether the criteria being waited on has been satisfied.
Perform a wait on a condition such that it is keyboard-interruptable when done in the main thread. Due to Python limitations as of <= 2.5, lock acquisition and condition waits are not interruptable when performed in the main thread.
Currently, this comes at a cost of additional CPU use, compared to a normal wait. Future implementations may be more efficient if the underlying Python supports it.
The condition must be acquired. This function should only be used on conditions that are never expected to be acquired for extended periods of time, or the lock-acquire of the underlying condition could cause an uninterruptable state despite the efforts of this function.
There is no equivalent for acquiring a lock, as that cannot be done efficiently.
Example. Instead of:
    cv.acquire()
    while not thing_done:
        cv.wait(someTimeout)
    cv.release()
do:
    cv.acquire()
    interruptably_wait(cv, lambda: thing_done)
    cv.release()

duplicity.dup_threading.require_threading(reason=None)[source]¶
Assert that threading is required for operation to continue. Raise an appropriate exception if this is not the case. Reason specifies an optional reason why threading is required, which will be used for error reporting in case threading is not supported.

duplicity.dup_threading.thread_module()[source]¶
Returns the thread module, or dummy_thread if threading is not supported.

duplicity.dup_threading.threading_module()[source]¶
Returns the threading module, or dummy_thread if threading is not supported.

duplicity.dup_threading.threading_supported()[source]¶
Returns whether threading is supported on the system we are running on.
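As a quick orientation, here is how the documented Value and async_split interfaces fit together in practice. This is a usage sketch based only on the signatures described above, not code taken from the duplicity source tree:

    import threading
    from duplicity import dup_threading

    # A shared counter: transform() applies the function under the Value's own lock.
    counter = dup_threading.Value(0)

    def bump():
        for _ in range(1000):
            counter.transform(lambda val: val + 1)

    workers = [threading.Thread(target=bump) for _ in range(4)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    print(counter.get())  # 4000, since each increment ran atomically

    # async_split(): run a function in a background thread, wait for it up front.
    def slow_square(x):
        return x * x

    waiter, caller = dup_threading.async_split(lambda: slow_square(7))
    threading.Thread(target=caller).start()  # back-end part: performs the work
    print(waiter())                          # front-end part: blocks, returns 49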
http://duplicity.readthedocs.io/en/latest/duplicity.dup_threading.html
2018-05-20T19:20:25
CC-MAIN-2018-22
1526794863684.0
[]
duplicity.readthedocs.io
View a searcher Searchers specify where to locate information for a contextual search, such as knowledge and catalog items. About this task Searcher records are read-only, but you should be aware of the searchers available so you can select the correct one when defining contextual searches. Procedure Navigate to Contextual Search > Searchers, then open a record. Use the Search Resources related list to inspect the sources which define the information areas to search. For example, incident deflection uses the knowledge and catalog searcher which includes knowledge and catalog resources. What to do next Search resources can also contain properties, refining the resource further. For example, the knowledge search resource contains a Sort order property to specify that the search results are returned sorted by relevance.
https://docs.servicenow.com/bundle/jakarta-platform-administration/page/administer/contextual-search/task/t_ViewASearcher.html
2018-05-20T19:18:48
CC-MAIN-2018-22
1526794863684.0
[]
docs.servicenow.com
Describing events with actions and event properties Interana analyzes event data. The events are described by a combination of actions and event properties. Actions You can easily understand what is being done in each event by referring to a specific attribute (the action). You can define actions from an existing attribute in the raw data or you can define relevant actions from across multiple columns in unstructured raw data through the use of event properties. Event properties You can access your data (like actors and event attributes) with cleaned up names and values that are familiar to you when you compose queries. You can then align the resulting names of your data with your day-to-day in your business, rather than the technical names represented in the code. This makes it easier for everyone who isn't familiar with every piece of data in your system. You can build event properties that clean up your data by transforming names and existing values post-ingest. The ability to do this after ingesting data alleviates the need to anticipate and address all problems at ingest. It also means that you don't have to re-ingest data to make changes. Data transformation sometimes necessitates bucketing various values into a new value, or performing an arithmetic function on a value. When constructing event properties, you can see examples from the underlying raw data (this feature will be added in the future) and the expected values output. After building new, cleaned up event properties, you can hide other event properties so the incomplete data isn't used in a query or object definition. This helps ensure that users always refer to the correct data. Raw and manual event properties We differentiate between two types of event properties: - A raw event property is any information in your raw data that adds more definition to actions. - A manual event property is one that you have created through a defined value and function.
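To make the idea of a manual event property concrete, here is a small illustrative sketch in plain Python (this is not Interana's definition syntax) of the kind of post-ingest transformation described above: renaming cryptic raw values and bucketing a numeric attribute into a new, friendlier value:

    # Illustrative only: the kind of cleanup a manual event property performs.
    RAW_EVENTS = [
        {"evt_typ": "pg_vw", "latency_ms": 35},
        {"evt_typ": "pg_vw", "latency_ms": 480},
        {"evt_typ": "chkout", "latency_ms": 120},
    ]

    FRIENDLY_NAMES = {"pg_vw": "Page View", "chkout": "Checkout"}  # rename raw values

    def latency_bucket(ms):
        """Bucket a raw numeric value into a coarser, human-friendly value."""
        if ms < 100:
            return "fast"
        if ms < 300:
            return "ok"
        return "slow"

    for event in RAW_EVENTS:
        action = FRIENDLY_NAMES.get(event["evt_typ"], event["evt_typ"])
        print(action, latency_bucket(event["latency_ms"]))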
https://docs.interana.com/3/Getting_Started/Describing_events_with_actions_and_event_properties
2018-05-20T20:13:42
CC-MAIN-2018-22
1526794863684.0
[]
docs.interana.com
!dbgprint The !dbgprint extension displays a string that was previously sent to the DbgPrint buffer. !dbgprint DLL Additional Information For information about DbgPrint, KdPrint, DbgPrintEx, and KdPrintEx, see Sending Output to the Debugger. Remarks The kernel-mode routines DbgPrint, KdPrint, DbgPrintEx, and KdPrintEx send a formatted string to a buffer on the target computer. The string is automatically displayed in the Debugger Command window on the host computer unless such printing has been disabled. Generally, messages sent to this buffer are displayed automatically in the Debugger Command window. However, this display can be disabled through the Global Flags (gflags.exe) utility. Moreover, this display does not automatically appear during local kernel debugging. For more information, see The DbgPrint Buffer. The !dbgprint extension causes the contents of this buffer to be displayed (regardless of whether automatic printing has been disabled). It will not show messages that have been filtered out based on their component and importance level. (For details on this filtering, see Reading and Filtering Debugging Messages.)
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/-dbgprint
2018-05-20T20:14:44
CC-MAIN-2018-22
1526794863684.0
[]
docs.microsoft.com
Basic NNM VM Configuration
The first step in the process is to install a Nessus Network Monitor (NNM) VM that is attached to the virtual switch's span port. Tenable's VM Appliance can be used for this purpose. The Tenable Appliance VM and its documentation can be downloaded from the Tenable Support Portal and installed as many times as your license allows. During configuration, ensure that the configured networking ports include the monitoring port(s) of the virtual switch. Under the NNM configuration, confirm that the monitored port(s) include the ports configured for mirroring.
https://docs.tenable.com/nnm/deployment/Content/VM/Basic_NNM_VM_Configuration.htm
2018-05-20T19:29:30
CC-MAIN-2018-22
1526794863684.0
[]
docs.tenable.com
Manage the Datadog Agent and integrations using configuration management tools.
The Agent log files are located in the /var/log/datadog/ directory (Linux) or the c:\programdata\Datadog\logs directory (Windows).
The Datadog logs do a rollover every 10MB. When a rollover occurs, one backup is kept (e.g. agent.log.1). If a previous backup exists, it is overwritten on the rollover.
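The rollover behaviour described here (rotate at 10MB and keep a single backup that gets overwritten each time) is the same pattern exposed by Python's standard logging module. The snippet below only illustrates that pattern; it is not how the Agent itself is configured:

    import logging
    from logging.handlers import RotatingFileHandler

    # Rotate at 10MB and keep one backup (agent.log.1), overwriting it on each rollover.
    handler = RotatingFileHandler("agent.log", maxBytes=10 * 1024 * 1024, backupCount=1)
    logger = logging.getLogger("example")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.info("this line lands in agent.log until the 10MB threshold is reached")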
https://docs.datadoghq.com/agent/basic_agent_usage/
2018-05-20T19:40:15
CC-MAIN-2018-22
1526794863684.0
[]
docs.datadoghq.com
Layer and Column Types
A column is also known as a layer. There are several types of layers you can add in the Timeline and Xsheet view. Each layer is indicated by an icon to help you differentiate them. Some layers are represented differently in the Xsheet view.
Drawing Layer
The most common layer type is the drawing layer. Any time you need to create a vector drawing or import a symbol or image, you can use a drawing layer. You can also create bitmap artwork on a drawing layer.
Bitmap Layer
Camera Layer
Effect Layer
Colour-Card Layer
Group Layer
A Group layer can be used to organize your layers. You can drag and drop other layers onto a Group layer and then collapse the Group layer to hide these other layers from view.
Peg Layer
Quadmap Layer
Sound Layer
You can import sound files to add dialog and sound effects to your project. The sound layer will be added to your Timeline and Xsheet view when you import a sound file in your scene.
https://docs.toonboom.com/help/harmony-15/premium/timing/layer-column-type.html
2018-05-20T19:13:07
CC-MAIN-2018-22
1526794863684.0
[array(['../Resources/Images/_ICONS/PRO/icons/timeline/drawing_37x37.png', None], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/bitmap_38x38.png', None], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/camera_element_43x43.png', 'Camera icon'], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/effect.png', None], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/inputmodule.png', None], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/group_element_38x38.png', None], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/peg.png', None], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/peg.png', None], dtype=object) array(['../Resources/Images/_ICONS/PRO/icons/timeline/soundlayer.png', None], dtype=object) ]
docs.toonboom.com
Background
Kantega SSO allows users to sign in using their SAML identity providers or using Kerberos tickets from their Active Directory domain. However, user accounts must still exist in JIRA, Confluence or Bitbucket. The traditional on-premise solution is to set up an Active Directory User Directory in the Atlassian application, and use that to sync user accounts and group memberships over LDAP. Cloud providers such as Microsoft Azure, Google G Suite or Okta typically do not offer LDAP syncing. So how do you get user accounts synced to your Atlassian products?
Kantega SSO version 3 introduces the new Cloud connectors feature, which solves exactly this challenge. Once users and groups have been synchronized, you can preview the users, groups and group memberships. When you're happy with the setup, you enable the Crowd User Directory. This makes user accounts and groups available in your application.
Questions?
Feel free to reach out to our support team if you have any questions or want a demo.
https://docs.kantega.no/display/KA/Cloud+connectors
2018-05-20T19:12:53
CC-MAIN-2018-22
1526794863684.0
[]
docs.kantega.no
Creating Custom Resolutions.
https://docs.toonboom.com/help/harmony-14/advanced/project-creation/create-custom-resolution.html
2018-05-20T19:55:31
CC-MAIN-2018-22
1526794863684.0
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/GettingStarted/HAR11_new_resolution1.png', None], dtype=object) ]
docs.toonboom.com
General Topics¶ Accessing Logs¶ Most logs are found in ~/logs/frontend and ~/logs/user. Note To prevent log files from becoming too large, logs are rotated daily at 3:00 a.m. server time. Each existing log is renamed by appending a number which indicates how many days old the log is. For example, error_website_name.log is named error_website_name.log.1 initially. Each subsequent day the log is incremented by one, until the log has aged seven days, upon which it is deleted. Front-end Logs¶ ~/logs/frontend stores logs associated with website entries and shared Apache. There are four types of logs which appear in ~/logs/frontend: - Website access logs – Logs of the filename form beginning access_website_name.log record attempts to access a website. Such logs record: - originating IP address, - date and time, - HTTP method used, - the specific URL accessed, - and the user-agent which made the request. - Website error logs – Logs of the filename form beginning error_website_name.log record error events associated with websites. Such logs record a date and time and an error message. - Shared Apache access logs – Logs of the filename form beginning access_website_name_php.log record access attempts for all shared Apache-based applications reached through website_name. This includes all shared Apache-based applications, such as Trac, Subversion, and Static/CGI/PHP applications. Such logs record: - originating IP address, - date and time, - HTTP method used, - the specific URL accessed, - and the user-agent which made the request. - Shared Apache error logs – Logs of the filename form beginning error_website_name_php.log record error events for all shared Apache-based applications reached through website_name. This includes all shared Apache-based applications, such as Trac, Subversion, and Static/CGI/PHP applications. Such logs record a date and time and an error message. For example, suppose you have a Website entry with this configuration: mysite / --> htdocs (Static/CGI/PHP) /blog/ --> wp (WordPress) /testing/ --> fancyapp (Django) mysite‘s access logs are stored in files beginning with access_mysite.log, while mysite‘s error logs are stored in files beginning with error_mysite.log. access_mysite_php.log records htdocs and wp‘s errors but not fancyapp‘s errors. fancyapp‘s error logs are stored elsewhere; see User Logs for details. User Logs¶ ~/logs/user stores logs associated with many applications, including popular applications such as Django. There are two types of logs which appear in ~/logs/user: - Access logs – Logs of the filename form access_app_name.log record attempts to access the named application. The format of such logs vary with the type of long-running server process associated with the application, but typically these logs record originating IP address, date and time, and other details. - Error logs – Logs of the filename form error_app_name.log record error events associated with the named application. Typically, these logs record error messages and their date and time. Note Some older installed applications may not store their logs in ~/logs/user. If you can’t find an application’s logs in ~/logs, look in the application’s directory (~/webapps/app_name). For example: - Django: ~/webapps/app_name/apache2/logs/ - Rails: ~/webapps/app_name/logs/ Monitoring Memory Usage¶ To see a list of processes and how much memory they’re using: - Open an SSH session to your account. - Enter ps -u username -o rss,command, where username is your WebFaction account name and press Enter. 
The first column is the resident set size, the amount of memory in use by the process. The second column is the process’s command along with any arguments used to start it. Repeat these steps as needed for any additional SSH users you have created. Example Memory Usage¶ For example, consider a user, johndoe, monitoring his memory usage. [johndoe@web100 ~]$ ps -u johndoe -o rss,comm RSS COMMAND 896 ps -u johndoe -o rss,command 23740 /usr/local/apache2-mpm-peruser/bin/httpd -k start 23132 /usr/local/apache2-mpm-peruser/bin/httpd -k start 1588 sshd: johndoe@pts/1 1472 -bash The first row displays the column headers: RSS COMMAND RSS stands for resident set size. RSS is the physical memory used by the process in kilobytes. COMMAND is the process’s command along with any arguments used to start it. The next four rows show a Rails application running with Passenger and nginx: The PassengerNginxHelperServer and Passenger processes are the passenger component of the application, which handle, for example, executing Ruby code. The two nginx processes are the web server component of the application, which respond to the incoming HTTP requests. Altogether these processes are consuming 10,704KB or slightly more than 10 megabytes (MB). The next row is the ps process itself: 896 ps -u johndoe -o rss,command This is the command that’s checking how much memory is in use. The next three rows represent a running Django application: Although there are three processes, this is one ordinary Django application. These are the Apache processes used to respond to HTTP requests and to run Django itself. Together these processes are consuming 18,668KB or slightly more than 18MB of memory. Finally, the last two lines show us johndoe‘s connection to the server: 1588 sshd: johndoe@pts/1 1472 -bash These processes are the SSH service and the Bash prompt, which allow johndoe to interact with the server from afar. They use relatively little memory, 3,060KB or under 3MB. In total, johndoe is using less than 32MB of memory, which is well under the limit for his plan, so he’s not at risk of having his processes terminated and having to find ways to reduce his memory consumption. If johndoe‘s processes had exceeded his plan limits by a small amount, he would receive a warning message. If his processes had exceeded his plan limits by a large amount, his processes would be terminated and he would receive a notification. Reducing Memory Usage¶ Once you’ve identified where your memory is going you can take steps to reduce your overall memory consumption. Typically, you can think of the memory your applications require in terms of base memory and additional memory. Base memory consumption is the amount of memory a piece of software requires at startup, before handling any requests. Unfortunately, little can be done about base memory consumption. Aside from switching to a different application or modifying the software, some amount of memory must be consumed to start and run the software as expected. The biggest gains in conserving memory typically come from reducing additional memory consumption. Once your software is up and running, varying amounts of additional memory will be consumed to store your data and process requests. Software might consume more or less memory based on factors such as: - the number and duration of threads or processes, - the size, number, and frequency of database queries, - the size or complexity of objects retained in memory, or - the total number of concurrent requests or sessions. 
Because there are so many possible ways for an application to consume memory, there isn’t a “one size fits all” solution for reducing memory consumption, but there are a number of common strategies: - Serve static files out-of-process. For application types which rely on an additional server, such as Django (Apache) and Ruby on Rails (nginx), serve static files, such as style sheets, JavaScript, and images, with a separate Static-only application. - Plug memory leaks. If the software you are using contains a memory leak, it will attempt to consume more and more memory without releasing already consumed but unneeded memory. If code under your control is leaking memory, make sure memory is deallocated or references are eliminated when particular objects or data are no longer needed. If some library or executable is leaking memory out of your control, periodically restarting the software may contain the application’s memory consumption. - Use fewer threads or processes. If your software relies on multiple threads or processes, try reconfiguring the software to use a more conservative number of them. - Complete recommended maintenance activity. Some software may have maintenance steps which, if left unfinished, may cause increased memory consumption or deteriorated performance. - Don’t keep unnecessary data in memory. If certain data is not frequently accessed or is inexpensive to retrieve, try to not keep it in memory. For example, the data associated with an infrequently accessed page may not be needed until the page is actually requested. - Avoid making database queries that return too much information. Not only are such queries slower, but the resulting data will consume additional memory. For example, if the result of some query must be filtered, it may be possible for some memory consumption to be eliminated by writing more specific database queries. - Profile your memory consumption. There may be tools available which work with your programming language to help you identify how memory is being consumed. For example, Guppy for Python features the heapy memory inspection module and Xdebug is a popular profiler for PHP. Setting File Permissions¶ See also You can also grant access to directories with the control panel. See Granting Permissions to Additional Users for details. Warning Use caution when granting other users access to your files and directories. In many cases, granting access to other users affords them the privileges of your own account. This can put you at risk of unauthorized, malicious activity, like deleting files or sending spam on your behalf. To minimize risk, grant access only to people you trust and make certain they use the same precautions as you do, like choosing strong passwords. You can use the command-line tools getfacl and setfacl to manage access control lists (ACLs). Related to chmod, chown, and chgrp, getfacl and setfacl allow you to grant permissions to files and directories to specific users. The most common case for using ACLs is to grant permissions to another SSH/SFTP user created with the control panel. On all servers except those >= web500, getfacl and setfacl can record up to 124 permissions per file or directory. On servers >= web500, getfacl and setfacl can record up to 25 permissions per file or directory. The following subsections show you how to complete common tasks with getfacl and setfacl. To see complete documentation for getfacl or setfacl: - Open an SSH session to your account. - Enter man getfacl or man setfacl and press Enter. 
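If you find yourself granting the same permissions repeatedly, the setfacl invocations described in the sections that follow can be wrapped in a small helper and scripted. The following is a rough sketch, not a WebFaction-provided tool; the usernames and paths are placeholders:

    import subprocess

    def grant_rwx(username, path, recursive=False, default=False):
        """Grant another SSH user read/write/execute on path via setfacl."""
        spec = "u:{0}:rwx".format(username)
        if default:                      # also apply to files created in the future
            spec = "d:" + spec
        cmd = ["setfacl"]
        if recursive:
            cmd.append("-R")
        cmd += ["-m", spec, path]
        subprocess.check_call(cmd)

    # Example: mirror the "grant access to an application" steps for a secondary user.
    grant_rwx("secondary_username", "/home/primary_username/webapps/application",
              recursive=True)
    grant_rwx("secondary_username", "/home/primary_username/webapps/application",
              recursive=True, default=True)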
Reviewing Permissions¶ To review the permissions for a file or directory: - Open an SSH session to your account. - Enter getfacl path, where path is the path to the file or directory see the permissions of, and press Enter. For example, getfacl /home/demo returns the ACL for user demo‘s home directory: # file: . # owner: demo # group: demo user::rwx user:apache:r-x user:nginx:r-x group::--x mask::r-x other::--- Removing Access to Others (Global Access)¶ To prohibit read, write, and execute access for all other users (users which are neither yourself or otherwise specified in the ACL): Open an SSH session to your account. Enter setfacl -m o::---,default:o::--- path, where path is the path to a file or directory, and press Enter. Note If path is a directory, then you can also change permissions recursively: enter setfacl -R -m o::---,default:o::--- path and press Enter. Granting Access to Specific Users¶ See also You can also grant access with the control panel. See Granting Permissions to Additional Users for details. You can grant users access to individual files or directories, or a whole application. Applications¶ To grant another user access to an entire application: Open an SSH session to your account. Allow the other user account to locate directories that it has access to within your home directory. Enter setfacl -m u:secondary_username:--x $HOME, where secondary_username is the other user’s username, and press Enter. Remove the other user’s default access to the applications in your $HOME/webapps directory. Enter setfacl -m u:secondary_username:--- $HOME/webapps/* and press Enter. Note The above command only affects applications that are currently installed. If you create new applications, then run the command again, or run setfacl -m u:secondary_username:--- $HOME/webapps/new_app, where new_app is the name of the new application. Grant the user read, write, and execute access to the application’s files and directories. Enter setfacl -R -m u:secondary_username:rwx $HOME/webapps/application, where application is the name of the application to which the other user is to have access, and press Enter. Grant the user read, write, and execute access to any files and directories created in the application in the future. Enter setfacl -R -m d:u:secondary_username:rwx $HOME/webapps/application and press Enter. Set your account’s group as the owner of any new files in the application’s directory. Enter chmod g+s $HOME/webapps/application and press Enter. Grant your account full access to files in the application directory, including any files created in the future by the secondary user. Enter setfacl -R -m d:u:primary_username:rwx $HOME/webapps/application and press Enter. The other user is granted access to the application. Tip For convenience, the secondary user can add a symlink from their home directory to the application directory. To create the symlink: - Open an SSH session to the secondary user account. - Enter ln -s /home/primary_username/webapps/application $HOME/application, where primary_username is the name of the account which owns the application, and press Enter. Files and Directories¶ To grant another user read, write, or execute access (or combinations thereof) to a single file or directory: Open an SSH session to your account. 
Enter setfacl -m u:username:permissions path, where - username is the username of the user to be granted permissions, - permissions is a combination of r, w, or, x for read, write, and execute, respectively, and - path is the path from the current directory to the desired file, and press Enter. Additionally, if path is a directory and u is prepended with default: or d:, then any new files created by that or any other user within the directory will default to those permissions. The other user is granted access to the path specified. Scheduling Tasks with Cron¶ You can use Cron to automatically run commands and scripts at specific times. To review the contents of your crontab: - Open an SSH session to your account. - Enter crontab -l and press Enter. To edit your crontab: Open an SSH session to your account. Enter crontab -e and press Enter. Your crontab will open in your default text editor (as specified by the EDITOR environment variable). Note vi is the default editor. To use a different editor to modify your crontab, set the EDITOR environment variable. For example, to use the nano text editor, enter EDITOR=nano crontab -e and press Enter. Make any desired changes to your cron schedule. Warning Cron jobs run under different conditions than scripts run from a shell. When you prepare your cron activity, do not rely on a specific starting directory. Use absolute paths where possible. Likewise, do not rely on environment variables like PATH. Changes to the environment by .bashrc and other “dot” files are unlikely to be available to your cron task. Note You can receive the output of cron jobs by setting the MAILTO and MAILFROM variables, redirecting output to mail, or logging to a file. To send the output of all cron jobs to a single email address, set the MAILTO and MAILFROM variables. On a new line before any jobs, insert MAILFROM=sender where sender is the sender address for cron error messages. To set the destination address, on a new line before any jobs (for example, immediately after the line containing MAILFROM), insert MAILTO=recipient where recipient is the destination address. For example, these cron jobs run every 5 minutes and send both output to [email protected] using [email protected] as their email address: [email protected] [email protected] */5 * * * * ps -u $USER -o pid,command */5 * * * * du -sh $HOME If you want to specify job-specific subject lines and recipient or sender addresses, you can redirect the output of a cron job to mail, For example, this cron job runs once an hour and sends all output of sample_command to a recipient with a custom subject line, recipient address, and sender address: 0 * * * * example_command 2>&1 | mail -s "Example report subject" -S [email protected] [email protected] Instead of email, you can store the output from a cron job by redirecting it to a log file. For example, this cron job records cron is running every 20 minutes to ~/logs/user/cron.log: */20 * * * * echo "cron is running" >> $HOME/logs/user/cron.log 2>&1 See also Linux Cron Guide from Linux Help Save and close the file. Troubleshooting¶ Unfortunately, things don’t always work as planned. Here you will find troubleshooting hints for general errors and problems sometimes seen among WebFaction users. Error: Site not configured¶ The error Site not configured has several common causes and solutions: Cause: You recently created a new website record in the control panel. Solution: Wait up to five minutes for your website record to be fully configured. 
Cause: You created a new website record but did not include the current subdomain, such as www. Solution: Modify the website record to use the intended subdomain.
Cause: You created a new domain entry in the control panel, but did not create a corresponding website record. Solution: Create a site record which references your newly created domain entry.
Cause: You accessed a website with the wrong protocol, in other words, a protocol other than the one configured in the website entry. For example, you accessed a website configured for HTTPS via HTTP or vice-versa. Resolution: Reenter the URL with HTTP or HTTPS as the protocol, or modify the website record to use the intended protocol.
Cause: You attempted to access the site by the website's IP address. Resolution: Accessing websites by IP addresses is not supported. Please use the URL with the domain name selected in the control panel, typically one you supply or of the form username.webfactional.com.
Cause: Your account has been disabled because of a billing problem, TOS or AUP violation, or some other problem concerning your account. Resolution: If WebFaction disables your account, we will contact you via your account contact email address to let you know the reason and corrective action required. If you suspect that your account has been disabled and you have not been contacted, please open a support ticket.
Error: Not Found¶
The Not Found and There is no application mounted at the root of this domain error appears when you visit a domain at the root URL path (/) where no application is mounted, but other applications are mounted elsewhere. Some causes for this error include:
- You recently modified a website record to include an application at the root URL path, but opened the URL before your changes were activated in the web server and DNS configuration. Please wait a moment and refresh.
- You accessed a website with a protocol other than the one configured in the website record. For example, you added an application to the root of a website configured to use HTTPS, but opened an HTTP URL in your browser. Reenter the URL with the correct protocol, or modify your website records so that the application is available with the intended protocol.
- There is no application mounted at that domain's root. Please revisit the control panel and add the application to the website record's root URL path, verifying that the root URL path is correct (for example, verify that there are no unexpected characters after /).
Error: 403 Forbidden¶
A 403 Forbidden error typically occurs when file system permissions prevent the web server from accessing the page or running a script. Please see Setting File Permissions for more information on changing file permissions.
Error: 502 Bad Gateway¶
Error: 504 Gateway Timeout¶
A 504 Gateway Timeout error occurs when an application takes too long to respond to an incoming request. This error is often caused by an application receiving heavy traffic. This error can also be caused by an application running too slowly; optimizations may be required to avoid the error. For more information, please see the documentation for your application type.
https://docs.webfaction.com/software/general.html
2018-05-20T19:50:51
CC-MAIN-2018-22
1526794863684.0
[array(['_static/images/next.png', 'Next'], dtype=object) array(['_static/images/prev.png', 'Previous'], dtype=object) array(['_static/images/pdf.png', None], dtype=object)]
docs.webfaction.com
To run the functional tests:
$ phpunit tests/Functional
Tests are organized in groups: one for each reverse proxy supported. At the moment the groups are varnish and nginx (a short sketch of how a test class declares its group appears at the end of this page). To run only the varnish functional tests:
$ phpunit --group=varnish
For more information about testing, see Testing Your Application.
Building the Documentation¶
First install Sphinx and enchant (e.g. sudo apt-get install enchant), then install the documentation requirements:
$ pip install -r doc/requirements.txt
To build the docs:
$ cd doc
$ make html
$ make spelling
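As referenced above, group membership is declared on the test classes themselves with PHPUnit's @group annotation. The class and method names in this sketch are made up; only the annotation matters:
<?php
/**
 * @group varnish
 */
class ExampleVarnishProxyTest extends \PHPUnit\Framework\TestCase
{
    public function testProxyForwardsRequests()
    {
        // Placeholder assertion; a real functional test would exercise the proxy
        $this->assertTrue(true);
    }
}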
http://foshttpcache.readthedocs.io/en/latest/contributing.html
2018-05-20T19:10:36
CC-MAIN-2018-22
1526794863684.0
[]
foshttpcache.readthedocs.io
Abstract/Final Modifiers
Abstract/Final components/functions
Whilst Lucee already supports interfaces, interfaces are not well adopted in the developer community because they are only used to "sign a contract" when you implement them. Abstract and final modifiers are a much more intuitive and flexible way to do the same and more.
Abstract
Abstract components / functions cannot be used directly; you can only extend them.
AContext.cfc
abstract component {
    abstract function getFile();
    final function getDirectory() {
        return getDirectoryFromPath(getFile());
    }
}
It is not possible to create an instance of this component (e.g. new AContext()), because this component has been defined as abstract. You can only "extend" this component (e.g. component extends="AContext" {}). This is therefore like an interface, but it contains working code. As you can see, we can define a generic method in the "abstract" component, so every component that extends this component needs to implement this method or has to be an "abstract" component itself. Only "abstract" components can contain "abstract" functions.
Final
The "final" modifier is the opposite of the "abstract" modifier and means you cannot extend a component / function. This would be used when you do not want to allow code to override your component or function. Unlike "abstract", a function can be "final" even if the component is not "final".
Context.cfc
final component extends="AContext" {
    function getFile() {
        return getCurrentTemplatePath();
    }
}
Here we are extending the component "AContext" from above and implementing the required "getFile" function. In contrast to "abstract", a "final" method can also be defined in a non-final component.
component {
    final function getFile() {
        return getCurrentTemplatePath();
    }
}
Tag syntax
Modifiers can also be used within tags, for example:
<cfcomponent modifier="abstract">
    <cffunction name="time" modifier="final">
        <cfreturn now()>
    </cffunction>
</cfcomponent>
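Putting the script-syntax components above together, here is a small, hedged usage sketch (the variable names and the dump() call are purely illustrative):
// Context.cfc is concrete, so it can be instantiated and used directly
ctx = new Context();
dump( ctx.getDirectory() ); // uses the final helper inherited from AContext
// The next line would fail, because AContext is declared abstract:
// bad = new AContext();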
http://docs.lucee.org/guides/lucee-5/abstract-final.html
2018-05-20T19:41:25
CC-MAIN-2018-22
1526794863684.0
[]
docs.lucee.org
To measure user activity over time, use a Time view for this query. Then build the sentence for the query. This example uses the music project (and its user ID column), but you would select the column that represents actors in your project.
- SHOW: count unique user ID actors
- FOR: all user ID actors
- SPLIT BY: none
- OVER: (enter a 1-week time range)
- Click Show time resolution to toggle the trailing window option, and use the default value: every auto (1 day) trailing auto (1 day). This is the default value, but it's useful to know what it is and how to change it.
You can hover over any point to see the count of users for that day.
Tip: Samples view
Click Samples to show some of the data used in the query.
https://docs.interana.com/3/3.0_Cookbook/Measuring_daily_active_users_(DAU)
2018-05-20T19:52:02
CC-MAIN-2018-22
1526794863684.0
[array(['https://docs.interana.com/@api/deki/files/3010/dau_time_query.png?revision=1', 'dau_time_query.png'], dtype=object) array(['https://docs.interana.com/@api/deki/files/3011/dau_samples_view.png?revision=1', 'dau_samples_view.png'], dtype=object) ]
docs.interana.com
TOC & Recently Viewed Recently Viewed Topics Upload the Private Key Before You Begin To send traffic through the proxy and have it decrypted by the Gigamon appliance, set the browser, device, or application proxy settings to allow internet access by the proxy's IP and port number. KEYS tab appears. - In the upper right corner, click the Install button. The SSL Key page appears. - In the Alias box, enter a name for the SSL key. - In the Key Upload Type row, select Private Key. - In the Key row, select Choose File. Navigate to and upload the SSL key file. - In the upper right corner of the page, click Save.
https://docs.tenable.com/nnm/deployment/Content/GigamonUploadTheProxysPrivateKey.htm
2018-05-20T19:28:44
CC-MAIN-2018-22
1526794863684.0
[]
docs.tenable.com
An error message appears stating that the virtual machine fails to power on. Problem The virtual machine fails to power on. Cause The virtual machine might fail to power on because of insufficient resources or because there are no compatible hosts for the virtual machine. Solution If the cluster does not have sufficient resources to power on a single virtual machine or any of the virtual machines in a group power-on attempt, check the resources required by the virtual machine against those available in the cluster or its parent resource pool. If necessary, reduce the reservations of the virtual machine to be powered-on, reduce the reservations of its sibling virtual machines, or increase the resources available in the cluster or its parent resource pool.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.troubleshooting.doc/GUID-3B19434B-14C1-42E3-9FE1-87C2A1492D62.html
2018-05-20T19:32:09
CC-MAIN-2018-22
1526794863684.0
[]
docs.vmware.com
(cubicweb.web.views.urlpublishing) Associate. You can write your own URLPathEvaluator class to handle custom paths. For instance, if you want /my-card-id to redirect to the corresponding card’s primary view, you would write: class CardWikiidEvaluator(URLPathEvaluator): priority = 3 # make it be evaluated *before* RestPathEvaluator def evaluate_path(self, req, segments): if len(segments) != 1: raise PathDontMatch() rset = req.execute('Any C WHERE C wikiid %(w)s', {'w': segments[0]}) if len(rset) == 0: # Raise NotFound if no card is found raise PathDontMatch() return None, rset On the other hand, you can also deactivate some of the standard evaluators in your final application. The only thing you have to do is to unregister them, for instance in a registration_callback in your cube: def registration_callback(vreg): vreg.unregister(RestPathEvaluator) You can even replace the cubicweb.web.views.urlpublishing.URLPublisherComponent class if you want to customize the whole toolchain process or if you want to plug into an early enough extension point to control your request parameters: class SanitizerPublisherComponent(URLPublisherComponent): """override default publisher component to explicitly ignore unauthorized request parameters in anonymous mode. """ unauthorized_form_params = ('rql', 'vid', '__login', '__password') def process(self, req, path): if req.session.anonymous_session: self._remove_unauthorized_params(req) return super(SanitizerPublisherComponent, self).process(req, path) def _remove_unauthorized_params(self, req): for param in req.form.keys(): if param in self.unauthorized_form_params: req.form.pop(param) def registration_callback(vreg): vreg.register_and_replace(SanitizerPublisherComponent, URLPublisherComponent) handle path of the form: <publishing_method>?parameters... handle path with the form: <eid> tries to find a rewrite rule to apply URL rewrite rule definitions are stored in URLRewriter objects handle path with the form: <etype>[[/<attribute name>]/<attribute value>]* handle path with the form: <any evaluator path>/<action> (cubicweb.web.views.urlrewrite) Here, the rules dict maps regexps or plain strings to callbacks that will be called with inputurl, uri, req, schema as parameters. SimpleReqRewriter is enough for a certain number of simple cases. If it is not sufficient, SchemaBasedRewriter allows to do more elaborate things. Here is an example of SimpleReqRewriter usage with plain string: from cubicweb.web.views.urlrewrite import SimpleReqRewriter class TrackerSimpleReqRewriter(SimpleReqRewriter): rules = [ ('/versions', dict(vid='versionsinfo')), ] When the url is <base_url>/versions, the view with the __regid__ versionsinfo is displayed. Here is an example of SimpleReqRewriter usage with regular expressions: from cubicweb.web.views.urlrewrite import ( SimpleReqRewriter, rgx) class BlogReqRewriter(SimpleReqRewriter): rules = [ (rgx('/blogentry/([a-z_]+)\.rss'), dict(rql=('Any X ORDERBY CD DESC LIMIT 20 WHERE X is BlogEntry,' 'X creation_date CD, X created_by U, ' 'U login "%(user)s"' % {'user': r'\1'}), vid='rss')) ] When a url matches the regular expression, the view with the __regid__ rss which match the result set is displayed. 
Here is an example of SchemaBasedRewriter usage: from cubicweb.web.views.urlrewrite import ( SchemaBasedRewriter, rgx, build_rset) class TrackerURLRewriter(SchemaBasedRewriter): rules = [ (rgx('/project/([^/]+)/([^/]+)/tests'), build_rset(rql='Version X WHERE X version_of P, P name %(project)s, X num %(num)s', rgxgroups=[('project', 1), ('num', 2)], vid='versiontests')), ]
https://docs.cubicweb.org/book/devweb/views/urlpublish.html
2017-03-23T00:11:29
CC-MAIN-2017-13
1490218186530.52
[]
docs.cubicweb.org
RunTask. Request Syntax Copy to clipboard { "cluster": " string", "count": number, "group": " string", "overrides": { "containerOverrides": [ { "command": [ " string" ], "environment": [ { "name": " string", "value": " string" } ], "name": " string" } ], "taskRoleArn": " string" }, "placementConstraints": [ { "expression": " string", "type": " string" } ], "placementStrategy": [ { "field": " string", "type": " string" } ], "startedBy": " - group The name of the task group to associate with the task. The default value is the family name of the task definition (for example, family:my-family-name). Type: String run time). Type: array of PlacementConstraint objects Required: No - placementStrategy The placement strategy objects to use for the task. You can specify a maximum of 5 strategy rules per task. Type: array of PlacementStrategy objects - taskDefinition The familyand revision( family:revision) or full Amazon Resource Name (ARN) of the task definition to run. If a revisionis not specified, the latest ACTIVErevision is used. Type: String Required: Yes Response Syntax Copy to clipboard { "failures": [ { "arn": "string", "reason": "string" } ], "tasks": [ { "clusterArn": "string", "containerInstanceArn": "string", "containers": [ { "containerArn": "string", "exitCode": number, "lastStatus": "string", "name": "string", "networkBindings": [ { "bindIP": "string", "containerPort": number, "hostPort": number, "protocol": "string" } ], "reason": "string", "taskArn": "string" } ], "createdAt": number, "desiredStatus": "string", "group": "string", "lastStatus": "string", "overrides": { "containerOverrides": [ { "command": [ "string" ], "environment": [ { "name": "string", "value": "string" } ], "name": "string" } ], "taskRoleArn": "string" }, "startedAt": number, "startedBy": "string", "stoppedAt": number, "stoppedReason": "string", "taskArn": "string", "taskDefinitionArn": "string", "version": number } ] } request runs the latest ACTIVE revision of the hello_world task definition family in the default cluster. Sample Request Copy to clipboard Copy to clipboard:
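Among the fields listed above, only taskDefinition is marked Required: Yes, so a minimal request body for the scenario just described (the hello_world family on the default cluster) might contain no more than the following. Treat this as an illustrative sketch rather than the verbatim sample:
{
    "cluster": "default",
    "taskDefinition": "hello_world",
    "count": 1
}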
http://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html
2017-03-23T00:24:43
CC-MAIN-2017-13
1490218186530.52
[]
docs.aws.amazon.com
, Twilio comes with a catch. App developers are locked in to their cloud and are left with no option when it comes to choosing their own carriers or their own hosting solution. This is prohibitive for some users who have since been scouting for an open source alternative. Therein lay the premise for Plivo – a 100% FOSS alternative to Twilio. While the code base for Plivo is in no way related to Twilio, Plivo APIs are consistent with Twilio – a nod to Twilio’s simplicity. Plivo APIs make it possible to develop apps like IVRs, Voicemails, Auto Attendants in minutes. Also, developers who are familiar with the Twilio environment can port their existing applications to Plivo with minimal code modification. Essentially, Plivo is a Communications Framework to rapidly build voice based apps, to make or receive calls, using your existing web development skills and your existing infrastructure. The Plivo Framework is written using Python, gevent and Flask. Plivo works out of the gate with the next generation carrier-grade telephony platform FreeSWITCH, which comes bundled with rock solid stability, extreme flexibility and scalability. Also FreeSWITCH’s native support for Skype, SIP, H.323, Google Talk and Google Voice means that applications developed using Plivo can ride on these capabilities too. (Interestingly, Twilio doesn’t offer the ability to use any of these directly.) “We hope to democratize telephony app development by releasing Plivo as an open source framework. Plivo is a fully-featured platform that can be run on your own servers or anywhere you choose to. You can also configure your own carriers to originate and terminate calls. In essence, what Plivo offers is control.” said Michael & Venky, Lead Developers at Plivo Team. “Whether you’re a seasoned telephony engineer or a web developer, we’d love to see you experiment with Plivo. The only limit is your imagination.” The Plivo Team have put up full documentation on the framework; and installers & helpers are provided for developers (language no barrier) to get started. All of this is released under the same OSI-approved license (MPL 1.1) that FreeSWITCH uses.
http://docs.plivo.org/2011/05/26/launch-of-plivo-an-open-source-alternative-to-twilio/
2017-03-23T00:18:17
CC-MAIN-2017-13
1490218186530.52
[]
docs.plivo.org
How to use VisiOmatic¶ The main window¶ The Figure below shows the main window of the VisiOmatic web interface in its default configuration. The main window contains a “slippy map” carrying the current image layer and optional vector overlays, plus a series of widgets. One navigates through the image and its overlays by “dragging” the map to the desired position. On computers this is done by clicking and holding the left button while moving the mouse, or by using the keyboard arrow keys. On touch devices one must press and move a finger throughout the screen. At the top left, two magnifier buttons can be used to zoom in ( ) or out ( ). One can also zoom using the mouse wheel or the -/+ keys on computers, or with a pinch gesture on touch devices. The user can switch to/from full screen mode by clicking the button (third from the top). The button (last from the top) opens the Advanced Settings Menu. The coordinates of the center of the current field-of-view (indicated by the cross-shaped reticle) are displayed at the top-right of the main window, in the Coordinate Pane. In some configurations, a drop-down list allows one to switch between equatorial (RA,Dec) and other types of coordinates. Below the coordinates, a Navigation Pane may be offered, offering a larger view of the current image, as well as the current field of view, represented by a shaded orange box. The scale, as measured at the center of the main window, is displayed in the lower left corner. The Coordinates Pane¶ The Coordinates Pane allows the user to: - check the central coordinates of the current field of view - pan to specific coordinates or to a given object. To move to specific coordinates or objects, simply click in the coordinates pane, and enter the desired coordinates or object name. VisiOmatic uses Simbad to parse the input coordinates and object names. According to the Simbad Query-by-Coordinates webpage, the following coordinates writings are allowed: 20 54 05.689 +37 01 17.38 10:12:45.3-45:17:50 15h17m-11d10m 15h17+89d15 275d11m15.6954s+17d59m59.876s 12.34567h-17.87654d 350.123456d-17.33333d 350.123456 -17.33333 while the dictionary of nomenclature for object identifiers can be found here. Advanced Settings¶ The Advanced Settings button gives access to a taskbar with five tabs, from top to bottom and as illustrated in figmix: Channel Mixing, which allows the user to choose which image channels to use for display or color compositing Image Preferences, which gives the user control over the contrast, color saturation, gamma correction and JPEG compression level. There is also a switch for inverting the color map. Catalog Overlays for superimposing multiple catalogs in vector form, e.g., the 2MASS Point Source Catalog [2] or the SDSS Photometric Catalog [3]. Region Overlays, for overlaying Points Of Interest (such as local catalogs) or any local vector data sets in GeoJSON format. Profile Overlays, for plotting image profiles and (pseudo-)spectral energy distributions from the full precision pixel values stored on the server. Documentation, which opens a panel where any web page can be embedded, e.g., an online-manual. Channel Mixing ¶ The Channel Mixing panel has two modes, which can be selected using the radio buttons located at the top of the dialog: the Single Channel (monochromatic) mode, and the Multi-Channel (color composite) mixing mode. 
In Single Channel mode, one can: - Select an image channel for display - Select a color map among a selection of four - Set the minimum and maximum channel levels In Multi-Channel mode, one can: - Select an image channel to be included in the color mix - Set the color this channel contributes to the mix - Set the minimum and maximum channel levels - Click on a channel name in the active channel list to edit a channel contributing to the current mix, or click on the trashcan button to remove it. Image Preferences ¶ The Image Preference dialog gives access to global image display settings: - Color map inversion (negative mode) - Contrast (scaling factor) - Color saturation (0 for black&white, >1 for exaggerating colors) - Gamma correction (2.2 for linear output on a properly calibrated monitor, higher values for brightening dark regions, lower values for darkening) - JPEG quality percentage. The lower the quality percentage, the more compressed the image and the more artifacts in the rendering. Note Users with a low bandwidth can improve the reactivity of the display by setting a lower JPEG quality percentage. Catalog Overlays ¶ The Catalog Overlay dialog allows the user to download and overlay catalogs in the current field of view. The available list of catalogs and the rendering of catalog sources (marker, cross, circle with magnitude-dependent radius, ellipse, etc.) depends on client settings. In the current default interface all catalogs are queried from the VizieR service at CDS [5]. To query a catalog, move the map and adjust the zoom level to the desired field of view; choose the catalog from the drop-down catalog selector and an overlay color from the color selector, then click the “GO” button. After a few seconds a new overlay with the chosen color should appear, as well as a new entry in the active catalog list below the drop down selector. Each overlay may be turned off or on by clicking on the check-mark in the corresponding entry of the active catalog list, or simply discarded by clicking on the trashcan button. Depending on the implementation in the client, sources may be clickable and have pop-up information windows attached. Region Overlays ¶ The Region Overlay dialog allows the user to overlay Points/Regions Of Interest, such as local catalogs, detector footprints, etc. These POIs/ROIs may be clickable and have information attached for display in pop-up windows. The Region Overlay selection mechanism is exactly the same as that of Catalog Overlays. Profile Overlays ¶ The Profile overlay dialog gives the user the possibility to extract pixel values directly from the scientific data. These data are unaffected by channel mixing, image scaling or compression. The profile option extracts series of pixel values along lines of constant galactic longitude or latitude. The line color is pink by default and can be changed using the color picker. The line itself is positioned on the image by first dragging the map to the desired start coordinate, pressing the “START” button, and dragging the map to the desired end coordinate and pressing the “END” button. After some calculation, a window appears with a plot of the image profile along the selected line. In mono-channel mode, a single line is plotted that corresponds to the currently selected channel. In color mode, all active channels are plotted, with their channel mixing color. On devices equipped with a mouse one can zoom inside the plots by clicking with the left button and selecting the zoom region. Double-click to zoom out. 
The plot window can be closed by clicking/touching the small cross in the upper right corner, and reopened at any time by clicking on the line. The spectrum option, as its name implied extracts pixel data along the channel axis at a given position. The position is selected by dragging the map to the desired coordinate and clicking on the “Go” button. A circle marker appears with the color that was selected using the color picker (purple by default). After some calculation, a window pops out with the “spectrum” of pixel values at the selected coordinate. Note that the ordering of pixel values follows that of the channels in the data cube. Documentation panel ¶ Clicking on the symbol (which may be located at the bottom of the taskbar) brings up the online documentation panel. A navigation bar is located at the bottom of the panel to facilitate browsing through the provided documentation or website sections and come back to the main page. Finally, a download button (PDF Symbol ) located at the bottom right of the Manual Pane may be present to allow the user saving an entire manual as a PDF file for offline reading.
http://visiomatic.readthedocs.io/en/latest/interface.html
2017-03-23T00:15:33
CC-MAIN-2017-13
1490218186530.52
[]
visiomatic.readthedocs.io
Overview Plivo RESTXML is a small set of specific XML elements that can control a call. This set is referred to as Plivo Elements. Plivo Elements comprises of the following: <Dial>, <GetDigits>, <GetSpeech>, <Hangup>, <Play>, <PreAnswer>, <Record>, <Redirect>, <Speak>, <Wait>, <Conference>. These act as a replacement for complex telephony functions. The elements, can be combined in different ways, to perform complex actions. This is generally used to control an incoming call to the telephony engine. E.g. A customer calls and a dynamic IVR menu needs to be played. OR You may want to record a voicemail for an incoming call to a phone number. The below diagram outlines a typical case where RESTXML is used and how Plivo works in such cases: When an incoming call is received, Plivo looks up the Answer URL and makes a request to that URL. The web application at that URL responds to the request and decides how the call should proceed by returning a RESTXML document with instructions for Plivo. Outgoing calls can also be controlled in the same manner as incoming calls using RESTXML. The initial outbound call is made through a RESTAPI request; and once the call gets answered, Plivo calls the web app on the Answer URL which carries instructions on how the outbound call should be handled. How Plivo communicates with your Web App By default, RESTXML requests to your application are made via POST. However, you can configure Plivo to make its RESTXML requests to your application via HTTP GET or POST either by using the attribute ‘method’ in the Plivo Elements or by changing the config parameter in Plivo.
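To make the request/response flow above concrete, here is a hedged sketch of the kind of RESTXML document a web application could return from its Answer URL. The <Speak> and <Hangup> elements come from the element list above; the enclosing <Response> root element is an assumption made for illustration.
<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <!-- Greet the caller, then end the call -->
    <Speak>Thank you for calling. Goodbye.</Speak>
    <Hangup/>
</Response>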
http://docs.plivo.org/docs/restxml/
2017-03-23T00:13:26
CC-MAIN-2017-13
1490218186530.52
[]
docs.plivo.org
The CubicWeb framework provides the cubicweb.devtools.testlib.CubicWebTC test base class . Tests shall be put into the mycube/test directory. Additional test data shall go into mycube/test/data. It is much advised to write tests concerning entities methods, actions, hooks and operations, security. The CubicWebTC base class has convenience methods to help test all of this. In the realm of views, automatic tests check that views are valid XHTML. See Automatic views testing for details. Most unit tests need a live database to work against. This is achieved by CubicWeb using automatically sqlite (bundled with Python, see) as a backend. The database is stored in the mycube/test/tmpdb, mycube/test/tmpdb-template files. If it does not (yet) exist, it will be built automatically when the test suite starts. Warning Whenever the schema changes (new entities, attributes, relations) one must delete these two files. Changes concerned only with entity or relation type properties (constraints, cardinalities, permissions) and generally dealt with using the sync_schema_props_perms() function of the migration environment do not need a database regeneration step. We start with an example extracted from the keyword cube (available from). from cubicweb.devtools.testlib import CubicWebTC from cubicweb import ValidationError class ClassificationHooksTC(CubicWebTC): def setup_database(self): with self.admin_access.repo_cnx() as cnx: group_etype = cnx.find('CWEType', name='CWGroup').one() c1 = cnx.create_entity('Classification', name=u'classif1', classifies=group_etype) user_etype = cnx.find('CWEType', name='CWUser').one() c2 = cnx.create_entity('Classification', name=u'classif2', classifies=user_etype) self.kw1eid = cnx.create_entity('Keyword', name=u'kwgroup', included_in=c1).eid cnx.commit() def test_cannot_create_cycles(self): with self.admin_access.repo_cnx() as cnx: kw1 = cnx.entity_from_eid(self.kw1eid) # direct obvious cycle with self.assertRaises(ValidationError): kw1.cw_set(subkeyword_of=kw1) cnx.rollback() # testing indirect cycles kw3 = cnx.execute('INSERT Keyword SK: SK name "kwgroup2", SK included_in C, ' 'SK subkeyword_of K WHERE C name "classif1", K eid %(k)s' {'k': kw1}).get_entity(0,0) kw3.cw_set(reverse_subkeyword_of=kw1) self.assertRaises(ValidationError, cnx.commit) The test class defines a setup_database() method which populates the database with initial data. Each test of the class runs with this pre-populated database. The test case itself checks that an Operation does its job of preventing cycles amongst Keyword entities. The create_entity method of connection (or request) objects allows to create an entity. You can link this entity to other entities, by specifying as argument, the relation name, and the entity to link, as value. In the above example, the Classification entity is linked to a CWEtype via the relation classifies. Conversely, if you are creating a CWEtype entity, you can link it to a Classification entity, by adding reverse_classifies as argument. Note the commit() method is not called automatically. You have to call it explicitly if needed (notably to test operations). It is a good practice to regenerate entities with entity_from_eid() after a commit to avoid request cache effects. You can see an example of security tests in the Step 1: configuring security into the schema. It is possible to have these tests run continuously using apycot. 
Since unit tests are done with the SQLITE backend and this does not support multiple connections at a time, you must be careful when simulating security, changing users. By default, tests run with a user with admin privileges. Connections using these credentials are accessible through the admin_access object of the test classes. The repo_cnx() method returns a connection object that can be used as a context manager: # admin_access is a pre-cooked session wrapping object # it is built with: # self.admin_access = self.new_access('admin') with self.admin_access.repo_cnx() as cnx: cnx.execute(...) self.create_user(cnx, login='user1') cnx.commit() user1access = self.new_access('user1') with user1access.web_request() as req: req.execute(...) req.cnx.commit() On exit of the context manager, a rollback is issued, which releases the connection. Don’t forget to issue the cnx.commit() calls! Warning Do not use references kept to the entities created with a connection from another one! When running tests, potentially generated e-mails are not really sent but are found in the list MAILBOX of module cubicweb.devtools.testlib. You can test your notifications by analyzing the contents of this list, which contains objects with two attributes: Let us look at a simple example from the blog cube. from cubicweb.devtools.testlib import CubicWebTC, MAILBOX class BlogTestsCubicWebTC(CubicWebTC): """test blog specific behaviours""" def test_notifications(self): with self.admin_access.web_request() as req: cubicweb_blog = req.create_entity('Blog', title=u'cubicweb', description=u'cubicweb is beautiful') blog_entry_1 = req.create_entity('BlogEntry', title=u'hop', content=u'cubicweb hop') blog_entry_1.cw_set(entry_of=cubicweb_blog) blog_entry_2 = req.create_entity('BlogEntry', title=u'yes', content=u'cubicweb yes') blog_entry_2.cw_set(entry_of=cubicweb_blog) self.assertEqual(len(MAILBOX), 0) req.cnx.commit() self.assertEqual(len(MAILBOX), 2) mail = MAILBOX[0] self.assertEqual(mail.subject, '[data] hop') mail = MAILBOX[1] self.assertEqual(mail.subject, '[data] yes') It is easy to write unit tests to test actions which are visible to a user or to a category of users. Let’s take an example in the conference cube. class ConferenceActionsTC(CubicWebTC): def setup_database(self): with self.admin_access.repo_cnx() as cnx: self.confeid = cnx.create_entity('Conference', title=u'my conf', url_id=u'conf', start_on=date(2010, 1, 27), end_on = date(2010, 1, 29), call_open=True, reverse_is_chair_at=chair, reverse_is_reviewer_at=reviewer).eid def test_admin(self): with self.admin_access.web_request() as req: rset = req.find('Conference').one() self.assertListEqual(self.pactions(req, rset), [('workflow', workflow.WorkflowActions), ('edit', confactions.ModifyAction), ('managepermission', actions.ManagePermissionsAction), ('addrelated', actions.AddRelatedActions), ('delete', actions.DeleteAction), ('generate_badge_action', badges.GenerateBadgeAction), ('addtalkinconf', confactions.AddTalkInConferenceAction) ]) self.assertListEqual(self.action_submenu(req, rset, 'addrelated'), [(u'add Track in_conf Conference object', u'' u'?__linkto=in_conf%%3A%(conf)s%%3Asubject&' u'__redirectpath=conference%%2Fconf&' u'__redirectvid=' % {'conf': self.confeid}), ]) You just have to execute a rql query corresponding to the view you want to test, and to compare the result of pactions() with the list of actions that must be visible in the interface. This is a list of tuples. The first element is the action’s __regid__, the second the action’s class. 
To test actions in a submenu, you just have to test the result of action_submenu() method. The last parameter of the method is the action’s category. The result is a list of tuples. The first element is the action’s title, and the second element the action’s url. This is done automatically with the cubicweb.devtools.testlib.AutomaticWebTest class. At cube creation time, the mycube/test/test_mycube.py file contains such a test. The code here has to be uncommented to be usable, without further modification. The auto_populate method uses a smart algorithm to create pseudo-random data in the database, thus enabling the views to be invoked and tested. Depending on the schema, hooks and operations constraints, it is not always possible for the automatic auto_populate to proceed. It is possible of course to completely redefine auto_populate. A lighter solution is to give hints (fill some class attributes) about what entities and relations have to be skipped by the auto_populate mechanism. These are: Warning Take care to not let the imported AutomaticWebTest in your test module namespace, else both your subclass and this parent class will be run. Some test suites require a complex setup of the database that takes seconds (or even minutes) to complete. Doing the whole setup for each individual test makes the whole run very slow. The CubicWebTC class offer a simple way to prepare a specific database once for multiple tests. The test_db_id class attribute of your CubicWebTC subclass must be set to a unique identifier and the pre_setup_database() class method must build the cached content. As the pre_setup_database() method is not garanteed to be called every time a test method is run, you must not set any class attribute to be used during test there. Databases for each test_db_id are automatically created if not already in cache. Clearing the cache is up to the user. Cache files are found in the data/database subdirectory of your test directory. Warning Take care to always have the same pre_setup_database() function for all classes with a given test_db_id otherwise your tests will have unpredictable results depending on the first encountered one. The CubicWebTC class uses the cubicweb.devtools.ApptestConfiguration configuration class to setup its testing environment (database driver, user password, application home, and so on). The cubicweb.devtools module also provides a RealDatabaseConfiguration class that will read a regular cubicweb sources file to fetch all this information but will also prevent the database to be initalized and reset between tests. For a test class to use a specific configuration, you have to set the _config class attribute on the class as in: from cubicweb.devtools import RealDatabaseConfiguration from cubicweb.devtools.testlib import CubicWebTC class BlogRealDatabaseTC(CubicWebTC): _config = RealDatabaseConfiguration('blog', sourcefile='/path/to/realdb_sources') def test_blog_rss(self): with self.admin_access.web_request() as req: rset = req.execute('Any B ORDERBY D DESC WHERE B is BlogEntry, ' 'B created_by U, U login "logilab", B creation_date D') self.view('rss', rset, req=req) Sometimes a small component cannot be tested all by itself, so one needs to specify other cubes to be used as part of the the unit test suite. This is handled by the bootstrap_cubes file located under mycube/test/data. 
One example from the preview cube: card, file, preview The format is: It is also possible to add a schema.py file in mycube/test/data, which will be used by the testing framework, therefore making new entity types and relations available to the tests. CubicWeb provides some literate programming capabilities. The cubicweb-ctl tool shell command accepts different file formats. If your file ends with .txt or .rst, the file will be parsed by doctest.testfile with CubicWeb’s Migration API enabled in it. Create a scenario.txt file in the test/ directory and fill with some content. Refer to the doctest.testfile documentation. Then, you can run it directly by: $ cubicweb-ctl shell <cube_instance> test/scenario.txt When your scenario file is ready, put it in a new test case to be able to run it automatically. from os.path import dirname, join from logilab.common.testlib import unittest_main from cubicweb.devtools.testlib import CubicWebTC class AcceptanceTC(CubicWebTC): def test_scenario(self): self.assertDocTestFile(join(dirname(__file__), 'scenario.txt')) if __name__ == '__main__': unittest_main() If you want to set up initial conditions that you can’t put in your unit test case, you have to use a KeyboardInterrupt exception only because of the way doctest module will catch all the exceptions internally. >>> if condition_not_met: ... raise KeyboardInterrupt('please, check your fixture.') The pytest utility (shipping with logilab-common, which is a mandatory dependency of CubicWeb) extends the Python unittest functionality and is the preferred way to run the CubicWeb test suites. Bare unittests also work the usual way. To use it, you may: Additionally, the -x option tells pytest to exit at the first error or failure. The -i option tells pytest to drop into pdb whenever an exception occurs in a test. When the -x option has been used and the run stopped on a test, it is possible, after having fixed the test, to relaunch pytest with the -R option to tell it to start testing again from where it previously failed. The base class of CubicWebTC is logilab.common.testlib.TestCase, which provides a lot of convenient assertion methods. A unittest.TestCase extension with some additional methods. An unordered sequence specific comparison. It asserts that actual_seq and expected_seq have the same element counts. Equivalent to: self.assertEqual(Counter(iter(actual_seq)), Counter(iter(expected_seq))) Asserts that each element has the same count in both sequences. Example: - [0, 1, 1] and [1, 0, 1] compare equal. - [0, 0, 1] and [0, 1] compare unequal. joins the object’s datadir and fname mark a generative test as skipped for the <msg> reason return the option value or default if the option is not define sets the current test’s description. This can be useful for generative tests because it allows to specify a description per yield override default unittest shortDescription to handle correctly generative tests First, remember to think that some code run on a client side, some other on the repository side. More precisely: The client interacts with the repository through a repoapi connection. Note These distinctions are going to disappear in cubicweb 3.21 (if not before). A repoapi connection is tied to a session in the repository. The connection and request objects are inaccessible from repository code / the session object is inaccessible from client code (theoretically at least). The web interface provides a request class. 
That request object provides access to all cubicweb resources, eg: A session provides an api similar to a request regarding RQL execution and access to global resources (registry and all), but also has the following responsibilities: The _cw attribute available on every application object provides access to all cubicweb resources, i.e.: Beware some views may be called with a session (e.g. notifications) or with a request. In the web interface, an HTTP request is handled by a single request, which will be thrown away once the response is sent. The web publisher handles the transaction: Let’s detail the process: This implies several things:
https://docs.cubicweb.org/book/devrepo/testing.html
2017-03-23T00:11:21
CC-MAIN-2017-13
1490218186530.52
[]
docs.cubicweb.org
Feature reference¶ Extension provides some sugar for your tests, such as: Access to context bound objects ( url_for, request, session) without context managers: def test_app(client): assert client.get(url_for('myview')).status_code == 200 Easy access to JSONdata in response: @api.route('/ping') def ping(): return jsonify(ping='pong') def test_api_ping(client): res = client.get(url_for('api.ping')) assert res.json == {'ping': 'pong'} Note User-defined jsonattribute/method in application response class does not overrides. So you can define your own response deserialization method: from flask import Response from myapp import create_app class MyResponse(Response): '''Implements custom deserialization method for response objects.''' @property def json(self): '''What is the meaning of life, the universe and everything?''' return 42 @pytest.fixture def app(): app = create_app() app.response_class = MyResponse return app def test_my_json_response(client): res = client.get(url_for('api.ping')) assert res.json == 42 Running tests in parallel with pytest-xdist. This can lead to significant speed improvements on multi core/multi CPU machines. This requires the pytest-xdistplugin to be available, it can usually be installed with: pip install pytest-xdist You can then run the tests by running: py.test -n <number of processes> Not enough pros? See the full list of available fixtures and markers below. Fixtures¶ pytest-flask provides a list of useful fixtures to simplify application testing. More information on fixtures and their usage is available in the pytest documentation. client - application test client¶ An instance of app.test_client. Typically refers to flask.Flask.test_client. Hint During tests execution the request context has been pushed, e.g. url_for, session and other context bound objects are available without context managers. Example: def test_myview(client): assert client.get(url_for('myview')).status_code == 200 client_class - application test client for class-based tests¶ Example: @pytest.mark.usefixtures('client_class') class TestSuite: def test_myview(self): assert self.client.get(url_for('myview')).status_code == 200 config - application config¶ An instance of app.config. Typically refers to flask.Config. live_server - application live server¶ Run application in a separate process (useful for tests with Selenium and other headless browsers). Hint The server’s URL can be retrieved using the url_for function. from flask import url_for @pytest.mark.usefixtures('live_server') class TestLiveServer: def test_server_is_up_and_running(self): res = urllib2.urlopen(url_for('index', _external=True)) assert b'OK' in res.read() assert res.code == 200 --no-start-live-server - don’t start live server automatically¶ By default the server is starting automatically whenever you reference live_server fixture in your tests. But starting live server imposes some high costs on tests that need it when they may not be ready yet. 
To prevent that behaviour pass --no-start-live-server into your default options (for example, in your project’s pytest.ini file): [pytest] addopts = --no-start-live-server Note Your should manually start live server after you finish your application configuration and define all required routes: def test_add_endpoint_to_live_server(live_server): @live_server.app.route('/test-endpoint') def test_endpoint(): return 'got it', 200 live_server.start() res = urlopen(url_for('test_endpoint', _external=True)) assert res.code == 200 assert b'got it' in res.read() request_ctx - request context¶ The request context which contains all request relevant information. Hint The request context has been pushed implicitly any time the app fixture is applied and is kept around during test execution, so it’s easy to introspect the data: from flask import request, url_for def test_request_headers(client): res = client.get(url_for('ping'), headers=[('X-Something', '42')]) assert request.headers['X-Something'] == '42' Content negotiation¶ An important part of any REST service is content negotiation. It allows you to implement behaviour such as selecting a different serialization schemes for different media types. HTTP has provisions for several mechanisms for “content negotiation” - the process of selecting the best representation for a given response when there are multiple representations available. —RFC 2616#section-12. Fielding, et al. The most common way to select one of the multiple possible representation is via Accept request header. The following series of accept_* fixtures provides an easy way to test content negotiation in your application: def test_api_endpoint(accept_json, client): res = client.get(url_for('api.endpoint'), headers=accept_json) assert res.mimetype == 'application/json' */* accept header suitable to use as parameter in client. application/json accept header suitable to use as parameter in client. application/json-p accept header suitable to use as parameter in client. Markers¶ pytest-flask registers the following markers. See the pytest documentation on what markers are and for notes on using them.
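As one more hedged illustration combining several of the fixtures described above (it reuses the api.ping endpoint from the earlier example and assumes the application defines it):
from flask import url_for

def test_api_ping_is_json(accept_json, client, config):
    # config behaves like app.config, so standard Flask keys are available
    assert 'DEBUG' in config
    # accept_json supplies an Accept: application/json header for the request
    res = client.get(url_for('api.ping'), headers=accept_json)
    assert res.mimetype == 'application/json'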
http://pytest-flask.readthedocs.io/en/latest/features.html
2017-03-23T00:10:00
CC-MAIN-2017-13
1490218186530.52
[]
pytest-flask.readthedocs.io
How Do I Configure an S3 Bucket for Static Website Hosting? You can host a static website on Amazon S3. On a static website, individual web pages include static content and they might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting. For more information, see Hosting a Static Website on Amazon S3 in the Amazon Simple Storage Service Developer Guide. To configure an S3 bucket for static website hosting Sign in to the AWS Management Console and open the Amazon S3 console at. In the Bucket name list, choose the name of the bucket that you want to enable static website hosting for. Choose Properties. Choose Static website hosting. After you enable your bucket for static website hosting, web browsers can access all of your content through the Amazon S3 website endpoint for your bucket. Choose Use this bucket to host. For Index Document, type the name of the index document, which is typically named index.html. When you configure a bucket for website hosting, you must specify an index document. Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders. For more information, see Configure a Bucket for Website Hosting in the Amazon Simple Storage Service Developer Guide. (Optional) For Error Document, type the name of a custom error document. If an error occurs, Amazon S3 returns an HTML error document. For 4XX class errors, you can optionally provide your own custom error document, in which you can provide additional guidance to your users. For more information, see Custom Error Document Support in the Amazon Simple Storage Service Developer Guide. (Optional) For Edit redirection rules, describe the rules using XML in the text area if you want to specify advanced redirection rules. For example, you can conditionally route requests according to specific object key names or prefixes in the request. For more information, see Configure a Bucket for Website Hosting in the Amazon Simple Storage Service Developer Guide. Choose Save. Add a bucket policy to the website bucket to grant everyone access to the objects in the bucket. When you configure a bucket as a website, you must make the objects that you want to serve publicly readable. To do so, you write a bucket policy that grants everyone s3:GetObjectpermission. The following example bucket policy grants everyone access to the objects in the example-bucketbucket.Copy to clipboard { "Version": "2012-10-17", "Statement": [ { "Sid": "PublicReadGetObject", "Effect": "Allow", "Principal": "*", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::example-bucket/*" ] } ] } For information about adding a bucket policy, see How Do I Set Bucket Permissions?. For more information, see Permissions Required for Website in the Amazon Simple Storage Service Developer Guide. Note If you choose Disable website hosting, Amazon S3 removes any existing website configuration from the bucket, and the bucket is not accessible from the website endpoint. However, the bucket is still available at the REST endpoint. For a list of Amazon S3 endpoints, see Amazon S3 Regions and Endpoints in the Amazon Web Services General Reference.
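The same setup can also be scripted with the AWS CLI instead of the console. The sketch below reuses the example-bucket name from the policy above and assumes the policy JSON has been saved locally as policy.json:
# Enable static website hosting with an index and error document
aws s3api put-bucket-website --bucket example-bucket \
  --website-configuration '{"IndexDocument":{"Suffix":"index.html"},"ErrorDocument":{"Key":"error.html"}}'

# Attach the public-read bucket policy shown above
aws s3api put-bucket-policy --bucket example-bucket --policy file://policy.json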
http://docs.aws.amazon.com/AmazonS3/latest/user-guide/static-website-hosting.html
2017-03-23T00:25:25
CC-MAIN-2017-13
1490218186530.52
[]
docs.aws.amazon.com
Bot Integration
Bot integration guide
Applozic allows bot integration through Dialogflow and other bot platforms. To set up a bot, click on "Integrate Bot" and follow the steps for bot setup. Once the bot setup is done, you will find a botID under "Integrated Bots". The botID is the same as the 'userId'.
Initiate chat with bot: Create a Group and add the botID as a member of the group. Once this is done, send a message to the group and the bot will start responding to the chat.
https://docs.applozic.com/docs/bot-integration
2022-01-17T00:48:01
CC-MAIN-2022-05
1642320300253.51
[]
docs.applozic.com
Amplify Shared Services
Formats
Learn about the request and response data formats.
CaseType: Classification of case types, as an open-ended enumeration.
TargetDate: Provided target date for cases of type business service request.
Severity: Automatically calculated from the urgency and impact provided. An open-ended enumeration.
Impact: Classification of impact levels, as an open-ended enumeration.
Urgency: Classification of urgency levels, as an open-ended enumeration.
Environment: Classification of case environments, as an open-ended enumeration.
ListCasesRequest: Request format for the List Cases method.
ListCasesResponse: Response format for the List Cases method.
GetCaseResponse: Response format for the Get Case method.
CreateCaseRequest: Request format for the Create Case method.
CreateCaseResponse: Response format for the Create Case method.
CloseCaseRequest: Request format for the Close Case method.
CloseCaseResponse: Response format for the Close Case method.
AddNoteRequest: Request format for the Add Note method.
AddNoteResponse: Response format for the Add Note method.
GetProductsResponse: Response format for the Get Products method.
GetAccountsResponse: Response format for the Get Accounts method.
ErrorResponse: Error response format.
Miscellaneous: Miscellaneous commonly used formats.
https://docs.axway.com/bundle/ampss-open-docs/page/docs/shared_services/supportapi/formats/index.html
2022-01-17T01:55:50
CC-MAIN-2022-05
1642320300253.51
[]
docs.axway.com
What if I need to delete a security?
Click on CAP TABLE and then click on DETAIL. Once you are on the detailed view of your cap table, you want to click on the security you wish to delete, as the securities are listed across the top of your cap table. For our example, let's say I want to delete the 2015 Stock Incentive Plan; I would need to click on 2015 Stock Incentive Plan where it is listed at the top of my cap table. This will then bring a menu onto the page from the right side that shows all of the transactions associated with that security. You want to click on Issue Security, as this is the transaction that created the security. This will bring up details of the security. You want to click on ACTIONS, and then click on DELETE to delete your security.
You can also delete a security by going to your Summary page. Once there, you want to click on the security you wish to delete. This should bring you to a view of all the shareholders for that security. You want to click on ACTIONS in the top right and then click on DELETE SECURITY.
*** If you have transactions in that security, you will be warned and asked to confirm whether you want to delete your security, as deleting it will delete all the underlying transactions.
https://docs.equity.gust.com/article/129-what-if-i-need-to-delete-a-security
2022-01-17T01:48:58
CC-MAIN-2022-05
1642320300253.51
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f76479e4b05e7f887e97b0/images/581a087ec697915f88a3bee8/file-98L0EmmMmd.png', None], dtype=object) ]
docs.equity.gust.com
Federate Integration with PyHELICS API¶ The Federate Integration Example extends the Base Example to demonstrate how to integrate federates using the HELICS API instead of JSON config files. This tutorial is organized as follows: - - Federate Integration using the PyHELICS API - Computing Environment¶ This example was successfully run on Tue Nov 10 11:16:44 PST 2020 with the following computing environment. Operating System $ sw_vers ProductName: Mac OS X ProductVersion: 10.14.6 BuildVersion: 18G6032 python version $ python Python 3.7.6 (default, Jan 8 2020, 13:42:34) [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. python modules for this example $ pip list | grep matplotlib matplotlib 3.1.3 $ pip list | grep numpy numpy 1.18.5 If these modules are not installed, you can install them with $ pip install matplotlib $ pip install numpy helics_broker version $ helics_broker --version 2.4.0 (2020-02-04) helics_cli version $ helics --version 0.4.1-HEAD-ef36755 pyhelics init file $ python >>> import helics as h >>> h.__file__ '/Users/[username]/Software/pyhelics/helics/__init__.py' Example files¶ All files necessary to run the Federate Integration Example can be found in the Fundamental examples repository: The files include: Python program for Battery federate Python program for Charger federate “runner” JSON to enable helics_cliexecution of the co-simulation Federate Integration using the PyHELICS API¶ This example differs from the Base Example in that we integrate the federates (simulators) into the co-simulation using the API instead of an external JSON config file. Integration and configuration of federates can be done either way – the biggest hurdle for most beginning users of HELICS is learning how to look for the appropriate API key to mirror the JSON config style. For example, let’s look at our JSON config file of the Battery federate from the Base Example: { "name": "Battery", "loglevel": 1, "coreType": "zmq", "period": 60, "uninterruptible": false, "terminate_on_error": true, "wait_for_current_time_update": true, "publications":[ ... ], "subscriptions":[ ... ] } We can see from this config file that we need to find API method to assign the name, loglevel, coreType, period, uninterruptible, terminate_on_error, wait_for_current_time_update, and pub/ subs. In this example, we will be using the PyHELICS API methods. This section will discuss how to translate JSON config files to API methods, how to configure the federate with these API calls in the co-simulation, and how to dynamically register publications and subscriptions with other federates. Translation from JSON to PyHELICS API methods¶ Configuration with the API is done within the federate, where an API call sets the properties of the federate. With our Battery federate, the following API calls will set all the properties from our JSON file (except pub/sub, which we’ll cover in a moment). 
These calls set: name loglevel coreType Additional core configurations period uninterruptible terminate_on_error wait_for_current_time_update h.helicsCreateValueFederate("Battery", fedinfo) h.helicsFederateInfoSetIntegerProperty(fedinfo, h.HELICS_PROPERTY_INT_LOG_LEVEL, 1) h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "zmq") h.helicsFederateInfoSetCoreInitString(fedinfo, fedinitstring) h.helicsFederateInfoSetTimeProperty(fedinfo, h.HELICS_PROPERTY_TIME_PERIOD, 60) ) If you find yourself wanting to set additional properties, there are a handful of places you can look: C++ source code: Do a string search for the JSON property. This can provide clarity into which enumto use from the API. PyHELICS API methods: API methods specific to PyHELICS, with suggestions for making the calls pythonic. Configuration Options Reference: API calls for C++, C, Python, and Julia Federate Integration with API calls¶ We now know which API calls are analogous to the JSON configurations – how should these methods be called in the co-simulation to properly integrate the federate? It’s common practice to rely on a helper function to integrate the federate using API calls. With our Battery/Controller co-simulation, this is done by defining a create_value_federate function (named for the fact that the messages passed between the two federates are physical values). In Battery.py this function is: def create_value_federate(fedinitstring, name, period): fedinfo = h.helicsCreateFederateInfo() h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "zmq") h.helicsFederateInfoSetCoreInitString(fedinfo, fedinitstring) h.helicsFederateInfoSetIntegerProperty(fedinfo, h.HELICS_PROPERTY_INT_LOG_LEVEL, 1) h.helicsFederateInfoSetTimeProperty(fedinfo, h.HELICS_PROPERTY_TIME_PERIOD, period) ) fed = h.helicsCreateValueFederate(name, fedinfo) return fed Notice that we have passed three items to this function: fedinitstring, name, and period. This allows us to flexibly reuse this function if we decide later to change the name or the period (the most common values to change). We create the federate and integrate it into the co-simulation by calling this function at the beginning of the program main loop: fedinitstring = " --federates=1" name = "Battery" period = 60 fed = create_value_federate(fedinitstring, name, period) What step created the value federate? Click for answer This line from the `create_value_federate` function: fed = h.helicsCreateValueFederate(name, fedinfo) Notice that we pass to this API the fedinfo set by all preceding API calls. Dynamic Pub/Subs with API calls¶ In the Base Example, we configured the pubs and subs with an external JSON file, where each publication and subscription between federate handles needed to be explicitly defined for a predetermined number of connections: "publications":[ { "key":"Battery/EV1_current", "type":"double", "unit":"A", "global": true }, {...} ], "subscriptions":[ { "key":"Charger/EV1_voltage", "type":"double", "unit":"V", "global": true }, {...} ] With the PyHELICS API methods, you have the flexibility to define the connection configurations dynamically within execution of the main program loop. For example, in the Base Example we defined five communication connections between the Battery and the Charger, meant to model the interactions of five EVs each with their own charging port. If we want to increase or decrease that number using JSON configuration, we need to update the JSON file (either manually or with a script). 
Using the PyHELICS API methods, we can register any number of publications and subscriptions. This example sets up pub/sub registration using for loops: num_EVs = 5 pub_count = num_EVs pubid = {} for i in range(0, pub_count): pub_name = f"Battery/EV{i+1}_current" pubid[i] = h.helicsFederateRegisterGlobalTypePublication( fed, pub_name, "double", "A" ) sub_count = num_EVs subid = {} for i in range(0, sub_count): sub_name = f"Charger/EV{i+1}_voltage" subid[i] = h.helicsFederateRegisterSubscription(fed, sub_name, "V") Here we only need to designate the number of connections to register in one place: num_EVs = 5. Then we register the publications using the h.helicsFederateRegisterGlobalTypePublication() method, and the subscriptions with the h.helicsFederateRegisterSubscription() method. Note that subscriptions are analogous to inputs, and as such retain similar properties. Co-simulation Execution¶ In this tutorial, we have covered how to integrate federates into a co-simulation using the PyHELICS API. Integration covers configuration of federates and registration of communication connections. Execution of the co-simulation is done the same as with the Base Example, with a runner JSON we sent to helics_cli. The runner JSON has not changed from the Base Example: { "name": "fundamental_integration", "federates": [ { "directory": ".", "exec": "helics_broker -f 3 --loglevel=warning", "host": "localhost", "name": "broker" }, { "directory": ".", "exec": "python -u Charger.py", "host": "localhost", "name": "Charger" }, { "directory": ".", "exec": "python -u Controller.py", "host": "localhost", "name": "Controller" }, { "directory": ".", "exec": "python -u Battery.py", "host": "localhost", "name": "Battery" } ] } Execute the co-simulation with the same command as the Base Example >helics run --path=fundamental_integration_runner.json This results in the same output; the only thing we have changed is the method of configuring the federates and integrating them. If your output is not the same as with the Base Example, it can be helpful to pinpoint source of the difference – have you used the correct API method? Questions and Help¶ Do you have questions about HELICS or need help? Come to office hours! - Place your question on the github forum!
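To round out the federate-integration walkthrough above, here is a minimal sketch of how the handles registered with the API might be used in the main time loop of Battery.py. It is not the full logic of the Base Example — the charging physics is omitted, and the names total_interval and charging_current are illustrative assumptions — but it shows the typical enter-executing-mode / request-time / publish / finalize sequence with the PyHELICS calls used earlier in this example.
import helics as h
# ... federate created and pubs/subs registered as shown above ...
h.helicsFederateEnterExecutingMode(fed)   # signal that configuration is complete
total_interval = 60 * 60 * 24             # illustrative assumption: simulate one day, in seconds
grantedtime = 0
charging_current = 0.0                    # placeholder value to publish
while grantedtime < total_interval:
    # ask for the next time step; HELICS grants a time consistent with the period
    requested_time = grantedtime + 60
    grantedtime = h.helicsFederateRequestTime(fed, requested_time)
    for i in range(0, pub_count):
        # read the voltage published by the Charger federate
        charging_voltage = h.helicsInputGetDouble(subid[i])
        # publish the (placeholder) current drawn by this EV battery
        h.helicsPublicationPublishDouble(pubid[i], charging_current)
# free the federate and close the library when the loop ends
h.helicsFederateFinalize(fed)
h.helicsFederateFree(fed)
h.helicsCloseLibrary()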
https://docs.helics.org/en/latest/user-guide/examples/fundamental_examples/fundamental_fedintegration.html
2022-01-17T01:19:46
CC-MAIN-2022-05
1642320300253.51
[array(['../../../_images/fed_int_setup.png', None], dtype=object) array(['../../../_images/fundamental_default_resultbattery.png', None], dtype=object) array(['../../../_images/fundamental_default_resultcharger.png', None], dtype=object) ]
docs.helics.org
Overview HYPR Workforce Access HYPR turns your smartphone into a smart card. By combining public-key encryption with lightning-fast mobile-initiated authentication, HYPR enables passwordless login to workstations through your mobile device. The HYPR Workforce Access Client is designed to ensure each workforce user has a convenient, productive, and secure experience when accessing their Windows workstations. Once deployed, users can unlock their Windows workstations without a password by using their mobile devices. How it works HYPR uses certificate-based authentication to log in to Windows user accounts. When the user pairs the mobile device with the computer, a virtual smart card is created to perform the authentication. User Experience Check out the video below to see how to pair and authenticate with HYPR.
https://docs.hypr.com/installinghypr/docs/passworldess-workforce-access-client-windows
2022-01-17T02:01:17
CC-MAIN-2022-05
1642320300253.51
[array(['https://files.readme.io/1281206-Screen_Shot_2021-04-02_at_12.35.29_PM.png', 'Screen Shot 2021-04-02 at 12.35.29 PM.png'], dtype=object) array(['https://files.readme.io/1281206-Screen_Shot_2021-04-02_at_12.35.29_PM.png', 'Click to close...'], dtype=object) ]
docs.hypr.com
User Variables and Environments Overview In many places in the setup file you can specify the value of a setting by referencing a variable instead of entering data directly. Variables Variables are typically configured for settings in a sampler configuration. This allows the same sampler definition to be used in several places, and the behaviour of the sampler depends on the values of the variables when it is used. Alternatively, the same variable can be referenced by many different samplers (e.g. the sample interval), allowing the user to control their behaviour by altering a single value. Variables have scoping rules which define where they are valid and hold values. This means that although the GSE may suggest the name of a variable to use, the variable referenced may not be resolvable in a particular instance. Gateway produces setup validation warnings for this occurrence, so you should check this output when making setup changes. Environments If the same group of variables are used repeatedly in multiple places, then you may want to use an environment. An environment is a collection of variables, and is configured using the environments top-level section. Each managed entity or type can reference a single environment. This makes the variables contained within the environment accessible by any samplers defined on that entity or type. When a type is added to a managed entity using an addTypes setting, an environment can also be specified for use by the samplers defined within the type. It is also possible to nest environments to allow specialisations. Child environments inherit variables from their parent environment. Operation Define Variables Variables can be defined in managed entities, managed entity groups, types, environments and operating environment. A variable definition consists of the variable name and its value. The value can be one of several different types, including strings, integers, a list of items, an active time reference, amongst others. Where possible, the variable type should fit the intended usage. For example, the sample interval setting takes integer values, so the variable here should be an integer type. Gateway attempts to substitute (for instance) a text value where an integer value is expected, but this does not work for all situations. To define a new variable using the Gateway Setup Editor (GSE): - Navigate to the section you want to make a variable for. - Click on the Add new button for the varsection. - Fill in the variable name and value. Reference Variables To use the value of a variable, configure a setting using a variable name instead of a data value. In the GSE, a setting that supports variables appears with a blue hyperlink stating either data or var. Click on this text to switch between data and variable modes, and supply the data value or variable name respectively. Some settings (typically text-valued settings) also support inline usage of variables, which allow the value of a variable to be inserted in the middle of user-supplied text (i.e. a mix of data and var values). To insert a variable reference, click on the var drop-down to the right of the setting and select a variable to use. (See Inline variable storage). Inline variable storage Inline variable usage allows a mix of user-supplied (text) data and variables. Settings which support this will display a var dropdown menu to the right of the setting, which insert a reference to the selected variable. 
In the example screenshot below, the process_user variable has been inserted into the path for a log file. Variable Scoping and Resolution Variables have scoping rules which define where they are valid and hold values. This means that although the GSE may suggest the name of a variable to use, the variable referenced may not be resolvable in a particular instance. For example, a sampler which references the variable myVar can be added to managed entities me1 and me2. If me1 defines a value for myVar but me2 does not, then the sampler on me2 will not be able to resolve this reference to a value, and uses an empty value instead. When a variable reference is used in a sampler or sampler include, it is resolved by looking through the following locations in order until the variable can be found. If it is not found then an empty value is substituted (a gateway validation warning is also produced for this occurrence). If the sampler or sampler include is referenced by a type: - The environment specified when the type was added to that managed entity or managed entity group. - Variables defined directly in the type. - The environment referenced by the type. If the sampler or sampler include is referenced in the managed entity or managed entity group, or was not found in the above: - Variables defined in (or inherited by) the managed entity. - The environment referenced by the managed entity. - The operating environment. For a sampler include, if the variable still cannot be resolved: - This process is repeated, using the locations that would be searched when resolving variables for the host sampler configuration. Inherited Variables Managed entities and environments inherit any variables defined in its ancestor sections in the Gateway setup. For example, in the screenshot below, managed entity linux1 inherits the three variables shown, as they are defined in the managed entity group linux hosts which contains the linux1 definition. Inherited variables can also be overridden (a new value given to the variable, replacing the inherited value) by re-defining the variable again with the new value. Additional variables can also be defined as normal. The screenshot above shows the variables defined in the linux1 managed entity. As this entity is part of the linux hosts group, it inherits the three variables dbhost, dbport, and logdir. The additional var entries define a new variable dbuser, and override the inherited logdir variable with a new value. This brings the total number of variables for this managed entity to 4 (3 inherited variables with 1 overridden, and 1 new variable). Configuration environments > environmentGroup Environment groups are used to group sets of environments, to improve ease of setup management. environments > environmentGroup > name Specifies the name of the environment group. Although the name is not used internally by gateway, it is recommended to give the group a descriptive name so that users editing the setup file can easily determine the purpose of the group. environments > environment Each configuration in this section is a named environment that can be accessed from other parts of the configuration. An environment defines a set of variables that can be referred to by users of the environment. environments > environment > name Specifies the name of the environment. 
Although the name is not used internally by gateway, it is recommended to give the environment a descriptive name so that users editing the setup file can easily determine the class of variables in this environment. environments > environment > var Each var element represents a named variable which can be accessed from its environment. There are several variable types available. environments > environment > var > name The name that is used to identify the variable. The name must be unique within each variable scope. environments > environment > var > activeTime Specifies a variable of type activeTime reference which refers to an active time in the system. environments > environment > var > boolean Specifies a variable of type Boolean which can take a value of true or false. environments > environment > var > double Specifies a variable of type double which can take a double precision floating point numerical value. environments > environment > var > externalConfigFile Specifies a variable which can take the name of an external configuration file. environments > environment > var > integer Specifies a variable which can take a numerical integer value. environments > environment > var > nameValueList Specifies a variable which can take a list of name value pairs. environments > environment > var > nameValueList > item Specifies a single name/value pair in the list. environments > environment > var > nameValueList > item > name Specifies the name in a single name/value pair in the list. environments > environment > var > nameValueList > item > value Specifies the value in a single name/value pair in the list. environments > environment > var > regex Specifies a variable which can take a regular expression. environments > environment > var > string Specifies a variable which can take a string. environments > environment > var > stringList Specifies a variable which can take a list of strings. This can be used in various plugins to define lists of values. It can also be used it the inList() function in rules. (There is no where else in rules that stringLists should be used). You can use a stringList variable in the following cases: realTimeCheckpoints > checkpoint > parents in Message Tracker configuration processParameters in Processes inList in Rules, Actions, and Alerts Adding to existing dataviews in Rules, Actions, and Alerts samplers > sampler > hideRows in Samplers samplers > sampler > hideColumns in Samplers - environments > environment > var > stringList > string Specifies a single string in the string list. environments > environment > var > macro Specifies a variable that takes the value from the gateway itself. The following macros are supported: The values of macros that cannot be resolved because they are not applicable (e.g. the value of samplerName on a Managed entity) will return the name of the macro.
https://docs.itrsgroup.com/docs/geneos/5.10.0/Gateway_Reference_Guide/gateway_user_variables_and_environments.htm
2022-01-17T00:20:44
CC-MAIN-2022-05
1642320300253.51
[array(['../Resources/Images/gateway/gw-user-variables-inherit-variable_677x385.png', None], dtype=object) array(['../Resources/Images/gateway/gw-user-variables-inherit-variable-linux_680x395.png', None], dtype=object) ]
docs.itrsgroup.com
Updating the Loome Agent varies depending on the platform you’re hosting it on. Below is a summary of the required action for each platform in order to update the agent. To update Loome Agents running as a Windows service or Linux systemd daemon, simply re-download and re-run the script. Containers run locally using Docker require a few extra steps, described below. To update a locally hosted Docker container agent, you will need to have the docker run command handy from the installation of the agent in the first place. With this command you need to do the following: pull the latest loomesoftware/agent image, remove the existing container, and run it again with the original command. This means that the general series of commands for updating the agent would look like: docker pull loomesoftware/agent docker rm example-agent docker run example-agent ... # The rest of the command With tools like Portainer and Amazon Fargate, the process for updating the container is simply a case of restarting the container; as with Azure Container Instances, the latest version is downloaded in the process. If you require any help with your container hosting service of choice, please contact us at [email protected]
https://docs.loomesoftware.com/monitor/how-to-add-an-agent/how-to-update-your-agent/
2022-01-17T01:01:10
CC-MAIN-2022-05
1642320300253.51
[]
docs.loomesoftware.com
Database Connector Anypoint Connector for Database (Database Connector) enables you to connect with almost any Java Database Connectivity (JDBC) relational database using a single interface for every case. Database Connector allows you to run diverse SQL operations on your database, including Select, Insert, Update, Delete, and even Stored Procedures. Notes: In Mule 3.7 and later, you can specify MEL expressions in connector fields. Additional attributes can be configured dynamically depending on the database configuration you use. For more information, see the Fields That Support MEL Expressions section. Database Connector replaces the JDBC connector. Starting Database Connector in your Mule application: Check that your database engine corresponds to what is described in Database. For more information, see DataSense. DataMapper:, Database Connector The example below illustrates a very simple Mule application in Studio that Database Global Database Connector for Database Engines Supported Out of the Box green plus icon to the right of Connector configuration to create a database global element for this database connector: Studio displays the Choose Global Type window, shown below. Select your supported database engine from the list, for example Oracle.<< Configuring the Global Database Connector, see Database Connector Reference. See also Fields That Support MEL Expressions. Oracle MySQL Derby. Advanced Tab XML Editor. Studio Visual Editor. XML Editor or Standalone If you haven’t already done so, download the driver for your particular database. For example, the driver for a MySQL database is available for download online. Add the driver’s .jarfile to the rootfolder. For details, see the next section. Configuring the Global Database Connector for Generic DB Configuration Studio Visual Editor. a Database Connector Instance Inside a Flow._21<<_22<< If the desired data type is not listed, simply type it into the empty field. XML Editor or Standalone_23<< Example 2 _25<< Example 2 XML Editor Example 1 <db:bulk-execute insert into employees columns (ID, name) values (abc, #[some expression]); update employees set name = "Pablo" where id = 1; delete from employees where id = 2; </db:bulk-execute> Example 2 <db:bulk-execute #[payload] </db:bulk-execute> Tips Installing the database driver: Be sure to install the .jarfile for your database driver in your Mule project, then configure the build path of the project to include the .jar).: <db:oracle-config <db:data-types> <!-- java.sql.STRUCT == 2002—> <db:data-type <!-- java.sql.ARRAY == 2003—> <db:data-type </db:data-types> </db:oracle-config> Struct Type In the case of struct values, the database connector returns java.sql.Struct. In order to obtain the information,: <db:oracle-config </db:oracle-config> <db:data-type <!-- VARCHAR id=12 --> <db:data-type <!-- STRUCT id=2002 --> </db:data-types> ... <db:stored-procedure <db:parameterized-query><![CDATA[CALL storedprocfnc(:INtypename,:OUTtypename);]]></db:parameterized-query> <db:in-param <db:out-param </db:stored-procedure> Example MEL Expression Database URL The following example shows the Mule 3.7 and newer change where you can specify a MEL expression in the Database URL field. See also Fields That Support MEL Expressions. <mule xmlns="" xmlns: <db:derby-config <flow name="defaultQueryRequestResponse"> <inbound-endpoint <set-variable <db:select <db:parameterized-query>select * from PLANET order by ID</db:parameterized-query> </db:select> </flow> </mule>
https://docs.mulesoft.com/db-connector/0.3.7/
2022-01-17T00:47:40
CC-MAIN-2022-05
1642320300253.51
[array(['_images/db-example-flow.png', 'db_example_flow'], dtype=object) array(['_images/db-example-flow.png', 'db_example_flow'], dtype=object) array(['_images/modif-flowchart.png', 'modif_flowchart'], dtype=object) array(['_images/installed-mysql-driver.png', 'installed_mysql_driver'], dtype=object) array(['_images/oracle-global-elem.png', 'oracle_global_elem'], dtype=object) array(['_images/mysql-global-elem.png', 'mysql_global_elem'], dtype=object) array(['_images/derby-global-elem.png', 'derby_global_elem'], dtype=object) array(['_images/config-enable-ds.png', 'config_enable_DS'], dtype=object) array(['_images/advanced-ge.png', 'Advanced_GE'], dtype=object) array(['_images/pack-explorer.png', 'pack_explorer'], dtype=object) array(['_images/global-elem-generic-db-gral-tab.png', 'global_elem-generic_DB-gral_tab'], dtype=object) array(['_images/config-enable-ds.png', 'config_enable_DS'], dtype=object) array(['_images/use-xa-transact.png', 'use_XA_transact'], dtype=object) array(['_images/config-db-connector.png', 'config_db_connector'], dtype=object) array(['_images/select.png', 'select'], dtype=object) array(['_images/insert-w-mel.png', 'insert_w_MEL'], dtype=object) array(['_images/truncate.png', 'truncate'], dtype=object) array(['_images/stored-procedure.png', 'stored_procedure'], dtype=object) array(['_images/bulk.png', 'bulk'], dtype=object) array(['_images/advanced-insert.png', 'advanced_insert'], dtype=object) array(['_images/advanced-select.png', 'advanced_select'], dtype=object) array(['_images/template-with-vars.png', 'template_with_vars'], dtype=object) array(['_images/datatypes-menu.png', 'datatypes_menu'], dtype=object) array(['_images/dllexample.png', 'ddlexample'], dtype=object) array(['_images/dllexample2.png', 'ddlexample2'], dtype=object) array(['_images/bulkex1.png', 'bulkex1'], dtype=object) array(['_images/bulkex2.png', 'bulkex2'], dtype=object)]
docs.mulesoft.com
Introduction to notebooks¶ Jupyter notebooks are a convenient tool for interactive data exploration, rapid prototyping, and producing reports. The Virtual Research Environment provides free JupyterLab instances with persistent storage where you can run notebooks working with Swarm data. Note Sometimes notebooks won’t render directly on the GitHub website (or are slow). Try nbviewer instead (see the “Render” links above). In the case of Swarm_notebooks, the notebooks are stored in the repository without outputs included, so are better viewed at swarm.magneticearth.org (the Jupyter Book link above) Notebooks can be uploaded to JupyterLab using the “Upload” button (which means you must first download the notebooks to your computer from GitHub). To easily access a full repository, open a Terminal and use git: To clone a repository to your working space: git clone (this will clone it into Swarm_notebooks within your current directory) To clear any changes you made and fetch the latest version, from within Swarm_notebooks run: git fetch git reset --hard origin/master The nbgitpuller links above perform a git clone operation for you, and applies updates when you re-click the link using special automatic merging behaviour. Sometimes it may be necessary to perform the git operations directly instead.
https://viresclient.readthedocs.io/en/latest/notebook_intro.html
2022-01-17T01:03:02
CC-MAIN-2022-05
1642320300253.51
[]
viresclient.readthedocs.io
Using a Pre-trained Model¶ Inference using a DeepSpeech pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed further down in this README. - The Python package/language binding The Node.JS package/language binding - The .NET client/language binding Running deepspeech might, see below, require some runtime dependencies to be already installed on your system: sox- The Python and Node.JS clients use SoX to resample files to 16kHz. libgomp1- libsox (statically linked into the clients) depends on OpenMP. Some people have had to install this manually. libstdc++- Standard C++ Library implementation. Some people have had to install this manually. libpthread- On Linux, some people have had to install libpthread manually. On Ubuntu, libpthreadis part of the libpthread-stubs0-devpackage. Redistribuable Visual C++ 2015 Update 3 (64-bits)- On Windows, it might be required to ensure this is installed. Please download from Microsoft. Please refer to your system’s documentation on how to install these dependencies. CUDA dependency (inference)¶ The GPU capable builds (Python, NodeJS, C++, etc) depend on CUDA 10.1 and CuDNN v7.6. Getting the pre-trained model¶ If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech releases page. Alternatively, you can run the following command to download the model files in your current directory: wget wget There are several pre-trained model files available in official releases. Files ending in .pbmm are compatible with clients and language bindings built against the standard TensorFlow runtime. Usually these packages are simply called deepspeech. These files are also compatible with CUDA enabled clients and language bindings. These packages are usually called deepspeech-gpu. Files ending in .tflite are compatible with clients and language bindings built against the TensorFlow Lite runtime. These models are optimized for size and performance in low power devices. On desktop platforms, the compatible packages are called deepspeech-tflite. On Android and Raspberry Pi, we only publish TensorFlow Lite enabled packages, and they are simply called deepspeech. You can see a full list of supported platforms and which TensorFlow runtime is supported at Supported platforms for inference. Finally, the pre-trained model files also include files ending in .scorer. These are external scorers (language models) that are used at inference time in conjunction with an acoustic model ( .pbmm or .tflite file) to produce transcriptions. We also provide further documentation on the decoding process and how scorers are generated. Important considerations on model inputs¶ The release notes include detailed information on how the released models were trained/constructed. Important considerations for users include the characteristics of the training data used and whether they match your intended use case. For acoustic models, an important characteristic is the demographic distribution of speakers. For external scorers, the texts should be similar to those of the expected use case. If the data used for training the models does not align with your intended use case, it may be necessary to adapt or train new models in order to get good accuracy in your transcription results. 
The process for training an acoustic model is described in Training Your Own Model. In particular, fine tuning a release model using your own data can be a good way to leverage relatively smaller amounts of data that would not be sufficient for training a new model from scratch. See the fine tuning and transfer learning sections for more information. Data augmentation can also be a good way to increase the value of smaller training sets. Creating your own external scorer from text data is another way that you can adapt the model to your specific needs. The process and tools used to generate an external scorer package are described in External scorer scripts and an overview of how the external scorer is used by DeepSpeech to perform inference is available in CTC beam search decoder. Generating a smaller scorer from a single purpose text dataset is a quick process and can bring significant accuracy improvements, specially for more constrained, limited vocabulary applications. Model compatibility¶ DeepSpeech models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can’t re-export it. Using the Python package¶ Pre-built binaries which can be used for performing inference with a trained model can be installed with pip3. You can then use the deepspeech binary to do speech-to-text on an audio file: For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in this documentation. We will continue under the assumption that you already have your system properly setup to create new virtual environments. Create a DeepSpeech virtual environment¶ In creating a virtual environment you will create a directory containing a python3 binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on $HOME/tmp/deepspeech-venv. You can create it using this command: $ virtualenv -p python3 $HOME/tmp/deepspeech-venv/ Once this command completes successfully, the environment will be ready to be activated. Activating the environment¶ Each time you need to work with DeepSpeech, you have to activate this virtual environment. This is done with this simple command: $ source $HOME/tmp/deepspeech-venv/bin/activate Installing DeepSpeech Python bindings¶ Once your environment has been set-up and loaded, you can use pip3 to manage packages locally. On a fresh setup of the virtualenv, you will have to install the DeepSpeech wheel. You can check if deepspeech is already installed with pip3 list. To perform the installation, just use pip3 as such: $ pip3 install deepspeech If deepspeech is already installed, you can update it as such: $ pip3 install --upgrade deepspeech Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the GPU specific package as follows: $ pip3 install deepspeech-gpu See the release notes to find which GPUs are supported. Please ensure you have the required CUDA dependency. You can update deepspeech-gpu as follows: $ pip3 install --upgrade deepspeech-gpu In both cases, pip3 should take care of installing all the required dependencies. 
After installation has finished, you should be able to call deepspeech from the command-line. Note: the following command assumes you downloaded the pre-trained model. deepspeech --model deepspeech-0.9.3-models.pbmm --scorer deepspeech-0.9.3-models.scorer --audio my_audio_file.wav The --scorer argument is optional, and represents an external language model to be used when transcribing the audio. See the Python client for an example of how to use the package programmatically. Using the Node.JS / Electron.JS package¶ You can download the JS bindings using npm: npm install deepspeech Please note that as of now, we support: Node.JS versions 4 to 13, and Electron.JS versions 1.6 to 7.1. TypeScript support is also provided. Alternatively, if you’re using Linux and have a supported NVIDIA GPU, you can install the GPU specific package as follows: npm install deepspeech-gpu See the release notes to find which GPUs are supported. Please ensure you have the required CUDA dependency. See the TypeScript client for an example of how to use the bindings programmatically. Using the command-line client¶ To download the pre-built binaries for the deepspeech command-line (compiled C++) client, use util/taskcluster.py: python3 util/taskcluster.py --target . or if you’re on macOS: python3 util/taskcluster.py --arch osx --target . also, if you need some binaries different from current master, like v0.2.0-alpha.6, you can use --branch: python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "." The script taskcluster.py will download native_client.tar.xz (which includes the deepspeech binary and associated libraries) and extract it into the current folder. Also, taskcluster.py will download binaries for Linux/x86_64 by default, but you can override that behavior with the --arch parameter. See the help info with python util/taskcluster.py -h for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well. Alternatively you may manually download the native_client.tar.xz from the releases page. Note: the following command assumes you downloaded the pre-trained model. ./deepspeech --model deepspeech-0.9.3-models.pbmm --scorer deepspeech-0.9.3-models.scorer --audio audio_input.wav See the help output with ./deepspeech --help for more details. Dockerfile for building from source¶ A Dockerfile template is provided for building from source; it builds deepspeech.so, the C++ native client, Python bindings, and KenLM. You need to generate the Dockerfile from the template using: make Dockerfile.build If you want to specify a different DeepSpeech repository / branch, you can pass DEEPSPEECH_REPO or DEEPSPEECH_SHA parameters: make Dockerfile.build DEEPSPEECH_REPO=git://your/fork DEEPSPEECH_SHA=origin/your-branch Third party bindings¶ In addition to the bindings above, third party developers have started to provide bindings to other languages: Asticode provides Golang bindings in its go-astideepspeech repo. RustAudio provides a Rust binding; its installation and use are described in their deepspeech-rs repo. stes provides preliminary PKGBUILDs to install the client and python bindings on Arch Linux in the arch-deepspeech repo. gst-deepspeech provides a GStreamer plugin which can be used from any language with GStreamer bindings. thecodrr provides Vlang bindings; their installation and use are described in the vspeech repo. eagledot provides NIM-lang bindings; their installation and use are described in the nim-deepspeech repo.
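As a rough illustration of what "using the package programmatically" looks like with the Python bindings described above, the sketch below loads the 0.9.3 model files and transcribes a 16 kHz, 16-bit, mono WAV file. The audio file name is a placeholder, and resampling and error handling are omitted.
import wave
import numpy as np
from deepspeech import Model
# paths assume the pre-trained model files were downloaded as shown earlier
model = Model("deepspeech-0.9.3-models.pbmm")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")  # optional external scorer
# read a 16-bit, 16 kHz, mono WAV file (placeholder file name)
with wave.open("my_audio_file.wav", "rb") as wav:
    frames = wav.readframes(wav.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)
# run speech-to-text on the raw audio buffer
print(model.stt(audio))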
https://deepspeech.readthedocs.io/en/v0.9.3/USING.html
2022-01-17T01:14:59
CC-MAIN-2022-05
1642320300253.51
[]
deepspeech.readthedocs.io
Test summary Connect Apple iPad Pro 4th generation (model A2068) and Apple iPhone 11 (model 2111) to the Celona network. Also successfully test Wi-Fi hotspot functionality where required. Here are our how-to videos demonstrating Apple iPad Pro, iPhone 11 and iPhone SE in action on a Celona network operating on the CBRS spectrum: Configuration steps Support for CBRS (LTE band 48) within Apple devices is functional right out of the box after a Celona SIM card is installed in the physical SIM slot. However, if you’re experiencing issues, check the steps below to confirm your settings. Step 1. Insert the Celona nano SIM. Step 2. Go to Settings > Cellular Data > Network Selection; it should be set to Auto. This is the default setting. Step 3. Go to Settings > Cellular Data > APN Settings and set all APN fields to default. This step is required to be able to use Wi-Fi Personal Hotspot on the Apple iPhone / iPad with CBRS-based LTE connectivity as the backhaul. Step 4. Your iPhone / iPad should now be connected to the Celona network. Dual SIM operation If using a locked device, make sure the eSIM on the device is provisioned with the mobile network operator (MNO) subscription identity and the physical SIM slot is available for CBRS-based LTE network connectivity. For more details on how to configure dual SIM operation on Apple devices, check out this article.
https://docs.celona.io/en/articles/4055569-apple-iphone-ipad-on-celona
2022-01-17T01:42:45
CC-MAIN-2022-05
1642320300253.51
[array(['https://downloads.intercomcdn.com/i/o/209876872/f66cd56d57013893c180a218/Apple+iPad+Pro+Settings.png?expires=1620669600&signature=10010544d8e551dfb330e951d8e94aba55dc274a912d9a6af7253dc2437dfbad', 'Apple iPad Pro on CBRS based LTE wireless'], dtype=object) ]
docs.celona.io
When building systems using the request/response pattern, the Reply method exposed by the IMessageHandlerContext or IBus interface is used to reply to the sender of the incoming message. The same Reply method can be used inside a Saga, but it is important to understand that it can have different semantics there; misunderstanding this can lead to unexpected behavior. The Reply method always delivers the message to the sender address of the incoming message. The following diagram details a scenario where two sagas and an integration endpoint utilize the request/response pattern to communicate. The replies are highlighted in red. The reason a call to Reply(new ShipOrder()) sends a message to the Shipment Gateway is that it is invoked in the context of handling the ShipmentReserved message, and the return address of ShipmentReserved is Shipment Gateway. In the context of a Saga it is not always clear at first glance who the sender of a message is. In the above example, when handling the expired ShipmentReservation timeout the sender of the message is the Delivery Manager endpoint. In this case a Reply would be delivered to the Delivery Manager, and that is not necessarily the desired behavior. Calling ReplyToOriginator makes it clear to NServiceBus that the message has to be delivered to the endpoint that was the originator of the saga.
https://docs.particular.net/nservicebus/sagas/reply-replytooriginator-differences
2022-01-17T01:43:39
CC-MAIN-2022-05
1642320300253.51
[array(['reply-replytooriginator-differences.png', 'Sample sequence diagram'], dtype=object) ]
docs.particular.net
Tips for first-time users¶ Ray provides a highly flexible, yet minimalist and easy to use API. On this page, we describe several tips that can help first-time Ray users to avoid some common mistakes that can significantly hurt the performance of their programs. All the results reported in this page were obtained on a 13-inch MacBook Pro laptop. Tip 1: Delay ray.get()¶ With Ray, the invocation of every remote operation (e.g., task, actor method) is asynchronous. This means that the operation immediately returns a promise/future, which is essentially an identifier (ID) of the operation’s result. This is key to achieving parallelism, because the driver can launch many operations before waiting for any of their results. To illustrate, consider a simple serial program that calls a function do_some_work(x) — which sleeps for 1 second and then returns x — four times in a loop, measures the elapsed time, and prints the results with print("results = ", results). The output of a program execution is below. As expected, the program takes around 4 seconds: duration = 4.0149290561676025 results = [0, 1, 2, 3] Now, let’s parallelize the above program with Ray. Some first-time users will do this by just making the function remote, i.e., decorating do_some_work() with @ray.remote and invoking it with do_some_work.remote(x). However, when executing this version of the program one gets: duration = 0.0003619194030761719 results = [ObjectRef(df5a1a828c9685d3ffffffff0100000001000000), ObjectRef(cb230a572350ff44ffffffff0100000001000000), ObjectRef(7bbd90284b71e599ffffffff0100000001000000), ObjectRef(bd37d2621480fc7dffffffff0100000001000000)] When looking at this output, two things jump out. First, the program finishes immediately, i.e., in less than 1 ms. Second, instead of the expected results (i.e., [0, 1, 2, 3]) we get a bunch of identifiers. Recall that remote operations are asynchronous and they return futures (i.e., object IDs) instead of the results themselves. This is exactly what we see here. We measure only the time it takes to invoke the tasks, not their running times, and we get the IDs of the results corresponding to the four tasks. To get the actual results, we need to use ray.get(), and here the first instinct is to just call ray.get() on the remote operation invocation, i.e., replace line 12 with: results = [ray.get(do_some_work.remote(x)) for x in range(4)] By re-running the program after this change we get: duration = 4.018050909042358 results = [0, 1, 2, 3] So now the results are correct, but it still takes 4 seconds, so there is no speedup. The reason is that each call to ray.get() blocks until its task completes, so the four tasks still run one after another. The fix is the point of this tip: delay ray.get() — invoke all the remote tasks first, and only then call ray.get() on the list of returned IDs. Tip 2: Avoid tiny tasks¶ Parallelizing very small tasks rarely pays off, because the overhead of scheduling and running a remote task can outweigh the work itself. To see this, consider making a much shorter tiny_work() function remote: import time import ray ray.init(num_cpus = 4) @ray.remote def tiny_work(x): time.sleep(0.0001) # replaces the actual work you need to do return x Invoking many such remote tasks can end up slower than calling tiny_work() serially, because the per-task overhead dominates; the remedy is to batch several small pieces of work into one larger task. (The timings in this section were measured on a 2018 MacBook Pro notebook.) Tip 3: Avoid passing the same object repeatedly to remote tasks¶ When we pass a large object as an argument to a remote function, Ray copies it into the object store, and doing so over and over again is wasteful. One example is passing the same large object as an argument repeatedly, as illustrated by the program below: import time import numpy as np import ray ray.init(num_cpus = 4) @ray.remote def no_work(a): return start = time.time() a = np.zeros((5000, 5000)) result_ids = [no_work.remote(a) for x in range(10)] results = ray.get(result_ids) print("duration =", time.time() - start) This program outputs: duration = 1.0837509632110596 This is surprisingly slow given that no_work() does nothing: the culprit is that every invocation of no_work(a) copies the large array a into the object store again. To avoid these repeated copies, store the array in the object store once with ray.put() and pass the resulting ID to the tasks: start = time.time() a_id = ray.put(np.zeros((5000, 5000))) result_ids = [no_work.remote(a_id) for x in range(10)] results = ray.get(result_ids) print("duration =", time.time() - start) Running this program takes only: duration = 0.132796049118042 This is more than 7 times faster than the previous version. Tip 4: Pipeline data processing¶ If we use ray.get() to wait for the results of all tasks at once, we cannot start processing anything until the slowest task has finished. Suppose we again launch four do_some_work() tasks, this time with each task taking a variable amount of time of up to a few seconds. Next, assume the results of these tasks are processed by process_results(), which takes 1 sec per result.
The expected running time when waiting for all results with ray.get() is then (1) the time it takes to execute the slowest of the do_some_work() tasks, plus (2) the 4 seconds needed to run process_results() on the four results. If we instead use ray.wait() to process each result as soon as it becomes available, result processing overlaps with the still-running tasks and the program completes in under 5 sec, a significant improvement: duration = 4.852453231811523 result = 6 To aid the intuition, Figure 1 shows the execution timeline in both cases: when using ray.get() to wait for all results to become available before processing them, and using ray.wait() to start processing the results as soon as they become available.
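The pipelined version described above can be sketched as follows. This is a minimal illustration of the ray.wait() pattern rather than the exact program used to produce the timings shown; do_some_work() and process_result() stand in for the variable-length task and the one-second-per-result processing step described in the text.
import random
import time
import ray
ray.init(num_cpus=4)
@ray.remote
def do_some_work(x):
    # stand-in for a task of variable duration
    time.sleep(random.uniform(0, 4))
    return x
def process_result(x):
    # stand-in for the 1-second-per-result processing step
    time.sleep(1)
    return x
start = time.time()
result_ids = [do_some_work.remote(x) for x in range(4)]
total = 0
while result_ids:
    # ray.wait returns as soon as one result is ready,
    # so processing overlaps with the tasks still running
    done_ids, result_ids = ray.wait(result_ids, num_returns=1)
    total += process_result(ray.get(done_ids[0]))
print("duration =", time.time() - start)
print("result = ", total)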
https://docs.ray.io/en/master/auto_examples/tips-for-first-time.html
2022-01-17T00:37:42
CC-MAIN-2022-05
1642320300253.51
[]
docs.ray.io
Alarms in Rhino alert the SLEE administrator to exceptional conditions. Application components in the SLEE raise them, as does Rhino itself (upon detecting an error condition). Rhino clears some alarms automatically when the error conditions are resolved. The SLEE administrator must clear others manually. When an alarm is raised or cleared, Rhino generates a JMX notification from the Alarm MBean. Management clients may attach a notification listener to the Alarm MBean, to receive alarm notifications. Rhino logs all alarm notifications. What’s new in SLEE 1.1? While only SBBs could generate alarms in SLEE 1.0, other types of application components can also generate alarms in SLEE 1.1. In SLEE 1.1, alarms are stateful — between being raised and cleared, an alarm persists in the SLEE, where an administrator may examine it. (In SLEE 1.0, alarms could be generated with a severity level that indicated a cleared alarm, but the fact that an error condition had occurred did not persist in the SLEE beyond the initial alarm generation.)
https://docs.rhino.metaswitch.com/ocdoc/books/rhino-documentation/2.6.2/rhino-administration-and-deployment-guide/slee-management/alarms/about-alarms.html
2022-01-17T00:41:56
CC-MAIN-2022-05
1642320300253.51
[]
docs.rhino.metaswitch.com
Developing with AppBuilder# Simplicity Studio® 5 (SSv5)'s AppBuilder is a graphical tool used to create configurations and build Application Framework files. The AppBuilder configuration files indicate which features and functions you would like included in your compiled binary image. By using AppBuilder along with an application framework, you can quickly create an application that includes all of the required functionality for the image's purpose. AppBuilder must be used in conjunction with one of the Silicon Labs application frameworks, which are shipped as part of the SDK. All of the code that will be compiled into the binary image is included in the SDK distribution. AppBuilder creates configuration and build files that tell the application framework which portions of the code to include in, and which to exclude from, the compiled binary image. With the exception of a few header (.h) files, AppBuilder does not generate the C source code for the application. All of the source code that ultimately will be included in the binary image is provided in the Application Framework. In general, to develop an application using AppBuilder: Create an application based on one of the AppBuilder examples, such as Z3Switch. Customize the application on the various AppBuilder tabs. Functions can be added by enabling and configuring plugins. Click Generate to create application files. Optionally, add your own code to the application. Once you have finished customizing your project, build and flash it, and then test and debug it.
https://docs.silabs.com/simplicity-studio-5-users-guide/5.2.1/ss-5-users-guide-developing-with-appbuilder/
2022-01-17T01:42:53
CC-MAIN-2022-05
1642320300253.51
[]
docs.silabs.com
Many of the examples use a sample river flow data set. USGS Surface-Water Data Sets are provided courtesy of the U.S. Geological Survey. To run the examples, you can use the river flow data from Teradata-supplied public buckets. Or you can set up a small object store for the data set. The following instructions explain how to set up the river flow data on your own external object store. Your external object store must be configured to allow Advanced SQL Engine access. When you configure external storage, you set the credentials to your external object store. Those credentials are used in SQL commands. The supported credentials for USER and PASSWORD (used in the CREATE AUTHORIZATION command) and for ACCESS_ID and ACCESS_KEY (used by READ_NOS and WRITE_NOS) correspond to the values shown in the following table: See your cloud vendor documentation for instructions on creating an external object store account. - Create an external object store on a Teradata-supported external object storage platform. Give your external object store a unique name. In the Teradata-supplied examples, the bucket/container is called td-usgs. Because the bucket/container name must be unique, choose a name other than td-usgs. - On Amazon, generate an access ID and matching secret key for your bucket or generate an Identity and Access Management (IAM) user credential. On Azure, generate Account SAS tokens (not Service SAS tokens) for your td-usgs container. On Google Cloud Storage, generate an access ID and matching secret key for your bucket. - Download the sample data from (look for NOS Download Data) to your client/laptop. The ZIP file contains sample river flow data in CSV, JSON, and Parquet data formats. - Copy the sample data to your bucket or container, being careful to preserve the data directory structure. For example, use a location similar to the following: Note, you can use the Amazon S3 or Azure management consoles or a utility like AWS CLI to copy the data to your external object store. For Google Cloud Storage, you can use the gsutil tool to copy the data to your external object store. - Amazon S3: /S3/YOUR-BUCKET.s3.amazonaws.com/JSONDATA - Azure Blob storage and Azure Data Lake Storage Gen2: /az/YOUR-STORAGE-ACCOUNT.blob.core.windows.net/td-usgs/CSVDATA/ - Google Cloud Storage: /gs/storage.googleapis.com/YOUR-BUCKET/CSVDATA/ - In the example code replace the bucket or container (shown as td-usgs, YOUR-BUCKET, or YOUR-STORAGE-ACCOUNT) with the location of your object store. - Replace YOUR-ACCESS-KEY-ID and YOUR-SECRET-ACCESS-KEY with the access values for your external object store. The following steps may require the assistance of your public cloud administrator.
https://docs.teradata.com/r/UG7kfQnbU2ZiX41~Mu75kQ/IJgtRemm~zu5MHEtZ_DCyw
2022-01-17T00:45:46
CC-MAIN-2022-05
1642320300253.51
[]
docs.teradata.com
DataPoint Interface Specifies the appearance of the data point in a chart series. Namespace: DevExpress.Spreadsheet.Charts Assembly: DevExpress.Spreadsheet.v21.2.Core.dll Declaration Remarks To change options for an individual data point, add an item to the Series.CustomDataPoints collection at the required index and use the properties of the added DataPoint object. Use the Series.CustomDataPoints property to get access to options applied to all data points in the series.
https://docs.devexpress.com/OfficeFileAPI/DevExpress.Spreadsheet.Charts.DataPoint
2022-01-17T01:22:07
CC-MAIN-2022-05
1642320300253.51
[]
docs.devexpress.com
Changes compared to 21.11.0 Bug Fixes - Fix an issue with SharePoint site backup where a site might be missed from backup due to missing fields in the MS API response - Fix an issue where the number of Protected Accounts was displayed incorrectly when only shared services of a group account or a team site were selected - Fix an issue with missing new Thai and Danish language entries when logged in to the Comet Server web interface as an end-user - Fix a cosmetic issue by adding a tooltip onto Office365 service icons on the Comet Server web interface - Fix a cosmetic issue where not all service icons were activated when checking the top level item after filtering Office365 resources by User account type on the Comet Backup desktop app
https://docs.cometbackup.com/release-notes/21-11-1-himalia-released/
2022-01-17T00:09:49
CC-MAIN-2022-05
1642320300253.51
[]
docs.cometbackup.com
Tracer¶ The Tracer application is one of the HELICS apps available with the library. Its purpose is to provide an easy way to display data from a federation. It acts as a federate that can “capture” values or messages from specific publications, direct endpoints, or cloned endpoints which exist elsewhere, and either trigger callbacks or display them on a screen. The main use is as a simple visual indicator and a monitoring app. Command line arguments¶ allowed options: command line only: -? [ --help ] produce help message -v [ --version ] display a version string --config-file arg specify a configuration file to use configuration: --stop arg the time to stop --mapfile arg write progress to a memory mapped file federate --inputdelay arg the input delay on incoming communication of the federate --outputdelay arg the output delay for outgoing communication of the federate -f [ --flags ] arg named flags for the federate also permissible are all arguments allowed for federates and any specific broker specified: the tracer executable also takes an untagged argument of a file name, for example helics_app tracer tracer_file.txt --stop 5 Tracers support both delimited text files and JSON files (examples are available in the HELICS documentation); the options are otherwise the same as for recorders. JSON configuration¶ Tracers can also be configured with a JSON file.
https://docs.helics.org/en/latest/references/apps/Tracer.html
2022-01-17T00:12:45
CC-MAIN-2022-05
1642320300253.51
[]
docs.helics.org
Used for importing Data from the Facebook Application using OAuth. This connection type supports the following task types: This connection can be used in a Data Migration. A Developer Application can be created here: For the OAuth Connection, the Client ID for Facebook is your App ID. While your Client Secret is your App Secret. Add the Generated Redirect URI to Facebook. Add the following to the Connection String InitiateOauth=OFF;Target=YOUR_TARGET_ID Currently there is no way to refresh the Facebook Token automatically so you will need to reauthorize your Facebook Connection every 90 Days.
https://docs.loomesoftware.com/integrate/online/connections/connection-types/facebook/
2022-01-17T01:23:55
CC-MAIN-2022-05
1642320300253.51
[]
docs.loomesoftware.com
Security The Cloud Manager REST API provides robust security based on token authentication and authorization. All NetApp Cloud Central Services, including Cloud Manager, use OAuth 2.0 for authorization. OAuth 2.0 is an open standard implemented by several authorization providers, including Auth0. Connecting and communicating with a secure REST endpoint is a two-step process: Acquire a JWT (JSON Web Token) access token from the trusted OAuth 2.0 token endpoint. Make a REST API call to the target endpoint with the access token in the Authorization: Bearer request header. Authorization can be performed in a federated or non-federated environment. The type of authorization environment you have determines which token and procedure to use. You must use a valid token to access the API based on the authorization mode. Users with federated authorization need to create a token using Create a user token with federated authentication. Non-federated users can optionally use this type of token. Users with non-federated authorization need to create a token using Create a user token with nonfederated authentication.
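As a generic illustration of the two-step process above, the sketch below acquires a token and then passes it in the Authorization header of a subsequent call. The token endpoint URL, target API URL, grant type, and request body fields are placeholders, not the actual Cloud Manager or Auth0 endpoints; consult the API documentation for the real values and for the federated versus non-federated token procedures.
import requests
# Step 1: acquire a JWT access token from the OAuth 2.0 token endpoint
# (URL and payload fields below are placeholders, not the real endpoint)
token_response = requests.post(
    "https://example-auth-provider/oauth/token",
    json={
        "grant_type": "password",          # placeholder grant type
        "username": "user@example.com",    # placeholder credentials
        "password": "secret",
        "audience": "https://api.example.com",
    },
)
access_token = token_response.json()["access_token"]
# Step 2: call the target REST endpoint with the token in the
# "Authorization: Bearer" request header
api_response = requests.get(
    "https://api.example.com/some/resource",   # placeholder target endpoint
    headers={"Authorization": f"Bearer {access_token}"},
)
print(api_response.status_code, api_response.json())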
https://docs.netapp.com/us-en/cloud-manager-automation/cm/security.html
2022-01-17T02:24:13
CC-MAIN-2022-05
1642320300253.51
[]
docs.netapp.com
intentcan be created with a fixed set of economics and any other parameters, without reference a quote. This intentcan then be shared with the user for them to fulfil. In the payment example this means passing the intent_idto the initialization object of the widget. default_transactionin the widget initialization. ONE_TO_ONE_ATTEMPTED. ONE_TO_ONE_ATTEMPTED intent_id) ONE_TO_MANY partner_*fields will appear in each transaction created with the intent unless overridden in the widget configuration.) funding_settlementis not set to allow the user to choose how to fund the transaction.
https://docs.partners.liquid.com/e-commerce/intent
2022-01-17T01:26:19
CC-MAIN-2022-05
1642320300253.51
[]
docs.partners.liquid.com
deletescattercastendpoints removes endpoints for shut-down nodes. A node’s endpoint cannot be deleted while in use. This means that the node must be shut down and have left the cluster before a delete can be issued. A node that has been deleted cannot rejoin the cluster unless it is re-added, and the new scattercast endpoints file copied over. Copying an older scattercast endpoints file will not work, as the cluster uses versioning to protect against out-of-sync endpoints files.
https://docs.rhino.metaswitch.com/ocdoc/books/rhino-documentation/2.6.2/rhino-administration-and-deployment-guide/rhino-configuration/cluster-membership/scattercast-management/delete-scattercast-endpoint-s.html
2022-01-17T00:55:57
CC-MAIN-2022-05
1642320300253.51
[]
docs.rhino.metaswitch.com