Columns: content (string), url (string), timestamp (timestamp[ms]), dump (string), segment (string), image_urls (string), netloc (string)
The web time clock provides a really simple way for your employees to punch in/out at work. You can run the web time clock from a desktop computer or a tablet. Similar to an "old school" time clock, you can set up the web time clock as a kiosk at your business. Not all schedules have the web time clock enabled. As a manager you can disable the web time clock add-on. You can control which add-ons are enabled for your schedule on the add-ons page.

Punching In/Out (For Employees)

The web time clock is very simple to use. You can only do two things: 1) clock in and 2) clock out. To access the time clock, just click the Time Clock link at the top of your page. You will be redirected to the time clock page, which looks like this:

To clock in, you simply enter your email or employee ID. (Employee IDs can be set for employees on the Settings -> Employees page.) If you are not already clocked in, you will see a pop-up form like the one below. Also, if you are scheduled for one or more shifts, they will be displayed in this pop-up. You can click on these shifts to quickly select the position and location you are clocking into. Once you click the Clock In button, you will see a confirmation that you have been clocked in.

When you are done with your shift, you just enter your email or employee ID again. The system will recognize that you are already clocked in, and you will see a pop-up like the one below. In this pop-up you can see some basic details about the shift you just worked. Once you click the Clock Out button, you will see a confirmation that you have been clocked out.

Whitelist IP Addresses

By default, the web time clock can be accessed from any computer. In fact, you can share the link to your web time clock and it can be accessed by anyone without even having to log in. This is a great way to allow employees to clock in from anywhere quickly. However, you may want to lock down the physical location(s) that your employees can punch in/out from. This can be done by whitelisting specific IP addresses. On the add-on page for the web time clock you will see settings like this:

If the whitelist IP addresses setting is enabled, the time clock can only be used from IP addresses that are defined in your settings. If you try to access the time clock from another IP address, you will see an error message like this:
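The IP whitelist check is conceptually simple. The sketch below is purely illustrative and is not ZoomShift's implementation: it assumes a hypothetical list of allowed addresses and CIDR ranges and uses Python's standard ipaddress module to decide whether a punch request should be accepted.

import ipaddress

# Hypothetical whitelist: exact addresses and/or CIDR ranges configured by a manager.
WHITELIST = ["203.0.113.10", "198.51.100.0/24"]

def is_ip_allowed(client_ip: str, whitelist=WHITELIST) -> bool:
    """Return True if the client IP matches any whitelisted address or range."""
    ip = ipaddress.ip_address(client_ip)
    for entry in whitelist:
        if ip in ipaddress.ip_network(entry, strict=False):
            return True
    return False

print(is_ip_allowed("198.51.100.42"))  # True: inside the /24 range
print(is_ip_allowed("192.0.2.1"))      # False: not whitelisted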
http://docs.zoomshift.com/time-clock-and-timesheets/web-time-clock
2018-02-18T00:55:40
CC-MAIN-2018-09
1518891811243.29
[array(['https://images.contentful.com/7m65w4g847me/32McqPZDjakKiyAIQG8ia4/1aa5d762cdea8b43ff51f2aa71d8fb3c/web-time-clock-1.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/1NNdCA6If26yIsAgUiqsyy/e2c47f58c3a8b4cba11b9f030a75d82f/web-time-clock-2.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/4BGxoT0bz2QkAKyCI2GWaE/0b8c0702ac2ffe920b7bcf13083ab3e2/web-time-clock-3.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/k1r8cmOvsIyYUYSywso6G/02f503578feae184027db10298a5cb9b/web-time-clock-4.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/1WIgijk1K0KkweuGiioAOK/c98f18734ad58c382f001f1d75da83d0/web-time-clock-5.png', None], dtype=object) ]
docs.zoomshift.com
Syntax

<cue fade time> - A decimal number of seconds (optionally using decimal digits for fractions of seconds). 0 means no fade time (the channels are set immediately without fading).
- Optionally may include a slash (/), which indicates a split (up/down) fade
- Optionally may include a dash (-), which indicates a delayed fade

Abbreviation

FA

Description

Setting The Cue Fade Time

Use the Fade command to set the cue fade time for the active playback fader. This time is used to crossfade the channels of the next cue whenever the Go command is executed. The cue fade time is automatically set by cues being loaded into the playback fader, but the Fade command can be used to override the cue fade time.

Using Split Fade Times

A split fade time is used when it is desired to have channels that are fading up occur at a different rate than channels fading down. To specify a split fade time, use a slash character in between two fade times. For instance, the command Fade 3.5/7.5 will cause any channel that is fading up to occur in 3.5 seconds, and any channel that is fading down to occur in 7.5 seconds.

Using Fade Delays

Normally, whenever a cue is executed, the fade begins immediately. A delay can be inserted that causes the fade to wait before starting to change value. To specify a fade delay, use a delay time and a dash character before the fade time. For instance, the command Fade 5.5-10 will cause the fade to delay 5.5 seconds before beginning a 10 second fade.

Using Both Fade Delays And Split Fade Timing

Both fade delays and split fades can be combined. For instance, the command Fade 1-2/3-4 would cause any channels fading up to be delayed 1 second before fading over 2 seconds, while the downward fading channels would be delayed 3 seconds before fading over 4 seconds.

Determining The Current Cue Fade Time

Use the Fade command with the question mark (?) to return the current cue fade time. A cue fade time such as 7.21 or 12/3 will be returned.

Examples

Fade 1 - Sets the cue fade time to 1 second.
Fade 1.35/7.2 - Sets the cue fade time to 1.35 seconds for upward fading channels and 7.2 seconds for downward fading channels.
Cue 22 Fade 5 Go - Loads cue 22, then overrides its fade time to 5 seconds before executing it.
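To make the delay-fade/delay-fade notation concrete, here is a minimal parsing sketch. It is not part of the console's software; it simply assumes the syntax described above (an optional split separated by a slash, and an optional delay given before a dash) and turns a fade-time string into numbers.

def parse_part(part: str):
    """Parse 'delay-fade' or 'fade' into (delay_seconds, fade_seconds)."""
    if "-" in part:
        delay, fade = part.split("-", 1)
        return float(delay), float(fade)
    return 0.0, float(part)

def parse_fade_time(text: str):
    """Parse a fade time such as '1', '3.5/7.5' or '1-2/3-4'.

    Returns ((up_delay, up_fade), (down_delay, down_fade)).
    """
    if "/" in text:
        up, down = text.split("/", 1)
        return parse_part(up), parse_part(down)
    both = parse_part(text)
    return both, both

print(parse_fade_time("1"))        # ((0.0, 1.0), (0.0, 1.0))
print(parse_fade_time("3.5/7.5"))  # ((0.0, 3.5), (0.0, 7.5))
print(parse_fade_time("1-2/3-4"))  # ((1.0, 2.0), (3.0, 4.0))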
http://docs.interactive-online.com/cs2/1.0/en/topic/fade
2019-06-16T03:11:44
CC-MAIN-2019-26
1560627997533.62
[]
docs.interactive-online.com
The word transaction can have several different meanings in the software industry. This document explains how the term is used in New Relic APM and how transactions are reported. What is a transaction? At New Relic, a transaction is defined as one logical unit of work in a software application. Specifically, it refers to the function calls and method calls that make up that unit of work. For New Relic APM, it will often refer to a web transaction, which represents activity that happens from when the application receives a web request to when the response is sent. When you install New Relic APM in a supported system, it begins automatically reporting web requests and other important functions and methods. To supplement the default level of monitoring, you can set up custom instrumentation to report additional transactions. Some frameworks do not have a natural concept of a transaction. In other words, there are no predefined pathways that can easily be recognized or monitored as transactions. To define transactions in such frameworks, you can use custom instrumentation. To see how all your applications, services, containers, cloud services, hosts, and other entities work together, use New Relic One. Types of transactions Cumulative transaction data appears in New Relic APM on the Transactions page. The two main categories of transactions are web and non-web: - Web: Transactions are initiated with an HTTP request. For most organizations, these represent customer-centric interactions and thus are the most important transactions to monitor. - Non-web: Non-web transactions are not initiated with a web request. They can include non-web worker processes, background processes, scripts, message queue activity, and other tasks. Transaction segments The individual functions and calls that make up a transaction are called segments. For example external service calls and database calls are segments, and both have their own UI pages in New Relic APM. The APM Transactions page displays aggregate transaction segment data. - To add segments to a transaction, use custom instrumentation. - To see the segments of a specific transaction, use transaction traces. Transaction naming For supported frameworks, transaction names can come from various sources, such as the name given to the transaction by the framework, function names detected during the transaction, or a web request's URL. For transactions that produce many names with a similar format, New Relic consolidates those into general transaction categories. For example, a transaction might be displayed as /user/*/control_panel, where the * represents different user names. To rename transactions or adjust how names are consolidated, use custom instrumentation. Monitoring transactions Here are some other ways you can use New Relic APM to monitor transactions: Transactions in New Relic Insights Most APM users have access to New Relic Insights. Transactions are available in Insights with an in-depth set of default attributes attached. Using these attributes, you can run queries and create custom charts that New Relic APM does not provide by default.
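As a purely conceptual illustration of the terms above (not New Relic's agent code or API), the sketch below models a transaction as one unit of work made up of timed segments, which is roughly how transaction traces aggregate function and call timings; the transaction and segment names are invented examples.

import time
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str          # e.g. a database call or external service call
    duration_ms: float

@dataclass
class Transaction:
    name: str          # e.g. a consolidated name like "/user/*/control_panel"
    category: str      # "web" or "non-web"
    segments: list = field(default_factory=list)

    def record(self, name, func, *args, **kwargs):
        """Run func and record how long it took as a segment of this transaction."""
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        self.segments.append(Segment(name, elapsed_ms))
        return result

    def total_ms(self):
        return sum(s.duration_ms for s in self.segments)

txn = Transaction(name="/user/*/control_panel", category="web")
txn.record("Datastore/users/select", lambda: time.sleep(0.01))
txn.record("External/payments-api", lambda: time.sleep(0.02))
print(f"{txn.name}: {txn.total_ms():.1f} ms across {len(txn.segments)} segments")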
https://docs.newrelic.com/docs/apm/transactions/intro-transactions/transactions-new-relic-apm
2019-06-16T02:53:09
CC-MAIN-2019-26
1560627997533.62
[]
docs.newrelic.com
On some deployments, such as ones where restrictive firewalls are in place, you might need to manually configure a firewall to permit OpenStack service traffic. To manually configure a firewall, you must permit traffic through the ports that each OpenStack service uses. This table lists the default ports that each OpenStack service uses: To function properly, some OpenStack components depend on other, non-OpenStack services. For example, the OpenStack dashboard uses HTTP for non-secure communication. In this case, you must configure the firewall to allow traffic to and from HTTP. This table lists the ports that other OpenStack components use: On some deployments, the default port used by a service may fall within the defined local port range of a host. To check a host's local port range: $ sysctl -a | grep ip_local_port_range
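As an illustration of that last check (not from the OpenStack documentation), the sketch below reads the Linux local (ephemeral) port range and reports whether a given service port falls inside it; the port number used here is just an example.

# Read the ephemeral port range Linux uses for outgoing connections and
# check whether a service's listening port collides with it (Linux only).
def local_port_range(path="/proc/sys/net/ipv4/ip_local_port_range"):
    with open(path) as f:
        low, high = map(int, f.read().split())
    return low, high

def port_in_local_range(port: int) -> bool:
    low, high = local_port_range()
    return low <= port <= high

low, high = local_port_range()
print(f"local port range: {low}-{high}")
print("port 35357 collides:", port_in_local_range(35357))  # 35357 is just an example port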
https://docs.openstack.org/juno/config-reference/content/firewalls-default-ports.html
2019-06-16T02:56:25
CC-MAIN-2019-26
1560627997533.62
[]
docs.openstack.org
Export Support

More often than not, it is a good idea to be able to preserve the data that the different controls show and edit during the life cycle of the application, even after the application is closed. There are different ways to save this information, and various approaches can be adopted depending on the type of the content.

Built-In Export

Several of the controls in the Telerik UI for WPF suite come with built-in export capabilities. Among those are RadDiagram, RadGridView and RadRichTextBox. To learn more about these abilities, take a look at the Export article in the desired control's documentation.

Document Processing Integration

The Telerik UI for WPF suite includes the Telerik Document Processing libraries, specifically designed for import, export and document editing.

RadPdfProcessing: Supports export to PDF.
RadSpreadProcessing: Supports export to XLSX, CSV, PDF and plain text (TXT).

The libraries give you the ability to create a document from scratch and export it to its supported file formats. This means you can export practically any control, either by exporting it to an image and adding the image to the resulting document, or by creating a structure appropriate for the content (for example, a table when exporting RadGridView to DOCX). Several controls already provide sample code which you can use as the base of export functionality; you can take a look at them in the Telerik XAML SDK repository:

- RadDiagram Export to PDF
- RadPivotGrid Export to XLSX, DOCX, HTML and PDF
- RadPdfProcessing Export UI Element to PDF

Export XAML UI Elements to PDF

The API of RadPdfProcessing is designed to resemble XAML, and this allows easy conversion of UI elements to PDF by converting any XAML primitive to a PDF instruction. As the base of such a conversion you can use the RadPdfProcessing Export UI Element to PDF example, which demonstrates how to export several of the controls in the Telerik UI for WPF suite, including a combination of several controls in the same view. The code operates with a set of renderers deriving from the base UIElementRendererBase (TextBlockRenderer, BorderRenderer, etc.). This allows separation, since each concrete renderer is responsible for drawing the element it is intended for without dependencies on the other renderers, and gives you the ability to extend the sample code to fit your precise needs. Take a look at the source code of the example on GitHub and the documentation of the relevant FixedDocumentEditor class.

Export Images With ExportExtensions

Some controls can be exported directly using the ExportExtensions class, which is part of the Telerik.Windows.Controls assembly. It allows you to export in the image formats listed below:

Image formats

- Png: Portable Network Graphics. Use the ExportToImage(FrameworkElement, Stream) method.
- Bmp: Bitmap file. Use ExportToImage(FrameworkElement, Stream, BitmapEncoder), where the encoder is of type BmpBitmapEncoder.
- Xps: XML Paper Specification file. Use the ExportToXpsImage(FrameworkElement, Stream) method to export content as an XPS image.

This approach is convenient for controls which have a size that allows direct export on one page, such as a RadGauge, for example. Example 1 demonstrates how to export RadGauge to the PNG file format.
The physical path to the image is provided at run time via SaveFileDialog:

Example 1: Export Control to PNG

private void Button_Click(object sender, RoutedEventArgs e)
{
    string extension = "png";
    SaveFileDialog dialog = new SaveFileDialog()
    {
        DefaultExt = extension,
        Filter = "Png (.png)|.png"
    };
    if (dialog.ShowDialog() == true)
    {
        using (Stream stream = dialog.OpenFile())
        {
            Telerik.Windows.Media.Imaging.ExportExtensions.ExportToImage(
                this.radGauge,
                stream,
                new Telerik.Windows.Media.Imaging.PngBitmapEncoder());
        }
    }
}

Private Sub Button_Click(ByVal sender As Object, ByVal e As RoutedEventArgs)
    Dim extension As String = "png"
    Dim dialog As New SaveFileDialog() With {.DefaultExt = extension, .Filter = "Png (.png)|.png"}
    If dialog.ShowDialog() = True Then
        Using stream As Stream = dialog.OpenFile()
            Telerik.Windows.Media.Imaging.ExportExtensions.ExportToImage(Me.radGauge, stream, New Telerik.Windows.Media.Imaging.PngBitmapEncoder())
        End Using
    End If
End Sub

Exporting a control to an image requires that the control is measured and arranged. Otherwise, unexpected results may occur.
https://docs.telerik.com/devtools/silverlight/common-information/common-export-support
2019-06-16T03:11:19
CC-MAIN-2019-26
1560627997533.62
[array(['images/Common_Export_Support_01.png', 'Common Export Support'], dtype=object) ]
docs.telerik.com
- Assistive technology service means any service that directly assists an infant or toddler with a disability in the selection, acquisition, or use of an assistive technology device. The term includes:
- The evaluation of the needs of an infant or toddler with a disability, including a functional evaluation of the infant or toddler with a disability in the child’s customary environment;
- purchasing, leasing, or otherwise providing for the acquisition of assistive technology devices by infants or toddlers with disabilities or the child’s family; and,
- training or technical assistance for professionals (including individuals providing education or rehabilitation services), or other individuals who provide services to, or are otherwise substantially involved in the major life functions of, infants and toddlers with disabilities.
https://okabletech-docs.org/homepage/at-ta-document-part-c/02-what-are-assistive-technology-devices-and-services/
2019-06-16T03:05:40
CC-MAIN-2019-26
1560627997533.62
[]
okabletech-docs.org
- Should AT be CONSIDERED for all infants and toddlers with disabilities? Yes, AT can promote a child’s participation in family activities and routines. Professionals should work with the child and his/her family to identify the activities and routines the child does or would like to do. Discuss how the child participates in activities and routines and what families feel children are learning. Often, AT can help children participate more fully in the activity/routine, or the activity itself may provide a context for learning. If an IFSP team considers the need for AT and determines that more information is needed, then an AT Assessment may need to be completed. - Is AT required for all infants and toddlers who have an IFSP? No, the decision regarding the need for AT must be made on an individual basis by the IFSP team. - Who makes the decision if an infant or toddler needs assistive technology devices or services? The IFSP team makes the decision based on assessment results. Decision-making is a team process that should reflect multidisciplinary involvement. The IFSP team should include the parent and persons with experience in providing AT devices and services. The team must include the resource coordinator and other team members as appropriate. - What are critical components of an AT assessment? An assistive technology assessment should be a systematic process to ensure that decisions regarding the selection of AT devices are based on information regarding the child’s abilities, needs, environments, activities, and routines. The AT assessment process includes a team approach, assessment of daily activities and routines, and is ongoing in nature. Although most AT assessments are not standardized, the assessment process should be replicable and use a framework for effective decision-making. - What is the role of parents in the assessment process? Parents provide information about the child’s developmental need, as well as their goals and outcomes. If parents believe their child would benefit from AT they should discuss this with other members of the IFSP team. Parents should request an assessment if they are unsure whether their child could benefit from AT, or to determine what type of AT would be most helpful.
https://okabletech-docs.org/homepage/at-ta-document-part-c/24-common-questions-about-assistive-technology-devices-and-services/
2019-06-16T02:55:22
CC-MAIN-2019-26
1560627997533.62
[]
okabletech-docs.org
Syntax

<fade time> - A decimal number of seconds (optionally using decimal digits for fractions of seconds). 0 means no fade time (or the channels/values are set immediately without fading).
- Optionally may include a slash (/), which indicates a split (up/down) fade
- Optionally may include a dash (-), which indicates a delayed fade

Abbreviation

T

Description

Setting The Global Fade Time

Use the Time command to set the global fade time. This time is used to crossfade channels or values whenever the At command is executed. The global fade time is used when setting channels, or a playback’s submaster value.

Using Split Fade Times

A split fade time is used when it is desired to have channels that are fading up occur at a different rate than channels fading down. To specify a split fade time, use a slash character in between two fade times. For instance, the command Time 3.5/7.5 will cause any channel that is fading up to occur in 3.5 seconds, and any channel that is fading down to occur in 7.5 seconds.

Using Fade Delays

Normally, whenever a channel level is set, the fade begins immediately. A delay can be inserted that would cause the fade to be delayed before starting to change value. To specify a fade delay, use a delay time and dash character before the fade time. For instance, the command Time 5.5-10 will cause the fade to delay 5.5 seconds before beginning a 10 second fade.

Using Both Fade Delays And Split Fade Timing

Both fade delays and split fades can be combined. For instance, the command Time 1-2/3-4 would cause any channels fading up to be delayed 1 second before fading over 2 seconds, while the downward fading channels would be delayed 3 seconds before fading over 4 seconds.

Determining The Current Global Fade Time

Use the Time command with the question mark (?) to return the current global fade time. A fade time such as 7.21 or 12/3 will be returned.

Examples

Time 1 - Sets the global fade time to 1 second.
Time 1.35/7.2 - Sets the global fade time to 1.35 seconds for upward fading channels and 7.2 seconds for downward fading channels.
Channel 1>10 Time 5 At 50 - Selects channels 1 thru 10, then sets the fade time to 5 seconds, then sets the channels to 50%.
Playback 1 Time 3.5 At 25 - Selects playback 1, then sets the fade time to 3.5 seconds, then sets the playback’s submaster to 25%.
http://docs.interactive-online.com/cs2/1.0/en/topic/time
2019-06-16T03:10:49
CC-MAIN-2019-26
1560627997533.62
[]
docs.interactive-online.com
You can use System Manager to update a cluster nondisruptively to a specific ONTAP version. In a nondisruptive update, you select an ONTAP image, validate that your cluster is ready for the update, and then perform the update. During a nondisruptive update, the cluster remains online and continues to serve data.

As part of planning and preparing for the cluster update, you obtain the version of the ONTAP image to which you want to update the cluster from the NetApp Support Site, select the software image, and then perform a validation. The pre-update validation verifies whether the cluster is ready for an update to the selected version. If the validation finishes with errors and warnings, you have to resolve them by performing the required remedial actions, and then verify that the cluster components are ready for the update. For example, if the pre-update validation warns that offline aggregates are present in the cluster, you must navigate to the aggregates page and change the status of all of the offline aggregates to online.

When you update the cluster, either the entire cluster is updated or the nodes in a high-availability (HA) pair are updated. As part of the update, the pre-update validation is run again to verify that the cluster is ready for the update. Either a rolling update or a batch update is performed, depending on the number of nodes in the cluster. A rolling update is performed for a cluster that consists of two or more nodes, and it is the only update method for clusters with fewer than eight nodes. A batch update can be performed for a cluster that consists of eight or more nodes; in such clusters you can perform either a batch update or a rolling update, and the batch update is the default method.
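As a small sketch of the selection rule described above (not NetApp code), the helper below picks the default update method from the node count.

def default_update_method(node_count: int) -> str:
    """Default update method by cluster size, per the rule described above."""
    if node_count < 8:
        return "rolling"   # only method for clusters with fewer than eight nodes
    return "batch"         # default for eight or more nodes; rolling is also allowed

for n in (2, 4, 8, 12):
    print(n, "nodes ->", default_update_method(n))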
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-950/GUID-DE582BCC-ACC6-450B-A2CC-D19B8E481E2F.html
2019-06-16T03:48:43
CC-MAIN-2019-26
1560627997533.62
[]
docs.netapp.com
Migrate content into or out of RBS (SharePoint Foundation 2010)

Applies to: SharePoint Foundation 2010

This article describes how to migrate content into or out of remote BLOB storage (RBS) for a content database by using Windows PowerShell.

Verify that you meet the following minimum requirements: See Add-SPShellAdmin.

On the Start menu, click All Programs. Click Microsoft SharePoint 2010 Products. Click SharePoint 2010 Management Shell. At the Windows PowerShell command prompt, type the commands in the following steps.

To obtain the content database RBS settings object:

$cdb=Get-SPContentDatabase <ContentDbName>
$rbs=$cdb.RemoteBlobStorageSettings

Where <ContentDbName> is the name of the content database.

View the list of RBS providers that are installed and set the provider that you want to migrate to. To migrate the content out of RBS altogether and back into SQL Server inline storage, set this value to ().

Migrate the data from RBS to the new provider or to SQL Server:

$rbs.Migrate()

See Also

Concepts

Set a content database to use RBS (SharePoint Foundation 2010)
https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-foundation-2010/ff628255(v=office.14)
2019-06-16T03:13:57
CC-MAIN-2019-26
1560627997533.62
[]
docs.microsoft.com
OpenStack Networking (neutron) allows you to create and attach interface devices managed by other OpenStack services to networks. Plug-ins can be implemented to accommodate different networking equipment and software, providing flexibility to OpenStack architecture and deployment. It includes the following components:

neutron-server: Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.

OpenStack Networking plug-ins and agents: Plug and unplug ports, create networks or subnets, and provide IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product. The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.

OpenStack Networking mainly interacts with OpenStack Compute to provide networks and connectivity for its instances.

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
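To make "create networks or subnets" concrete from a client's point of view, here is a brief sketch using the openstacksdk Python library; the cloud name, network name, and CIDR are placeholders, and this is just one way to drive the Networking API rather than part of the service itself.

import openstack

# Connect using a named cloud from clouds.yaml (placeholder name).
conn = openstack.connect(cloud="example-cloud")

# Ask the Networking service to create a network and an IPv4 subnet on it.
network = conn.network.create_network(name="example-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="example-subnet",
    ip_version=4,
    cidr="192.168.10.0/24",
)
print(network.id, subnet.cidr)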
https://docs.openstack.org/ocata/install-guide-ubuntu/common/get-started-networking.html
2019-06-16T02:34:35
CC-MAIN-2019-26
1560627997533.62
[]
docs.openstack.org
Using the Amazon S3 Adapter for Snowball

Following, you can find an overview of the Amazon S3 Adapter for Snowball, which allows you to programmatically transfer data between your on-premises data center and the Snowball using Amazon S3 REST API actions. This Amazon S3 REST API support is limited to a subset of actions, meaning that you can use the subset of supported Amazon S3 AWS CLI commands or one of the AWS SDKs to transfer data.

If your solution uses the AWS SDK for Java version 1.11.0 or newer, you must use the following S3ClientOptions:

- disableChunkedEncoding() – Indicates that chunked encoding is not supported with the adapter.
- setPathStyleAccess(true) – Configures the adapter to use path-style access for all requests.

For more information, see Class S3ClientOptions.Builder in the AWS SDK for Java.

Topics

Starting the Amazon S3 Adapter for Snowball

To use the Amazon S3 Adapter for Snowball, start it in a terminal on your workstation and leave it running while transferring data.

Note

Before you start the adapter, you need the following information:

- The Snowball's IP address – Providing the IP address of the Snowball when you start the adapter tells the adapter where to send your transferred data. You can get this IP address from the E Ink display on the Snowball itself.
- The job's manifest file – The manifest file contains important information about the job and permissions associated with it. Without it, you won't be able to access the Snowball. It's an encrypted file that you can download after your job enters the WithCustomer status. The manifest is decrypted by the unlock code. You can get the manifest file from the console, or programmatically by calling a job management API action.
- The job's unlock code – The unlock code is a string of 29 characters, including 4 dashes. It's used to decrypt the manifest. You can get the unlock code from the AWS Snowball Management Console, or programmatically from the job management API.
- Your AWS credentials – Every interaction with the Snowball is signed with the AWS Signature Version 4 algorithm. For more information, see Signature Version 4 Signing Process. When you start the Amazon S3 Adapter for Snowball, you specify the AWS credentials that you want to use to sign this communication. By default, the adapter uses the credentials specified in the home directory/.aws/credentials file, under the [default] profile. For more information on how this Signature Version 4 algorithm works locally with the Amazon S3 Adapter for Snowball, see Authorization with the Amazon S3 API Adapter for Snowball.

Once you have the preceding information, you're ready to start the adapter on your workstation. The following procedure outlines this process.

To start the adapter:

1. Open a terminal window on the workstation with the installed adapter.
2. Navigate to the directory where you installed the snowball-adapter-operating_system directory.
3. Navigate to the bin subdirectory.
4. Type the following command to start the adapter: ./snowball-adapter -i Snowball IP address -m path to manifest file -u 29 character unlock code

Note: If you don't specify any AWS credentials when starting the adapter, the default profile in the home directory/.aws/credentials file is used.

The Amazon S3 Adapter for Snowball is now started on your workstation. Leave this terminal window open while the adapter runs. If you're going to use the AWS CLI to transfer your data to the Snowball, open another terminal window and run your AWS CLI commands from there.
Getting the Status of a Snowball Using the Adapter

You can get a Snowball’s status by initiating a HEAD request to the Amazon S3 Adapter for Snowball. You receive the status response in the form of an XML document. The XML document includes storage information, latency information, version numbers, and more.

You can't use the AWS CLI or any of the AWS SDKs to retrieve status in this way. However, you can easily test a HEAD request over the wire by running a curl command against the adapter, as in the following example.

curl -H "Authorization Header" -X HEAD

Note: When requesting the status of a Snowball, you must add the authorization header. For more information, see Signing AWS Requests with Signature Version 4.

An example of the XML document that this request returns follows.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Status xsi:
  <snowballIp>192.0.2.0</snowballIp>
  <snowballPort>8080</snowballPort>
  <snowballId>012EXAMPLE01</snowballId>
  <totalSpaceInBytes>77223428091904</totalSpaceInBytes>
  <freeSpaceInBytes>77223428091904</freeSpaceInBytes>
  <jobId>JID850f06EXAMPLE-4EXA-MPLE-2EXAMPLEab00</jobId>
  <snowballServerVersion>1.0.1</snowballServerVersion>
  <snowballServerBuild>2016-08-22.5729552357</snowballServerBuild>
  <snowballAdapterVersion>Version 1.0</snowballAdapterVersion>
  <snowballRoundTripLatencyInMillis>1</snowballRoundTripLatencyInMillis>
</Status>

Unsupported Amazon S3 Features for Snowball

Using the Amazon S3 Adapter for Snowball, you can programmatically transfer data to and from a Snowball with Amazon S3 API actions. However, not all Amazon S3 transfer features and API actions are supported for use with a Snowball device. For more information on the supported features, see the following:

Any features or actions not explicitly listed in these topics are not supported. For example, the following features and actions are not supported for use with Snowball:

- TransferManager – This utility transfers files from a local environment to Amazon S3 with the SDK for Java. Consider using the supported API actions or AWS CLI commands with the adapter instead.
- GET Bucket (List Objects) Version 2 – This implementation of the GET action returns some or all (up to 1,000) of the objects in a bucket. Consider using the GET Bucket (List Objects) Version 1 action or the ls AWS CLI command.
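The Java S3ClientOptions guidance above has a rough equivalent in other SDKs. The following is a sketch rather than an official example: it uses boto3 with the adapter IP address and port taken from the sample status document, and the credentials profile and bucket name are placeholders. It calls list_objects (the Version 1 listing) because Version 2 is listed above as unsupported.

import boto3
from botocore.config import Config

# Point an ordinary S3 client at the locally running Snowball adapter.
# Path-style addressing mirrors setPathStyleAccess(true) in the Java SDK.
session = boto3.Session(profile_name="default")
s3 = session.client(
    "s3",
    endpoint_url="http://192.0.2.0:8080",  # Snowball IP and adapter port from the status example
    config=Config(s3={"addressing_style": "path"}),
)

# List objects in a bucket on the Snowball (bucket name is a placeholder).
response = s3.list_objects(Bucket="example-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])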
https://docs.aws.amazon.com/snowball/latest/ug/using-adapter.html
2019-06-16T03:43:27
CC-MAIN-2019-26
1560627997533.62
[]
docs.aws.amazon.com
Density Plot

You can show the distribution of the data with smooth curved lines. This chart is a variation of a Histogram, but it can show smoother distributions by smoothing out the noise.

- Bandwidth Adjustment - Decides the smoothness of the lines. You can assign a numeric value. If you assign a larger number, the curve will be simpler.
- Number of Equally Spaced Points - How many data points are used to draw the line. If you assign a larger number, the curve will be smoother.
- Bandwidth - Smoothing bandwidth option.
- Kernel - Smoothing kernel option.

Take a look at Layout Configuration on how to configure the layout and format.
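These options map onto the usual kernel density estimation knobs. The snippet below is a generic illustration with SciPy rather than Exploratory's own implementation: the bandwidth adjustment scales the estimator's bandwidth, and the number of equally spaced points controls how finely the curve is sampled.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)   # example data

bandwidth_adjustment = 2.0   # larger value -> smoother, simpler curve
n_points = 512               # more equally spaced points -> smoother-looking line

kde = gaussian_kde(data)
kde.set_bandwidth(kde.factor * bandwidth_adjustment)

xs = np.linspace(data.min(), data.max(), n_points)  # equally spaced evaluation points
density = kde(xs)
print(xs[:3], density[:3])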
https://docs.exploratory.io/viz/densityplot.html
2019-06-16T03:12:10
CC-MAIN-2019-26
1560627997533.62
[array(['images/density1.png', None], dtype=object)]
docs.exploratory.io
Released on: Monday, May 20, 2019 - 15:30

New Features

- The serverless_mode feature flag is now enabled by default.
- Added an instrumentLoadedModule method to the API, allowing end-users to manually apply an instrumentation to a loaded module. Useful for cases where some module needs to be loaded before newrelic.

Improvements

- Removed older versions of Cassandra from versioned tests.
- For debug/test runs, shimmer will now clean up the __NR_shim property on instrumented methods. This leftover property did not result in any negative behaviors, but cleaning up for thoroughness and to prevent potential confusion.

Fixes

- recordMiddleware promise parenting for certain cases where child segments are created within resolving middleware next() promises.
https://docs.newrelic.com/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-590
2019-06-16T03:14:11
CC-MAIN-2019-26
1560627997533.62
[]
docs.newrelic.com
Breaking: #78581 - Hook getFlexFormDSClass no longer called¶

See Issue #78581

Description¶

With the deprecation of BackendUtility::getFlexFormDS() the hook getFlexFormDSClass is no longer called, and there is no substitution available.

Impact¶

The hook is no longer called, and flex form field manipulation by extensions does not happen anymore.

Affected Installations¶

Extensions that extend flex form data structure definitions and use the hook getFlexFormDSClass for that purpose.

Migration¶

Method BackendUtility::getFlexFormDS() has been split into the methods FlexFormTools->getDataStructureIdentifier() and FlexFormTools->parseDataStructureByIdentifier(). Those two new methods now provide four hooks to allow manipulation of the flex form data structure location and parsing. The methods and hooks are documented well; read their description for a deeper insight on which combination is the correct one for a specific extension need. The new hooks are very powerful and must be used with special care to be as future proof as possible. Since the old hook is used by some widespread extensions, the core team prepared a transition for some of them beforehand:

- EXT:news: The extension used the old hook only to remove a couple of fields from the flex form definition. This has been moved over to a “FormEngine” data provider: news
- EXT:flux: Flux implements a completely own way of locating and pointing to the flex form data structure that is needed in a specific context. The default core resolving does not work here. Flux now implements the hooks getDataStructureIdentifierPreProcess and parseDataStructureByIdentifierPreProcess to specify an own “identifier” syntax and to resolve that syntax to a data structure later: flux
- EXT:gridelements: Similar to flux, gridelements has its own logic to choose which specific data structure should be used. However, the data structures are located in database row fields, so the “record” syntax of the core can be re-used to refer to those. gridelements uses the hook getDataStructureIdentifierPreProcess together with a small implementation in parseDataStructureByIdentifierPreProcess for a fallback scenario: gridelements
- EXT:powermail: Powermail allows extending and changing an existing flex form data structure definition depending on page TS. To do that, it now implements hook getDataStructureIdentifierPostProcess to add the needed pid to the existing identifier, and then implements hook parseDataStructureByIdentifierPostProcess to manipulate the resolved data structure: powermail
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/8.5/Breaking-78581-HookGetFlexFormDSClassNoLongerCalled.html
2019-06-16T04:17:29
CC-MAIN-2019-26
1560627997533.62
[]
docs.typo3.org
Impedance¶

See also Unit Systems and Conventions

Create Function¶

pandapower.create_impedance(net, from_bus, to_bus, rft_pu, xft_pu, sn_mva, rtf_pu=None, xtf_pu=None, name=None, in_service=True, index=None)¶

Creates a per-unit impedance element.

INPUT:
- net (pandapowerNet) - The pandapower network in which the element is created
- from_bus (int) - starting bus of the impedance
- to_bus (int) - ending bus of the impedance
- rft_pu (float) - real part of the impedance in per unit (from bus to to bus)
- xft_pu (float) - imaginary part of the impedance in per unit (from bus to to bus)
- sn_mva (float) - rated apparent power of the impedance in MVA

OUTPUT: impedance id

Electric Model¶

The impedance is modelled as a longitudinal per unit impedance with \(\underline{z}_{ft} \neq \underline{z}_{tf}\):

The per unit values given in the parameter table are assumed to be relative to the rated voltage of the from and to bus as well as to the apparent power given in the table. The per unit values are therefore transformed into the network per unit system:

where \(S_{N}\) is the reference power of the per unit system (see Unit Systems and Conventions). The asymmetric impedance results in an asymmetric nodal point admittance matrix:

Result Parameters¶

net.res_impedance
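As a short usage sketch (assuming pandapower 2.x and made-up parameter values), the following builds a two-bus network, adds an impedance element with the function above, and runs a power flow; consult the pandapower documentation for the exact semantics of each argument.

import pandapower as pp

net = pp.create_empty_network(sn_mva=100)          # network reference power S_N (assumed value)
b1 = pp.create_bus(net, vn_kv=110)
b2 = pp.create_bus(net, vn_kv=110)

pp.create_ext_grid(net, bus=b1)                    # slack at bus 1
pp.create_load(net, bus=b2, p_mw=10, q_mvar=2)     # example load

# Per-unit impedance between the buses, relative to sn_mva=50 (made-up values).
idx = pp.create_impedance(net, from_bus=b1, to_bus=b2,
                          rft_pu=0.01, xft_pu=0.05, sn_mva=50)

pp.runpp(net)
print(net.res_impedance.loc[idx])                  # resulting flows and losses for the element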
https://pandapower.readthedocs.io/en/v2.0.1/elements/impedance.html
2019-06-16T03:47:42
CC-MAIN-2019-26
1560627997533.62
[array(['../_images/impedance.png', 'alternate Text'], dtype=object)]
pandapower.readthedocs.io
Build on Ubuntu Server

Download Ubuntu Server LTS and install it on a system with at least 4GB of memory allocated. If you just want to run a Haplo server with a minimum of fuss, go straight to the Production server documentation.

Because different Ubuntu releases ship with different names and versions of packages, the instructions differ slightly between releases. Please select the Ubuntu version you’re using from the list below:
https://docs.haplo.org/platform/build/ubuntu
2019-06-16T03:22:43
CC-MAIN-2019-26
1560627997533.62
[]
docs.haplo.org
All content with label amazon+aws+infinispan+installation+listener. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, jbossas, lock_striping, nexus, guide, schema, cache, s3, grid, jcache, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, custom_interceptor, setup, clustering, eviction, gridfs, concurrency, out_of_memory, examples, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, loader, write_through, cloud, mvcc, notification, tutorial, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, resteasy, cluster, development, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, scala,, hot_rod more » ( - amazon, - aws, - infinispan, - installation, - listener ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/amazon+aws+infinispan+installation+listener
2019-06-16T03:53:46
CC-MAIN-2019-26
1560627997533.62
[]
docs.jboss.org
All content with label amazon+configuration+import+infinispan+jsr-107+listener+query. Related Labels: expiration, publish, datagrid, coherence, server, replication, transactionmanager, dist, release, deadlock, archetype, jbossas, lock_striping, guide, schema, cache, s3, grid, test, jcache, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, aws, interface, custom_interceptor, clustering, setup, eviction, out_of_memory, concurrency, jboss_cache, examples, index, events, hash_function, batch, buddy_replication, loader, write_through, cloud, mvcc, tutorial, notification, xml, jbosscache3x, read_committed, distribution, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, adaptor, permission, websocket, transaction, async, interactive, xaresource, build, gatein, searchable, demo, scala, installation, client, migration, non-blocking, filesystem, jpa, tx, gui_demo, eventing, snmp, client_server, testng, infinispan_user_guide, standalone, repeatable_read, hotrod, webdav, snapshot, docs, consistent_hash, batching, jta, faq, 2lcache, as5, docbook, jgroups, lucene, locking, rest, hot_rod more » ( - amazon, - configuration, - import, - infinispan, - jsr-107, - listener, - query ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/amazon+configuration+import+infinispan+jsr-107+listener+query
2019-06-16T03:55:08
CC-MAIN-2019-26
1560627997533.62
[]
docs.jboss.org
All content with label as5+coherence+gridfs+infinispan+installation+loader+lock_striping+nexus+store. Related Labels: expiration, publish, datagrid, server, replication, transactionmanager, dist, release, query, deadlock, archetype, jbossas, guide, schema, listener, cache, amazon, s3, grid, test, jcache, api, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, interface, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, examples, jboss_cache, import, events, hash_function, configuration, batch, buddy_replication, write_through, cloud, mvcc, tutorial, notification, jbosscache3x, read_committed, xml, distribution, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster,, - coherence, - gridfs, - infinispan, - installation, - loader, - lock_striping, - nexus, - store ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+coherence+gridfs+infinispan+installation+loader+lock_striping+nexus+store
2019-06-16T03:40:23
CC-MAIN-2019-26
1560627997533.62
[]
docs.jboss.org
All content with label async+grid+hot_rod+hotrod+infinispan+jboss_cache+jbossas+listener+release+scala+user_guide+xml. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, deadlock, intro, archetype, pojo_cache, lock_striping, nexus, guide, schema, cache, amazon, s3, memcached,, hash_function, configuration, batch, buddy_replication, loader, xa, pojo, write_through, cloud, remoting, mvcc, notification, tutorial, presentation, murmurhash2,, snapshot, webdav, docs, batching, consistent_hash, store, whitepaper, jta, faq, as5, 2lcache, jsr-107, jgroups, lucene, locking, rest more » ( - async, - grid, - hot_rod, - hotrod, - infinispan, - jboss_cache, - jbossas, - listener, - release, - scala, - user_guide, - xml ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/async+grid+hot_rod+hotrod+infinispan+jboss_cache+jbossas+listener+release+scala+user_guide+xml
2019-06-16T03:28:45
CC-MAIN-2019-26
1560627997533.62
[]
docs.jboss.org
All content with label client+dist+distribution+gridfs+import+infinispan+query+xaresource. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, rehash, recovery, transactionmanager, release, partitioning,, setup, eviction, concurrency, out_of_memory, jboss_cache, index, configuration, hash_function, batch, buddy_replication, loader, colocation, write_through, cloud, remoting, mvcc, tutorial, notification, murmurhash2, xml, read_committed, jbosscache3x, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, permission, websocket, transaction, async, build, hinting, searchable, demo, scala, installation,, - dist, - distribution, - gridfs, - import, - infinispan, - query, - xaresource ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/client+dist+distribution+gridfs+import+infinispan+query+xaresource
2019-06-16T03:46:55
CC-MAIN-2019-26
1560627997533.62
[]
docs.jboss.org
OpenStack Compute supports many hypervisors. See the hypervisor support matrix for a detailed list of features and support across the hypervisors. The following hypervisors are supported:

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/mitaka/config-reference/compute/hypervisors.html
2019-06-16T03:24:49
CC-MAIN-2019-26
1560627997533.62
[]
docs.openstack.org
Corporate style guide

When you build a CMS website, you design the look and feel based on guidelines in the corporate style guide. A corporate style guide provides detailed information for designing any corporate interface, including corporate websites.

Corporate design team

Many organizations have a web development team that designed the corporate website. Contact this team and involve the designers early in the planning, as they provide help and give their approval to the interface you design. Without approval, there is the risk of having to redesign the entire site because it does not adhere to the organizational guidelines.

Corporate style guide

A corporate style guide takes the guesswork out of designing the CMS website. The example style guide shown is defined down to the pixel. Creating a site with the style guide makes it easy to create clean CSS and HTML. Without the style guide, building the site can take a great deal of time.

Design considerations

Some modifications to the base design for forms may be necessary. The content area of any CMS design should be no smaller than 860px, or service catalog forms are clipped. The sample style guide entry specifies the content area to be 576px, which clips service catalog forms.

Figure 1. Example style guide entry
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/content-management/concept/c_LeverageTheCorporateStyleGuide.html
2019-06-16T03:31:39
CC-MAIN-2019-26
1560627997533.62
[]
docs.servicenow.com
Javascript Coding Standards¶ Formatting¶ All JavaScript documents must use two spaces for indentation. This is contrary to the OKFN Coding Standards but matches what’s in use in the current code base. Coding style must follow the idiomatic.js style but with the following exceptions. Note Idiomatic is heavily based upon Douglas Crockford’s style guide which is recommended by the OKFN Coding Standards. White Space¶ Two spaces must be used for indentation at all times. Unlike in idiomatic whitespace must not be used _inside_ parentheses between the parentheses and their Contents. // BAD: Too much whitespace. function getUrl( full ) { var url = '/styleguide/javascript/'; if ( full ) { url = '' + url; } return url; } // GOOD: function getUrl(full) { var url = '/styleguide/javascript/'; if (full) { url = '' + url; } return url; } Note See section 2.D.1.1 of idiomatic for more examples of this syntax. Quotes¶ Single quotes should be used everywhere unless writing JSON or the string contains them. This makes it easier to create strings containing HTML. jQuery('<div id="my-div" />').appendTo('body'); Object properties need not be quoted unless required by the interpreter. var object = { name: 'bill', 'class': 'user-name' }; Variable declarations¶ One var statement must be used per variable assignment. These must be declared at the top of the function in which they are being used. // GOOD: var good = 'string'; var alsoGood = 'another'; // GOOD: var good = 'string'; var okay = [ 'hmm', 'a bit', 'better' ]; // BAD: var good = 'string', iffy = [ 'hmm', 'not', 'great' ]; Declare variables at the top of the function in which they are first used. This avoids issues with variable hoisting. If a variable is not assigned a value until later in the function then it it okay to define more than one per statement. // BAD: contrived example. function lowercaseNames(names) { var names = []; for (var index = 0, length = names.length; index < length; index += 1) { var name = names[index]; names.push(name.toLowerCase()); } var sorted = names.sort(); return sorted; } // GOOD: function lowercaseNames(names) { var names = []; var index, sorted, name; for (index = 0, length = names.length; index < length; index += 1) { name = names[index]; names.push(name.toLowerCase()); } sorted = names.sort(); return sorted; } Naming¶ All properties, functions and methods must use lowercase camelCase: var myUsername = 'bill'; var methods = { getSomething: function () {} }; Constructor functions must use uppercase CamelCase: function DatasetSearchView() { } Constants must be uppercase with spaces delimited by underscores: var env = { PRODUCTION: 'production', DEVELOPMENT: 'development', TESTING: 'testing' }; Event handlers and callback functions should be prefixed with “on”: function onDownloadClick(event) {} jQuery('.download').click(onDownloadClick); Boolean variables or methods returning boolean functions should prefix the variable name with “is”: function isAdmin() {} var canEdit = isUser() && isAdmin(); Note Alternatives are “has”, “can” and “should” if they make more sense Private methods should be prefixed with an underscore: View.extend({ "click": "_onClick", _onClick: function (event) { } }); Functions should be declared as named functions rather than assigning an anonymous function to a variable. // GOOD: function getName() { } // BAD: var getName = function () { }; Named functions are generally easier to debug as they appear named in the debugger. JSHint¶ All JavaScript should pass JSHint before being committed. 
This can be installed using npm (which is bundled with node) by running: $ npm -g install jshint Each project should include a jshint.json file with appropriate configuration options for the tool. Most text editors can also be configured to read from this file.. Best Practices¶ Forms¶ All forms should work without JavaScript enabled. This means that they must submit application/x-www-form-urlencoded data to the server and receive an appropriate response. The server should check for the X-Requested-With: XMLHTTPRequest header to determine if the request is an ajax one. If so it can return an appropriate format, otherwise it should issue a 303 redirect. The one exception to this rule is if a form or button is injected with JavaScript after the page has loaded. It’s then not part of the HTML document and can submit any data format it pleases. Ajax¶ - The request is made to the server. - On success the interface is updated. - On error a message is displayed to the user if there is no other way to resolve the issue. - The loading indicator is removed. - The button is re-enabled. Here’s a possible example for submitting a search form using jQuery. jQuery('#search-form').submit(function (event) { var form = $(this); var button = $('[type=submit]', form); // Prevent the browser submitting the form. event.preventDefault(); button.prop('disabled', true).addClass('loading'); jQuery.ajax({ type: this.method, data: form.serialize(), success: function (results) { updatePageWithResults(results); }, error: function () { showSearchError('Sorry we were unable to complete this search'); }, complete: function () { button.prop('disabled', false).removeClass('loading'); } }); }); This covers possible issues that might arise from submitting the form as well as providing the user with adequate feedback that the page is doing something. Disabling the button prevents the form being submitted twice and the error feedback should hopefully offer a solution for the error that occurred. Event Handlers¶ When using event handlers to listen for browser events it’s a common requirement to want to cancel the default browser action. This should be done by calling the event.preventDefault() method: jQuery('button').click(function (event) { event.preventDefault(); }); It is also possible to return false from the callback function. Avoid doing this as it also calls the event.stopPropagation() method which prevents the event from bubbling up the DOM tree. This prevents other handlers listening for the same event. For example an analytics click handler attached to the <body> element. Also jQuery (1.7+) now provides the .on() and .off() methods as alternatives to .bind(), .unbind(), .delegate() and .undelegate() and they should be preferred for all tasks. templates. If you are including them inline this can always be done with jQuery: jQuery(template).find('span').text(_(.
http://docs.ckan.org/en/ckan-2.1.5/javascript-coding-standards.html
2017-12-11T07:51:32
CC-MAIN-2017-51
1512948512584.10
[]
docs.ckan.org
Distillation

Through distillation, it is possible to obtain small amounts of rarer Resources from more common ones.

Distillation requirements

The distillation process is carried out using a Distillation Facility in a Base Station. The Facility will access the Corporation Cargo on the same Station for the required Refined Resources.

Distillation time

Distillation activities take 1 second for every 50 tons of Starting Refined Resources to complete.

Distillation result

Every Refined Resource, after the distillation process, is converted into a smaller quantity of another Refined Resource. The resulting Distilled Resource quantity will be 10% of the Starting Resources quantity. The resulting quantity is increased according to the Agent's Distillation and Advanced Distillation Skills value: each point in these Skills increases the resulting quantity by 1%.

Example: Distilling 1000 tons of Bastantium will produce 100 tons of Fosmanium. An Agent with Distillation Skill at level 25 and Advanced Distillation Skill at level 15 will have a distillation bonus of 40%. Thus, the actual resulting quantity of Fosmanium for that Agent will be 140 tons.

Distillation queue

Refer to the Base Station Manufacturing facilities section for queue rules.
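The worked example above follows directly from the stated rules. Here is a tiny sketch of that arithmetic (game mechanics exactly as described on this page, nothing more):

def distillation_output(start_tons: float, distillation_skill: int = 0,
                        advanced_distillation_skill: int = 0) -> dict:
    """Compute distilled output and processing time from the rules above."""
    base = start_tons * 0.10                                           # 10% of the starting quantity
    bonus = (distillation_skill + advanced_distillation_skill) / 100   # +1% per skill point
    return {
        "output_tons": base * (1 + bonus),
        "time_seconds": start_tons / 50,                               # 1 second per 50 tons
    }

print(distillation_output(1000, distillation_skill=25, advanced_distillation_skill=15))
# {'output_tons': 140.0, 'time_seconds': 20.0}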
http://docs.gatesofhorizon.com/rules-distillation.php
2017-12-11T07:38:16
CC-MAIN-2017-51
1512948512584.10
[]
docs.gatesofhorizon.com
Deprecation: #78244 - Deprecate TYPO3_DB and Prepared Statement class¶

See Issue #78244

Description¶

The classes TYPO3\CMS\Core\Database\DatabaseConnection and TYPO3\CMS\Core\Database\PreparedStatement have been marked as deprecated. These classes have been succeeded by Doctrine DBAL in TYPO3 v8, and will be removed in TYPO3 v9.

Affected Installations¶

Any TYPO3 instances with references to $GLOBALS['TYPO3_DB'] or that use instances of the classes mentioned above.
https://docs.typo3.org/typo3cms/extensions/core/8.7/Changelog/8.5/Deprecation-78244-DeprecateTYPO3_DBAndPreparedStatementClass.html
2017-12-11T07:34:57
CC-MAIN-2017-51
1512948512584.10
[]
docs.typo3.org
The full-document-preview functionality of the Ontolica Preview Web Parts is enabled by default. This feature can be modified via the Ontolica Preview Front End Configuration pages in the Central Administration for an entire farm, for individual Web applications, or even separately for each site collection via the Site Settings page of site collections. To make changes to this functionality, go to the Ontolica Preview Front End Configuration page. Click the Full Document Preview link and select the Enabled Content option. A closely related option is to allow users to open multiple preview panes simultaneously, which is a very useful feature for visually comparing documents side by side. To enable or disable this feature, open the Preview Pane option. Note: opening too many preview panes simultaneously will decrease browser speed.
http://docs.surfray.com/ontolica-search-preview/1/en/topic/enabling-full-document-preview
2017-12-11T07:19:19
CC-MAIN-2017-51
1512948512584.10
[]
docs.surfray.com
Date: Sat, 30 Mar 2019 03:51:13 +0100 From: Polytropon <[email protected]> To: Lowell Gilbert <[email protected]> Cc: [email protected] Subject: Re: eee-dee anyone? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On Fri, 29 Mar 2019 19:14:11 -0400, Lowell Gilbert wrote: > Mayuresh Kathe <[email protected]> writes: > > > Are there still people on this list using the "ed" text-editor? > > If yes, is it just for kicks? Or is there any real advantage over "vi"? > > Probably pretty few people using it any more. > > Remember that it's actually just a different flavor of vi (same > executable), so getting rid of it would be silly. I think you're confusing vi and ex here (which are the same executable), but ed is something different (a different program). But I think the reason for this confusion is that using ed feels like using vi's ex mode or the ex standalone program. :-) % hardlinks.sh /usr/bin | grep vi 825093: ex nex nvi nview vi view Compare the locations: % which ed ee vi ex /bin/ed <- the standard editor /usr/bin/ee <- easy to use visual editor /usr/bin/vi <- the "normal" vi /usr/bin/ex <- the "ex mode" vi The standard editor, ed, is the only one present in / (which /bin is part of), whereas the others are located in /usr, which _might_ be a different partition, not accessible in a worst-case scenario where you only have / read-only and nothing else. Of course, this is mostly only historically important, because you won't find "functionally separated partitions" based on UFS very often, and modern partitioning approaches as well as ZFS typically don't have that kind of problem. -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ... Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=484233+0+/usr/local/www/mailindex/archive/2019/freebsd-questions/20190331.freebsd-questions
2021-10-16T07:07:29
CC-MAIN-2021-43
1634323583423.96
[]
docs.freebsd.org
Lock Management A process can apply (lock) and release (unlock) locks using the LOCK command. A lock controls access to a data resource, such as a global variable. This access control is by convention; a lock and its corresponding variable may (and commonly do) have the same name, but are independent of each other. Changing a lock does not affect the variable with the same name; changing a variable does not affect the lock with the same name. By itself a lock does not prevent another process from modifying the associated data because Caché does not enforce unilateral locking. Locking works only by convention: it requires that mutually competing processes all implement locking on the same variables. A lock can be a local (accessible only by the current process) or a global (accessible by all processes). Lock naming conventions are the same as local variable and global variable naming conventions. A lock remains in effect until it is unlocked by the process that locked it, is unlocked by a system administrator, or is automatically unlocked when the process terminates. This chapter describes the following topics: Management Portal Lock Table, which displays all held locks system-wide, and all lock requests waiting for the release of a held lock. The lock table can also be used to release held locks. ^LOCKTAB utility, which returns the same information as the Lock Table. Waiting lock requests. How Caché queues lock requests waiting for the release of a held lock. Avoiding deadlock (mutually blocking lock requests). For further information on developing a locking strategy, refer to the article Locking and Concurrency Control. Managing Current Locks System-wide Caché maintains a system-wide lock table that records all locks that are in effect and the processes that have locked them, and all waiting lock requests. The system manager can display the existing locks in the Lock Table or remove selected locks using the Management Portal interface or the ^LOCKTAB utility. You can also use the %SYS.LockQuery class to read lock table information. From the %SYS namespace you can use the SYS.Lock class to manage the lock table. Viewing Locks Using the Lock Table You can view all of the locks currently held or requested (waiting) system-wide using the Management Portal. From the Management Portal, select System Operation, select Locks, then select View Locks. The View Locks window displays a list of locks (and lock requests) in alphabetical order by directory (Directory) and within each directory in collation sequence by lock name (Reference). Each lock is identified by its process id (Owner) and has a ModeCount (lock mode and lock increment count). You may need to use the Refresh icon to view the most current list of locks and lock requests. For further details on this interface see Monitoring Locks in the “Monitoring Caché Using the Management Portal” chapter of Caché Monitoring Guide. ModeCount can indicate a held lock by a specific Owner process on a specific Reference. The following are examples of ModeCount values for held locks: A held lock ModeCount can, of course, represent any combination of shared or exclusive, escalating or non-escalating locks — with or without increments. An Exclusive lock or a Shared lock (escalating or non-escalating ) can be in a Delock state. ModeCount can indicate a process waiting for a lock, such as WaitExclusiveExact. The following are ModeCount values for waiting lock requests: ModeCount indicates the lock (or lock request) that is blocking this lock request. 
This is not necessarily the same as Reference, which specifies the currently held lock that is at the head of the lock queue on which this lock request is waiting. Reference does not necessarily indicate the requested lock that is immediately blocking this lock request. ModeCount can indicate other lock status values for a specific Owner process on a specific Reference. The following are these other ModeCount status values: The Routine column provides the current line number and routine that the owner process is executing. The View Locks window cannot be used to remove locks. Removing Locks Using the Lock Table cconsole.log. ^LOCKTAB Utility You can also view and delete (remove) locks using the Caché ^LOCKTAB utility from the %SYS namespace. You can execute ^LOCKTAB in either of the following forms: DO ^LOCKTAB: allows you to view and delete locks. It provides letter code commands for deleting an individual lock, deleting all locks owned by a specified process, or deleting all locks on the system. DO View^LOCKTAB: allows you to view locks. It does not provide options for deleting locks. Note that these utility names are case-sensitive. The following Terminal session example shows how ^LOCKTAB displays the current locks: %SYS>DO ^LOCKTAB Node Name: MYCOMPUTER LOCK table entries at 07:22AM 12/05/2016 1167408 bytes usable, 1174080 bytes available. Entry Process X# S# Flg W# Item Locked 1) 4900 1 ^["^^c:\intersystems\cache151\mgr\"]%SYS("CSP","Daemon") 2) 4856 1 ^["^^c:\intersystems\cache151\mgr\"]ISC.LMFMON("License Monitor") 3) 5016 1 ^["^^c:\intersystems\cache151\mgr\"]ISC.Monitor.System 4) 5024 1 ^["^^c:\intersystems\cache151\mgr\"]TASKMGR 5) 6796 1 ^["^^c:\intersystems\cache151\mgr\user\"]a(1) 6) 6796 1e ^["^^c:\intersystems\cache151\mgr\user\"]a(1,1) 7) 6796 2 1 ^["^^c:\intersystems\cache151\mgr\user\"]b(1)Waiters: 3120(XC) 8) 3120 2 ^["^^c:\intersystems\cache151\mgr\user\"]c(1) 9) 2024 1 1 ^["^^c:\intersystems\cache151\mgr\user\"]d(1) Command=> In the ^LOCKTAB display, the X# column lists exclusive locks held, the S# column lists shared locks held. The X# or S# number indicates the lock increment count. An “e” suffix indicates that the lock is defined as escalating. A “D” suffix indicates that the lock is in a delock state; the lock has been unlocked, but is not available to another process until the end of the current transaction. The W# column lists number of waiting lock requests. As shown in the above display, process 6796 holds an incremented shared lock ^b(1). Process 3120 has one lock request waiting this lock. The lock request is for an exclusive (X) lock on a child (C) of ^b(1). Enter a question mark (?) at the Command=> prompt to display the help for this utility. This includes further description of how to read this display and letter code commands to delete locks (if available). You cannot delete a lock that is in a lock pending state, as indicated by the Flg column value. Enter Q to exit the ^LOCKTAB utility. Waiting Lock Requests When a process holds an exclusive lock, it causes a wait condition for any other process that attempts to acquire the same lock, or a lock on a higher level node or lower level node of the held lock. When locking subscripted globals (array nodes) it is important to make the distinction between what you lock, and what other processes can lock: What you lock: you only have an explicit lock on the node you specify, not its higher or lower level nodes. For example, if you lock ^student(1,2) you only have an explicit lock on ^student(1,2). 
You cannot release this node by releasing a higher level node (such as ^student(1)) because you don’t have an explicit lock on that node. You can, of course, explicitly lock higher or lower nodes in any sequence. What they can lock: the node that you lock bars other processes from locking that exact node or a higher or lower level node (a parent or child of that node). They cannot lock the parent ^student(1) because to do so would also implicitly lock the child ^student(1,2), which your process has already explicitly locked. They cannot lock the child ^student(1,2,3) because your process has locked the parent ^student(1,2). These other processes wait on the lock queue in the order specified. They are listed in the lock table as waiting on the highest level node specified ahead of them in the queue. This may be a locked node, or a node waiting to be locked. For example: Process A locks ^student(1,2). Process B attempts to lock ^student(1), but is barred. This is because if Process B locked ^student(1), it would also (implicitly) lock ^student(1,2). But Process A holds a lock on ^student(1,2). The lock Table lists it as WaitExclusiveParent ^student(1,2). Process C attempts to lock ^student(1,2,3), but is barred. The lock Table lists it as WaitExclusiveParent ^student(1,2). Process A holds a lock on ^student(1,2) and thus an implicit lock on ^student(1,2,3). However, because Process C is lower in the queue than Process B, Process C must wait for Process B to lock and then release ^student(1). Process A locks ^student(1,2,3). The waiting locks remain unchanged. Process A locks ^student(1). The waiting locks change: Process B is listed as WaitExclusiveExact ^student(1). Process B is waiting to lock the exact lock (^student(1)) that Process A holds. Process C is listed as WaitExclusiveChild ^student(1). Process C is lower in the queue than Process B, so it is waiting for Process B to lock and release its requested lock. Then Process C will be able to lock the child of the Process B lock. Process B, in turn, is waiting for Process A to release ^student(1). Process A unlocks ^student(1). The waiting locks change back to WaitExclusiveParent ^student(1,2). (Same conditions as steps 2 and 3.) Process A unlocks ^student(1,2). The waiting locks change to WaitExclusiveParent ^student(1,2,3). Process B is waiting to lock ^student(1), the parent of the current Process A lock ^student(1,2,3). Process C is waiting for Process B to lock then unlock ^student(1), the parent of the ^student(1,2,3) lock requested by Process C. Process A unlocks ^student(1,2,3). Process B locks ^student(1). Process C is now barred by Process B. Process C is listed as WaitExclusiveChild ^student(1). Process C is waiting to lock ^student(1,2,3), the child of the current Process B lock. Queuing of Array Node Lock Requests The Caché queuing algorithm for array locks is to queue lock requests for the same resource strictly in the order received, even when there is no direct resource contention. As this may differ from expectations, or from implementations of lock queuing on other databases, some clarification is provided here. Consider the case where three locks on the same global array are requested by three different processes: Process A: LOCK ^x(1,1) Process B: LOCK ^x(1) Process C: LOCK ^x(1,2) In this case, Process A gets a lock on ^x(1,1). Process B must wait for Process A to release ^x(1,1) before locking ^x(1). But what about Process C? 
The lock granted to Process A blocks Process B, but no held lock blocks the Process C lock request. It is the fact that Process B is waiting to explicitly lock ^x(1) and thus implicitly lock ^x(1,2) — which is the node that Process C wants to lock — that blocks Process C. In Caché, Process C must wait for Process B to lock and unlock ^x(1). The Caché lock queuing algorithm is fairest for Process B. Other database implementations that allow Process C to jump the queue can speed up Process C, but could (especially if there are many jobs such as Process C) result in an unacceptable delay for Process B. This strict process queuing algorithm applies to all subscripted lock requests. However, a process releasing a non-subscripted lock (such as LOCK -^abc) when there are both non-subscripted (LOCK +^abc) and subscripted (LOCK +^abc(1,1)) waiting lock requests is a special case. In this case, which lock request is serviced is unpredictable and may not follow strict process queuing. ECP Local and Remote Lock Requests When releasing a lock, an ECP client may donate the lock to a local waiter in preference to waiters on other systems in order to improve performance. The number of times this is allowed to happen is limited in order to prevent unacceptable delays for remote lock waiters. Avoiding Deadlock Requesting a (+) exclusive lock when you hold an existing shared lock is potentially dangerous because it can lead to a situation known as "deadlock". This situation occurs when two processes each request an exclusive lock on a lock name already locked as a shared lock by the other process. As a result, each process hangs while waiting for the other process to release the existing shared lock. The following example shows how this can occur (numbers indicate the sequence of operations): This is the simplest form of deadlock. Deadlock can also occur when a process is requesting a lock on the parent node or child node of a held lock. To prevent deadlocks, you can request the exclusive lock without the plus sign (thus unlocking your shared lock). In the following example both processes release their prior locks when requesting an exclusive lock to avoid deadlock (numbers indicate the sequence of operations). Note which process acquires the exclusive lock: Another way to avoid deadlocks is to follow a strict protocol for the order in which you issue LOCK + and LOCK - commands. Deadlocks cannot occur as long as all processes follow the same order. A simple protocol is for all processes to apply and release locks in collating sequence order. To minimize the impact of a deadlock situation, you should always include the timeout argument when using plus sign locks. For example, LOCK +^a(1):10. If a deadlock occurs, you can resolve it by using the Management Portal or the ^LOCKTAB utility to remove one of the locks in question. From the Management Portal, open the Locks window, then select the Remove option for the deadlocked process.
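The lock-ordering protocol described above is not specific to Caché, and a short sketch in another language can make the idea concrete. The Python threading example below is an illustration only (it does not use Caché's LOCK command or lock table, and the worker and resource names are invented for the example): because every worker acquires locks in the same agreed order, the circular wait that defines deadlock cannot form.

import threading

# Two shared resources; the protocol is: always acquire in this (collating) order.
ORDERED_LOCKS = [("a", threading.Lock()), ("b", threading.Lock())]

def worker(name, needed):
    # Filter the shared ordered list so acquisition always follows the protocol,
    # no matter what order the caller listed the resources in.
    plan = [(n, lock) for n, lock in ORDERED_LOCKS if n in needed]
    acquired = []
    try:
        for n, lock in plan:
            lock.acquire()
            acquired.append((n, lock))
        print(f"{name} holds {[n for n, _ in acquired]}")
    finally:
        # Release in reverse order, mirroring paired LOCK + / LOCK - commands.
        for _, lock in reversed(acquired):
            lock.release()

t1 = threading.Thread(target=worker, args=("worker-1", {"a", "b"}))
t2 = threading.Thread(target=worker, args=("worker-2", {"b", "a"}))
t1.start(); t2.start(); t1.join(); t2.join()

If the two workers instead acquired the locks in opposite orders, each could end up holding one lock while waiting indefinitely for the other, which is exactly the situation the timeout argument (for example, LOCK +^a(1):10) is meant to bound on the Caché side.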
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCOS_LOCKTABLE
2021-10-16T06:51:05
CC-MAIN-2021-43
1634323583423.96
[]
docs.intersystems.com
Send speech through ATOM Echo to obtain the converted text. Init echo speech recognition token: input a token and initialize the speech service. Recv echo data: a callback function receives the data returned by speech recognition. Get recv text: receive the recognized text from speech recognition. ATOM Echo flash related: flash the ECHO STT firmware and control the LED through speech recognition from the M5Stack Fire. Click here to view the detailed documentation.
https://docs.m5stack.com/en/uiflow/advanced/media
2021-10-16T05:51:59
CC-MAIN-2021-43
1634323583423.96
[]
docs.m5stack.com
Updating Device Firmware #Introduction The nRF Cloud Firmware-Over-the-Air (FOTA) Updates Service provides a way for you to update your device's firmware wirelessly. If you would like to update your firmware in a wired manner, we recommend using the nRF Connect for Desktop tool. This document describes the inner workings of FOTA. See also the Getting Started with the FOTA Service guide. #Terms #Job Document Holds information about the firmware that is to be deployed as part of a job. Includes firmware version and type along with links to the actual files. #Job Target Specifies which devices the job can apply to. Can include both a set of tags (groups) and an explicit list of device identifiers. #Job The highest level representation of a FOTA job. Holds overall state. Contains the job document and target, along with the overall status and aggregate statistics about its executions. You can think of Jobs as containing executions. #Job Execution FOTA job as it relates to an individual device. One execution is created for every eligible device specified in the Job Target #Firmware Type Type of a firmware bundle that can be applied to a device - APP - MODEM - BOOT - SOFTDEVICE - BOOTLOADER #FOTA Life Cycle #Job Life Cycle #Job States #CREATED Description - Jobs initially start in the CREATED state. This simply means that its initial information has been saved, including the Job Target and Job Document. Once a job has been created, these fields are immutable. However, the Job Target is not resolved until later on. This means that you can change the devices that a job will apply to by adding/removing devices from referenced tags. Transitions - Job Applied - The apply event is what begins the execution of the job. During the processing of this event, the Job Target is validated and the full set of eligible devices is resolved. If the Job Target is successfully validated, the job is then transitioned to IN_PROGRESS and Execution Rollout is started. If an error occurs during validation, the job is instead transitioned to CANCELLED and an appropriate error is added to the statusDetail field. - Target Valid? - Target Invalid? - Job Cancelled - Job Deleted #IN_PROGRESS Description - Execution Rollout has been started and the requested update is currently being performed. Transitions - All executions in terminal states #COMPLETED Description - All executions have reached a terminal status. Transitions - Job Deleted #CANCELLED Description - Job will not be executed Transitions - Job Deleted #DELETE_IN_PROGRESS Description - Internal state the means the job record will be deleted at some point in the future. DELETED jobs will not show up in the user's job list. #Job Execution Life Cycle Job Execution states track the progress of individual job executions. Aside from the initial status of QUEUED, progression is entirely based on updates from the device via either MQTT or the REST API. #Pending States Pending states are used to track the progress of an execution that is currently being performed. The DOWNLOADING and IN_PROGRESS states can be applied multiple times with different status messages to provide more detailed updates. #QUEUED Description - Execution has been created and a job notification has been sent to the device over MQTT Transitions - ➔ DOWNLOADING - ➔ IN_PROGRESS - ➔ Any Terminal State #DOWNLOADING Description - Alias of IN_PROGRESS. Can be applied multiple times. 
Transitions - ➔ DOWNLOADING - ➔ IN_PROGRESS - ➔ Any Terminal State #IN_PROGRESS Description - Job has been received and accepted by the Device. Can be applied multiple times Transitions - ➔ IN_PROGRESS - ➔ Any Terminal State #Terminal States Terminal states are the final ending state of a Job Execution. An execution in a terminal state is considered done, and cannot be changed once applied. #SUCCEEDED Description - Execution was completed successfully. In the case of an APP or MODEM update, an attempt will be made to update the appropriate version numbers in the device's shadow. #CANCELLED #FAILED Description - An error occurred on the device while performing the update #REJECTED Description - The device refused to perform the update #TIMED_OUT Description - The update took too long to complete. No time limit is currently being enforced. #Job Target Validation Errors Before a job is transitioned to IN_PROGRESS, its Job Target is verified. If a job is determined to be invalid, it is moved directly to the CANCELLED state and one of the following errors will be provided in the Job's status detail field. Empty Job Target - The provided job target did not contain any eligible devices Job may not contain Bluetooth LE and IP devices - Job targets must contain either all IP devices OR all Bluetooth LE devices Job specified more than the max number of gateways - Jobs that update Bluetooth LE devices only support up to a maximum number of unique connected gateways. Gateways that are connected to multiple devices within the target are only counted once. Not all gateways support Bluetooth LE updates - A gateway connected to a Bluetooth LE device specified in the Job Target is not capable of performing FOTA updates on its connected peripherals. #Device Eligibility Device eligibility is determined by matching the Firmware Type specified in the Job Document against the list of supported Firmware Types in the device's record. Ineligible devices will not stop a job from executing, they just won't be given Job Executions. #Configuring FOTA Support What follows is usually configured automatically over MQTT when using the nRF Connect SDK libraries. This information is listed here as a reference of what should be configured in the device shadows to enable FOTA. See the FOTA Getting Started Guide for more information. #All IP Devices IP devices can specify what types of firmware they support using the device.serviceInfo.fota_v2 property in their shadow. This property should be set to a list of the Firmware Types supported by the device. #Gateways IP gateways (i.e., not phone gateways) to which the Bluetooth LE device is connected must also indicate in their shadow that they support Bluetooth LE FOTA. This is configured through the device.serviceInfo.fota_v2_ble property. #Bluetooth® LE Devices Bluetooth LE devices cannot individually specify their supported firmware types. Instead, devices that support secure DFU will automatically support APP, MODEM, and SOFTDEVICE updates. NOTE: Unlike the other eligibility checks, including a peripheral connected to a gateway that does not support Bluetooth LE will cause the job to fail. #Execution Rollout When a job is applied, its Job Target is resolved to a discrete list of Eligible Devices. Execution Rollout is the process of creating Job Executions and sending notifications to all of these devices. Because jobs can potentially contain very large numbers of devices, executions are rolled out progressively in batches of 25. 
This means that not every device will receive its execution right away and may have to wait several minutes. #MQTT Job Execution Notifications #Receiving Jobs Job execution notifications are sent to devices as a JSON tuple using MQTT messages. Fields - PERIPHERAL_ID - ID of the connected peripheral that is to be updated. Only for Bluetooth LE devices - JOB_ID - UUID for the overall job - FIRMWARE_TYPE - Numeric enum representing the Firmware Type of the job - APP = 0 - MODEM = 1 - BOOT = 2 - SOFTDEVICE = 3 - BOOTLOADER = 4 - FILE_SIZE - Combined size in bytes of all firmware files - FILE_HOST - Host where the firmware files are stored - FILE_PATH - Space separated list of paths to the firmware files relative to the FILE_HOST #IP Devices Notifications for IP devices are sent directly to the respective devices. Topic: prod/<TENANT_ID>/<DEVICE_ID>/jobs/rcv Format: ["<JOB_ID>",<FIRMWARE_TYPE>,<FILE_SIZE>,"<FILE_HOST>","<FILE_PATHS>"] #Bluetooth® LE Devices Notifications for Bluetooth LE devices are sent to their connected gateway. Topic: prod/<TENANT_ID>/<GATEWAY_ID>/jobs/ble/rcv Format: ["<PERIPHERAL_ID>","<JOB_ID>",<FIRMWARE_TYPE>,<FILE_SIZE>,"<FILE_HOST>","<FILE_PATHS>"] #Requesting Pending Jobs Sometimes a device is not able to receive the initial notification of a pending job execution. In this case, the device can request that a new execution notification be sent for its latest Pending job execution. #Record Expiration Both Job and Job Execution records expire one month after reaching terminal status. #Deleting Jobs Jobs in the COMPLETED, CREATED, or CANCELLED states can be deleted using the REST API. This will remove them from your job list. note Deleting a job will not delete its executions. Executions will remain visible until they expire. #Cancelling Executions Job Execution state is primarily driven by the device. This means that ultimately, the execution will remain in whatever state the device tells it to. However, there can be cases where you want nRFCloud to "give up" on a job execution so that the overall job can complete. This can be done using the UpdateFOTAJobExecutionStatus endpoint and setting the status to CANCELLED. This is effectively the same as the device updating the execution status itself. It is important to note that this merely tells nRFCloud to stop waiting for a terminal status, and does not trigger any special cancellation behavior on the device itself. caution Changing an execution's status to CANCELLED will NOT trigger any behavior on the device itself. Devices in a bad state before cancelling will be in the same state afterwards.
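To make the notification format above concrete, here is a minimal Python sketch that parses the JSON tuple an IP device receives on prod/<TENANT_ID>/<DEVICE_ID>/jobs/rcv and builds download URLs for the firmware files. It is an illustration only, not part of the nRF Cloud documentation: the handle_job_notification name and the example payload values are invented, the https scheme for FILE_HOST is an assumption, and a real device would get the payload from its MQTT client rather than from a hard-coded string.

import json

FIRMWARE_TYPES = {0: "APP", 1: "MODEM", 2: "BOOT", 3: "SOFTDEVICE", 4: "BOOTLOADER"}

def handle_job_notification(payload: str) -> dict:
    # IP device format: ["<JOB_ID>",<FIRMWARE_TYPE>,<FILE_SIZE>,"<FILE_HOST>","<FILE_PATHS>"]
    job_id, fw_type, file_size, file_host, file_paths = json.loads(payload)
    # FILE_PATHS is a space-separated list of paths relative to FILE_HOST.
    urls = [f"https://{file_host}/{p.lstrip('/')}" for p in file_paths.split()]
    return {
        "job_id": job_id,
        "firmware_type": FIRMWARE_TYPES.get(fw_type, "UNKNOWN"),
        "total_size_bytes": file_size,
        "download_urls": urls,
    }

# Hypothetical payload, for illustration only.
example = '["1b7e6c1a-1234-4f00-9abc-000000000000",1,345000,"firmware.example.com","fw/app.bin fw/modem.bin"]'
print(handle_job_notification(example))

The Bluetooth LE variant differs only in that the tuple is prefixed with PERIPHERAL_ID, so the same unpacking works after shifting by one field.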
https://docs.nrfcloud.com/Reference/DeviceManagement/FOTA/
2021-10-16T04:57:01
CC-MAIN-2021-43
1634323583423.96
[]
docs.nrfcloud.com
In this section: To create a custom test configuration, you need to: To create a custom configuration locally, you need to copy a selected built-in configuration to the User directory, and then customize the duplicated configuration. Click Parasoft in the menu bar and choose Options (Visual Studio) or Preferences (Eclipse). Then select Configuration. Right-click the test configuration you want to duplicate, then choose Duplicate Locally. The configuration will be added to the User directory and nested in a parent directory matching the source. Right-click the duplicate configuration and choose Edit to open the Test Configuration Editor in your browser. NOTE: The Test Configuration Editor is handled by a separate web server process and may be blocked if a strict firewall is installed on your machine. In such a case, allow the process to run when prompted. Choosing Edit as text opens a textual representation of the configuration in a simple configuration editor (deprecated). The Scope tab contains a set of filters that you can configure to define the parts of the code that the test configuration should cover. You must connect C/C++test to source control in order to collect scope information. Click Save to preserve any changes you make on this tab. Expand the Time filters settings to set time-based filters at the file or line level. The time filters enable you to restrict the scope of analysis to a specific date range or period. If the corresponding source control scope option is set to true and the source control settings for C/C++test are configured, the modification time is set from the source control history. If scope.local is set to true, then the modification time is set from the file system of the machine running the analysis. You can configure the following settings: File-level settings Line-level settings Expand the File path filters section to specify file path patterns to include and/or exclude from analysis. Paths are relative to the workspace/solution. The following settings are available: Expand the Advanced disclosure triangle to use regular expressions to set the file path filters. The following settings are available: Expand the File content filters section to specify regular expressions that exclude specific types of files based on content, e.g., auto-generated files. File filtering takes priority over code block filtering. A potential conflict may occur if you use both filter types at the same time. If the authorship mapping settings are configured, then file authorship is taken from the map. The following options are available: Expand the File size filters section to limit the scope of analysis based on file size. Expand the Code block options section to define specific blocks of code to include or exclude from the analysis. File filtering takes priority over code block filtering. A potential conflict may occur if you use both filter types at the same time. Click the Static Analysis tab to enable/disable the static analysis rules the configuration uses. This page shows all the supported rules. Click Save to preserve any changes you make on this tab.
Enable or disable the Enable static analysis checkbox to enable/disable static and flow analysis. You can use the search bar to find a specific rule or rule category. You can also use the drop-down menu to filter by category and browse for a rule. Enable the Show Enabled Only option to only show the enabled rules. Rules are grouped by category. Expand a category and enable a rule to use it in the test configuration. Click the Enable [number] of rule(s) or Disable [number] of rule(s) button to quickly enable or disable all rules in the configuration. Click on a rule to open the documentation panel. You can also open the rule documentation in a new browser tab. Click on the documentation icon to open all documentation for the enabled rules in a new browser tab. If the rule can be configured, parameters can be set in the rule options panel. Click on a rule and click the Rule Parameters tab to configure the rule. The options available are specific to each rule. Click the Metrics tab to enable/disable the metrics collected and calculated during analysis. Click Save to preserve any changes you make on this tab. You can perform the following actions: Click the Unit Tests tab to access controls for unit test execution and coverage data collection. You can enable/disable the collection of unit test results and coverage analysis. The Static Analysis Settings tab allows you to configure your static analysis and flow-based analysis. Click Save to preserve any changes you make on this tab. Expand the Advanced Settings section to enable the following options: Expand the Flow Analysis Advanced Settings section to configure settings related to performance, reporting verbosity, null-checking method parameterization, and resources checked. See Configuring Flow Analysis for details. Click on the General Settings tab to view and edit the name and location of the test configuration. Click Save to preserve any changes you make on this tab. Enter a name in the Folder field to change the location of the test configuration. Entering the name of an existing folder moves the test configuration to that location in the test configuration tree. If the name you specify doesn't exist, a new folder will be created and the test configuration moved into it. You can also nest folders by placing a forward slash (/) between folder names. Click Parasoft in the menu bar and choose Options (Visual Studio) or Preferences (Eclipse). Then select Configuration. Right-click the Built-in or User test configuration you want to duplicate, then choose Duplicate on DTP. The configuration will be added to the DTP directory and uploaded to the DTP server (see Connecting to DTP).
https://docs.parasoft.com/exportword?pageId=50499610
2021-10-16T05:29:30
CC-MAIN-2021-43
1634323583423.96
[]
docs.parasoft.com
Plan Explorer introduces the concept of a Plan Explorer Session and its associated Plan Explorer Session file (.pesession). Plan Explorer Session files are completely portable, and contain a record of the entire session, including versioning. Plan Explorer Sessions are designed to help you manage a historical record as you refine queries. By default, a historical entry generates as part of the Plan Explorer Session during Estimated and Actual Plan retrieval. Note: Change how Plan Explorer generates new historical entries through User Preferences (Tools > User Preferences > Performance Analysis tab). The Only save history when command text or Instance settings change option controls this behavior. Version History Each historical entry retains all captured plan details and metrics within the various Plan Explorer tabs. Each version is associated with a unique version number. Plan Explorer includes a Plan History pane (View > Plan History) that allows you to navigate through the different versions within the active Plan Explorer Session. To delete a version, select Delete in the History pane context menu. Adding Comments Plan Explorer allows you to add comments to each historical version (View > Comments). This allows you to keep track of the reasoning behind any changes you make, like changes to the Command Text or any indexing optimization. You can also add comments in the Plan History pane by selecting the Comments drop-down list. Multiple Sessions and Tabs As each session is managed within its own tab, multiple Plan Explorer Sessions can be open at the same time. SQL Sentry Plan Explorer was designed with a multiple document interface, allowing you several options when managing tabbed sessions, including the ability to arrange tabs horizontally and vertically, and the ability to tear off tabs. Tab Windowing options are found in the Window menu. Sharing Session Files Plan Explorer Session files are easily shared with others, even if they don't have access to the full SQL Sentry client. SQL Sentry Plan Explorer is a stand-alone query analysis tool. Plan Explorer uses session files to manage history, just like the integrated Plan Explorer, and contains complete support for opening Plan Explorer Session files generated in the SQL Sentry client. Plan Explorer also opens session files in a limited fashion. When you open a Plan Explorer Session file (.pesession) for the first time in Plan Explorer, it'll by default open the first history entry. If you've opened the session file previously, it'll open the last history item you were viewing. You can easily switch between history items using View > Plan History. This is possible because each historical entry in a Plan Explorer Session is actually a .queryanalysis file. Plan Explorer Session files are archive files, containing the individual .queryanalysis files and metadata about the session. Note: Save any single historical entry of a Plan Explorer Session as a stand-alone .queryanalysis file. With the entry you wish to save as active, select Save As (File > Save as), and then choose the desired file type. Starting a New Session Start a new Plan Explorer Session by completing any of the following: Saving a Session Save a Plan Explorer Session by completing either of the following: Anonymize Plans SQL Sentry Plan Explorer can obfuscate the sensitive data in your plans. Plan Anonymization changes all Database, Table, and Column names in your plans to generic representations. To use this feature, select Tools > Anonymize Plan. 
Your original plan is preserved in the current tab, and the anonymized plan is opened in a new tab. Note: For more information, see the Anonymize your plan details natively in Plan Explorer blog post.
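Since the text above describes a .pesession file as an archive that bundles the individual .queryanalysis history entries plus session metadata, a small script can show what that implies. The Python sketch below simply lists the members of such a file; it assumes the archive is a standard zip container, which the documentation above does not state, and the file name is a placeholder, so treat this as an exploratory illustration rather than a documented interface.

import zipfile

def list_session_entries(path: str) -> None:
    # Open the session file as a generic archive and report its members.
    with zipfile.ZipFile(path) as archive:
        for info in archive.infolist():
            kind = "history entry" if info.filename.endswith(".queryanalysis") else "metadata"
            print(f"{info.filename:<40} {info.file_size:>10} bytes  ({kind})")

# Placeholder file name, for illustration only.
list_session_entries("tuning-session.pesession")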
https://docs.sentryone.com/help/plan-explorer-sessions
2021-10-16T05:53:40
CC-MAIN-2021-43
1634323583423.96
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60e6ec968ee9b4af577b2439/n/sql-sentry-plan-explorer-anonymized-plan-202112.png', 'SQL Sentry Plan Explorer Anonymize Plans Version 2021.12 SQL Sentry Plan Explorer Anonymized plan'], dtype=object) ]
docs.sentryone.com
Product Index Everything comes out in the wash… The King Wash Laundromat, with highly detailed textures and a cinematic atmosphere, is perfect for your characters. It's ready to be a horror environment, an action scene, a midnight urban tale, or the impromptu spot for the late-night kiss. This set contains 91 single-load props and subsets, all ready to render in Iray. The preload scene has materials and lighting set up, along with 8 camera presets with fog geometry. For those who like realistic environments, the King Wash Laundromat is a must-have! You'll love the urban photorealism you get.
http://docs.daz3d.com/doku.php/public/read_me/index/70511/start
2021-10-16T06:17:58
CC-MAIN-2021-43
1634323583423.96
[]
docs.daz3d.com
Jenkins Health Advisor by CloudBees analyzes support data from your Jenkins controller and emails you a report of any issues it finds. The Support Bundle is generated on a daily basis and submitted to servers hosted by CloudBees by the Jenkins Health Advisor plugin. For instructions on defining the types and kinds of data to include in the Support Bundle, see the Optional Configuration section. Installation Requirements The requirements for installing and using Jenkins Health Advisor by CloudBees are: A Jenkins LTS instance of version 2.138.4 or newer. CloudBees Jenkins instances must be on version 2.138.4.3 or newer to install Jenkins Health Advisor by CloudBees. If you are using an older version than those listed and you are a CloudBees subscriber, you can use the Assisted Update process by submitting a ticket to CloudBees Support. For more information, see the Required Data: Assisted Update article. A Support Bundle generated from your controller must be less than 500 MB for the report to be sent successfully. An Internet connection is required for the Advisor plugin to send Support Bundles from your controller. For more information, see the Network Configuration section. Install Process Jenkins Health Advisor by CloudBees should be installed on all controllers in your instance. Access the plugin manager for your controllers (Manage Jenkins > Manage plugins) and select the Available tab. Direct Link: {JENKINS_URL}/pluginManager/available Search for Jenkins Health Advisor by CloudBees, select the checkbox to its left under the Install column, and click the Install without restart button. Configuration To configure your system to use Jenkins Health Advisor by CloudBees: Open Jenkins Health Advisor by CloudBees from the Manage Jenkins screen. Accept the Terms and Conditions. Add a valid email to the email field. You may optionally click Send a test email to verify that your Jenkins instance can reach the server and send you back an email. Save your configuration. After the plugin is properly configured, your Jenkins instance will be connected to the Advisor service. You will begin receiving reports at the email address you defined. Optional configuration You can configure what information is sent to the server from the Analyzed Data section of the configuration page. Click the Configure Data button and select the data you want to be sent and analyzed. The options are the same as in the Support Core plugin. If your system is configured so that you cannot access the server from your instance, you can disable the message from the Reminder section. Select the box to Suppress the reminder to configure Jenkins Health Advisor by CloudBees. Configuration as Code Support Version 3.0 of Jenkins Health Advisor by CloudBees includes support for Jenkins Configuration as Code. Here is a configuration sample: advisor: acceptToS: true email: "[email protected]" ccs: - "[email protected]" - "[email protected]" excludedComponents: - "ItemsContent" - "GCLogs" - "Agents" - "AgentsConfigFile" - "ConfigFileComponent" - "RootCAs" - "OtherConfigFilesComponent" - "HeapUsageHistogram" nagDisabled: false Network Configuration The server infrastructure used for Jenkins Health Advisor by CloudBees is hosted on Amazon Web Services (us-east-1 region). The Jenkins instance must be able to connect to the server at insights.cloudbees.com on port 443 (https). Refer to AWS IP address ranges if a more specific IP address range definition is required.
Addendum Installing on controllers in an operations center cluster To install, configure and update Jenkins Health Advisor by CloudBees on multiple CloudBees Jenkins controllers managed by a CloudBees operations center, you can use a Cluster Operation. To create and configure a cluster operation: In Target managed controllers, add the controllers upon which you want to install or configure Jenkins Health Advisor by CloudBees. Make sure you’ve taken care of the prerequisites! Add the following steps: An Install pluginstep with the plugin IDset to cloudbees-jenkins-advisorand no version: using no version instructs the system to use the most recent plugin available for the given controller. An Execute Groovy Script on Controllerstep using the following script (replacing the [email protected] with your preferred email address): If you deploy a version of the plugin >= 3.0: import com.cloudbees.jenkins.plugins.advisor.* import com.cloudbees.jenkins.plugins.advisor.client.model.* println "Configuration of Advisor ..." def config = AdvisorGlobalConfiguration.instance config.acceptToS = true config.email = "[email protected]" config.ccs = [new Recipient("[email protected]"),new Recipient("[email protected]")] // optional config.nagDisabled = true // optional config.save() println "Configuration of Advisor done." If you deploy a version of the plugin < 3.0: import com.cloudbees.jenkins.plugins.advisor.* println "Configuration of Advisor ..." def config = AdvisorGlobalConfiguration.instance config.acceptToS = true config.isValid = true config.email = "[email protected]" config.cc = "[email protected],[email protected]" // optional config.nagDisabled = true // optional config.save() println "Configuration of Advisor done." Your controllers are now configured to use Advisor. You will receive the first report within 24 hours.
https://docs.cloudbees.com/docs/admin-resources/latest/plugins/cloudbees-jenkins-advisor
2021-10-16T05:19:10
CC-MAIN-2021-43
1634323583423.96
[array(['_images/cloudbees-jenkins-advisor/advisor-report.3255084.png', None], dtype=object) array(['_images/cloudbees-jenkins-advisor/configure-optional.e534db5.png', None], dtype=object) ]
docs.cloudbees.com
Unable to specify Environment Mappings while creating a DR policy This article applies to: - Product edition: CloudRanger Problem description Environment mappings fail to load during the configuration of a disaster recovery (DR) policy, even in the presence of restorable backups. Cause The Automatic Disaster Recovery (ADR) service does not have adequate permissions to retrieve the VPCs from the AWS account. Traceback Resolution Update the existing AWS credentials in CloudRanger.
https://docs.druva.com/Knowledge_Base/Druva_CloudRanger/Troubleshooting/Unable_to_specify_Environment_Mappings_while_creating_a_DR_policy
2021-10-16T06:11:16
CC-MAIN-2021-43
1634323583423.96
[array(['https://docs.druva.com/@api/deki/files/47690/CREnvMappings.png?revision=1', 'CREnvMappings.png'], dtype=object) ]
docs.druva.com
AHE 2 - Leveraging a QCDR to Standardize Processes for Screening Activity Weighting: Medium Subcategory Name: Achieving Health Equity Description: Participation in a QCDR, demonstrating performance of activities for use of standardized processes for screening for social determinants of health such as food security, employment and housing. Use of supporting tools that can be incorporated into the certified EHR technology is also suggested. Supporting Documentation - QCDR for Standardizing Screening Processes - Participation in QCDR for standardizing screening processes for social determinants, e.g., regular feedback reports from QCDR showing screening practices for social determinants; and - Integration of Tools into Certified EHR (suggested) - Integration of one or more of the following tools into practice as part of the EHR, e.g., showing regular referral to one or more of these tools Resources 2018 Improvement Activities Requirements 2019 Improvement Activities Requirements 2018 MIPS Improvement Activities Fact Sheet Scores for Improvement Activities for MIPS APMs in the 2018 Performance Period Fact Sheet
https://docs.webchartnow.com/functions/quality-of-care/measures/improvement-activities-measures/2018-improvement-activities/ahe-2-leveraging-a-qcdr-to-standardize-processes-for-screening.html
2021-10-16T05:28:31
CC-MAIN-2021-43
1634323583423.96
[]
docs.webchartnow.com
Overview ↑ Back to top WooCommerce Intuit Payments integrates with Intuit! Requirements ↑ Back to top - A QuickBooks Online account setup to support Intuit Payments - PHP 5.6+ (You can see this under WooCommerce > Status) - An SSL certificate and configure the plugin. Migrating to OAuth 2.0 ↑ Back to top Intuit is deprecating support for OAuth 1.0, so if you are currently using OAuth 1.0, you should now migrate to OAuth 2.0 to maintain your app’s connection with Intuit by following the steps below: - Follow Intuit’s instructions for migrating to OAuth 2.0. - Once migrated, copy your Client ID and Client Secret keys. You’ll need these to update the plugin settings. - Go to the plugin settings and change the OAuth Version setting to “2.0”. - Update the Client ID and Client Secret fields with your copied keys. - Click Save Changes, then Connect with Quickbooks to connect the plugin to Intuit using the OAuth 2.0 keys. How do I know what OAuth version I’m currently using and whether I need to migrate? There are a few ways to tell! - Do you see an admin notice in WordPress telling you to migrate? If so, you’re currently using the OAuth version 1.0. - Check the credentials listed in your plugin settings. If your OAuth Version field is set to “1.0”, you’ll need to migrate. Getting Started ↑ Back to top You’ll need to create your own app in the developer console to connect your store to Intuit Payments, or you can run the Onboarding Wizard by clicking Setup under the WooCommerce Intuit Payments Gateway plugin in Plugins > Installed Plugins. Create Your Intuit App ↑ Back to top Follow the steps below to create a free connection app with Intuit: - Login into the Intuit Developer site. - Click Dashboard > Create an app. - Select QuickBooks Online and Payments. - Enter a name for your app, select “Payments (US only)” for the scope, and click Create app. - On the Production tab, update the Terms of Service Links for your site and the Countries you accept connections from setting, if necessary. Click Save. - Go to the Keys & OAuth tab. Copy the Client ID and Client Secret to enter in the plugin settings. - Update the Redirect URIs field with the URI provided in the plugin settings. This will take the following format (replace “EXAMPLE.com” with your store’s URL): - Update the App URLs fields. You can use the homepage or shop domain for your Launch URL and Disconnect URL – these fields are required but don’t require a special URL. - Click Save. You’re now ready to proceed with connecting to Intuit from the plugin settings! Connect to Intuit ↑ Back to top Once you’ve created your Intuit app, you can connect the plugin to Intuit by following the steps below: - Enter your Client ID and Client Secret in the plugin settings. - Click Save changes. - Click Connect to QuickBooks. - In the popup window, enter your Intuit credentials to connect your account. All done! Now you can configure the other plugin settings and process payments. Credit Card Settings ↑ Back to top - Enable / Disable: Allow customers to use this gateway to checkout. - Title: The text shown for the payment during checkout and on the Order Received page. - Description: The text shown under the gateway’s title during checkout. Limited HTML is allowed. - Card Verification (CSC): Require customers to enter their card security codes when checking out. - Saved Card Verification: Require customers to enter their card security codes when using a saved payment method at checkout. 
- Transaction Type: Controls how transactions are submitted to Intuit Payments. Select “Charge” to automatically capture payments. If you select “Authorization”, you must manually capture and settle payments in your Intuit Payments credit card and eCheck gateways, select this setting to share credentials between the gateways so you don’t have to enter them twice. You must enter the credentials in the eCheck settings if this is enabled here. - OAuth Version: The OAuth version associated with your Intuit app. If you’re currently using 1.0, please follow these steps to migrate to OAuth 2.0, since support for OAuth 1.0 has been deprecated. -. eCheck Settings ↑ Back to top - Enable / Disable: Allow customers to use this gateway to checkout. - Title: The text shown for the payment during checkout and on the Order Received page. - Description: The text shown under the gateway’s title during checkout. Limited HTML is allowed. - Tokenization: Let customers save their payment methods for future use at checkout. This is required for Subscriptions or Pre eCheck and credit card gateways, select this setting to share credentials between the gateways so you don’t have to enter them twice. You must enter the credentials in the credit card settings if this is enabled here. -. Managing Orders ↑ Back to top As a site administrator, you can use the WooCommerce Intuit Payments gateway to manually capture charges and automatically refund or void transactions as needed. Capture Charges ↑ Back to top If the Transaction Type setting is set to “Authorization”, you can manually capture these payments from the WooCommerce Orders page. Click here to read more about capturing charges. Automatic Refunds ↑ Back to top You can process refunds directly in WooCommerce without needing to log into your Intuit before the transaction is settled (i.e. funds haven’t been transferred from the customer’s account to your Intuit account). Intuit Payments does not accept partial voids. If a transaction is no longer eligible to be voided, you must refund the order instead. Click here to read more about voiding transactions in WooCommerce. Gateway Features ↑ Back to top Your customers can take advantage of the following features when your site uses WooCommerce Intuit Payments. Saving Payment Methods ↑ Back to top Customers can save payment methods during the checkout process or from their My Account area. This lets them quickly select payment details during future checkouts and also lets your site support WooCommerce Subscriptions and WooCommerce. - Please confirm that tokenization is available for your Intuit account with your representative (for both credit cards and eChecks, if you intend to use both). There may be a service charge to enable this. - Credit card information isn’t stored on your site’s server, but is tokenized and stored on Intuit’s servers. This reduces your site’s PCI compliance burden. - When using WooCommerce Subscriptions, customers cannot delete payment methods associated with active subscriptions. Click here to read more about saving payment methods with Subscriptions. Enhanced Checkout Form ↑ Back to top Intuit Payments supports an enhanced checkout form that improves the checkout experience on mobile and desktop devices. Click here to read about the enhanced payment form. Frequently Asked Questions ↑ Back to top Q: My store is being targeted with fraudulent orders. How can I prevent this? 
A: Bad actors often target WooCommerce stores to test stolen credit card numbers, which can result in many fraudulent orders on your store. To discourage this, we recommend using a reCaptcha tool, such as reCaptcha for WooCommerce, to protect your store. Q: Does this gateway support legacy QuickBooks Merchant Services (QBMS) accounts? A: Existing installations of this plugin can be used with legacy QBMS accounts. However, we cannot support merchants using this account, so merchants must be using Intuit Payments to receive active support. New installations will use Intuit Payments by default. Q: Why do I get errors while using the test/production environment? A: Please check your plugin settings to ensure you’re using production keys in the “Production” environment and test keys in the “Test” environment. You can find both sets of keys in your Intuit developer app. Q: Will this plugin send purchase details to my Quick Books Online account? A: This plugin only serves as a payment gateway integration with Intuit’s payment processing services, so it cannot send transaction details to your Quick Books Online account. Q: Do I have to refresh my connection to my Intuit app manually? A: Nope! The plugin will do this for you automatically on a recurring basis. Q: Is this plugin compatible with Intuit Pay? A: According to Intuit Pay Support, they do not currently support eCommerce integrations. This is planned for a future Intuit Pay system update, but there’s no firm ETA yet. Q: Does this work with QuickBooks Payments? A: Yes and no. Unfortunately, Intuit names many of their products the same way. This plugin works with the eCommerce processing side of Intuit QuickBooks payments, which can be used with or without Quickbooks. Sometimes this is confused for another version of QuickBooks payments, which is done via QuickBooks online. This service will not work with our gateway integration to the best of our knowledge. Q: Is this plugin compatible with AVS (Address Verification Service)? A: The short answer is no. Our plugin depends on the payment status sent by the payment processor to approve or reject transactions. The address verification, however, is handled as a different parameter (or part) of the response which can cause orders to be approved even if the AVS fails. Our team is in the process of reviewing possible changes, so don’t hesitate to reach out to us if you’d like more information. Troubleshooting ↑ Back to top Difficulty Connecting to Intuit ↑ Back to top Did you encounter the following error when trying to connect to Intuit? “We’re sorry! We’re experiencing some problems. Please try again later.” There are a few potential causes: - Ensure your Redirect URI is properly configured in your Intuit app. You can find the correct Redirect URI for your store in the plugin settings. - Ensure your OAuth version is properly configured in your plugin settings. Not sure which OAuth version you’re using? Click here for tips on determining which version you’re using. If your error looks like this (note the additional “We’re updating our platform…” text), this is due to changes made by Intuit to a small number of new merchants. Unfortunately, according to Intuit, there’s nothing our plugin can do to correct this issue. If you encounter this error, please contact your Intuit representative so they can assist and ensure you can accept payments with the plugin. If you are receiving this message “Uh oh, there’s a connection problem. Sorry, but the app didn’t connect. 
Please try again later, or contact customer support for help” when connecting your app to Intuit or are having issues processing transactions, double-check that the Intuit Developer account that you have used to create your app is the same account you use to log into your QuickBooks Online account with. Provided card type is invalid. Provided token is invalid. ↑ Back to top If you receive an error like Provided card type is invalid. Provided token is invalid., there is likely a JavaScript conflict or error of some kind on your Checkout page, help resolve the error. If the error persists, try briefly changing to a default theme, such as Twenty Nineteen to rule out any theme conflicts. Other Issues ↑ Back to top Having a different problem? Follow these steps to make sure everything is setup correctly before posting a support request: - Please ensure that your site meets the plugin requirements. - Check the FAQs to see if they address your question. - Confirm that your Client ID and Client Secret are entered correctly in your plugin settings. - Confirm that your Redirect URI is populated correctly in your Intuit app. You can find your Redirect URI in the plugin settings. - Confirm that your OAuth version is populated correctly in the plugin settings. Click here for tips on determining which OAuthversion you’re using. - Enable Debug Mode in the plugin settings and review the error codes/messages provided by Intuit Payments. If the error points to an issue with the plugin, please submit a support ticket.
https://docs.woocommerce.com/document/woocommerce-intuit-qbms/
2021-10-16T04:44:00
CC-MAIN-2021-43
1634323583423.96
[]
docs.woocommerce.com
Deciding which OAuth 2.0 flow to use depends mainly on the type of client the end user will be using and the level of trust between the AM authorization server and your clients. Client acts on its own (machine-to-machine) If the party requesting access does not involve an end user, for example a batch program or an API calling another API, the flow to use is the client credentials grant type. Client is a web application with a backend server If the party requesting access is a web app running on a server (such as a Java, PHP or .NET app), the authorization code grant type is the best match. With this kind of application, the access and refresh tokens can be securely stored without risking exposure. Client is running on a web browser (single-page app or SPA) If the party requesting access is an SPA (such as an Angular, React or Vue app), the recommended option is to use the authorization code grant type with the PKCE extension. Client is a mobile/native application If the party requesting access is a mobile or native application, the authorization code grant type with the PKCE extension is your best option. Client is highly trustable If the party requesting access is not able to use the authorization code grant type or to deal with HTTP browser redirection, and the client is highly trusted, the end user can set their credentials in the client application and the client will send this information to the AM server (the resource owner password credentials grant). Your APIs are accessed by third parties If a partner or third party wants to access your protected resources (APIs), which are secured by the AM server, it's possible to ask your partners to exchange their own tokens for AM tokens. You can use the JWT Bearer grant type for this purpose.
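As a concrete illustration of the first case above, the sketch below obtains a token with the client credentials grant using Python and the requests library. It is a generic OAuth 2.0 example rather than an excerpt from the Gravitee AM documentation: the token endpoint URL, client ID, client secret, and API URL are placeholders you would replace with the values exposed by your own AM security domain and application.

import requests

# Placeholder values -- substitute your AM domain's token endpoint and app credentials.
TOKEN_URL = "https://am.example.com/my-domain/oauth/token"
CLIENT_ID = "my-machine-client"
CLIENT_SECRET = "change-me"

def fetch_token() -> str:
    # Client credentials grant: no end user involved, the client authenticates as itself.
    response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic client authentication
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# The bearer token is then presented on calls to the protected API.
token = fetch_token()
resp = requests.get(
    "https://api.example.com/orders",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(resp.status_code)

The browser, mobile, and web-server cases differ mainly in how the authorization is obtained (redirects, PKCE code_verifier, stored refresh tokens), but the token endpoint interaction keeps the same shape.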
https://docs.gravitee.io/am/current/am_devguide_protocols_oauth2_flows.html
2021-10-16T06:21:44
CC-MAIN-2021-43
1634323583423.96
[]
docs.gravitee.io
Introduction xeus-sql is a Jupyter kernel for general SQL implementations based on the native implementation of the Jupyter protocol xeus and SOCI, a database access library for C++. Licensing We use a shared copyright model that enables all contributors to maintain the copyright on their contributions. This software is licensed under the BSD-3-Clause license. See the LICENSE file for details. Getting started
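A minimal way to try the kernel, assuming the package is published on conda-forge under the name xeus-sql (check the project README for the authoritative install command, and note that the registered kernel name may differ):

# install the kernel into the current conda/mamba environment (package name assumed)
mamba install -c conda-forge xeus-sql jupyterlab
# confirm the kernel was registered, then pick it when creating a new notebook
jupyter kernelspec list

If the conda-forge package name differs, building from source with CMake, as is typical for xeus-based kernels, is the fallback.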
https://xeus-sql.readthedocs.io/en/latest/?badge=latest
2021-10-16T06:06:03
CC-MAIN-2021-43
1634323583423.96
[]
xeus-sql.readthedocs.io
MS Dynamics CRM Configurations for Cisco Contact Center Agents MS CRM Dynamics 365/On-Premise October 10, 2017 (version 1.1) To allow MS Dynamics CRM user work as a Cisco contact center agent, you need to add/configure the CRM Organization users with CTI login credentials as configured in the Cisco contact center. The configured CRM users will then automatically be able to login to the Cisco contact center when logged in to Unified Service Desk. Login to CRM using admin credentials in the CRM Organization. This should be the same organization selected for “Organization CTI Configurations”. Open the organization Settings and select “Unified Service Desk”. From the list below, click on ‘Configuration’ In the Active configurations, you will see the configuration named as Expertflow CTI Configuration. Expertflow CTI configuration is the Configuration deployed by the “Organization CTI Configurations installer” i-e Package Deployer. Click Expertflow CTI Configuration >> users.Add the users to the configuration. Assign Roles to users Select the following roles for CRM users selected for Expertflow CTI Configuration. These are mandatory USD roles for the CRM user to work as a Cisco CTI agent. UII Administrator UII Agent USD Administrator USD Agent Customer Service Representative Assign Additional Permissions to Roles USD Agent Core Records Give permissions to the selected role according to the following screenshots. Business Management Customization Custom Entities USD Administrator Give permissions to the selected role according to the following screenshots. Core Records Business Management Custom Entities UII Agent Core Records Business Management Customization Custom Entities Customer Service Representative Core Records Voice Only configurations Note: Skip this step if only chat solution is being deployed SSO Configurations For Cisco Contact Center Agents This Single Sign-on configuration would allow an MS Dynamics CRM user to automatically login to the Cisco contact center when logged into Microsoft Unified Service Desk. You must be an Organization Administrator to make these changes. Open a CRM User and select “Form Editor”. Select the Form Editor option and add three new fields. The three fields should have the following name for the SSO to work. new_agentid - Cisco contact center Agent ID of the user new_password - Cisco contact center Agent Password new_extension - Cisco contact center phone extensions Drag the newly created fields at any location on the form. - To allow SSO to work, specify values for the 3 fields added above for each CRM user (contact center agent).
http://docs.expertflow.com/mducc/ms-usd-cti-connector-install-guide/ms-dynamics-crm-configurations-for-cisco-contact-center-agents
2021-04-10T22:47:18
CC-MAIN-2021-17
1618038059348.9
[]
docs.expertflow.com
On the off chance ton times DVD rental locales likewise however a closer area will guarantee fast conveyance. Learn your gaming routine so you’ll know the number of games you can play in a month’s time without squandering each game for which you have paid and don’t have the opportunity to play. The best internet games rental for you will be the one which offers your playable number of games in a month’s range at least expensive rates. This can be handily done by next to each other examinations of the internet games rental administrations. Examinations are done dependent on standard as month to month charges, client remarks, appraisals based on include set, computer game determination, search abilities, games appearance time, accessible plans and so forth While contrasting plans offered by the web based game rentals one need intentional in the quantity of plans offered by each in light of the fact that it shows the adaptability that can be appreciated. Best web based games rental must offer: – show accessible computer game classes, – have computer games titles, an alternative to buy computer games you like, – a concise game synopsis for the new client, – games audits by the clients, parental control alternative, – no late expenses, no due dates and free two way transportation, – on the off chance that you need the best one site to lease computer games so you can attempt our audit of GameMine 먹튀검증 For least speculation and greatest fun in making the most of your internet prominence and rank games as indicated by age and development so legitimate parental control could be applied. Presently with this information close by you are totally prepared to head out in the outlandishly fun universe of web based gaming. Appreciate!
http://www.ogi-docs.com/ways-to-find-the-best-deals-in-online-game-rental/
2021-04-10T21:26:29
CC-MAIN-2021-17
1618038059348.9
[]
www.ogi-docs.com
Create storage pools
inSync Private Cloud Editions: Elite, Enterprise.
https://docs.druva.com/010_002_inSync_On-premise/010_inSync_On-Premise_5.5/020_Install_and_Upgrade/050_Create_additional_storage/040_Create_storage_pools
2021-04-10T23:19:35
CC-MAIN-2021-17
1618038059348.9
[array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) ]
docs.druva.com
Sections in this article This is a web version of the community-sourced manual for TensorCharts (TC). You can find the Google Docs version here, where you can comment, suggest or edit. Share your experience, notes, links or books which could help others get a better understanding of TC and trading itself. (Proofreading is also welcome.) Thank you to all who contribute, even slightly. David | TensorCharts.com Order flow If you ask an economist what price action indicates, his/her answer would immediately reference the equilibrium point between Supply and Demand, and that's what the Order Flow can tell you. So there you have it, the whole story, no speculation. Once you are accustomed to what it means when price movements occur (order flow), you will be able to recognize and articulate price action from an analytical perspective. Acquiring this fundamental will afford you the wisdom to exploit any trading market's needs, wants or desires. In short (pun intended), investors and traders alike find value in interpreting price action and quickly recognize the Order Flow as an indicator, with the many vantage points it offers in trading a market. Explore TensorCharts.com and find your vantage point. Learn them all, and enjoy watching your revenue and confidence grow.
https://docs.tensorcharts.com/docs/home/
2021-04-10T21:43:18
CC-MAIN-2021-17
1618038059348.9
[]
docs.tensorcharts.com
In a nutshell, TileDB Cloud allows you to share your TileDB arrays on the cloud with other users and perform serverless computations on them in a cost-effective manner (see Pricing) and with zero deployment hassle. Sign up now, earn $10 in credit and enjoy the following features. You can register with TileDB Cloud any of your existing TileDB arrays stored on AWS S3 and share it with other users defining access policies. You retain full ownership of the data and nothing gets moved around. When a user slices or writes to a TileDB array, TileDB Cloud enforces the access policies and performs the query, sending back any results to the TileDB Cloud client. There is no need to manage IAM roles, or Apache Sentry/Range setups anymore. TileDB Cloud handles everything transparently, while you write code in the same manner as in TileDB Developer. TileDB Cloud comes with two serverless capabilities (while new ones will be added soon): SQL: Experience the power of the TileDB-MariaDB integration and perform rapid SQL queries on your S3-stored TileDB arrays. Python UDFs: Perform any custom Python function serverlessly Array UDFs: Perform any custom Python function on a TileDB array slice, which can range from reductions (e.g., aggregate queries) to any sophisticated computation. All access to arrays (e.g., slicing, serverless computations, etc) is logged and can be viewed for audit purposes. TileDB Cloud allows you to keep track of how your shared arrays are being used and gain valuable insights. Create organizations and invite members, in order to share team-wide arrays and consolidate billing. Mark any of your arrays as public and let it be discovered and used by other users. You do not bear any extra charge for a public array, but instead only the user that access the array gets charged for usage. The transition from TileDB Developer to TileDB Cloud is a couple of configuration parameters away. This allows you to test your code locally, and transition to the cloud or a shared array by changing 1-2 lines of code. Currently TileDB Cloud runs on AWS, but in the future it will be deployed on other cloud providers. Moreover, the TileDB Cloud instances are deployed only in us-east-1, but we will soon make it work on any region. The user arrays can be stored in any S3 region. TileDB Cloud consists of two components: (i) a global state that handles the customer accounts, all encrypted AWS keys, billing, and task queues, and (ii) an elastic cloud of stateless workers deployed with Kubernetes. The worker cloud expands transparently from the users and everything is performed in a serverless manner. TileDB cloud may spin up any of the following EC2 instances, and the user currently has no control to choose a specific machine: m5.4xlarge, c5.4xlarge and r5.4xlarge. There are three types of user tasks: Access (read/write): Read any array slice or write to a TileDB array. SQL: Perform a SQL query on one or more TileDB arrays. Python UDF: Run any Python function on a TileDB array slice. Every query is dispatched from the TileDB client and is handled by a different worker pod, i.e., TileDB Cloud balances the load. An access query is handled by a REST server pod which securely enforces the array access policies and manages all encrypted AWS keys. A SQL or Python UDF query is placed on a worker pod and any involved slicing always goes through a separate REST server pod for security purposes (e.g., so that the user is not able to maliciously retrieve any keys by dumping the machine memory contents). 
Each query is given all the CPUs of the worker machine and 2GB of RAM. Do you wish to run TileDB Cloud under your full control on premises or on the cloud? See TileDB Enterprise.
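To make the "couple of configuration parameters" point above concrete, here is a minimal Python sketch of pointing an existing TileDB script at TileDB Cloud. The namespace, array name and token are placeholders; rest.token is the standard TileDB configuration parameter for passing a TileDB Cloud API token, and tiledb:// URIs address arrays registered with the service.

import tiledb

# same code path as TileDB Developer, plus a REST token in the config
cfg = tiledb.Config({"rest.token": "<TILEDB_CLOUD_API_TOKEN>"})
ctx = tiledb.Ctx(cfg)

# registered arrays are addressed as tiledb://<namespace>/<array_name>
with tiledb.open("tiledb://my_namespace/my_array", ctx=ctx) as A:
    data = A[0:10, 0:10]   # slicing goes through TileDB Cloud, which enforces the access policies
    print(data)            # a dict of attribute buffers for the requested slice

Testing the same script locally only requires swapping the tiledb:// URI for a local or S3 array URI, which is the "1-2 lines of code" transition described above.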
https://docs.tiledb.com/cloud/
2021-04-10T22:57:42
CC-MAIN-2021-17
1618038059348.9
[]
docs.tiledb.com
Performing Trial Scans
https://docs.toonboom.com/help/harmony-20/scan/scan-module/how-to-scan/perform-trial-scan.html
2021-04-10T22:24:00
CC-MAIN-2021-17
1618038059348.9
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Scan/Steps/HAR_trialscan_002.png', None], dtype=object) array(['../../Resources/Images/HAR/Scan/bw_options.png', None], dtype=object) ]
docs.toonboom.com
Converting Strokes When drawing on vector layers, you may want to change brush strokes to pencil lines to convert contour strokes into centre line pencil strokes. NOTE: Any line thickness information is lost upon conversion from brush to pencil. At times, you may want to change pencil lines to brush strokes. This converts a centre line stroke to a contour line stroke. Or you can convert strokes to pencil lines. To convert pencil lines to brush strokes: - Select the strokes you want to convert. - Right-click and select Convert > Pencil Lines to Brush Strokes. To convert brush strokes to pencil lines: - Select the strokes you want to convert. - Right-click and select Convert > Brush Strokes to Pencil Lines.
https://docs.toonboom.com/help/storyboard-pro-20/storyboard/drawing/convert-stroke.html
2021-04-10T21:49:25
CC-MAIN-2021-17
1618038059348.9
[array(['../../Resources/Images/SBP/an_brushes_to_pencillines.png', None], dtype=object) array(['../../Resources/Images/SBP/an_pencilline_to_brush.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
3. Installation instructions¶ 3.1. Summary¶ The goals of this page are to provide more detail than Quickstart and to treat special cases. If Quickstart worked for you, then you can safely skip this page. The details vary depending upon the hardware that you want to share. These instructions are work in progress, and contributions and feedback are welcome. Please open a ticket at Installation instructions are provided for modern GNU/Linux distributions, such as Ubuntu. We are working to support other kinds of hosts, including Windows, macOS, and FreeBSD. 3.2. Main aspects¶ The main aspects to an operational hardshare installation: - API token for a rerobots user account, - an SSH key pair, hardshareclient, - a container provider (also known as cprovider), - rules around instance initialization, termination, and filtering. To begin, initialize a new configuration: hardshare config -c 3.2.1. API tokens¶ Instructions about managing API tokens are in the rerobots Web Guide. The token that you create at and download is saved to your local hardshare configuration. As such, the default expiration time might be too small for your application. Download the token, and add it: hardshare config --add-key path/to/your/jwt.txt 3.2.2. SSH keys¶ An SSH key is required to create SSH tunnels through which remote users connect to containers that you host. This section describes how to manually create keys and some security considerations. Because a key pair is created as part of a new configuration ( hardshare config -c) automatically, this section can be skipped unless something breaks. There might already be an SSH key at ~/.ssh/id_rsa. If not, or if you want to create a new pair for this purpose, then: ssh-keygen to start an interactive process to create a new pair. The default options are sufficient here; the prompt “default” is selected by simply pushing ENTER without typing text. For convenience, we recommend that you do not create a password for the key. If you insist, then managing such a key is discussed below. Additional instructions about creating and working with SSH keys, for example from DigitalOcean or GitHub. The SSH key is used by the hardshare client in a way that does not motivate adding password protection: to create reverse tunnels from rerobots-managed servers into containers that you host. Only the public key is copied to the rerobots server-side. Furthermore, API tokens provide for authentication and authorization of the hardshare client with respect to your rerobots account. In summary, this SSH key has a technical role and provides for encryption, but exposure risk of the secret key small. If the SSH key has a password, then there must be some way for the hardshare client to use the key without having to know the password. For this, you should configure ssh-agent, usage of which is presented in the OpenBSD manual. If you are new to ssh-agent, we recommend reading about basic ideas of how it works at Finally, add the SSH secret key path: hardshare config --add-ssh-path path/to/your/ssh_key 3.2.3. Containers¶ Hardshare shares hardware among remote users through containers. The term container in the context of hardshare includes Linux containers. Supporting software that facilitates containers in hardshare are known cproviders. For new users, Docker is a good first cprovider to try and is the default in a newly installed hardshare client configuration. 
Finally, the primary client is implemented in Python and available via PyPI: pip install hardshare or pipenv install hardshare if Pipenv is installed. If it succeeded, then you should be able to get the version from the command-line interface (CLI): hardshare version 3.3. Prepare a cprovider¶ 3.3.1. Docker¶ In most cases, Docker images are available via Docker Hub. The correct image to use depends on your host architecture. On Linux, you can do uname -m to find this. For example, on Raspberry Pi this would be armv7l, so Docker image tags that begin with armv7l- can be used. To get the latest release of the base generic image: docker pull rerobots/hs-generic:armv7l-latest which pulls the image from Docker Hub. To declare this image in the local hardshare configuration: hardshare config --assign-image rerobots/hs-generic:armv7l-latest Many consumer “desktop” and “laptop” computers have the x86_64 architecture, so the corresponding image is instead rerobots/hs-generic:x86_64-latest. Images in this registry are defined by Dockerfiles under the directory robots/ of the sourcetree. To build the image from source files, use the command given in the comments of the Dockerfile. For example, docker build -t rerobots/hs-generic:latest -f Dockerfile . 3.3.2. Podman¶ For many operations, podman is a drop-in replacement for docker. To switch to it with an existing hardshare configuration (created as described above), hardshare config --cprovider podman Then, the section about Docker can be followed by replacing docker with podman. 3.5. Access rules¶ Each robot shared through rerobots is subject to access rules about who can do what with it. These rules are said to define capabilities. The decision sequence for a user username trying to perform some action is the following: - if there is a rule about actionexplicitly for username, then apply it; - else, if there is a rule about actionthat is for a class of users of which usernameis a member, then apply it; - else, if there is a rule about actionthat targets all users (indicated by *), then apply it; - else (no match), default to not permit. The most simple kind of rule is whether or not to allow someone to remotely access a device. When a new device is registered, a single rule is created that permits only you (i.e., your user account) to create instances. To get the list of access rules: hardshare rules -l which should only have 1 item under rules: a capability CAP_INSTANTIATE and your username. To allow other users: hardshare rules --permit-all 3.
https://hardshare.readthedocs.io/en/latest/install.html
2021-04-10T22:23:37
CC-MAIN-2021-17
1618038059348.9
[]
hardshare.readthedocs.io
This page is a WIP. There is likely to be incomplete or missing information while the page is being built. Below, and to the left under the Website Topics section, you will find sections that group together the various topics you are likely to encounter while interacting with the website.
http://docs.daz3d.com/doku.php/public/website/start
2021-04-10T21:09:47
CC-MAIN-2021-17
1618038059348.9
[]
docs.daz3d.com
Sports Betting Online – Before You Start Betting You may have gone over certain games wagering destinations on the web, there are a great many them. Since the introduction of the web it made it extremely advantageous for individuals who appreciate sports wagering to be amped up for the game. The game that is football wagering, ball wagering, baseball wagering, nascar wagering, wagering on golf competitions, soccer for all intents and purposes whatever isn’t chosen at this point you can put down a wager in an online sportsbook. The most recent American Idol carried large volume of bettors to the wagering entries entryway. While choosing a spot for sports wagering there are some key things we need to consider, which the new-to-the-game-individual probably won’t know about to just learn subsequent to dropping some truckloads of money on ลาลีกาผลบอลเมื่อคื that cheat, cutoff and cut players as they feel like. That is the reason Sports Betting Press is continually checking a wide scope of online sportsbooks and keeps the shoppers refreshed about the ones that are consistenly scoring at an agreeable level for sports wagering fans, transcending all different sportsbooks. A decent sportsbook will deal with your security at the most elevated level, have various approaches to store cash, have a responsive client care, offers a wide scope of functions to put down your wager on. You may believe that sportsbooks offering immense sign up rewards be a decent put down to wager, yet generally those are the ones that simply leave with your cash. There are some exeptions obviously. NFL wagering, International soccer wagering, ball wagering, baseball wagering and wagering on boxing functions are probably the most well known functions sports bettors place their bets on. A decent sportsbook additionally offers you decreased commission, which means at one put down you need to wager $110 to win $100 at somewhere else where the sportsbooks commission is diminished you may just need to wager $ 105 to win $100, that can have any kind of effect on the off chance that you are not kidding about games wagering. It’s essential to choose a sportsbook that is custom-made to your requirements for example on the off chance that you are a hot shot you presumably don’t have any desire to play at a sportsbook where the most noteworthy cutoff is $ 500 and the other way around the recreational player might want a spot where sports wagering is obliged recreational players.…
http://www.ogi-docs.com/category/uncategorized/page/6/
2021-04-10T22:44:29
CC-MAIN-2021-17
1618038059348.9
[]
www.ogi-docs.com
Class 10
Total Docs : 9
- Uploaded on : 10 Sep 2019 (780 / 33)
- Uploaded on : 10 Sep 2019 (670 / 9)
- Uploaded on : 10 Sep 2019 (517 / 12)
- Uploaded on : 11 Sep 2019 (495 / 18)
- Uploaded on : 11 Sep 2019 (292 / 15)
- Uploaded on : 11 Sep 2019 (506 / 14)
- Uploaded on : 11 Sep 2019 (608 / 14)
- Uploaded on : 04 Mar 2021 (38 / -)
- Uploaded on : 04 Mar 2021 (26 / -)
https://docs.aglasem.com/org/nios/class-10/question-paper?subject=economics
2021-04-10T22:35:19
CC-MAIN-2021-17
1618038059348.9
[array(['https://cdn.aglasem.com/assets/docs/images/org-logo/nios.jpg', 'NIOS logo'], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['/images/aglasem-docs-pdf.png', None], dtype=object) array(['https://cdn.aglasem.com/assets/docs/images/org-logo/nios.jpg', 'NIOS image'], dtype=object) ]
docs.aglasem.com
Industrial is an open architecture platform, therefore a user can theoretically establish almost any type of industrial machine connection. "Machine plugins" are applications developed by Ardexa to read data from identified machines and (in some cases) control aspects of the machine. These applications can be installed via the Ardexa App or the API. As shown below, these plugins are grouped into categories reflective of their function. Some or all of these machine plugins are made available to your workgroup as required for the tasks identified in your workgroup. If a machine plugin is available to be installed, you will see the "Install" button, otherwise a "Not Available" button will appear (As per the example in the diagram above). For example, if your workgroup has a need to monitor PLCs, you may not necessarily be given access to the Solar Inverter plugins. If you need access to one or more machine plugins, and do not have access, please contact Ardexa support. Details on accessing the Machine Plugins functions via the Ardexa App can be found here. Ardexa has implemented a new Dynamic Mapping system, that is used in conjunction with the Machine Plugins. This means that data and metadata is updated automatically when there are changes notified by the Machine Plugin. Machine Plugins, along with Dynamic Mapping allows: Machine Plugins to be easily installed remotely via the Ardexa App or the API. Machine Plugins can be easily (re)configured remotely via the Ardexa App or the API. This means changes to the collection rates, changes in accessing the machines and or diagnostics or discovery can be easily changed. Enables the automatic detection of machine variables. Previously, new or changed machine configurations had to be statically mapped - meaning manual intervention and changes to a configuration file. This was very time consuming and unresponsive. The new method of dynamic mapping is MUCH better. Updates the metadata automatically when the edge device detects a change to the machine plugin or a network failure. eg. Dynamic Mapping compresses the configuration data for the machine plugin configuration (which can be very big). This means that if I have a station of say 500 machines, all of which are configured identically, then I do not need redundant field definitions for all 500 sources corresponding to each machine. Details on accessing the Machine Plugins functions via the Ardexa App or API can be found here.
https://docs.ardexa.com/knowledge/configure/plugins
2021-04-10T22:52:03
CC-MAIN-2021-17
1618038059348.9
[]
docs.ardexa.com
Infrastructure Security in AWS Device Farm As a managed service, AWS Device Farm is protected by the AWS global network security procedures that are described in the Amazon Web Services: Overview of Security Processes whitepaper. You use AWS published API calls to access Device Farm. Requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or, you can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests. Infrastructure Security for Physical Device Testing Devices are physically separated during physical device testing. Network isolation prevents cross-device communication over wireless networks. Public devices are shared, and Device Farm makes a best-effort attempt at keeping devices safe over time. Certain actions, such as attempts to acquire complete administrator rights on a device (a practice referred to as rooting or jailbreaking), cause public devices to become quarantined. They are removed from the public pool automatically and placed into manual review. Private devices are accessible only by AWS accounts explicitly authorized to do so. Device Farm physically isolates these devices from other devices and keeps them on a separate network. On privately managed devices, tests can be configured to use an Amazon VPC endpoint to secure connections in and out of your AWS account. Infrastructure Security for Desktop Browser Testing When you use the desktop browser testing feature, all test sessions are separated from one another. Selenium instances cannot cross-communicate without an intermediate third party, external to AWS. All traffic to Selenium WebDriver controllers must be made through the HTTPS endpoint generated with createTestGridUrl. The desktop browser testing feature does not support Amazon VPC endpoint configuration at this time. You are responsible for making sure that each Device Farm test instance has secure access to resources it tests.
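As a sketch of how the HTTPS-only requirement plays out in practice for desktop browser testing, the snippet below generates a short-lived test grid URL with boto3 and hands it to Selenium. The project ARN is a placeholder, and credentials are resolved by the usual AWS SDK chain (IAM access keys or temporary STS credentials).

import boto3
from selenium import webdriver

devicefarm = boto3.client("devicefarm", region_name="us-west-2")

# create_test_grid_url returns a pre-signed HTTPS endpoint, valid for a limited time
resp = devicefarm.create_test_grid_url(
    projectArn="arn:aws:devicefarm:us-west-2:111122223333:testgrid-project:EXAMPLE",  # placeholder
    expiresInSeconds=300,
)

# all WebDriver traffic goes through the HTTPS endpoint returned above
driver = webdriver.Remote(command_executor=resp["url"], options=webdriver.ChromeOptions())
driver.get("https://example.com")
driver.quit()

Because the URL expires, it should be generated immediately before each test session rather than stored.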
https://docs.aws.amazon.com/en_us/devicefarm/latest/developerguide/infrastructure-security.html
2021-04-10T23:06:47
CC-MAIN-2021-17
1618038059348.9
[]
docs.aws.amazon.com
Administering As an administrator, you can manage configuration items and archive records in Remedy Service Desk. The following primary configuration tasks are documented in the Configuring after installation section: Related topics Support group configuration for assignments Data Management Incident Management permissions Problem Management permissions
https://docs.bmc.com/docs/servicedesk1908/administering-878331758.html
2021-04-10T22:32:58
CC-MAIN-2021-17
1618038059348.9
[]
docs.bmc.com
Recent Activities First Activity The main activity has been to communicate updates shared regarding our accepted program “Instructional Technologies tool share and LITA guide on privacy” MON, June 24, 2018, from 10:30-11:30 (Bobbi Newman (author for theme) Lilly Ramin (ITIG chair) and Bree Kirsch (LITA ITIG member Meets LITA’s strategic goals for Education and Professional Development, Member Engagement What will your group be working on for the next three months? For the next three months there will be some virtual communication about the program among the presenters. I will continue to answer the periodic questions about the Instructional Technologies IG as chair and other LITA duties as they come up. Submitted by Lilly Ramin on 01/14/2019
https://docs.lita.org/2019/01/instructional-technologies-interest-group-december-2018-report/
2021-04-10T22:14:06
CC-MAIN-2021-17
1618038059348.9
[]
docs.lita.org
Hi everyone, I want to install Microsoft Azure AD Connect to use the NPS extension on Windows Server 2016. At the "Configure" step, the program dynamically created an Azure account and requires a password. I don't know this account's password, so I changed its password. After a successful login, an error was displayed. Is there a solution to this problem? Thanks for your time.
https://docs.microsoft.com/en-us/answers/questions/289718/configure-microsoft-azure-active-directory-connect.html
2021-04-10T23:24:59
CC-MAIN-2021-17
1618038059348.9
[array(['/answers/storage/attachments/72150-1.png', '72150-1.png'], dtype=object) array(['/answers/storage/attachments/72292-2.png', '72292-2.png'], dtype=object) ]
docs.microsoft.com
Description The EmailWorkflowOperationHandler queries the SMTP Service to send an email with the provided parameters. It is useful to send email notifications that some operation(s) have been completed or that some error(s) occurred in a workflow. The mail body consists of a single line of the form: Parameter Table Some other email parameters can be customized in the SMTP Service configuration Operation Example <operation id="send-email" fail- <configurations> <configuration key="to">root@localhost</configuration> <configuration key="subject">Failure processing a mediapackage</configuration> </configurations> </operation>
https://docs.opencast.org/r/2.1.x/admin/workflowoperationhandlers/email-woh/
2021-04-10T21:56:37
CC-MAIN-2021-17
1618038059348.9
[]
docs.opencast.org
IngestDownloadWorkflowOperationHandler Description With the IngestDownloadWorkflowOperationHandler it's possible to initially download external URIs from mediapackage elements and store them to the working file repository. The external element URIs are then rewritten to the stored working file repository URI. In case of external element URIs pointing to a different Opencast working file repository, it's also possible to delete them after downloading by activating the "delete-external" option. This operation was originally implemented to get rid of remaining files on ingest working file repositories. Parameter Table Operation Example <operation id="ingest-download" fail- <configurations> <configuration key="delete-external">true</configuration> </configurations> </operation>
https://docs.opencast.org/r/2.1.x/admin/workflowoperationhandlers/ingestdownload-woh/
2021-04-10T22:40:58
CC-MAIN-2021-17
1618038059348.9
[]
docs.opencast.org
A validator instance is permitted to begin participating in the network once 32 ETH is locked up in a validator deposit contract. Validators are tasked with correctly proposing or attesting to blocks on the beacon chain, and receive either rewards or penalties to the initial deposit based upon their overall performance. If validators act against the protocol, their locked up deposit will be cut in a process known as 'slashing'. Validators that are intermittently offline or do not have reliable uptime will gradually lose their deposit, eventually leaking enough to be automatically removed from the network entirely. More on this topic can be found in the Ethereum 2.0 economics outline. Validators are quite lightweight pieces of software and perform only a small number of tasks, though it is critical that they are performed properly. They run in a single function that effectively summarizes every step of their lifecycle succinctly. In order of operations, the client: As mentioned, every validator instance represents 32 ETH being staked in the network. In Prysm, this is currently the default; however, the Prysm validator also supports running multiple keypairs that correspond to multiple validators in a single runtime, simplifying the process of deploying several validator instances for those whom want to stake more funds to help secure the network. To run multiple keypairs, they must be encrypted with the same password and kept in the same directory. A block proposal must include several items to meet the minimum requirements for verification by the protocol. In chronological order, these steps are: Attesting to a block is a similar process to proposing, albeit with a few slightly different steps:
https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/
2021-04-10T22:51:14
CC-MAIN-2021-17
1618038059348.9
[]
docs.prylabs.network
I followed the link below to create a private endpoint for my PostgreSQL service. At the "resource" setup step, after I choose my Postgres service, it says "resource is not supported" (details can be seen in the attached screenshot). Note: I use a free trial subscription for testing, so it is not exactly like the guideline, which has a separate step to create a VM and virtual network. I have already created the VM, and the virtual network has been created as well by following
https://docs.microsoft.com/en-us/answers/questions/35531/private-endpoint-dont-support-postgresql-database.html
2021-04-10T23:49:53
CC-MAIN-2021-17
1618038059348.9
[array(['/answers/storage/attachments/9840-privatelink.png', '9840-privatelink.png'], dtype=object) ]
docs.microsoft.com
1.1 Glossary This document uses the following terms: active replica: A name given to a server that hosts content and is expected to serve that content to clients.. attachments table: A Table object whose rows represent the Attachment objects that are attached to a Message object. bookmark: A data structure that the server uses to point to a position in the Table object. There are three pre-defined bookmarks (beginning, end, and current). A custom bookmark is a server-specific data structure that can be stored by the client for easily navigating a Table object.. contents table: A Table object whose rows represent the Message objects that are contained in a Folder object. Deferred Action Message (DAM): A hidden message indicating to a client that it needs to execute one or more rules on another user-visible message in the store. distinguished name (DN): A name that uniquely identifies an object by using the relative distinguished name (RDN) for the object, and the names of container objects and domains that contain the object. The distinguished name (DN) identifies the object and its location in a tree. Embedded Message object: A Message object that is stored as an Attachment object within another Message object. FastTransfer download context: A Server object that represents a context for a FastTransfer download. FastTransfer upload context: A Server object that represents a context for a FastTransfer upload.. Gateway Address Routing Table (GWART): A list of values that specifies the address types that are supported by transport gateways. ghosted folder: A folder whose contents are located on another server. table: A Table object whose rows represent the Folder objects that are contained in another Folder object. Hypertext Transfer Protocol (HTTP): An application-level protocol for distributed, collaborative, hypermedia information systems (text, graphic images, sound, video, and other multimedia files) on the World Wide Web... named property: A property that is identified by both a GUID and either a string name or a 32-bit identifier. non-read receipt: A message that is generated when an email message is deleted at the expiration of a time limit or due to other client-specific criteria. permissions table: A Table object whose rows represent entries in a permissions list for a Folder object. property ID: A 16-bit numeric identifier of a specific attribute. A property ID does not include any property type information. property name: A string that, in combination with a property set, identifies a named property. property tag: A 32-bit value that contains a property type and a property ID. The low-order 16 bits represent the property type. The high-order 16 bits represent the property ID. public folder: A Folder object that is stored in a location that is publicly available. Receive folder: A Folder object that is configured to be the destination for email messages that are delivered. recipient: (1) An entity that can receive email messages. (2) An entity that is in an address list, can receive email messages, and contains a set of attributes. Each attribute has a set of associated values.. restriction: A filter used to map some domain into a subset of itself, by passing only those items from the domain that match the filter. Restrictions can be used to filter existing Table objects or to define new ones, such as search folder or rule criteria. ROP buffer: A structure containing an array of bytes that encode a remote operation (ROP). The first byte in the buffer identifies the ROP. 
This byte is followed by ROP-specific fields. Multiple ROP buffers can be packed into a single remote procedure call (RPC) request or response.. rule: A condition or action, or a set of conditions or actions, that performs tasks automatically based on events and values. rules table: A Table object whose rows represent the rules that are contained in a Folder object.. server object: A class of object in the configuration naming context (config NC). A server object can have an nTDSDSA object as a child. Server object handle: A 32-bit value that identifies a Server object. Server object handle table: An array of 32-bit handles that are used to identify input and output Server objects for ROP requests and ROP responses. server replica: A copy of a user's mailbox that exists on a server. special folder: One of a default set of Folder objects that can be used by an implementation to store and retrieve user data objects. Stream object: A Server object that is used to read and write large string and binary properties..
https://docs.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxcrops/8e94d3b8-4946-4d1f-b579-e6e17612ccf3
2021-04-10T23:32:19
CC-MAIN-2021-17
1618038059348.9
[]
docs.microsoft.com
This element contains various settings to setup mirrors for openstack ci gate testing in a generic fashion. It is intended to be used as a dependency of testing elements that run in the gate. It should do nothing outside that environment. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/diskimage-builder/latest/elements/openstack-ci-mirrors/README.html
2021-04-10T21:56:18
CC-MAIN-2021-17
1618038059348.9
[]
docs.openstack.org
Almost all failed installations or problems with backups fall into two categories: To avoid these problems, read the Amanda Enterprise 3.6 documentation:
- Amanda Enterprise 3.6 Release Notes
- Zmanda Management Console Users Manual
- Zmanda Application Agents
- Vaulting - Copies of backup media
- Amanda Enterprise Man Pages
Zmanda is a trademark of BETSOL
https://docs.zmanda.com/Project:Amanda_Enterprise_3.6
2021-04-10T22:04:26
CC-MAIN-2021-17
1618038059348.9
[]
docs.zmanda.com
You own all the data. We do not provide access to your data to other parties unless you approve it. We do not punch holes through firewalls. We use individual digital certificates to authenticate devices and encrypt communications. We implement a real-time connection, both ways (to and from the device). We move any data types, including videos, photographs. We allow any protocol to be securely tunnelled to the device (similar to a VPN). We log commands and instructions. You can use our cloud, or access the device and its data via an API. We provide software "plugins" that communicate with most modern and old machine protocols.
https://docs.ardexa.com/knowledge/about/what-makes-us-different
2021-04-10T21:56:10
CC-MAIN-2021-17
1618038059348.9
[]
docs.ardexa.com
This is for advanced users. DISM installing is entirely unnecessary for the average computer user. If you need or want more control over how Windows is installed, you can do so with DISM. There are some use cases for this. Examples include: You have multiple disks in a computer where there is more than one EFI partition present. It appears the Windows installer will sometimes get confused during the install process. It will either write to an EFI partition not on the destination disk or flat out refuse to install. You'll of course need a Windows Install USB to boot from. Boot from it and press SHIFT + F10 to open a Command Prompt, then enter the command line steps below in code format. diskpart list volume - Note the mount letter assigned to the USB device. You'll need this later to replace example letter X. list disk - We need to show all disks and their numbers. select disk # - Replace # with the target disk you want to use. detail disk - Ensure you have the correct disk. You can verify by checking the name, size and type of disk it is. clean - This wipes the entire disk. convert gpt - This is required for proper installation on modern hardware. create part EFI size=200 - This is EFI boot. format fs=fat32 label="EFI" - Formats EFI. assign letter=Y - Assign EFI to Y. create part MSR size=16 - This is for system use. DO NOT FORMAT! create part pri size=1024 - This is the Recovery Tools (WinRE). format fs=ntfs quick label="Recovery" - Formats Recovery. assign letter=R - Assign Recovery to R. create part pri - Creates the primary partition for Windows. format fs=ntfs quick label="Windows" - Format Windows. assign letter=W - Assign partition to W. exit For simplicity, you can create any additional partitions you may need after Windows is installed with Disk Manager. Replace X in the example with the letter assigned to your USB device. You need to find the index you want. This determines if you install Home, Pro, etc. and their N or KN variants: > dism /Get-WimInfo /WimFile:X:\Sources\install[.wim|.esd] Look over the results displayed to find the number pertaining to the edition you want. Here is an example of what Media Creation Tool gives you.
Index : 1
Name : Windows 10 Home
Description : Windows 10 Home
Size : 14,513,453,277 bytes
Index : 2
Name : Windows 10 Home N
Description : Windows 10 Home N
Size : 13,698,165,844 bytes
Index : 3
Name : Windows 10 Home Single Language
Description : Windows 10 Home Single Language
Size : 14,495,067,516 bytes
Index : 4
Name : Windows 10 Education
Description : Windows 10 Education
Size : 14,780,689,298 bytes
Index : 5
Name : Windows 10 Education N
Description : Windows 10 Education N
Size : 13,967,235,459 bytes
Index : 6
Name : Windows 10 Pro
Description : Windows 10 Pro
Size : 14,782,181,615 bytes
Index : 7
Name : Windows 10 Pro N
Description : Windows 10 Pro N
Size : 13,968,715,159 bytes
> dism /Apply-Image /ImageFile:X:\Sources\install[.wim|.esd] /index:[1|2|3|4|5|6|7] /ApplyDir:W:\ A complete example to install Home would be: > dism /Apply-Image /ImageFile:H:\Sources\install.esd /index:1 /ApplyDir:W:\ After the image is applied, you need to create EFI boot files: > bcdboot W:\Windows /s Y: /f UEFI Create the WinRE folder structure: > md R:\Recovery\WindowsRE Copy the recovery tools: > xcopy /h W:\Windows\System32\Recovery\Winre.wim R:\Recovery\WindowsRE\ Link recovery tools: > W:\Windows\System32\Reagentc /Setreimage /Path R:\Recovery\WindowsRE /Target W:\Windows Reboot by exiting the installer. System will reboot into Windows installation and complete the process.
If you want to change the user folder path, do not complete the install. Read here. Once setup is complete, you can navigate to C:\Windows\System32\Recovery and delete the file Winre.wim. You already have a copy on the recovery partition.
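If you repeat this layout often, the interactive diskpart session above can be captured in a script file and replayed non-interactively. This is a minimal sketch; the file name is arbitrary and the disk number must be adjusted to your target disk before running it.

rem layout.txt - same commands as the interactive session, one per line
select disk 0
clean
convert gpt
create part EFI size=200
format fs=fat32 label="EFI"
assign letter=Y
rem ...remaining create/format/assign commands from the steps above...

Run it from the same SHIFT + F10 command prompt with:

> diskpart /s layout.txt

diskpart stops at the first command that fails (unless a command uses noerr), so double-check the disk number before running the script.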
https://docs.icedterminal.me/windows/manually-installing-windows
2021-04-10T22:45:55
CC-MAIN-2021-17
1618038059348.9
[]
docs.icedterminal.me
I have a query regarding the behavior of a Windows Store app when it is getting installed. We usually see a notification in the bottom right corner of the screen when we install a store app, something similar to the attached image. If we click this notification, the application gets launched. Unfortunately, this does not happen in the case of my app. Therefore I want to know whether we need to do something extra within the app to handle the notification, or whether it is supposed to work by default. Please share your experience. Mine is a WPF app converted to a Store app through the desktop bridge. It is also a Windows Settings app, i.e. it is launched from the Windows Settings window. It cannot have a start menu tile.
https://docs.microsoft.com/en-us/answers/questions/15111/query-regarding-windows-store-app-behavior-during.html
2021-04-10T23:26:32
CC-MAIN-2021-17
1618038059348.9
[array(['/answers/storage/attachments/4941-installnotification2.png', '4941-installnotification2.png'], dtype=object) ]
docs.microsoft.com
AWS S3 Archive Configuration This page documents the configuration for the AWS S3 components in the Opencast module asset-manager-storage-aws. This configuration is only required on the admin node, and only if you are using Amazon S3 as an archive repository. Amazon User Configuration Configuration of Amazon users is beyond the scope of this documentation; instead, we suggest referring to Amazon's documentation. You will, however, require an Access Key ID and Secret Access Key. The user to which this key belongs requires the AmazonS3FullAccess permission, which can be granted using these instructions. A free Amazon account will work for small scale testing, but be aware that S3 archiving at larger scales will incur storage and transfer costs. We suggest allowing the Archive service to create the S3 bucket for you. It will create the bucket per its configuration, with private-only access to the files, and no versioning. Opencast Service Configuration The Opencast AWS S3 Archive service has four configuration keys which can be found in the org.opencastproject.assetmanager.aws.s3.AwsS3ArchiveElementStore.cfg configuration file. Using S3 Archiving There are two major methods to access S3 archiving features: manually, and via a workflow. Amazon S3 archiving is not part of the default workflows and manual S3 offload is disabled by default. To enable manual S3 offload you must edit the offload.xml workflow configuration file and change var s3Enabled = false; to var s3Enabled = true;. To manually offload a mediapackage follow the directions in the user documentation. To automatically offload a mediapackage to S3 you must add the move-storage workflow operation to your workflow. The operation documentation can be found here. Migrating to S3 Archiving with Pre-Existing Data Archiving to S3 is a non-destructive operation in that it is safe to move archive files back and forth between local storage and S3. To offload your local archive, select the workflow(s) and follow the manual offload steps described in the user documentation.
https://docs.opencast.org/r/7.x/admin/modules/awss3archive/
2021-04-10T22:33:50
CC-MAIN-2021-17
1618038059348.9
[]
docs.opencast.org
This doc explains technical details about using the Partnership API. For an introduction and requirements, first read Intro to Partnership API. Requirements For requirements, see Intro to Partnership API. Find your Partnership API key The Partnership API requires that you authenticate with the REST API key that is specific to your partnership owner account (you cannot use the other REST API keys). When using your Partnership API key with calls to REST API (v2) endpoints that require the use of an Admin user's API key, see Admin user's API Key and partnerships. Find your Partner ID The Partnership API also requires that you authenticate by providing a Partner ID specific to your partnership. This is unique from the account ID for your partnership owner account. To obtain your Partner ID, go to your partner admin console and retrieve the partner ID number that is listed in your URL: You must include the Partner ID as part of the base URL for the Partner API. Authenticate the API call To authenticate to the Partner API when making an API call: - Add a request header labeled x-api-key and set its value to your Partner API key. - Include your Partner ID at the specified point in the request URI. Notes for partners who manage New Relic accounts For partners who manage New Relic accounts for their customers, the initial API call for all account-level interactions is to "create account." This call returns an XML record of the newly created account. Part of this record is the account_id. All of the other calls in the Partnership API require the account_id as a parameter. Provision will need to be made by the partner to parse the returned XML extract, store the account_id, and associate it with the users' partner account record. Errors New Relic uses conventional HTTP response codes to indicate success or failure of an API request. In general, codes in the 2xx range indicate success and codes in the 4xx range indicate an error that resulted from the provided information (for example, a required parameter was missing).
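A minimal sketch of an authenticated call, in Python with the requests library. The endpoint path shown here (listing accounts under a partnership) is an assumption for illustration; substitute the endpoint you actually need from the Partnership API reference, and note that PARTNER_ID is the partnership ID taken from the admin console URL, not an account ID.

import requests

PARTNER_API_KEY = "YOUR_PARTNERSHIP_REST_API_KEY"  # the partnership owner account's REST API key
PARTNER_ID = 1234                                  # taken from the partner admin console URL

# x-api-key carries the Partnership API key; the Partner ID is part of the URL path (path assumed)
url = f"https://rpm.newrelic.com/api/v2/partners/{PARTNER_ID}/accounts"
response = requests.get(url, headers={"x-api-key": PARTNER_API_KEY})
response.raise_for_status()
print(response.text)

The same pattern applies to the "create account" call described above: POST to the accounts endpoint, parse the returned record, and store the account_id it contains for use in all subsequent account-level calls.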
https://docs.newrelic.com/docs/new-relic-partnerships/partnerships/partner-api/partner-api-reference/
2021-04-10T22:49:45
CC-MAIN-2021-17
1618038059348.9
[]
docs.newrelic.com
Moving Nodes Between Groups It is possible to move nodes from one group to another group, from the top level to a group or from a group to the top level. To do this, you must cut the nodes you want to move and paste them in the intended group. In the Node view, select the node or nodes you want to move. TIPS - You can draw a rectangle around a cluster of nodes to select all of them simultaneously. - You can hold the Ctrl key and click on each node you want to select to add them to the selection. - Do one of the following: - Right-click on the selection and select Cut selection. - In the top menu, select Edit > Cut selection. - Press Ctrl + X (Windows/Linux) or ⌘ + X (macOS). - Navigate to the group in which you want to move the selected nodes. - Do one of the following: - Right-click in the Node view and select Paste. - In the top menu, select Edit > Paste. - Press Ctrl + V (Windows/Linux) or ⌘ + V (macOS). The nodes are pasted in the current group. You will have to manually connect them to this group's node system.
https://docs.toonboom.com/help/harmony-20/premium/nodes/move-node-between-groups.html
2021-04-10T21:50:52
CC-MAIN-2021-17
1618038059348.9
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
HA deployment Solution Prerequisites Following are the solution setup prerequisites. For HA deployment, we will be using two VMs; each machine in the cluster should have the following hardware specifications. The two VMs will be referred to as VM1 and VM2 in this guide. Software requirements Installation Steps To start the firewall on CentOS (if it isn't started already), execute the following commands: # systemctl enable firewalld # systemctl start firewalld To allow the ports on the CentOS firewall, you can execute the following commands. You'll have to execute these commands on all the cluster machines. # firewall-cmd --add-port=2376/tcp --permanent # firewall-cmd --add-port=2377/tcp --permanent # firewall-cmd --add-port=7946/tcp --permanent # firewall-cmd --add-port=7946/udp --permanent # firewall-cmd --add-port=4789/udp --permanent # firewall-cmd --add-port=80/tcp --permanent # firewall-cmd --add-port=443/tcp --permanent # firewall-cmd --reload On VM1 and VM2, execute the following additional commands: # firewall-cmd --add-port=5060/tcp --permanent # firewall-cmd --add-port=16386-32768/udp --permanent # firewall-cmd --add-port=9092/tcp --permanent # firewall-cmd --reload Configure Log Rotation Add the log-rotation settings to the /etc/docker/daemon.json file (create the file if it is not there already) and restart the Docker daemon using systemctl restart docker. Perform this step on all the machines in the cluster. Installation Steps - Download the deployment script deployment.sh and place it in the user home or any desired directory. This script will: - delete the recording-solution directory if it exists. - clone the required files for deployment To execute the script, give it execute permissions and execute it. $ chmod 755 deployment.sh $ ./deployment.sh Change to the newly created directory named recording-solution. This directory contains all the required files. - Run the SQL script in MySQL to create the database and tables (recording-solution/db_schema.sql). Update environment variables in the following files inside the /root/recording-solution/docker/environment_variables folder. Once the environment configuration is done, copy the recording-solution directory to VM2 in the /root directory using the following command. # scp -r /root/recording-solution root@<vm-ip>:/root/ Execute the following commands inside the /root/recording-solution directory. # chmod 755 install.sh # ./install.sh Run the following command to ensure that all the components are up and running. # docker ps This will show the status of the services, as shown in the image below. Now go to VM2, update the LOCAL_MACHINE_IP variable to the VM2 IP in the /root/recording-solution/docker/environment_variables/recorder-environment.env file and run the commands below inside /root/recording-solution to start the recorder and activemq services. The two activemq services on VM1 and VM2 will now act as master/slave to provide HA. The two recorder services on VM1 and VM2 will be configured in Cisco Call Manager (CUCM) to provide HA. # chmod 755 install.sh # ./install.sh - The directory "/root/recording-solution/recordings/wav" should also be mounted on a network shared file system on both VMs, or the two directories should be synchronized with each other. In this way, all services on the two VMs will have a shared directory for reading and writing recording files. Follow the next step if a network-shared and synchronized folder is not provided. - Recording folder synchronization, follow the steps below: Install the lsyncd utility on one machine by running the commands below.
root@host # yum -y install epel-release
root@host # yum -y install lsyncd

Generate SSH keys on the same machine. Run the command below to generate a key; accept the defaults by pressing Enter every time it prompts.

root@host # ssh-keygen -t rsa

Transfer the SSH key to the other machine by running the commands below; enter the other machine's root password when prompted.

ssh-copy-id root@other-machine-ip
vi ~/.ssh/config

Enter the text below in the config file, replacing the Hostname with the other machine's IP:

Host dest_host
Hostname 172.16.144.32
User root
IdentityFile ~/.ssh/id_rsa

Then add the following lsyncd configuration:

settings {
logfile = "/var/log/lsyncd/lsyncd.log",
statusFile = "/var/log/lsyncd/lsyncd-status.log",
statusInterval = 1
}
sync {
default.rsync,
source="/root/recording-solution/recordings",
target="192.168.1.125:/root/recording-solution/recordings",
delete = false,
rsync={ compress = true, acls = true, verbose = true, owner = true, group = true, perms = true, rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"}
}

- Follow the above steps for the other machine.

Repeat the following steps on both machines:
- Download the keepalived.sh script and place it in any directory. Give it execute permission and execute the script. This will create a keep-alived directory.
- Configure the keep.env file inside the keep-alived directory.
- Give the script execute permission and execute it.

Troubleshooting
http://docs.expertflow.com/vrs/latest/deployment-guide/ha-deployment
2021-04-10T21:32:14
CC-MAIN-2021-17
1618038059348.9
[]
docs.expertflow.com
The group met 2-3 times this quarter. Recent Activities First Activity LITA CAM continues to handle social media promotion requests submitted to the committee. We schedule the majority of LITA's Facebook and Twitter posts based on these submissions, using a rotating calendar system to ensure committee members share the responsibility fairly. Meets LITA’s strategic goals for Education and Professional Development, Member Engagement, Organizational Stability and Sustainability Second Activity LITA CAM continues to collect monthly social media analytics for LITA's Facebook and Twitter accounts. These statistics are available here: (Email Joel Tonyan at [email protected] if you cannot view the spreadsheet) Meets LITA’s strategic goals for Organizational Stability and Sustainability Third Activity LITA CAM continues to partner with the Membership Development Committee on monthly #LITAchat's on Twitter. Lately, these chats have focused on tech projects by LITA members. CAM's role is to run the live chat sessions using the official LITA Twitter handle. For the first time ever, CAM, MDC, and the Blog committee all collaborated on the March LITAchat. MDC recruited the interviewees, the Blog Committee published profiles of the interviewees prior to the event, and CAM promoted the chat and ran the actual chat session. Meets LITA’s strategic goals for Education and Professional Development, Member Engagement, Organizational Stability and Sustainability What will your group be working on for the next three months? CAM will continue to manage LITA's social media presence, partner with MDC and the Blog on upcoming LITAchats over the new three months. The current chair will work closely with the incoming chair to ensure continuity in CAM's operations. Submitted by Joel Tonyan on 05/29/2019
https://docs.lita.org/2019/05/communications-and-marketing-committee-may-2019-report/
2021-04-10T22:24:50
CC-MAIN-2021-17
1618038059348.9
[]
docs.lita.org
Microsoft SMB Protocol and CIFS Protocol Overview
https://docs.microsoft.com/en-US/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview
2021-04-10T23:18:40
CC-MAIN-2021-17
1618038059348.9
[]
docs.microsoft.com
Some of its default functionality, such as Splunk Web, can be disabled, if necessary, to reduce the size of its footprint. A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event. One key advantage of the heavy forwarder is that it can index data locally, as well as forward data to another Splunk instance. Forwarder comparison This table summarizes the similarities and differences among the three types of forwarders: For detailed information on specific capabilities, see the rest of this topic, as well as the other forwarding topics in the manual. Types of forwarder data Forwarders can transmit three types of data: - Raw - Unparsed - Parsed The type of data a forwarder can send depends on the type of forwarder it is, as well as how you configure it. Universal forwarders and light forwarders can send raw or unparsed data. Heavy forwarders can send raw or parsed data. With raw data, the forwarder sends the data unaltered to the indexer, which then parses it. With parsed data, a heavy forwarder breaks the data stream into individual events before forwarding. It can also examine the events. Because the data has been parsed, the forwarder can perform conditional routing based on event data, such as field values. The parsed and unparsed formats are both referred to as cooked data, to distinguish them from raw data. By default, forwarders send cooked data (universal forwarders send unparsed data and heavy forwarders send parsed data). To send raw data instead, set the sendCookedData=false attribute/value pair in outputs.conf. Forwarders and indexes Forwarders forward and route data on an index-by-index basis. By default, they forward all external data, as well as data for the _audit internal index. In some cases, they also forward data for the _internal internal index. You can change this behavior as necessary. For details, see Filter data by target index.
https://docs.splunk.com/Documentation/Splunk/8.0.7/Forwarding/Typesofforwarders
2021-04-10T22:19:29
CC-MAIN-2021-17
1618038059348.9
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
VMware Tanzu Greenplum for Kubernetes Version 2.3 Release Notes VMware Tanzu Greenplum for Kubernetes 2.3.3.0 Changes VMware Tanzu Greenplum 2.3.0 is a minor release that includes these changes: - Greenplum Text is now a supported feature. - Greenplum Command Center installation files are now installed by default with Greenplum for Kubernetes. To use Command Center you must execute the Command Center installation after deploying a new cluster. See Using Greenplum Command Center. gpbackupcan back up Greenplum data either to local backup files, or to Minio S3 using the gpbackupS3 plugin. See Using the S3 Storage Plugin with gpbackup and gprestore in the Greenplum Backup and Restore documentation for more information on configuring the plugin. Known Issues and Limitations for Kubernetes Deployments - Currently VMware Tanzu Greenplum for Kubernetes does not support automatic upgrades of Greenplum Text. If you installed an earlier version of Greenplum Text in VMware Tanzu Greenplum for Kubernetes and then upgrade to version 2.3, you must manually complete the Greenplum Text upgrade by executing gptext-installsql -c gpadmin && gptext-installsql gpadminafter the upgraded cluster is fully running. - The zkManagerutility, used for checking the ZooKeeper cluster state and cluster management with Greenplum Text deployments, is not available with VMware Tanzu Greenplum for Kubernetes. As a workaround, use kubectlto manage the ZooKeeper cluster. - If you deploy a cluster with Greenplum Text, the gptext-stateutility always throws the warning message: object of type 'NoneType' has no len(). This message can be safely ignored in most cases. However, note that the utility cannot detect if the ZooKeeper cluster is down. -.
https://greenplum-kubernetes.docs.pivotal.io/2-3/release-notes.html
2021-04-10T21:33:34
CC-MAIN-2021-17
1618038059348.9
[]
greenplum-kubernetes.docs.pivotal.io
Product Index Below is a list of files provided by the Kokoro Hair for Genesis 3 Female(s) and Male(s) product. This is a placeholder for the product Read Me - this product has not yet been publicly released. Keep an eye on the New Releases section of the DAZ 3D Store for the product to become available, then check back here for more information.
http://docs.daz3d.com/doku.php/public/read_me/index/39453/file_list
2021-04-10T22:26:35
CC-MAIN-2021-17
1618038059348.9
[]
docs.daz3d.com
gpss.json gpss.json Greenplum Streaming Server configuration file. Synopsis { "ListenAddress": { "Host": "gpss_host", "Port": gpss_portnum [, "Certificate": { "CertFile": "certfile_path", "KeyFile": "keyfile_path", "CAFile": "CAfile_path" }] }, "Gpfdist": { "Host": "gpfdist_host", "Port": gpfdist_portnum [, "ReuseTables": bool_value ][, "Certificate": { "CertFile": "certfile_path", "KeyFile": "keyfile_path", "CAFile": "CAfile_path" }] [, "BindAddress": bind_addr ] }[, "Shadow": { "Key": "passwd_shadow_key", }] } Description You specify runtime configuration properties for a gpss service instance in a JSON-formatted configuration file. Run properties for GPSS include gpss and gpfdist service hosts, addresses, port numbers, and optional encryption certificates and key files. You can also specify a shadow key that GPSS uses to encode and decode the Greenplum Database password.. - Certificates: - When you specify certificates, GPSS uses the SSL certificates to authenticate both the client connection and the connection.. This hostname or IP address must be reachable from each Greenplum Database segment host. The default value is the fully qualified distinguished name of the host on which GPSS is running. - Port: gpfdist_portnum - The gpfdist service port number. The default port number is 8080. -. - Certificates: - When you specify gpfdist certificates, GPSS uses the SSL certificates and the gpfdists protocol to transfer encrypted data between itself. - BindAddress: bind_addr - The address to which GPSS binds the gpfdist port. The default bind address is 0.0.0.0. The encode/decode key for the Greenplum Database password. - Shadow: passwd_shadow_key - The key that GPSS uses to encode and decode the Greenplum Database shadow password string. Keep this key secret. Notes When you provide the --config gpss.json option to the gpsscli shadow command, GPSS uses the Shadow:Key specified in the file, or a default key, to encode a Greenplum Database password that you specify and generate a shadowed password string. When you provide a variant of this file to the other gpsscli subcommands via the --config gpsscliconfig.json option, GPSS uses the information provided in the ListenAddress block of the file to identify the GPSS server instance to which to route the request, and/or to identify the client certificates when SSL is enabled between GPSS client and server. When you provide the --config gpfdistconfig.json option to the gpkafka load command, GPSS uses the information provided in the Gpfdist block of the file to specify the gpfdist protocol instance, and, when SSL is enabled on the data channel to Greenplum Database, to identify the GPSS SSL certificates. Examples Start a Greenplum Streaming Server instance with properties as defined in a configuration file named gpss4ic.json located in the current directory: $ gpss gpss4ic.json Example gpss4ic.json configuration file: { "ListenAddress": { "Host": "", "Port": 5019 }, "Gpfdist": { "Host": "", "Port": 8319, "ReuseTables": false }, "Shadow": { "Key": "a_very_secret_key" } } See Also gpss, gpsscli, gpkafka load
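Because the file is plain JSON, it can be generated or sanity-checked with a short script. The following sketch is not part of the reference page; it simply writes the gpss4ic.json example shown above (the port numbers and shadow key are the illustrative values from that example).

import json

# Mirrors the gpss4ic.json example above; values are illustrative only.
gpss_config = {
    "ListenAddress": {"Host": "", "Port": 5019},
    "Gpfdist": {"Host": "", "Port": 8319, "ReuseTables": False},
    "Shadow": {"Key": "a_very_secret_key"},
}

with open("gpss4ic.json", "w") as f:
    json.dump(gpss_config, f, indent=4)

# Quick sanity check: the required top-level blocks are present.
with open("gpss4ic.json") as f:
    loaded = json.load(f)
assert "ListenAddress" in loaded and "Gpfdist" in loaded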
https://gpdb.docs.pivotal.io/streaming-server/1-3-6/ref/gpss-json.html
2021-04-10T21:53:15
CC-MAIN-2021-17
1618038059348.9
[]
gpdb.docs.pivotal.io
Base¶ RTCBot is heavily based upon the concept of data producers, and data consumers. To that end, all classes that produce data, such as cameras, microphones, and incoming data streams are considered producers, and all classes that consume data, such as speakers, video displays or outgoing data streams are considered consumers. This section of the documentation is built to describe the backend base classes upon which all of the data streams are based, and to help you create your own producers and consumers with an API compatible with the rest of RTCBot. There are 3 main base classes types - BaseSubscriptionProducerand BaseSubscriptionConsumer - ThreadedSubscriptionProducerand ThreadedSubscriptionConsumer - MultiprocessSubscriptionProducer The three types allow setting up your own data acquisition and processing code loops without needing to worry about the asyncio loop (Threaded) or even the GIL (Multiprocess), but also come with the downside of increasing complexity and communication overhead. API¶ Note Unlike elsewhere in RTCBot’s documentation, inherited members are not shown here, so some functions available from a class might be hidden if they were defined in a parent. - class rtcbot.base.events. baseEventHandler(logger)[source]¶ Bases: object This class handles base events _setError(value)[source]¶ Sets the error state of the class to an error that was caught while processing data. After the error is set, the class is assumed to be in a closed state, meaning that any background processes either crashed or were shut down. Warning Only call this if you are subclassing baseEventHandler. _setReady(value=True)[source]¶ Sets the ready to a given value, and fires all subscriptions created with onReady(). Call this when your producer/consumer is fully initialized. Warning Only call this if you are subclassing baseEventHandler. -. onClose(subscription=None)[source]¶ This is mainly useful for connections - they can be closed remotely. This allows handling the close event. @myobj.onClose def closeCallback(): print("Closed!) Be aware that this is equivalent to explicitly awaiting the object: await myobj onError(subscription=None)[source)[source. - class rtcbot.base.events. threadedEventHandler(logger, loop=None)[source]¶ Bases: rtcbot.base.events.baseEventHandler A threadsafe version of baseEventHandler. _setError(err)[source]¶ Threadsafe version of baseEventHandler._setError(). _setReady(value)[source]¶ Threadsafe version of baseEventHandler._setReady(). - class rtcbot.base.base. BaseSubscriptionConsumer(directPutSubscriptionType=<class 'asyncio.queues.Queue'>, logger=None)[source]¶ Bases: rtcbot.base.events.baseEventHandler A base class upon which consumers of subscriptions can be built. The BaseSubscriptionConsumer class handles the logic of switching incoming subscriptions mid-stream and all the other annoying stuff. - async _get()[source]¶ Warning Only call this if you are subclassing BaseSubscriptionConsumer. This function is to be awaited by a subclass to get the next datapoint from the active subscription. It internally handles the subscription for you, and transparently manages the user switching a subscription during runtime: myobj.putSubscription(x) # await self._get() waits on next datapoint from x myobj.putSubscription(y) # _get transparently switched to waiting on y - Raises SubscriptionClosed – If close()was called, this error is raised, signalling your data processing function to clean up and exit. 
- Returns The next datapoint that was put or subscribed to from the currently active subscription.)[source stopSubscription()[source. - class rtcbot.base.base. BaseSubscriptionProducer(defaultSubscriptionClass=<class 'asyncio.queues.Queue'>, defaultAutosubscribe=False, logger=None)[source]¶ Bases: rtcbot.base.events.baseEventHandler This is a base class upon which all things that emit data in RTCBot are built. This class offers all the machinery necessary to keep track of subscriptions to the incoming data. The most important methods from a user’s perspective are the subscribe(), get()and close()functions, which manage subscriptions to the data, and finally close everything. From an subclass’s perspective, the most important pieces are the _put_nowait()method, and the _shouldCloseand _readyattributes. Once the subclass is ready, it should set _readyto True, and when receiving data, it should call _put_nowait()to insert it. Finally, it should either listen to _shouldCloseor override the close method to stop producing data. Example A sample basic class that builds on the BaseSubscriptionProvider: class MyProvider(BaseSubscriptionProvider): def __init__(self): super().__init__() # Add data in the background asyncio.ensure_future(self._dataProducer) async def _dataProducer(self): self._ready = True while not self._shouldClose: data = await get_data_here() self._put_nowait(data) self._ready = False def close(): super().close() stop_gathering_data() # you can now subscribe to the data s = MyProvider().subscribe() - Parameters defaultSubscriptionClass (optional) – The subscription type to return by default if subscribe()is called without arguments. By default, it uses asyncio.Queue: sp = SubscriptionProducer(defaultSubscriptionClass=asyncio.Queue) q = sp.subscribe() q is asyncio.Queue # True defaultAutosubscribe (bool,optional) – Calling get()creates a default subscription on first time it is called. Sometimes the data is very critical, and you want the default subscription to be created right away, so it never misses data. Be aware, though, if your defaultSubscriptionClass is asyncio.Queue, if get()is never called, such as when someone just uses subscribe(), it will just keep piling up queued data! To avoid this, it is False by default. logger (optional) – Your class logger - it gets a child of this logger for debug messages. If nothing is passed, creates a root logger for your class, and uses a child for that. ready (bool,optional) – Your producer probably doesn’t need setup time, so this is set to True automatically, which automatically sets _ready. If you need to do background tasks, set this to False. _close()[source]¶ This function allows closing from the handler itself. Don’t call close() directly when implementing producers or consumers. call _close instead. _put_nowait(element)[source]¶ Used by subclasses to add data to all subscriptions. This method internally calls all registered callbacks for you, so you only need to worry about the single function call. Warning Only call this if you are subclassing BaseSubscriptionProducer. _shouldClose¶ Whether or not close()was called, and the user wants the class to stop gathering data. Should only be accessed from a subclass. - async get()[source() subscribe(subscription=None)[source unsubscribe(subscription=None)[source(). - class rtcbot.base.base. NoClosedSubscription(awaitable)[source]¶ Bases: object NoClosedSubscription wraps a callback, and doesn’t pass forward SubscriptionClosed errors - it converts them to asyncio.CancelledError. 
This allows exiting the application in a clean way. - exception rtcbot.base.base. SubscriptionClosed[source]¶ Bases: Exception This error is returned internally by _get()in all subclasses of BaseSubscriptionConsumerwhen close()is called, and signals the consumer to shut down. For more detail, see BaseSubscriptionConsumer._get(). - class rtcbot.base.base. SubscriptionConsumer(directPutSubscriptionType=<class 'asyncio.queues.Queue'>, logger=None)[source]¶ Bases: rtcbot.base.base.BaseSubscriptionConsumer - class rtcbot.base.base. SubscriptionProducer(defaultSubscriptionClass=<class 'asyncio.queues.Queue'>, defaultAutosubscribe=False, logger=None)[source]¶ Bases: rtcbot.base.base.BaseSubscriptionProducer - class rtcbot.base.base. SubscriptionProducerConsumer(directPutSubscriptionType=<class 'asyncio.queues.Queue'>, defaultSubscriptionType=<class 'asyncio.queues.Queue'>, logger=None, defaultAutosubscribe=False)[source]¶ Bases: rtcbot.base.base.BaseSubscriptionConsumer, rtcbot.base.base.BaseSubscriptionProducer This base class represents an object which is both a producer and consumer. This is common with two-way connections. Here, you call _get() to consume the incoming data, and _put_nowait() to produce outgoing data. _close()[source]¶ This function allows closing from the handler itself. Don’t call close() directly when implementing producers or consumers. call _close instead. - class rtcbot.base.thread. ThreadedSubscriptionConsumer(directPutSubscriptionType=<class 'asyncio.queues.Queue'>, logger=None, loop=None, daemonThread=True)[source]¶ Bases: rtcbot.base.base.BaseSubscriptionConsumer, rtcbot.base.events.threadedEventHandler close()[source]¶ The object is meant to be used as a singleton, which is initialized at the start of your code, and is closed when exiting the program. Make sure to run close on exit, since sometimes Python has trouble exiting from multiple threads without having them closed explicitly. - class rtcbot.base.thread. ThreadedSubscriptionProducer(defaultSubscriptionType=<class 'asyncio.queues.Queue'>, logger=None, loop=None, daemonThread=True)[source]¶ Bases: rtcbot.base.base.BaseSubscriptionProducer, rtcbot.base.events.threadedEventHandler exiting the program. - class rtcbot.base.multiprocess. ProcessSubscriptionProducer(defaultSubscriptionType=<class 'asyncio.queues.Queue'>, logger=None, loop=None, daemonProcess=True, joinTimeout=1)[source]¶ Bases: rtcbot.base.base.BaseSubscriptionProducer shutting down.
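As a concrete illustration of the subscription model described above, here is a small, hedged sketch of a custom producer and a consumer loop. It assumes only what the documentation above states (subclassing the producer base class, calling _put_nowait from the subclass, and the default asyncio.Queue subscriptions); the import path mirrors the fully qualified names shown above and may differ between rtcbot versions.

import asyncio
from rtcbot.base.base import SubscriptionProducer

class CounterProducer(SubscriptionProducer):
    # Emits an incrementing counter once per second, following the
    # subclassing pattern from the BaseSubscriptionProducer docstring.
    def __init__(self):
        super().__init__()
        asyncio.ensure_future(self._producer())

    async def _producer(self):
        self._setReady(True)          # signal that setup is done
        i = 0
        while not self._shouldClose:  # honor close() requests
            self._put_nowait(i)       # fan out to all subscriptions
            i += 1
            await asyncio.sleep(1)

async def main():
    producer = CounterProducer()
    queue = producer.subscribe()      # default subscription is an asyncio.Queue
    for _ in range(3):
        print(await queue.get())
    producer.close()

asyncio.run(main())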
https://rtcbot.readthedocs.io/en/latest/base.html
2021-04-10T21:59:18
CC-MAIN-2021-17
1618038059348.9
[]
rtcbot.readthedocs.io
Community Petitions & Campaigns (general) Avaaz campaigns and community petitions - I have a campaign idea - what can I do about it? - I can't sign a petition or am having another problem with the website - How do I start a petition or solve a problem on a Community Petitions? - What does Detox the Algorithm mean? - I'm a journalist - how can I contact Avaaz? - Why isn't my name showing in the live activity feed?
https://avaaz-docs-en.helpscoutdocs.com/category/85-campaigns
2021-04-11T01:21:19
CC-MAIN-2021-17
1618038060603.10
[]
avaaz-docs-en.helpscoutdocs.com
. ** Yes! We are still taking grant applications. You can apply any time here. The Oasis Network is a Layer 1 blockchain protocol using a BFT, proof-of-stake consensus system. The network’s innovative ParaTime architecture enables us to scale without using sidechains. For more information please refer to our platform whitepaper.. Unlike a Parachain, a ParaTime does not need to do consensus itself, making them simpler to develop and more integrated into the network as a whole. ParaTimes take care of compute and discrepancy detection is used to ensure correctness and integrity of execution, making ParaTimes more efficient than Parachains and other chain designs that rely on sharding. The network is agnostic in this regard. Anyone can run a ParaTime. It is completely left up to the devs and users to see which ones provide the functionality that they need. Examples of ParaTimes in development include the Oasis Labs Data Sovereignty ParaTime and the Second State Virtual Machine, an EVM compatible Runtime. The Oasis Network uses Tendermint as its BFT consensus protocol. Given that the consensus layer uses a BFT protocol, the Oasis Network offers instant finality, meaning that once a block is finalized, it cannot be reverted (at least not for full nodes). A ParaTime commitment goes into a block and as such the ParaTime state is also finalized and cannot be reverted once a block is finalized. The Oasis Network does not use sharding. Instead, Oasis leverages a discrepancy detection model leading up to roothash updates, giving the network the same scalability benefits that sharding offers but with added benefits that come from a design that is much simpler to implement in practice.. Storage on the Oasis Network is determined by each ParaTime. There is a clear separation of concerns between the consensus layer and the runtime layer. The ParaTimes that make up the runtime layer have a lot of flexibility in how they choose to manage storage. For instance, the ParaTime being developed by Oasis Labs can support IPFS as its storage solution. Other ParaTime developers could opt to implement different storage mechanisms based on their own unique storage needs. The first generation of DeFi dApps has provided the market with a huge number of protocols and primitives that are meant to serve as the foundation for the specific components of a new financial system. Despite the current focus on short-term returns, we at Oasis believe the goal of DeFi applications should be to give rise to a new financial system that removes subjectivity, bias, and inefficiencies by leveraging programmable parameters instead of status, wealth, and geography. Oasis aims to support the next wave of DeFi applications by offering better privacy and scalability features than other Layer 1 networks. The terms “Open Finance” and “DeFi” are interchangeable. However, we believe that “Open Finance” better represents the idea that the new financial system should be accessible to everyone who operates within the bounds of specific programmable parameters, regardless of their status, wealth, or geography. Oasis recently announced a partnership with Chainlink as the preferred oracle provider of the Oasis Network. This integration is ongoing. In the current generation of DeFi, some miners and traders are leveraging the inefficiencies of Ethereum to stack mining fees and interest rates, while preventing many more people from participating in the industry. 
Privacy can play a strong role in making the network function properly by reducing these inefficiencies. At the application level, privacy is an enabler. For instance, strong privacy guarantees can encourage established institutions to participate in the system because these institutions would be able to protect their interests and relationships. Additionally, privacy features can serve as the foundation for a reputation system, thereby unlocking the full potential of undercollateralized lending. We keep hearing that privacy is the next big thing in DeFi, and we look forward to empowering developers to build the next generation of DeFi applications. Existing financial systems and data systems are not open at all. They are only accessible to a select few. Privacy has a much broader meaning than just keeping something private. Thanks to privacy-preserving computation, users can retain ownership of their information and grant others access to compute on their data without actually revealing (or transferring) their data. This will enable users to accrue data yields by essentially staking their data on the blockchain, unlocking a wide range of new financial opportunities. Open Finance refers to the idea that status, wealth, and geography won't block you from accessing a certain financial product. Adherence to a programmable set of parameters will determine whether someone can participate or not, making new financial opportunities open to more people around the world. For example, services such as lending protocols could offer different interest rates depending on the history of that user. What's game changing for the world of finance is that companies would not have to rely on a centralized score such as FICO - they would be able to build their own models. ROSE token will be used for transaction fees, staking, and delegation at the Consensus Layer. There are many ways to achieve confidentiality. Using Trusted Execution Environments (TEEs) is one way. This is what we do. In effect, we provide end-to-end confidentiality for transactions where state and payload are encrypted at rest, in motion, and, more importantly, in compute. homomorphic encryption is another technique for confidentiality. At this time, anyone can build a ParaTime on the Oasis Network that uses homomorphic encryption to provide confidentiality. We are not prescriptive about what approach developers should take. Something worth noting is that privacy and confidentiality are not equivalent. Privacy implies confidentiality but not the other way around. For privacy, there are techniques such as differential privacy that can be implemented. In short, yes! The Oasis Network supports EVM-compatible ParaTimes which will support a wide range of applications.
https://docs.oasis.dev/general/faq/oasis-network-faq
2021-04-11T00:09:31
CC-MAIN-2021-17
1618038060603.10
[]
docs.oasis.dev
Understand the Impact of Each Predictor in Your Model Who: This feature is available to admins with the Einstein Analytics Plus, or Einstein Predictions license. Where: This change applies to Lightning Experience in Enterprise and Developer editions. How: After a prediction is finished building, find it in your predictions list and select View Scorecard from the action menu. Impact is a number between 0 and 1 that represents the scaled weight or importance of a predictor. Importance (or weight, depending on the model type) represents how significant a predictor is in the predictive model. Einstein builds predictive models using the model type that performs best for the data in the particular model. Since some models use importance and others use weight, it can be difficult to compare predictors across multiple models. That’s why Einstein gives you an impact score, which you can find on the scorecard Details page. The scorecard also shows you the predictive model type that was used to build the model. For true/false fields, Einstein tests Random Forest and Logistic Regression model types. For number fields, Einstein tests Random Forest and Linear Regression model types. You can view predictor impact visually on the scorecard Overview page. Here you can see the impact for the top 5 predictors in your model. To view impact for all predictors, click View All Predictors.
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_forcecom_einstein_prediction_builder_impact.htm
2019-08-17T13:27:04
CC-MAIN-2019-35
1566027313259.30
[array(['release_notes/images/prediction_builder_setup_view_scorecard.png', 'Predictions List View Scorecard'], dtype=object) array(['release_notes/images/prediction_builder_top_predictors.png', 'Top Predictors'], dtype=object) ]
docs.releasenotes.salesforce.com
Create a template using the Template form Create a template record for any table to automatically populate certain fields. Before you begin Role required: admin Procedure Navigate to System Definition > Templates. Click New. Complete the form, as appropriate. Table 1. Template form fields Field Description Name Display name of this template. Table Table this template applies to. Select Global to make the template available for use with all tables. Note: The list shows only tables and database views that are in the same scope as the template. Active Option for making the template available for use. A template must be active to be applied. Short description Description of the template. Note: Adding content to this field does not add that content to the Short description field of forms that use this template. Template The content that automatically populates records based on this template. Select a field from the specified table in the left column, then enter the data to automatically populate in the right column. Note: Even though you can select dot-walked fields in the template, they do not apply to fields that are on the form. Link element Links a template for a child table with the template for the parent table. In the template for the child table, set the value to the field that references the parent table. Once set, the child template is explicitly linked to the parent. Note: This field does not appear by default. Configure the template form to add the field. Click Submit. What to do next See Application scope. Related tasks Create templates for related task records Create a template by saving a form Create records based on a template Create a module for a template Toggle the template bar Related concepts Template bar Related reference Scripted templates
https://docs.servicenow.com/bundle/madrid-platform-administration/page/administer/form-administration/task/t_CreateATemplateUsingTheTmplForm.html
2019-08-17T13:35:24
CC-MAIN-2019-35
1566027313259.30
[]
docs.servicenow.com
Connect to your Azure App Account with Splunk Add-on for Microsoft Cloud Services Connect the Splunk Add-on for Microsoft Cloud Services to your Azure App account so that you can ingest your Microsoft cloud services data into the Splunk platform. You can configure this connection using Splunk Web on your data collection node (recommended), or using the configuration files. Before you complete these steps, follow the directions in Configure an Active Directory Application in Azure AD for the Splunk Add-on for Microsoft Cloud Services to prepare your Microsoft account for this integration. Connect to your account using Splunk Web Access Splunk Web on the node of your Splunk platform installation that collects data for this add-on. - Launch the add-on, then click Configuration. - Click Azure App Account > Add Azure App Account. - Enter a friendly Name for the account. - Enter the Client ID, Key (Client Secret), and Tenant ID using the following account parameter table. - Click Add. Connect to your account using configuration files If you do not have access to Splunk Web on your data collection node, you can configure the connection to your account using the configuration files. - Create or open $SPLUNK_HOME/etc/apps/Splunk_TA_microsoft-cloudservices/local/mscs_azure_accounts.conf. - Add the following stanza: [<account_stanza_name>] client_id = <value> client_secret = <value> tenant_id = <value> Account Attributes
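If you manage the add-on's configuration files with scripts, the stanza above is ordinary INI syntax and can be written with Python's configparser. This is a hedged sketch, not part of the add-on documentation; the stanza name and account values are placeholders, and the secret ends up in clear text exactly as it would with the manual stanza above.

import configparser
import os

# Placeholder values; use your real Azure AD app registration details.
account = {
    "client_id": "your-app-client-id",
    "client_secret": "your-app-client-secret",
    "tenant_id": "your-azure-tenant-id",
}

conf_path = os.path.join(
    os.environ.get("SPLUNK_HOME", "/opt/splunk"),
    "etc/apps/Splunk_TA_microsoft-cloudservices/local/mscs_azure_accounts.conf",
)

config = configparser.ConfigParser()
if os.path.exists(conf_path):
    config.read(conf_path)
config["my_azure_app_account"] = account  # stanza name is up to you

os.makedirs(os.path.dirname(conf_path), exist_ok=True)
with open(conf_path, "w") as f:
    config.write(f)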
https://docs.splunk.com/Documentation/AddOns/released/MSCloudServices/Configureazureappaccount
2019-08-17T13:04:19
CC-MAIN-2019-35
1566027313259.30
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
serverclass.conf The following are the spec and example files for serverclass.conf. serverclass.conf.spec # Version 7.3.1 # # This file contains possible attributes and values for defining server # classes to which deployment clients can belong. These attributes and # values specify what content a given server class member will receive from # the deployment server. # # For examples, see serverclass.conf.example. You must reload the deployment # server configuration ("splunk reload deploy-server"), or restart splunkd, # for changes to this file to take effect. # # To learn more about configuration files (including precedence) please see # the documentation located at # #*************************************************************************** # Configure the server classes used by a deployment server instance. # # Server classes are essentially categories. They use filters to control # what clients they apply to, contain a set of applications, and might define # deployment server behavior for the management of those applications. The # filters can be based on DNS name, IP address, build number of client # machines, platform, and the clientName. If a target machine # matches the filter, then the deployment server deploys the apps and configuration # content that make up the server class to that machine. # Property Inheritance # # Stanzas in serverclass.conf go from general to more specific, in the # following order: # [global] -> [serverClass:<name>] -> [serverClass:<scname>:app:<appname>] # # Some properties defined in the [global] stanza can be # overridden by a more specific stanza as it applies to them. If a global # setting can be overridden, the description says so. FIRST LEVEL: global ########### # Global stanza that defines properties for all server classes. [global] disabled = <boolean> * Toggles the deployment server off and on. * Set to true to disable. * Default: false. crossServerChecksum = <boolean> * Ensures that each app has the same checksum across different deployment servers. * Useful if you have multiple deployment servers behind a load-balancer. * Default: false. excludeFromUpdate = <path>[,<path>]... * Specifies paths to one or more top-level files or directories (and their contents) to exclude from being touched during app update. Note that each comma-separated entry MUST be prefixed by "$app_root$/" to avoid warning messages. *. * Default: $SPLUNK_HOME/etc/deployment-apps targetRepositoryLocation = <path> * The location on the deployment client where the deployment server should install the apps. * If this value is unset, or set to empty, the repositoryLocation path is used. * Useful only with complex (for example, tiered) deployment strategies. * Default: $SPLUNK_HOME/etc/apps, the live configuration directory for a Splunk Enterprise instance. tmpFolder = <path> * Working folder used by deployment server. * Default: $SPLUNK_HOME/var/run/tmp continueMatching = <boolean> * Controls how configuration is layered across classes and server-specific settings. * If true, configuration lookups continue matching server classes, beyond the first match. * If false, only the first match is used. * Matching is done in the order in which server classes are defined. * A serverClass can override this property and stop the matching. * Can be overridden at the serverClass level. * Default: true. endpoint = <URL template string> * The endpoint from which content a deployment client can download content. The deployment client knows how to substitute values for variables in the URL. 
* You can supply any custom URL here, as long as it uses the specified variables. * Need not be specified unless you have a very specific need, for example: To acquire deployment application files from a third-party Web server, for extremely large environments. * Can be overridden at the serverClass level. * Default: $deploymentServerUri$/services/streams/deployment?name=$serverClassName$:$appName$ filterType = whitelist | blacklist * The whitelist setting indicates a filtering strategy that pulls in a subset: * Items are considered to not match the stanza by default. * Items that match any whitelist entry, and do not match any blacklist entry, are considered to match the stanza. * The blacklist setting indicates a filtering strategy that rules out a subset: * Items are considered to match the stanza by default. * Items that match any blacklist entry are considered to not match the stanza, regardless of whitelist. * More briefly: * whitelist: default no-match -> whitelists enable -> blacklists disable * blacklist: default match -> blacklists disable-> whitelists enable * Can be overridden at the serverClass level, and the serverClass:app level. * Default: Enterprise version > 6.4, the instanceId of the client. This is a GUID string, for example: 'ffe9fe01-a4fb-425e-9f63-56cc274d7f8b'. * All of these can be used with wildcards. The asterisk character (*) matches '.' to mean '\.' * You can specify '*' causes all hosts in splunk.com, except 'printer' and 'scanner', to # match this server class. # Example with filterType=blacklist: # blacklist.0=* # whitelist.0=*.web.splunk.com # whitelist.1=*.linux.splunk.com # This causes only the 'web' and 'linux' hosts to match the server class. # No other hosts match. # You can also use deployment client machine types (hardware type of host # machines) to match deployment clients. # This filter deployment server, however, you can determine the value of a # deployment client's machine type with this Splunk CLI command on the # deployment server: # <code>./splunk list deploy-clients</code> # The <code>utsname</code> values in the output are the respective deployment # clients' comma in conjunction the <field name> must specify conjunction with (whitelist|blacklist).from_pathname. * May be used in conjunction, either BOTH by name or BOTH by number. * MUST be used in conjunction '.' to mean '\.' * You can specify '*' to mean '.*' * Matches are always case-insensitive; you do not need to specify the '(?i)' prefix. * MUST be used in conjunction whitelist/blacklist filters. Only clients which match the whitelist/blacklist AND which match this machineTypesFilter are included. * In other words, the match is an intersection of the matches for the whitelist/blacklist and the matches for MachineTypesFilter. * This filter can be overridden at the serverClass and serverClass:app levels. * These patterns are PCRE regular expressions, with the following aids for easier entry: * You can specify '.' to mean '\.' * You can specify '*' to mean '.*' * Matches are always case-insensitive; you do not need to specify the '(?i)' prefix. * Unset by default. restartSplunkWeb = <boolean> * If true, restarts SplunkWeb on the client when a member app or a directly configured app is updated. * Can be overridden at the serverClass level and the serverClass:app level. * Default: false restartSplunkd = <boolean> * If true, restarts splunkd on the client when a member app or a directly configured app is updated. * Can be overridden at the serverClass level and the serverClass:app level. 
* Default: false issueReload = <boolean> * If true, triggers a reload of internal processors at the client when a member app or a directly configured app is updated. * If you don't want to immediately start using an app that is pushed to a client, you should set this to false. * Default: false restartIfNeeded = <boolean> * This is only valid on forwarders that are newer than 6.4. * If true and issueReload is also true, then when an updated app is deployed to the client, that client tries to reload that app. If it fails, it restarts. * Default: is the same as on the deployment server. * Can be overridden at the serverClass level and the serverClass:app level. * Default: enabled precompressBundles = true | flase * Controls whether the deployment server generates both .bundle and .bundle.gz files. The pre-compressed files offer improved performance as the deployment server up to the serverclass level (not app). Apps belonging to server classes that required precompression are compressed, even if they belong to a server class which does not require precompression. * Default:, spaces, underscores, dashes, dots, tildes, and the '@' symbol. It is case-sensitive. # NOTE: # The keys listed below are all described in detail in the # [global] section above. They can be used with serverClass stanza to # override the global setting continueMatching = <boolean> endpoint = <URL template string> excludeFromUpdate = <path>[,<path>]... filterType = whitelist | blacklist whitelist.<n> = <clientName> | <IP address> | <hostname> blacklist.<n> = <clientName> | <IP address> | <hostname> machineTypesFilter = <comma-separated list> restartSplunkWeb = <boolean> restartSplunkd = <boolean> issueReload = <boolean> restartIfNeeded = <boolean> stateOnClient = enabled | disabled | noop repositoryLocation = <path> THIRD LEVEL: app ########### [serverClass:<server class name>:app:<app name>] * This stanza maps an application (which must already exist in repositoryLocation) to the specified server class. * server class name is, spaces, underscores, dashes, dots, tildes, = <boolean> restartIfNeeded = <boolean> excludeFromUpdate = <path>[,<path>]... serverclass.conf.example # Version 7.3.1 # #: 7.3.1 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Serverclassconf
2019-08-17T13:05:15
CC-MAIN-2019-35
1566027313259.30
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Elixir Enterprise is a set of components that extends the Adobe Flex component set. The IBM ILOG Elixir Enterprise components can be deployed in Web-based (Flex) applications or in Adobe Integrated Runtime (AIR) desktop-based applications. IBM ILOG Elixir Enterprise provides 2D and 3D charts including pivot charts, timeline, treemaps, heatmaps, map-based dashboards, calendars, organization charts, and gauges and indicators.
https://docs.bmc.com/docs/display/Configipedia/IBM+ILOG+Elixir+Enterprise
2019-08-17T13:59:27
CC-MAIN-2019-35
1566027313259.30
[]
docs.bmc.com
RichTextLabel¶ Inherits: Control < CanvasItem < Node < Object Category: Core Signals¶ Triggered when the user clicks on content between [url] -. Tutorials¶ Property Descriptions¶ If true, the label uses BBCode formatting. The label’s text in BBCode format. Is not representative of manual modifications to the internal tag stack. Erases changes made by other methods when edited. If true, the label underlines meta tags such as [url]{text}[/url]. If true, the label uses the custom font color. The text’s visibility, as a float between 0.0 and 1.0. If true, the scrollbar is visible... -. Returns the number of visible lines. - [align] tag based on the given align value. See Align for possible values. -_strikethrough ( ) Adds a [s] tag to the tag stack. Adds a [table=columns] tag to the tag stack. - void push_underline ( ) Adds a [u] tag to the tag stack. Removes a line of content from the label. Returns true if the line exists..
https://docs.godotengine.org/en/latest/classes/class_richtextlabel.html
2019-08-17T12:32:48
CC-MAIN-2019-35
1566027313259.30
[]
docs.godotengine.org
Other Enhancements to Dashboards in Lightning Experience In addition to this release’s major features, we made some small-but-notable improvements to dashboards. Where: These changes apply to Lightning Experience in Group, Essentials, Professional, Enterprise, Performance, Unlimited, and Developer editions. Why: A lot of little enhancements is a big deal. - Add Two Groups to Lightning Tables - Lightning tables now support two groups. Previously, you could have only one. For example, this dashboard shows opportunities grouped by stage and type. - Clone a Dashboard with Save As - To match reports, the Clone option for dashboards is now labeled Save As. The functionality remains the same. - Add Another Date - We renamed the filter option Add More Date to Add Another Date. The previous label snuck past our grammar checkers. The functionality remains the same. - Reorder Dashboard Filter Values with Drag-and-Drop - You can drag dashboard filter values into the order you want them. Previously, you had to delete the filter values and then add them again in the desired order. - “Whole Number” Relabeled “Full Number” - We’ve changed the Display Units option to display a complete number from “Whole Number” to “Full Number.” Now that you can set decimal precision, “whole number” could be confused for “round to the nearest integer,” which is not what the option does. (To round decimals to the nearest integer, set Decimal Places to “0.”) - People with SalesforcePlatform User Licenses Can View More Dashboards - People with a SalesforcePlatform, SalesforcePlatformOne, or SalesforcePlatformLight user license can now view more dashboards. Previously, they could only view dashboards as people who had the same user license. Now they can view dashboards as people with other user license types, too.
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_rd_dashboards_otherenhancements.htm
2019-08-17T13:04:11
CC-MAIN-2019-35
1566027313259.30
[]
docs.releasenotes.salesforce.com
RAMFUNC (EMLIB) Detailed Description RAM code support. This module provides support for executing code from RAM. A unified method to manage RAM code across all supported tools is provided. - Note - Other cross-compiler support macros are implemented in COMMON. - Functions executing from RAM should not be declared as static. - Warning - With GCC in hosted mode (default), standard library facilities are available to the tool. - With the define SL_RAMFUNC_DISABLE, code placed in RAM by the SL_RAMFUNC macros will be placed in default code space (Flash) instead. - Note - This define is not present by default. Definition at line 113 of file em_ramfunc.h.
https://docs.silabs.com/mcu/5.4/efm32lg/group-RAMFUNC
2019-08-17T12:41:01
CC-MAIN-2019-35
1566027313259.30
[]
docs.silabs.com
Here is a list of all auto-configuration classes provided by Spring Boot. The following auto-configuration classes are from the spring-boot-autoconfigure module:
https://docs.spring.io/spring-boot/docs/2.0.10.BUILD-SNAPSHOT/reference/html/auto-configuration-classes.html
2019-08-17T13:33:35
CC-MAIN-2019-35
1566027313259.30
[]
docs.spring.io
Set. Where: This change applies to Lightning Experience for Essentials, Professional, Enterprise, Performance, Unlimited, and Developer editions. Lightning console apps are available for an extra cost to users with Salesforce Platform user licenses for certain products. Some restrictions apply. For pricing details, contact your Salesforce account executive.
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_console_new_js_api.htm
2019-08-17T13:16:12
CC-MAIN-2019-35
1566027313259.30
[]
docs.releasenotes.salesforce.com
Custom HTTP Headers Sometimes, you need to send custom HTTP headers. From Twirp’s perspective, “headers” are just metadata, since HTTP is a lower-level transport layer. But since Twirp is primarily used with HTTP, sometimes you might need to send or receive some information from that layer too. Client side Send HTTP Headers with client requests Use Twirp\Context::withHttpRequestHeaders to attach a map of headers to the context: <?php // Given a client ... $client = new \Twitch\Twirp\Example\HaberdasherClient($addr); // Given some headers ... $headers = [ 'Twitch-Authorization' => 'uDRlDxQYbFVXarBvmTncBoWKcZKqrZTY', 'Twitch-Client-ID' => 'FrankerZ', ]; // Attach the headers to a context $ctx = []; $ctx = \Twirp\Context::withHttpRequestHeaders($ctx, $headers); // And use the context in the request. Headers will be included in the request! $resp = $client->MakeHat($ctx, new \Twitch\Twirp\Example\Size()); Read HTTP Headers from responses Twirp client responses are structs that depend only on the protobuf response. HTTP headers cannot be used by the Twirp client in any way. However, remember that the TwirPHP client is instantiated with a PSR-18 HTTP client, which can be anything that implements the minimal interface. For example you could configure an HTTPlug PluginClient and read the headers in a plugin. Server side Send HTTP Headers on server responses In your server implementation you can set HTTP headers using Twirp\Context::withHttpResponseHeader. <?php public function MakeHat(array $ctx, \Twitch\Twirp\Example\Size $size): Hat { \Twirp\Context::withHttpResponseHeader($ctx, 'Cache-Control', 'public, max-age=60'); $hat = new \Twitch\Twirp\Example\Hat(); return $hat; } Read HTTP Headers from requests TwirPHP server methods are abstracted away from HTTP, therefore they don’t have direct access to HTTP headers. However, they receive the PSR-7 server attributes as the context that can be modified by HTTP middleware before being used by the Twirp method. For example, let's say you want to read the ‘User-Agent’ HTTP header inside a twirp server method. You might write this middleware: <?php use Psr\Http\Message\ServerRequestInterface; // class UserAgentMiddleware... public function handle(ServerRequestInterface $request) { $request = $request->withAttribute('user-agent', $request->getHeaderLine('User-Agent')); return $this->server->handle($request); }
https://twirphp.readthedocs.io/en/latest/advanced/http-headers.html
2019-08-17T12:38:20
CC-MAIN-2019-35
1566027313259.30
[]
twirphp.readthedocs.io
About the Eraser Tool T-HFND-004-010 Using the Eraser tool, you can remove parts of your drawing. When erasing brush strokes and painted areas, the contours will keep the shape you traced. When erasing pencil stroke, the central vector will be erased and the line tips will reshape based on the line's parameters. It is a good idea to create and save erasers with precise sizes and parameters in order to save time when drawing and designing. Toon Boom Harmony provides you with a variety of default eraser styles and allows you to create and save your own. Eraser presets are created by saving the properties of the current eraser as a preset, in order to reuse it again and again. The Eraser tool uses its own preset list, separated from the Brush tool. NOTE: To learn more about each individual parameter, see Eraser Tool Properties.
https://docs.toonboom.com/help/harmony-14/paint/drawing/about-eraser-tool.html
2019-08-17T12:44:46
CC-MAIN-2019-35
1566027313259.30
[]
docs.toonboom.com
Displaying Controls T-HFND-009-011 You can display a layer's controls to adjust a trajectory or other parameters such as a gradient's position or deformation settings. How to display the layer’s controls Verify that the Camera view (click its tab) is selected and that the layer whose trajectory you want to display is selected in the Timeline view. From the top menu, select View > Show > Control or press Shift + F11 or in the Camera view toolbar, click on the Show Control button. NOTE If nothing appears in the Camera view, you may not have animated or selected the layer.
https://docs.toonboom.com/help/harmony-15/advanced/motion-path/display-control.html
2019-08-17T13:02:08
CC-MAIN-2019-35
1566027313259.30
[]
docs.toonboom.com
Cross-Validation Cross-validation is a technique for evaluating ML models by training several ML models on subsets of the available input data and evaluating them on the complementary subset of the data. Use cross-validation to detect overfitting, i.e., failing to generalize a pattern. In Amazon ML, you can use the k-fold cross-validation method to perform cross-validation. In k-fold cross-validation, you split the input data into k subsets of data (also known as folds). You train an ML model on all but one (k-1) of the subsets, and then evaluate the model on the subset that was not used for training. This process is repeated k times, with a different subset reserved for evaluation (and excluded from training) each time. The following diagram shows an example of the training subsets and complementary evaluation subsets generated for each of the four models that are created and trained during a 4-fold cross-validation. Model one uses the first 25 percent of data for evaluation, and the remaining 75 percent for training. Model two uses the second subset of 25 percent (25 percent to 50 percent) for evaluation, and the remaining three subsets of the data for training, and so on. Each model is trained and evaluated using complementary datasources - the data in the evaluation datasource includes and is limited to all of the data that is not in the training datasource. You create datasources for each of these subsets with the DataRearrangement parameter in the createDatasourceFromS3, createDatasourceFromRedShift, and createDatasourceFromRDS APIs. In the DataRearrangement parameter, specify which subset of data to include in a datasource by specifying where to begin and end each segment. To create the complementary datasources required for a 4-fold cross-validation, specify the DataRearrangement parameter as shown in the following example: Model one: Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}} Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}} Model two: Datasource for evaluation: {"splitting":{"percentBegin":25, "percentEnd":50}} Datasource for training: {"splitting":{"percentBegin":25, "percentEnd":50, "complement":"true"}} Model three: Datasource for evaluation: {"splitting":{"percentBegin":50, "percentEnd":75}} Datasource for training: {"splitting":{"percentBegin":50, "percentEnd":75, "complement":"true"}} Model four: Datasource for evaluation: {"splitting":{"percentBegin":75, "percentEnd":100}} Datasource for training: {"splitting":{"percentBegin":75, "percentEnd":100, "complement":"true"}} Performing a 4-fold cross-validation generates four models, four datasources to train the models, four datasources to evaluate the models, and four evaluations, one for each model. Amazon ML generates a model performance metric for each evaluation. For example, in a 4-fold cross-validation for a binary classification problem, each of the evaluations reports an area under curve (AUC) metric. You can get the overall performance measure by computing the average of the four AUC metrics. For information about the AUC metric, see Measuring ML Model Accuracy. For sample code that shows how to create a cross-validation and average the model scores, see the Amazon ML sample code. Adjusting Your Models After you have cross-validated the models, you can adjust the settings for the next model if your model does not perform to your standards.
For more information about overfitting, see Model Fit: Underfitting vs. Overfitting. For more information about regularization, see Regularization. For more information on changing the regularization settings, see Creating an ML Model with Custom Options.
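The DataRearrangement strings above follow a regular pattern, so they can be generated for any number of folds. The helper below is an illustrative sketch (not from the AWS documentation); pass the resulting JSON strings as the DataRearrangement parameter when creating each evaluation and training datasource.

import json

def k_fold_rearrangements(k):
    """Yield (evaluation, training) DataRearrangement JSON strings for k folds."""
    step = 100 // k
    for i in range(k):
        begin = i * step
        end = (i + 1) * step if i < k - 1 else 100
        evaluation = {"splitting": {"percentBegin": begin, "percentEnd": end}}
        training = {"splitting": {"percentBegin": begin, "percentEnd": end,
                                  "complement": "true"}}
        yield json.dumps(evaluation), json.dumps(training)

for fold, (eval_dr, train_dr) in enumerate(k_fold_rearrangements(4), start=1):
    print(f"Model {fold}:")
    print("  evaluation:", eval_dr)
    print("  training:  ", train_dr)

For k=4 this reproduces exactly the four evaluation/training pairs shown above.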
http://docs.aws.amazon.com/machine-learning/latest/dg/cross-validation.html
2017-05-22T21:27:50
CC-MAIN-2017-22
1495463607120.76
[array(['images/image63.png', None], dtype=object)]
docs.aws.amazon.com
Picker Properties Listed here are the properties affecting RadDatePicker and RadTimePicker: - DisplayMode - Standard (default) - The control is displayed in two parts: Picker and Selector Popup. - Inline - The control displays only the Selector part, which is part of the main layout. - IsReadOnly (bool): Sets the button to read-only. Its default value is false. MinValue (DateTime): Sets the minimum available date/time value for selection. Date/time values lower than this will be disabled. Here's an example: <telerik:RadDatePicker x: or this.radDate.MinValue = new DateTime(2012, 9, 12); The code sample sets the minimum available date to 09/12/2012 so the expected output at runtime is: MaxValue (DateTime): Sets the maximum available date/time value for selection. - ValueString (string): Returns the selected date in its string representation, as it appears within the control. - Value (DateTime?): Gets or sets the currently selected DateTime. May be null as the property type is of type System.Nullable<DateTime>. - DisplayValueFormat (string): Sets the display format string for the date/time value. For example it could be "d-M-yyyy". - EmptyContent (object): Sets a value of the Selected Value field when it is empty. - EmptyContentTemplate (DataTemplate): Sets a custom template for an Empty Content template. - AutoSizeWidth (bool): Gets or sets a bool value indicating whether the control will automatically change its width to match the width of the Selector Popup. - CalendarIdentifier (string): Gets or sets the calendar identifier. - CalendarLanguage (string): Gets or sets the language of the picker. - CalendarNumeralSystem (string): Gets or sets the numeral system of the picker. - Step (DateTimeOffset): Gets or sets the step that will be applied to the picker date/time lists. Each list will take the corresponding component from the DateTimeOffset structure. RadDatePicker specific properties - DayStepBehavior (StepBehavior): Gets or sets a value that defines how the step for the day component is interpreted. - MonthStepBehavior (StepBehavior): Gets or sets a value that defines how the step for the month component is interpreted. - YearStepBehavior (StepBehavior): Gets or sets a value that defines how the step for the year component is interpreted. Here are the possible values from the StepBehavior enumeration: - Multiples: All multiples of the current step are shown in the looping selector. If the step is 3, then the available values will be (0), 3, 6, 9... - StartFromBase: Each value in the looping selector is generated by adding the step value to each previous value starting from the base value. If the step is 3, and the base value is 1, then the available values will be 1, 4, 7... - BaseAndMultiples: The base (starting) value and all multiples are shown in the looping selector. If the step is 3, and the base value is 1, then the available values will be 1, 3, 6, 9... RadTimePicker specific properties - CalendarClockIdentifier (string): Gets or sets a value that specifies whether the clock will be 12 or 24-hour.
http://docs.telerik.com/devtools/universal-windows-platform/controls/raddatepicker-and-radtimepicker/properties-and-configuration/raddatetimepickers-properties-pickerproperties
2017-05-22T21:25:04
CC-MAIN-2017-22
1495463607120.76
[]
docs.telerik.com
Core Components In Pyxley, the core component is the UILayout. This component is composed of a list of charts and filters, a single React component from a JavaScript file, and the Flask app. # Make a UI from pyxley import UILayout ui = UILayout( "FilterChart", "./static/bower_components/pyxley/build/pyxley.js", "component_id") This will create a UI object that's based on the FilterChart React component in pyxley.js. It will be bound to an HTML div element called component_id. If we wanted to add a filter and a chart, we could do so with the ui.add_filter and ui.add_chart methods. Calling the ui.add_chart and ui.add_filter methods simply adds the components we've created to the layout. app = Flask(__name__) sb = ui.render_layout(app, "./static/layout.js") Calling ui.render_layout builds the JavaScript file containing everything we've created. Charts Charts are meant to span any visualization of data we wish to construct. This includes line plots, histograms, tables, etc. Several wrappers have been introduced and more will be added over time. Implementation All charts are UIComponents that have the following attributes and methods - An endpoint route method. The user may specify one to override the default. - A url attribute that the route function is assigned to by the flask app. - A chart_id attribute that specifies the element id. - A to_json method that formats the json response.
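To make the Implementation checklist above concrete, here is a hedged sketch of a minimal chart-like component that exposes the documented pieces (a url, a chart_id, a to_json method, and a route registered with the Flask app). The class and endpoint names are invented for illustration, and a real Pyxley chart would normally subclass one of the library's chart wrappers rather than plain object.

from flask import Flask, jsonify

class StaticLineChart(object):
    # A toy "chart" exposing the documented attributes: url, chart_id,
    # to_json, and a route method bound to the Flask app.
    def __init__(self, app, data, chart_id="my_chart", url="/api/my_chart/"):
        self.chart_id = chart_id
        self.url = url
        self.data = data
        # register the endpoint route with the Flask app
        app.add_url_rule(self.url, self.chart_id, self.to_json)

    def to_json(self):
        # format the JSON response the front-end component will consume
        return jsonify({"chart_id": self.chart_id, "data": self.data})

app = Flask(__name__)
chart = StaticLineChart(app, data={"x": [1, 2, 3], "y": [2, 4, 8]})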
http://pyxley.readthedocs.io/en/latest/core.html
2017-05-22T21:13:43
CC-MAIN-2017-22
1495463607120.76
[]
pyxley.readthedocs.io
Writing Plugin Test Code
This page is for v0.14 (td-agent3), not the latest stable version, which is v0.12.
TBD: The whole page is to be written.
Table of Contents
- Basics of Plugin Testing
- Plugin Test Driver Overview
- Testing Utility Methods
- Test Driver Base API
- Testing Input Plugins
- Testing Filter Plugins
- Testing Output Plugins
- Testing Parser Plugins
- Testing Formatter Plugins
- Tests for logs
https://docs.fluentd.org/v0.14/articles/plugin-test-code
2017-05-22T21:25:30
CC-MAIN-2017-22
1495463607120.76
[]
docs.fluentd.org
Why use YAML for Options Mappings?
YAML's killer feature is that it allows comments alongside your config entries. When the options mapping is deciding critical things in your pyplate, it's nice to be able to explain why something is one way and not another. For example:

# The task scheduler breaks if more than one instance spawns right now
# We're working on it in ticket #1234, but for now just cap the group at 1
TaskSchedulerAutoScalingMaxSize: 1

In addition to comments, breaking options out into the mapping means that you can potentially spend less time in the pyplate itself by implementing branching logic based on keys in the options mapping. Earlier drafts of cfn-pyplates used JSON for the options mapping, but despite JSON's already lightweight markup compared to something like XML, nothing beats YAML for its simplicity and content/markup ratio.
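As a rough sketch of that branching idea (not the exact cfn-pyplates API): inside a real pyplate the parsed options mapping is handed to your template code, so here the YAML file is simply loaded by hand with PyYAML. The file name options.yaml and the print statements are placeholders; only the TaskSchedulerAutoScalingMaxSize key comes from the example above.

import yaml  # PyYAML, assumed to be installed

# In a real pyplate, cfn-pyplates supplies the parsed options mapping;
# loading the YAML manually here is only for illustration.
with open("options.yaml") as fh:
    options = yaml.safe_load(fh)

# Branch on keys from the mapping so the pyplate itself stays short,
# while the "why" lives in the YAML comments next to the values.
max_size = options.get("TaskSchedulerAutoScalingMaxSize", 1)
if max_size == 1:
    print("Capping the task scheduler auto scaling group at a single instance")
else:
    print("Allowing up to %d task scheduler instances" % max_size)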
http://cfn-pyplates.readthedocs.io/en/latest/why_yaml.html
2017-05-22T21:13:26
CC-MAIN-2017-22
1495463607120.76
[]
cfn-pyplates.readthedocs.io
cloud_sptheme.ext.auto_redirect – Redirect Deprecated URLs
New in version 1.9.

Overview
This extension is helpful when HTML documentation has been relocated to a new host; e.g. moving from pythonhosted.org to readthedocs.io. Once enabled, it adds a helpful "Documentation has moved" message to the top of every page, and automatically redirects the user as well.

Configuration
This extension looks for the following config options:
- auto_redirect_subject: subject to insert into the message. Defaults to "The {project name} documentation".
- auto_redirect_domain_url: URL to redirect the user to. No message or redirect will happen if this isn't set.
- auto_redirect_domain_footer: optional footer text to append to the message.

Internals
This should work with other Sphinx themes; the JS & CSS are (mostly) generic.

Todo: the "redirect to exact page" part of the JS code currently only works with this theme; would like to fix that.
Todo: support configuring redirects to other pages within the same documentation.
Todo: if the user has dismissed "auto redirect", remember that via a cookie?
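A minimal conf.py sketch showing how those options might be set; the project name, URLs, and footer text are placeholders, and only the three auto_redirect_* option names come from this page.

# conf.py (sketch)
extensions = [
    "cloud_sptheme.ext.auto_redirect",
]

# Subject inserted into the "documentation has moved" banner.
auto_redirect_subject = "The MyProject documentation"

# Where visitors are sent; with this unset, no banner or redirect happens.
auto_redirect_domain_url = "https://myproject.readthedocs.io/en/latest/"

# Optional footer text appended to the banner.
auto_redirect_domain_footer = "Please update your bookmarks."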
http://cloud-sptheme.readthedocs.io/en/latest/lib/cloud_sptheme.ext.auto_redirect.html
2017-05-22T21:13:01
CC-MAIN-2017-22
1495463607120.76
[]
cloud-sptheme.readthedocs.io
Chapter 20: How to Import Sound and Add Lip-Sync
The mouth chart gives an approximation of the sound each mouth shape can produce. You can lip-sync the traditional way or let the system automatically create the basic detection. You can refer to the mouth chart positions as you draw the shape of the character's mouth.

Automatic Lip-Sync Detection
Harmony can automatically map the lip-sync for you.
- In the Timeline or Xsheet view, ...
- If the selected layer contains symbols, you can map the lip-sync using drawings located directly on the layer, or use the symbol's frames. In the Symbol Layer field, select Don't Use Any Symbol if you want to use the drawings, or select the desired symbol from the drop-down menu.
- In the Mapping section, type the drawing name or symbol frames in the field to the right of the phoneme it represents. If your drawings are already named with the phoneme letters, you do not have to do anything.
- Click OK.
- Press the Play button in the Playback toolbar to see and hear the results in the Camera view. To play back your scene with sound, enable the Sound button in the Playback toolbar.
http://docs.toonboom.com/help/harmony-12/advanced/Content/_CORE/Getting_Started/016_CT_Sound.html
2017-05-22T21:19:35
CC-MAIN-2017-22
1495463607120.76
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/HAR/_Skins/stagePremium.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/stageAdvanced.png', 'Toon Boom Harmony 12 Stage Advanced Online Documentation'], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/stageEssentials.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Sound/an_xsheet_view.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Sound/anp2_timeline_view.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Breakdown/HAR12/HAR12_Mouth_Chart.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Sketch/HAR11/HAR11_sound_LayerProp.png', None], dtype=object) array(['../../Resources/Images/HAR/Sketch/HAR11/Sketch_mapLipSync.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Sound/HAR11/HAR11_AutomatedLipSyncDetection_MAP-002.png', None], dtype=object) ]
docs.toonboom.com