content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
dask.array.nanprod¶ - dask.array.nanprod(a, axis=None, dtype=None, keepdims=False, split_every=None, out=None)[source]¶ Return the product of array elements over a given axis treating Not a Numbers (NaNs) as ones. This docstring was copied from numpy.nanprod. Some inconsistencies with the Dask version may exist. One is returned for slices that are all-NaN or empty. New in version 1.10.0. - Parameters - aarray_like Array containing numbers whose product is desired. If a is not an array, a conversion is attempted. - axis{int, tuple of int, None}, optional Axis or axes along which the product is computed. The default is to compute the product. - keepdimsbool, optional If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original arr. - Returns - nanprodndarray A new array holding the result is returned unless out is specified, in which case it is returned..])
https://docs.dask.org/en/latest/generated/dask.array.nanprod.html
2021-10-16T00:03:28
CC-MAIN-2021-43
1634323583087.95
[]
docs.dask.org
Download Links The inSync On-Premise 5.9.6 documentation is applicable for the On-Premise v5.9.7 patch, v5.9.8 patch, v5.9.9 patch, v5.9.9 HotFix 5, and v5.9.9 HotFix 8 release as well. Refer to the Release details for information about the Upgrade matrix and installation options before you download and install the software. - inSync Server Master: Download link - inSync Storage Node: Download link - inSync Edge Server: Download link - inSync Client: Download link
https://docs.druva.com/010_002_inSync_On-premise/inSync_On-Premise_5.9.6/010_Release_Details/Download_Links
2021-10-15T23:26:14
CC-MAIN-2021-43
1634323583087.95
[]
docs.druva.com
Where Can I Download MemberPress? We're so glad you chose MemberPress as your WordPress Membership Plugin! The very first thing you'll need to do to get started with MemberPress is download the plugin's .zip file from your account page. Clicking the Download button below will direct you to your account page where you can download the latest stable version of MemberPress anytime.
https://docs.memberpress.com/article/5-where-can-i-download-memberpress
2021-10-15T23:44:43
CC-MAIN-2021-43
1634323583087.95
[]
docs.memberpress.com
See you at Remix Boston!
https://docs.microsoft.com/en-us/archive/blogs/seema/see-you-at-remix-boston
2021-10-15T23:06:21
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
Synthetic monitoring supports a variety of authentication mechanisms. Depending on the type of monitor you choose, this includes Basic, Digest, NTLM, and NTLMv2. Supported authentication by monitor type Support for various monitor types may depend on your site configuration. Tip For NTLM and NTLMv2, synthetic monitoring'; To encode values for a ping or simple browser monitors, follow these instructions. The full URL http(s)://username:[email protected] will be recorded as plain text in the corresponding synthetic's check data. The URL will be visible when viewing results for this monitor. New Relic will be able to properly authenticate against your NTLM endpoint using curl or with a scripted API monitor. You must use a host or location with access to your endpoint. curl Scripted API Monitor Create a new API test monitor and assign it to a location with access to your endpoint. Replace the URL and validate the following script, which will print all response headers to the script log: $http.get('',{followRedirect: false},// Callbackfunction (err, response, body) {console.log(response.headers);}); Confirm that the WWW-Authenticate response header includes NTLM. Redirects NTLM authentication failures may be caused by $browser.get calls that result in a redirect. Check the response code for your request in the Timeline view in your monitor results. If the request is being redirected you may need to use the redirection location as the URL in the initial $browser.get call.
https://docs.newrelic.com/docs/synthetics/synthetic-monitoring/using-monitors/handle-sites-authentication/
2021-10-15T22:44:01
CC-MAIN-2021-43
1634323583087.95
[]
docs.newrelic.com
Tower Service The foundational Service trait that Tower is based on. Overview The Service trait provides the foundation upon which Tower is built. It is a simple, but powerful trait. At its heart, Service is just an asynchronous function of request to response. async fn(Request) -> Result<Response, Error> Implementations of Service take a request, the type of which varies per protocol, and returns a future representing the eventual completion or failure of the response. Services are used to represent both clients and servers. An instance of Service is used through a client; a server implements Service. By using standardizing the interface, middleware can be created. Middleware implement Service by passing the request to another Service. The middleware may take actions such as modify the request. License This project is licensed under the MIT license. Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tower by you, shall be licensed as MIT, without any additional terms or conditions.
https://docs.rs/crate/tower-service/0.3.0
2021-10-15T23:22:28
CC-MAIN-2021-43
1634323583087.95
[]
docs.rs
Answer A calculated index uses prices that are computed for the purpose of creating the index, instead of the valuations contributed by asset managers or owners. EDHECinfra indices are calculated and follow a consistent and robust valuation methodology for all the assets in the index universe, thus eliminating all biases. Some of the key advantages of calculated indices include: - Defined universe with appropriate representation of assets - Consistent modelling methodology over time which is calibrated to market trends - Clearly explain the valuation of all assets in the universe - Allow for sophisticated analysis Things to consider Calculated indices benefit from greater methodological transparency and consistency. They use prices that are model-driven i.e. average market prices that representative investors would tend to pay rather than what individual investors actually pay. In a highly illiquid and segmented market this can make a significant difference.
https://docs.edhecinfra.com/pages/viewpage.action?pageId=7897162
2020-10-19T21:05:41
CC-MAIN-2020-45
1603107866404.1
[]
docs.edhecinfra.com
: UnityStandardAssetsSetup.exe /S /D=E:\Development\Unity Note: If specifying a folder, use the Unity root folder (that is, the folder containing the Editor folder, and not the folder in which Unity.exe is installed.). You can install multiple versions of Unity on the same computer. On a Mac, the installer creates a folder called Unity, and overwrites any existing folder with this name. To install). Any existing shortcuts, aliases and links to the offline docs might no longer point to the old version of Unity. This can be particularly confusing with the offline docs; if you suddenly find that browser bookmarks to the offline docs no longer work, then check that they have the right folder name in the URL.
https://docs.unity3d.com/2019.4/Documentation/Manual/InstallingUnity.html
2020-10-19T22:05:25
CC-MAIN-2020-45
1603107866404.1
[]
docs.unity3d.com
The Software-Defined Wide Area Network (SD-WAN) dashboard allows you to configure and monitor the services related to VeloCloud and SD-WAN using vRealize Operations Cloud. Using the SD-WAN dashboard, you can also collect the metrics for VeloCloud Orchestrator and VeloCloud Gateway. By default the SD-WAN dashboards are disbaled, if you want to know how to enable them, see Manage Dashboards.You can discover the following services using VeloCloud Orchestrator: - Java Application - VeloCloud Orchestrator - Nginx - ClickHouse - MySQL - Redis - Network Time Protocol You can discover the following services using VeloCloud Gateway: - Network Time Protocol - VeloCloud Gateway
https://docs.vmware.com/en/VMware-vRealize-Operations-Cloud/services/config-guide/GUID-8AB5B13D-1A5F-428B-AC38-30D07AD66B17.html
2020-10-19T22:14:46
CC-MAIN-2020-45
1603107866404.1
[]
docs.vmware.com
Rasa Skill¶ A Rasa wrapper implementation that reads a folder with Rasa models (provided by path_to_models argument), initializes Rasa Agent with this configuration and responds for incoming utterances according to responses predicted by Rasa. Each response has confidence value estimated as product of scores of executed actions by Rasa system in the current prediction step (each prediction step in Rasa usually consists of multiple actions). If Rasa responds with multiple BotUttered actions, then such phrases are merged into one utterance divided by '\n'. Quick Start¶ To setup a Rasa Skill you need to have a working Rasa project at some path, then you can specify the path to Rasa’s models (usually it is a folder with name models inside the project path) at initialization of Rasa Skill class by providing path_to_models attribute. Dummy Rasa project¶ DeepPavlov library has a template config for RASASkill. This project is in essence a working Rasa project created with rasa init and rasa train commands with minimal additions. The Rasa bot can greet, answer about what he can do and detect user’s mood sentiment. The template DeepPavlov config specifies only one component (RASASkill) in a pipeline. The metadata.download field in configuration allows to download and unpack the gzipped template project into subdir {DOWNLOADS_PATH}. If you create a configuration for a Rasa project hosted on your machine, you don’t need to specify metadata.download and just need to correctly set path_to_models of the rasa_skill component. path_to_models needs to be a path to your Rasa’s models directory. See Rasa’s documentation for explanation on how to create project. Usage without DeepPavlov configuration files¶ from deeppavlov.skills.rasa_skill import RASASkill rasa_skill_config = { 'path_to_models': <put the path to your Rasa models>, } rasa_skill = RASASkill(**rasa_skill_config) states_batch = None for utterance in ["Hello", "Hello to the same user_id"]: responses_batch, confidences_batch, states_batch = rasa_skill([utterance], states_batch) print(responses_batch[0])
http://docs.deeppavlov.ai/en/master/features/skills/rasa_skill.html
2021-11-27T10:42:21
CC-MAIN-2021-49
1637964358180.42
[]
docs.deeppavlov.ai
Data Access The optional Data Access feature of Zenoss Cloud allows you to use your favorite SQL tools to display and analyze the monitoring data that Zenoss Cloud receives from your infrastructure and applications. When you enable Zenoss Cloud Data Access, Zenoss grants you access to datasets in a dedicated Google Cloud project that contains views of your data. Then, you use the views or your own views or queries to display and analyze your data. You can use any Google BigQuery interface or any SQL tool that is compatible with BigQuery. For example: - Google Data Studio or Looker - Google Sheets, which is included in Google Cloud - Tableau or Grafana In addition, you can export your data with tools like Matillion, or create customized integrations with data from other sources. The following diagram illustrates the relationships among the components of Data Access. Near real-time access Zenoss Cloud processes incoming data in parallel pipelines. One pipeline populates the data store used by Smart View and dashboards, and another populates the data store used by Data Access. The pipelines complete their work at about the same time. Your data is available in near real-time, every time you run a query—there's no delay for ETL processing because there is no ETL processing! Enabling Data Access To enable the Data Access feature, you need a Google Cloud account. Zenoss Operations creates a Zenoss-owned, Google Cloud project dedicated to you, in the same Google Cloud region as your Zenoss Cloud instance. The project includes BigQuery datasets that contain views of your data. Then, Zenoss Support grants read permission to the datasets to one or more of your Google Cloud account resources. Warning Google charges you for querying the Data Access data store. Once enabled, Zenoss starts retaining your data for the period specified in your Zenoss Cloud contract. Currently, the retention period options are 3 months and 15 months. Of course, you can easily export your data to your own data store and retain it there as long as you wish. Selecting Google Cloud account resources You can request read permission to the datasets in your dedicated Google Cloud project for one or more of the Google Cloud account resources described in the following table.
https://docs.zenoss.io/access/data-access.html
2021-11-27T12:23:22
CC-MAIN-2021-49
1637964358180.42
[]
docs.zenoss.io
The documentation you are viewing is for Dapr v1.4 which is an older version of Dapr. For up-to-date documentation, see the latest version. status CLI command reference Detailed information on the status CLI command Description Show the health status of Dapr services. Supported platforms Usage dapr status -k Flags Examples # Get status of Dapr services from Kubernetes dapr status )
https://v1-4.docs.dapr.io/reference/cli/dapr-status/
2021-11-27T11:17:06
CC-MAIN-2021-49
1637964358180.42
[]
v1-4.docs.dapr.io
Change the UTA2 port from CNA mode to FC mode Contributors You should change the UTA2 port from Converged Network Adapter (CNA) mode to Fibre Channel (FC) mode to support the FC initiator and FC target mode. You should change the personality from CNA mode to FC mode when you need to change the physical medium that connects the port to its network. Take the adapter offline: network fcp adapter modify -node node_name -adapter adapter_name -status-admin down Change the port mode: ucadmin modify -node node_name -adapter adapter_name -mode fcp Reboot the node, and then bring the adapter online: network fcp adapter modify -node node_name -adapter adapter_name -status-admin up Notify your admin or VIF manager to delete or remove the port, as applicable: If the port is used as a home port of a LIF, is a member of an interface group (ifgrp), or hosts VLANs, then an admin should do the following: Move the LIFs, remove the port from the ifgrp, or delete the VLANs, respectively. Manually delete the port by running the network port deletecommand. If the network port deletecommand fails, the admin should address the errors, and then run the command again. If the port is not used as the home port of a LIF, is not a member of an ifgrp, and does not host VLANs, then the VIF manager should remove the port from its records at the time of reboot. If the VIF manager does not remove the port, then the admin must remove it manually after the reboot by using the network port deletecommand. net-f8040-34::> network port show Node: net-f8040-34-01 Speed(Mbps) Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status --------- ------------ ---------------- ---- ---- ----------- -------- ... e0i Default Default down 1500 auto/10 - e0f Default Default down 1500 auto/10 - ... net-f8040-34::> ucadmin show Current Current Pending Pending Admin Node Adapter Mode Type Mode Type Status ------------ ------- ------- --------- ------- --------- ----------- net-f8040-34-01 0e cna target - - offline net-f8040-34-01 0f cna target - - offline ... net-f8040-34::> network interface create -vs net-f8040-34 -lif m -role node-mgmt-home-node net-f8040-34-01 -home-port e0e -address 10.1.1.1 -netmask 255.255.255.0 net-f8040-34::> network interface show -fields home-port, curr-port vserver lif home-port curr-port ------- --------------------- --------- --------- Cluster net-f8040-34-01_clus1 e0a e0a Cluster net-f8040-34-01_clus2 e0b e0b Cluster net-f8040-34-01_clus3 e0c e0c Cluster net-f8040-34-01_clus4 e0d e0d net-f8040-34 cluster_mgmt e0M e0M net-f8040-34 m e0e e0i net-f8040-34 net-f8040-34-01_mgmt1 e0M e0M 7 entries were displayed. net-f8040-34::> ucadmin modify local 0e fc Warning: Mode on adapter 0e and also adapter 0f will be changed to fc. Do you want to continue? {y|n}: y Any changes will take effect after rebooting the system. Use the "system node reboot" command to reboot. net-f8040-34::> reboot local (system node reboot) Warning: Are you sure you want to reboot node "net-f8040-34-01"? {y|n}: y Verify that you have the correct SFP+ installed: network fcp adapter show -instance -node -adapter For CNA, you should use a 10Gb Ethernet SFP. For FC, you should either use an 8 Gb SFP or a 16 Gb SFP, before changing the configuration on the node.
https://docs.netapp.com/us-en/ontap/san-config/change-uta2-port-cna-mode-fc-task.html
2021-11-27T12:05:49
CC-MAIN-2021-49
1637964358180.42
[]
docs.netapp.com
Use the CleanupStalledJob tool to assist in aborting a job that otherwise cannot be aborted via the UI or command line. The job and its steps are then aborted, freeing up resources. The job is then marked for background deletion. This tool is delivered in the form of a single .jar file on GitHub found here. To get started, download the appropriate .jar: CleanupStalledJob-jar-with-dependencies_v.10.0.2.jar: Use with CloudBees CD/RO v10.0.2 and later. CleanupStalledJob-jar-with-dependencies_ver6.jar: Use with CloudBees CD/RO up to and including v10.0.1. General usage - Linux <installation-dir>/jre/bin/java -jar /tmp/CleanupStalledJob-jar-with-dependencies_v.10.0.2.jar [options] - Windows <installation-dir>\jre\bin\java -jar c:\tmp\CleanupStalledJob-jar-with-dependencies_v.10.0.2.jar [options] where options are: The steps below assume the .jar file is downloaded to /tmp (Linux) or c:\tmp (Windows). Locate the UUID of the job you wish to abort. Navigate to. Highlight and copy UUID of the job for which you want to abort. The screenshot below shows the location of the UUID for the selected job. Run the command. <installation-dir>/jre/bin/java -jar \ /tmp/CleanupStalledJob-jar-with-dependencies_v.10.0.2.jar \ - -database-properties <installation-dir>/conf/database.properties \ – -passkey <installation-dir>/conf/passkey \ – -jobId “c6fa2a6a-9bed-11eb-8215-10f00530b5d8” –output “jobid123” Locate and open cleanupStalledJob.login the directory where you ran the CleanupStalledJobcommand. Find the entry indicating that CleanupStalledJobwas able to find the job. It looks similar to this: | CleanupStalledJob | Cleaning up job c6fa2a6a-9bed-11eb-8215-10f00530b5d8 . . .
https://docs.cloudbees.com/docs/cloudbees-cd/10.2/tools-and-utilities/cleanupstalledjob
2021-11-27T12:19:11
CC-MAIN-2021-49
1637964358180.42
[]
docs.cloudbees.com
Picture - 4 minutes to read This document describes report elements that allow you to provide Snap reports with either static or dynamic graphics. The document consists of the following sections. General Information). Static Pictures affected by the options in the Picture Tools: Format tab of the main toolbar. To add an inline picture to a report, do the following. - In the document, place the caret in the position in which you wish to insert an inline picture. Click the Picture command in the General Tools: Insert tab of the main toolbar. The newly created inline picture is inserted in the current caret position and has an In Line with Text wrapping style. A Floating Picture can be freely resized and relocated using drag-and-drop. picture is inserted in the current caret position and has an In Line with Text wrapping style. On the Picture Tools | Format tab, in the Arrange group, click Wrap Text and select the required type of text wrapping around the selected object from the invoked list. A floating picture automatically anchors to the nearest text paragraph. The paragraph to which the selected picture is anchored is marked with an anchor icon ( ). You can anchor a picture to another paragraph using drag-and-drop. Dynamic. - Binding - specifies a data field that provides the data to be encoded as an image. - Empty Field Data Alias - specifies the text to show in a document instead of a blank space if a field receives an empty data source record. - Enable Empty Field Data Alias - shows or hides the text specifies as the Empty Field Data Alias value. - Show Placeholder - shows or hides a placeholder shown instead of a field result when a field receives an empty data source record. Has effect only if the Enable Empty Field Data Alias option is set to false. - Update Mode - specifies whether to preserve the image box size or the ratio of the original image; - Sizing - specifies how an image is resized to fit the size of the box.
https://docs.devexpress.com/WindowsForms/14800/controls-and-libraries/snap/graphical-user-interface/data-visualization-tools/picture?v=19.1
2021-11-27T12:40:08
CC-MAIN-2021-49
1637964358180.42
[array(['/WindowsForms/images/snap-end-user-elements-picture421041.png?v=19.1', 'snap-end-user-elements-picture4'], dtype=object) array(['/WindowsForms/images/snap-end-user-elements-picture321034.png?v=19.1', 'snap-end-user-elements-picture3'], dtype=object) ]
docs.devexpress.com
The Join Group smart service allows you to select a public group and add yourself to it as a member, when the group has an Automatic group membership policy. You cannot use this smart service to join a group that has a restricted or exclusive membership policy. Category: Identity Management Icon: Assignment Options: Attended/Unattended This section contains tab configuration details specific to this smart service. For more information about common configurations see the Process Node Properties page. The data tab displays the node inputs and outputs for the smart service. You can add additional inputs and outputs, if needed. The default input is Group. If the activity is run without being assigned to a user (or a group) you must designate a value for the Group node input. You can either manually enter the group, or a value can be generated using the Expression Editor. When using the Expression Editor, you can reference process variables, incorporate rules and constants, and other data. The expression you create is used at runtime to populate your node input. If the activity is not assigned, it runs as the user who started the process. It is also possible to set the unattended task to run as the process designer, which may not be useful for this activity. The activity does not return any values. On This Page
https://docs.appian.com/suite/help/21.3/Join_Group_Smart_Service.html
2021-11-27T10:53:41
CC-MAIN-2021-49
1637964358180.42
[]
docs.appian.com
- Docs - Denzel Documentation - - Getting Started - - Theme Installation Theme Installation Estimated reading : 2 minute First of all, you need to download the theme installable file from your account that purchased the item, go to your ThemeForest Account > Downloads Tab After the download is completed, you need to unzip the fol and select the way you want to install the theme. Once the download is complete, unzip the file and you would see the following packages: - Documentation – our detail documentation for the theme - Denzel Theme – for manual installation - Licensing – the theme license - Plugins – the plugin you need for the theme Method 1: Theme Installation Via Admin Panel Please, follow the steps below to install Denzel Theme - Step 1: Login to your WordPress Dasrdoard - Step 2: Navigate to Appearance > Themes - Step 3: Click Add New button on top of the page. - Step 4: Then click Upload Theme - Step 5: Next, Browse to the zip file that you have Downloaded from ThemeForest and click Install now , then wait for the installation process - Step 6: After done installing, click Activate the theme Method 2: Theme Installation Via FTP Using an FTP software like File Zilla or CuteFTP to upload the theme files to your WordPress site. Please, follow the steps below to install the theme via FTP: - Step 1: Log in to your server via your FTP client software ( FileZilla, Transmit, etc ) - Step 2: In extracted archive folder, find Denzel.zip. - Step 3: Upload the extracted theme folder into wp-content > themes - Step 4: Activate the newly installed theme by going to Appearance > Themes - Step 5: Done Recommended Article: Still Stuck? We can help you. Create a Support Ticket
https://docs.droitthemes.com/docs/denzel-documentation/getting-started/theme-installation/
2021-11-27T10:49:31
CC-MAIN-2021-49
1637964358180.42
[array(['https://docs.droitthemes.com/wp-content/uploads/2018/08/ti1.jpg', 'Step 1 - Theme Installation'], dtype=object) array(['https://docs.droitthemes.com/wp-content/themes/ddoc/assets/images/Still_Stuck.png', 'Still_Stuck'], dtype=object) ]
docs.droitthemes.com
Read Modifiers XAP ReadModifiers class (see DotNetDoc provides static methods and constants to decode read-type modifiers. The sets of modifiers are represented as integers with distinct bit positions representing different modifiers. Four main types of modifiers can be used: RepeatableRead- default modifier DirtyRead ReadCommitted ExclusiveReadLock You can use bitwise or the | operator to unite different modifiers. RepeatableRead, DirtyRead, and ReadCommitted are mutually exclusive (i.e. can’t be used together). ExclusiveReadLock can be joined with any of them. These modifiers can be set either at the proxy level - proxy.ReadModifiers= int, or at the operation level (e.g. using one of read/ ReadIfExists/ ReadMultiple/count methods with a modifiers parameter). Repeatable Read RepeatableRead is the default modifier, defined by the JavaSpaces specification. The RepeatableRead isolation level allows a transaction to acquire read locks on an object it returns to an application, and write locks an object it write, updates, or deletes. By using the RepeatableRead isolation level, space operations issued multiple times within the same transaction always yield the same result. A transaction using the RepeatableReadRead isolation level wait until the object that is write-locked by other transactions are unlocked before they acquire their own locks. This prevents them from reading “dirty” data. In addition, because other transactions cannot update or delete an object that is locked by a transaction using the RepeatableReadRead modifier, once set, enables read/ readIfExists/ readMultiple/count operations under a null transaction to have this complete visibility. Code Example // write something under txn X and commit, making it publicly visible proxy.Write( something, txnX, long.MaxValue); txnX.commit(); // update this something with a new one under a different txn Y proxy.Write( newSomething, txnY, long.MaxValue, 0); // all read operations (Read, ReadIfExists, ReadMultiple, Count) return the // version of the object before txnY was committed (newSomething). // operations can be performed with a new txn Z or a null txn proxy.Read( tmpl, null, ReadModifiers.DirtyRead); // Note: using the same txn (txn Y) will return matches that are visible under the transaction Read Committed The ReadCommitted ISpaceIterator, which performs ReadMultiple and keep their current status by registering notify templates. The ReadCommitted modifier is provided at the proxy level and the read API level. It is relevant for read, ReadIfExists, ReadMultiple, and count. ReadCommitted and DirtyReadCommitted mode. - To read the current state of a space object that is locked under transaction (take or update) should use Dirty Read mode. - Dirty read (without transaction) does not blocks transactional take operation. Code Example The examples below assumes you are using ISpaceProxy interface. ISpaceProxy proxy; // write an object under txn X and commit, making it publicly visible proxy.Write( user, txnX, long.MaxValue); txnX.commit(); // update this object with a new one under a different txn Y proxy.Write( user, txnY, 0, long.MaxValue); // all read operations (read, ReadIfExists, ReadMultiple, Count) return the last publicly visible match. // operations can be performed with a new txn Z or a null txn proxy.Read( user, txnZ, ReadModifiers.ReadCommitted); //. Code Example public void exclusiveReadLock() { // this will allow all read operations with this proxy to use Exclusive Read Lock mode proxy.ReadModifiers = ReadModifiers.ExclusiveReadLock; Lock lok = new Lock(); lok.key = 1; lok.data = "my data"; proxy.Write(lok, null, long.MaxValue); ITransactionManager mgr = GigaSpacesFactory.CreateDistributedTransactionManager (); ITransaction txn1 = mgr.Create(); Lock lock_template1 = new Lock(); lock_template1.key = 1; Lock lock1 = proxy.Read<Lock>(lock_template1, txn1, 10000); if (lock1 != null) { Console.WriteLine("Transaction " + txn1 + " Got exclusive Read Lock on Entry:" + lock1.key); } }
https://docs.gigaspaces.com/xap/10.1/dev-dotnet/transaction-read-modifiers.html
2021-11-27T10:50:23
CC-MAIN-2021-49
1637964358180.42
[]
docs.gigaspaces.com
Setting Window Properties Using STARTUPINFOfield: - The width and height, in pixels, of the window created by CreateWindow. - The location, in screen coordinates of the window created by CreateWindow. -: - The size of the new console window, in character cells. - The location of the new console window, in screen coordinates. - The size, in character cells, of the new console's screen buffer. - The text and background color attributes of the new console's screen buffer. - The title of the new console's window.
https://docs.microsoft.com/en-US/windows/win32/procthread/setting-window-properties-using-startupinfo
2021-11-27T11:34:02
CC-MAIN-2021-49
1637964358180.42
[]
docs.microsoft.com
To allow parallel building of Windows code, Accelerator virtualizes the registry and the file system. The following sections discuss important registry information. There are two relevant areas of registry usage during a build. By default, Accelerator virtualizes HKEY_CLASSES_ROOT (except HKEY_CLASSES_ROOT\Installer and HKEY_CLASSES_ROOT\Licenses). HKEY_CLASSES_ROOT This key contains file name extensions and the COM class registration (). Configuration data is stored under the program IDs, CLSID, Interface, TypeLib, AppId, and so on. For entities created during the build, this information must be virtualized to all involved agents. The following information is registered for a type library: \TypeLib\{libUUID} \TypeLib\{libUUID}\major.minor = human_readable_string \TypeLib\{libUUID}\major.minor\HELPDIR = [helpfile_path] \TypeLib\{libUUID}\major.minor\Flags = typelib_flags \TypeLib\{libUUID}\major.minor\lcid\platform = localized_typelib_filename Other entities that are registered by UUID are registered in different places: A ProgID("ApplicationName") maps to and from a CLSID(GUID). The CLSID maps to the actual ActiveX component ("APP.EXE"). The type library is available from the CLSID: \CLSID\TypeLib = {UUID of type library} \CLSID\{UUID} = human_readable_string \CLSID\{UUID}\ProgID = AppName.ObjectName.VersionNumber \CLSID\{UUID}\VersionIndependentProgID = AppName.ObjectName \CLSID\{UUID}\LocalServer[32] = filepath[/Automation] \CLSID\{UUID}\InProcServer[32] = filepath[/Automation] Applications that add interfaces must register the interfaces, so that: \Interface\{UUID} = InterfaceName \Interface\{UUID}\Typelib = LIBID \Interface\{UUID}\ProxyStubClsid[32] = CLSID All Other Keys Other keys are probably not relevant to the build. HKEY_LOCAL_MACHINE, HKEY_CURRENT_USER, HKEY_USERS, and HKEY_CURRENT_USER are machine specific. If other areas must be virtualized, you should add them to the emake-reg-root option. When a process in the build requests information from the registry, EFS first checks if the requested key is present in its cache. If the key is not present, EFS relays the request to the agent, which then sends the request to eMake. After receiving the response from eMake, the agent loads the key into the EFS cache, subject to the following conditions: If the key is not in the local registry on the agent host, the value from the eMake response is used unconditionally. If the key is in the local registry, the value from the local registry has precedence over the initial value from eMake, but not any value set by prior commands in the build. That is, if the key changes during the course of the build, the new value is used instead of any value from the local registry. The order of precedence is (lowest to highest): Value from eMake host registry before the build starts Value from the agent host registry, if any Value set by an earlier job in the build The additional checking of precedence lets Accelerator operate with tools that store host-specific licensing information in the registry. If the agent simply used the value from eMake unconditionally in all cases, such tools would fail to operate correctly. Electric Cloud strongly recommends against running builds locally on agent host machines. Running builds locally on agent host machines might add relevant keys to the local machine, which take precedence over the eMake machine’s keys. If a key that should come from the eMake machine (such as the typelib information for a lib generated during the build) is already present on the agent because of a local build, the wrong information is used, which might break a build.If an agent machine has locally-created keys, remove the typelibs that are created during the build from the registry. Any typelib with an invalid path name associated with it is a likely candidate for an “underlayed” lookup. Ideally, typelibs created by a build are known. At this point, you should check for their existence on the cluster. If an error occurs that indicates the direction of this problem (for example, a library/typelib cannot be found), investigate the failing agent’s registry. You can add a multistring registry value to the agent host inside HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ElectricFS to exclude processes from interfering with EFS and causing a crash. The ExcludeProcessList entry can list processes from McAfee AntiVirus (for example, Mcshield.exe and mfevtps.exe) or other antivirus software.
https://docs.cloudbees.com/docs/cloudbees-build-acceleration/11.0/configuration-guide/configuring-win/registry-info
2021-11-27T11:33:51
CC-MAIN-2021-49
1637964358180.42
[]
docs.cloudbees.com
$ oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' For OpenShift Container Platform OpenShift Container Platform OpenShift Container Platform redhat-operators registry.redhat.io: $ podman login registry.redhat.io OpenShift Container Platform: openshift-marketplace (2) spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.9 ..18>:<tag> \(2) --tag <registry>/<namespace>/<existing_index_image>:<tag> (3) Push the updated index image: $ podman push <registry>/<namespace>/<existing_index_image>:-redhat-operator-index-
https://docs.openshift.com/container-platform/4.9/operators/admin/olm-restricted-networks.html
2021-11-27T12:34:29
CC-MAIN-2021-49
1637964358180.42
[]
docs.openshift.com
Introduction The Snow Integration Connector for CloudSphere iQSonar is used for replication of third-party inventory information into Snow License Manager. The connector collects hardware and software inventory data as well as information about Oracle database products, options, management packs and users. Prerequisites A user account, assigned to a role including the RestAPI View permission. This is available as an extension to the iQSonar interface. A supported version of CloudSphere iQSonar is required. Refer to Snow Compatibility Matrix for versions supported by the Snow Integration Connector for CloudSphere iQSonar. Dependencies on other Snow products The following Snow product versions are required to support the integration connector for CloudSphere iQSonar: Snow Inventory Server 5 (or later) for processing of inventory data Snow License Manager 8 (or later) for presentation of inventory data Updates from older versions Warning Updating from an SQL-based version to the REST API-based version can result in massive computer duplicates. If you have used the Snow Integration Manager for CloudSphere iQSonar since before version SIM 5.24.1, and if you have not used HostnameOnly mode in Snow Inventory Server properly, you have to reach out to Snow support before updating CloudSphere iQsonar. Also refer to the Knowledge article Prevent Computer Duplicates before switching the CloudSphere iQSonar scanner from SQL to RestAPI, available on Snow Globe (requires login).
https://docs.snowsoftware.com/snow-integration-manager/en/UUID-a7ac52e8-d495-05f5-abda-4665952cb23b.html
2021-11-27T10:47:21
CC-MAIN-2021-49
1637964358180.42
[]
docs.snowsoftware.com
Configure SSO with Microsoft Azure AD or AD FS as your Identity Provider If you use Microsoft AzureAD or AD FS as your Identity Provider (IdP), follow these instructions to configure the Splunk platform for single sign-on. After you configure the Splunk platform for SSO, you can map groups form the IdP to those roles so that users can log in. See Map groups on a SAML identity provider to Splunk user roles so that users in those groups can log in. For information about configuring: https://<name>.splunkcloud.com/en-US/account/login?loginType=splunk Configure Splunk Software for SAML - Verify that your system meets all of the requirements. See Configure single sign-on with SAML. - In the Settings menu, select Authentication methods. - Select SAML as your authentication type. - Click Configure Splunk to use SAML. - On the SAML Groups page, click SAML Configuration. - Download or browse and select your metadata file, or copy and paste your metadata directly into the text window. Refer to your.!
https://docs.splunk.com/Documentation/Splunk/8.0.0/Security/ConfigureSSOAzureADandADFS
2021-11-27T12:46:14
CC-MAIN-2021-49
1637964358180.42
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Unable to build project after upgrade. Now what?: Use our Upgrade Wizard (started by using the Telerik > RadControls for WinForms > Upgrade Wizard menu item) and upgrade your project with a few clicks. More information is available in the following documentation article:. Manually change your project references: - Go ahead and open the References node in the Solution Explorer and make a list with all Telerik assemblies. - Then select and remove them. - Right click the References node > Add Reference - In the .NET tab (or the Assemblies > Extensions node in VS2012 and above) find the Telerik references from your list and add them back. With either approach, your project references will be updated to the new version and you wlll be able to successfully build your project.
https://docs.telerik.com/devtools/winforms/knowledge-base/unable-to-build-project-after-upgrade
2021-11-27T12:35:13
CC-MAIN-2021-49
1637964358180.42
[array(['images/ControlPanelHelp.png', None], dtype=object)]
docs.telerik.com
Check XDR status and manage endpoint groups. The Endpoint Inventory app allows you to view which features are enabled on your endpoints, as well as create and manage endpoint groups. For Apex One on-premises customers, only endpoints with the Apex One Patch installed can report to Trend Micro Vision One. After installing the Apex One Patch to Security Agents, allow around 10 minutes for online endpoints to report back. From the Agent Installer tab, select Download to obtain the installation package or a URL link to the Windows, macOS, or Linux installer. Install the Agent on as many endpoints as possible for maximum visibility. Endpoint Basecamp only supports HTTP proxies and does not support the use of proxy credentials. For more information on installing to Linux endpoints, see Deploying the Agent Installer to Linux Endpoints. For more information on installing to macOS endpoints, see Deploying the Agent Installer to Mac Endpoints.
https://docs.trendmicro.com/en-us/enterprise/trend-micro-xdr-online-help/inventory-management_001/endpoint-inventory-2.aspx
2021-11-27T12:01:42
CC-MAIN-2021-49
1637964358180.42
[]
docs.trendmicro.com
Getting Started¶ The easiest way to get started is to run Nivio using Docker. To compile it, you need Java 11. The Docker image is about 350MB and can be started with: docker run dedica/nivio Demo mode¶ docker run -e DEMO=1 dedica/nivio In the demo mode Nivio loads sample data for demonstration purposes. There is a demo in the directory ./nivio-demo/ which starts nginx to serve all example configs from the project and starts Nivio as Docker container. From this directory run docker-compose up then point your browser to. Adding your own content (seed config)¶ Make sure to read Using Templates to dynamically assign data before putting too much effort into item configuration. Nivio expects a seed configuration at start time (unless you want to run the demo mode). You need to set the environment variable SEED. If you want to use files on the host, modify the docker-compose.yml to bind to the corresponding folder, e.g: version: '3.2' services: nivio: image: dedica/nivio:latest environment: SEED: ${SEED} DEMO: ${DEMO} volumes: - type: bind source: /onmyhost/my/files/here target: /my/files/here ports: - 8080:8080 Then you can point to a specific file with the SEED environment variable: SEED=/my/files/here/file.yml docker-compose up Or you provide a URL that serves the yml files to Nivio: SEED= java -jar nivio then point your browser to the GUI at or the API at. Environment variables¶ The following environment variables can be set to configure nivio: A non-empty value causes Nivio to start in demo mode with prepared data. Use the value ‘all’ to load more landscapes. GitHub JSON Web Token (JWT) to connect to GitHub as a GitHub App. GitHub user name. Can also be used to connect as organization with OAuth. GitHUb OAuth Token to connect to GitHub via personal access token. GitHub password (for username/password login). The full URL to the GitLab API, e.g.. GitLab OAuth login password (optional). Personal token to access the GitLab API at GITLAB_HOST_URL (optional). GitLab OAuth login username (optional). If used, GITLAB_PASSWORD is also required). K8s master URL (optional). All variables from can be used. The base URL of Nivio to be used for frontends if running behind a proxy. Branding background color (hexadecimal only). Branding foreground color (hexadecimal only). A URL pointing to a logo. A welcome message on the front page. Accent color used for active elements (hexadecimal only). SMTP mail host. SMTP mail password. SMTP mail port. SMTP mail username. The port Nivio runs on. A semicolon-separated list of file paths containing landscape configurations. SonarQube login (username). SonarQube password. SonarQube proxy host (optional). SonarQube proxy port (optional). SonarQube server URL. Landscape configuration¶ The configuration file contains basic data, references to item descriptions sources, which can be local paths or URLs. The descriptions can be gathered by HTTP, i.e. it is possible to fetch files from protected sources via authentication headers. Think of GitLab or GitHub and the related tokens. You can also add state providers which are used to gather live data and thereby provide state for the items. To finetune the visual appearance of rendered landscapes, the automatic color choice for groups can be overridden as well. Deleting items¶ Items not referenced anymore in the descriptions will be deleted automatically on a complete and successful re-index run. If an error occurs fetching the source while indexing, the behaviour of the indexer changes to treat the available data as partial input. This means only upserts will happen and no deletion. Behind a proxy¶ If you deploy Nivio to run under a different path than root ( /), make sure to set the environment variables SERVER_SERVLET_CONTEXT_PATH and NIVIO_BASE_URL to the path. SERVER_SERVLET_CONTEXT_PATH: /my-landscape NIVIO_BASE_URL:
https://nivio.readthedocs.io/en/master/install.html
2021-11-27T12:26:59
CC-MAIN-2021-49
1637964358180.42
[]
nivio.readthedocs.io
kill_vm_gracefully¶ Description¶ Flags whether a graceful shutdown command should be sent to the VM guest OS before attempting to either halt the VM at the hypervisor side (sending an appropriate command to QEMU or even killing its process). Of course, this is only valid when kill_vm is set to ‘yes’. To force killing VMs without using a graceful shutdown command (such as ‘shutdown -h now’): kill_vm_gracefully = no
https://avocado-vt.readthedocs.io/en/91.0/cartesian/CartesianConfigReference-KVM-kill_vm_gracefully.html
2021-11-27T11:33:33
CC-MAIN-2021-49
1637964358180.42
[]
avocado-vt.readthedocs.io
You're viewing Apigee Edge documentation. View Apigee X documentation. On Tuesday, May 30, 2017,. -.
https://docs.apigee.com/release/notes/4170104-edge-private-cloud-release-notes?authuser=0&hl=ja
2021-11-27T13:42:06
CC-MAIN-2021-49
1637964358189.36
[]
docs.apigee.com
. The parameter's value range is from -1 to 1. By default, the parameter is set to 0 which means that the contrast remains unchanged. Adjusting Brightness# To adjust the brightness, enter a value for the BslBrightness parameter. The parameter's value range is from -1 to 1. By default, the parameter is set to 0 which means that the brightness remains unchanged. How It Works# Contrast# Adjusting the contrast changes the degree of difference between light and dark areas in the image. The more contrast you apply, the more pronounced the difference will be. camera uses an S-curve function to adjust the contrast. This allows you to improve the perceived contrast while preserving the dynamic range of the image. The more you increase the contrast, the more S-shaped the graph of the function will be: The figures above show that increasing the contrast in S-Curve mode has the following effects: - The S-curve gets flatter around its starting and end points and steeper around the center. As a result, contrast in light and dark areas of the image is reduced, and contrast in mid tones is increased. - Low input pixel values are lowered and high input pixel values are increased. As a result, extreme dark and light areas of your image are compressed, which further improves the perceived contrast. - As the curve always starts at (0,0) and ends at (Xmax,Ymax), the dynamic range of the image is preserved. Contrast settings below 0 in S-Curve mode will result in an inverted S-curve with opposite effects. Brightness# Adjusting the brightness allows you to lighten or darken the image by increasing or decreasing its tonal values. Adjusting the brightness moves the pivot point of the Brightness/Contrast function: - Increasing the brightness moves the pivot point towards the upper left. This means that the image will appear lighter. - Decreasing the brightness moves the pivot point to the lower right. This means that the image will appear darker.); // Set the Contrast parameter to 1.2 camera.BslContrast.SetValue(1.2); INodeMap& nodemap = camera.GetNodeMap(); // Set the Brightness parameter to 0.5 CFloatParameter(nodemap, "BslBrightness").SetValue(0.5); // Set the contrast.BslContrast].SetValue(1.2); /* Macro to check for errors */ #define CHECK(errc) if (GENAPI_E_OK != errc) printErrorAndExit(errc) GENAPIC_RESULT errRes = GENAPI_E_OK; /* Return value of pylon methods */ /* Set the Brightness parameter to 0.5 */ errRes = PylonDeviceSetFloatFeature(hdev, "BslBrightness", 0.5); CHECK(errRes); /* Set the contrast.
https://docs.baslerweb.com/brightness-and-contrast.html?filter=Camera:a2A2600-20gmPRO
2021-11-27T14:33:22
CC-MAIN-2021-49
1637964358189.36
[]
docs.baslerweb.com
Date: Wed, 10 Jun 2015 17:28:56 +0530 From: Mayuresh Kathe <[email protected]> To: [email protected] Subject: Re: FreeBSD and Docker Message-ID: <74b05bc6a0e5606ffe29b9a0230b1e86@kathe On 2015-06-10 16:41, [email protected] wrote: > Hi! > > Attempting to install the Discourse discussion forum > () on my DigitalOcean FreeBSD > VPS. Discourse requires Docker, however nothing happens when I try > `wget -qO- | sh` > (). > > What am I doing wrong? did you try getting that file first? as it is! maybe you could issue; wget once downloaded, try running the file; sh <filename> ~mayuresh Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=179444+0+/usr/local/www/mailindex/archive/2015/freebsd-questions/20150614.freebsd-questions
2021-11-27T14:58:48
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Version: 3.2.2 Getting Started NativeBase is a component library that enables devs to build universal design systems. It is built on top of React Native, allowing you to develop apps for Android, iOS and the Web. A Brief History of NativeBase NativeBase v1.xNativeBase started out as an open source framework that enabled developers to build high-quality mobile apps using React Native. The first version included UITabBar on iOS and Drawer on Android. NativeBase v1 was very well-received by the dev community. NativeBase v2.xThe second version was released with new components, preset themes, unified icons & more. The main focus of v2 was to make components easy to theme with very few modifications. From v2.4.1 onwards, NativeBase also included support for the web. NativeBase v3.xWe wanted to make NativeBase the go-to component library for anyone building with React Native and Web (in alpha). This version is accessible, highly customizable and consistent across android, iOS & web. That's not all though, read on for the full benefits of using v3. What's New with NativeBase v3?We had clear goals in mind while building version 3. Take a look at some of the new features we added: Multiplatform NativeBase supports multiple platforms; android, iOS and web. You can also customise properties using platform-specific props. Inherently Beautiful NativeBase ships with a default theme that provides beautiful components, out of the box. Accessible This version has out of the box accessibility including focus management, keyboard navigation and more. Customisable The default theme can be extended as you desire. You can also customise specific components for your app needs.
https://docs.nativebase.io/?utm_source=HomePage&utm_medium=header&utm_campaign=NativeBase_3
2021-11-27T15:17:49
CC-MAIN-2021-49
1637964358189.36
[]
docs.nativebase.io
Warning This version of the documentation is NOT an official release. You are looking at ‘latest’, which is in active and ongoing development. You can change versions on the bottom left of the screen. Weight normalization¶ Preliminaries¶ Suppose that the incoming synaptic weights of a neuron are given as \(\mathbf{w}=w_1, w_2, \ldots, w_n\). A plasticity rule might require that the vector norm \(|\mathbf{w}|\) remains constant. For example, the L1-norm \(|\mathbf{w}|_1\) is used in 1, 2: Keeping this norm constant at a desired target value, say \(w_{target}\), is typically done as an extra step after the main weights plasticity step (for example, after an STDP weight update). First, the norm is computed, and second, all weights \(w_1, \ldots, w_n\) are updated according to: Implementation in NEST¶ Because of the way that the data structures are arranged in NEST, normalizing the weights is a costly operation (in terms of time spent). One has to iterate over all the neurons, then for each neuron fetch all of its incoming connections, calculate the vector norm and perform the actual normalization, and finally to write back the new weights. This would look something like: def normalize_weights(neuron_gids_to_be_normalized, w_target=1): for neur in neuron_gids_to_be_normalized: conn = nest.GetConnections(target=[neur]) w = np.array(conn.weight) w_normed = w / sum(abs(w)) # L1-norm conn.weight = w_target * w_normed To apply normalization only to a certain synapse type, GetConnections() can be restricted to return only synapses of that type by specifying the model name, for example GetConnections(..., synapse_model="stdp_synapse"). To be formally correct, weight normalization should be done at each simulation timestep, but weights typically evolve on a much longer timescale than the timestep that the network is simulated at, so this would be very inefficient. Depending on how fast your weights change, you may want to perform a weight normalization, say, every 100 ms of simulated time, or every 1 s (or even less frequently). The duration of this interval can be chosen based on how far the norm is allowed to drift from \(w_{target}\): longer intervals allow for more drift. The magnitude of the drift can be calculated at the end of each interval, by subtracting the norm from its target, before writing back the normed vector to the NEST connection objects. To summarize, the basic strategy is to divide the total simulated time into intervals of, say, 100 ms. You simulate for 100 ms, then pause the simulation and normalize the weights (using the code above), and then continue simulating the next interval. References¶ - 1 Lazar, A. et al. (2009). SORN: a Self-organizing Recurrent Neural Network. Frontiers in Computational Neuroscience, 3. DOI: 10.3389/neuro.10.023.2009 - 2 Klos, C. et al. Bridging structure and function: A model of sequence learning and prediction in primary visual cortex. PLoS Computational Biology, 14(6). DOI: 10.1371/journal.pcbi.1006187
https://nest-simulator.readthedocs.io/en/v3.1/guides/weight_normalization.html
2021-11-27T14:24:02
CC-MAIN-2021-49
1637964358189.36
[]
nest-simulator.readthedocs.io
About Since 1975, our practice has offered General Internal Medicine, and Allergy, Asthma and Immunology. Internal Medicine care is offered to patients 12 years and up. Allergy and Immunology is offered to all ages. All practitioners are board certified. We offer the unique Rainbow Asthma Home Care Program, which has received praise for dramatically reducing asthma hospitalizations, emergency room visits, and costs. We sub-specialize in complete asthma care for all age groups.Immune deficiency diseases are part of the practice of allergy, asthma and immunology. We also specialize in recognizing the presence of immune deficiency diseases, and together with our colleagues in Boston, provide comprehensive services in this regard. Board certification: American Board of Allergy and Immunology Hospital affiliation: Holy Family Hospital
https://app.uber-docs.com/Specialists/SpecialistProfile/Thomas-Johnson-MD/New-England-Allergy
2021-11-27T14:34:46
CC-MAIN-2021-49
1637964358189.36
[]
app.uber-docs.com
You're viewing Apigee Edge documentation. View Apigee X documentation. On Tuesday, June 25, 2019, we began releasing a new version of Analytics for Apigee Edge for Public Cloud. New features and updates The following are the new features and updates in this release. Public release of asynchronous custom analytics reports Until now, you could only run analytics custom reports synchronously. For a synchronous report, you run the report request and the request is blocked until the analytics server provides a response. However, because a report might need to process a large amount of data (for example, 100's of GB), a synchronous report might fail because of a time out. This release adds support for running custom reports asynchronously. For an asynchronous report, you issue a report request and retrieve the results at a later time. Some situations when asynchronous query processing might be a good alternative include: - Analyzing and creating reports that span large time intervals. - Analyzing data with a variety of grouping dimensions and other constraints that add complexity to the query. - Managing queries when you find that data volumes have increased significantly for some users or organizations. You can run a custom report asynchronously from the Edge UI or by using the Edge API. You can also run a Monetization report asynchronously, as described in Manage reports. Public.
https://docs.apigee.com/release/notes/19062500-apigee-edge-public-cloud-release-notes-ui?authuser=0&hl=ja
2021-11-27T15:33:05
CC-MAIN-2021-49
1637964358189.36
[]
docs.apigee.com
The Split transaction feature allows you to divide up a transaction into multiple categories, representing, for example, the different items bought with a single purchase at a store. To enter a split transaction, using either the transaction form or the transaction list, start a new transaction, including entering the total amount. Then, instead of selecting a category, click the button to the right of the Category field. button to save the entire transaction. If there is still an unassigned amount, you will be prompted to either return to editing the splits, change the total transaction amount, or leave part of the transaction unassigned. Note that the category field in the transaction form or the transaction list now displays Split transaction.
https://docs.kde.org/trunk5/en/kmymoney/kmymoney/details.ledgers.split.html
2021-11-27T15:07:12
CC-MAIN-2021-49
1637964358189.36
[array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object) array(['split_transaction.png', 'Split transaction'], dtype=object)]
docs.kde.org
this function is a wrapper around getaddrinfo (for ipv6) More... this function is a wrapper around getaddrinfo (for ipv6) in the event this code is using ipv6, it calls getaddrinfo, and it tries to start the connection on each iteration in the linked list returned by getaddrinfo. if pSock is not NULL the following behavior happens. Process can be called in a thread, but Init and Finish must only be called from the parent once the thread is complete ctor finalizes and sets up csSockAddr (and pSock if not NULL), only needs to be called if Process returns 0, but can be called anyways if flow demands it simply sets up m_cHints for use in process the simplest part of the function, only calls getaddrinfo and uses only m_sHostname, m_pAddrRes, and m_cHints.
http://docs.znc.in/classCGetAddrInfo.html
2021-11-27T14:15:54
CC-MAIN-2021-49
1637964358189.36
[]
docs.znc.in
Fedora Infrastructure Kpartx Notes How to mount virtual partitions There can be multiple reasons you need to work with the contents of a virtual machine without that machine running. You have decommisioned the system and found you need to get something that was not backed up. The system is for some reason unbootable and you need to change some file to make it work. Forensics work of some sort. In the case of 1 and 2 the following commands and tools are invaluable. In the case of 3, you should work with the Fedora Security Team and follow their instructions completely. Steps to Work With Virtual System Find out what physical server the virtual machine image is on. Log into batcave01.iad2.fedoraproject.org /var/log/virthost-lists.out: $ grep proxy01.phx2.fedoraproject.org /var/log/virthost-lists.out virthost05.phx2.fedoraproject.org:proxy01.phx2.fedoraproject.org:running:1 If the image does not show up in the list then most likely it is an image which has been decommissioned. You will need to search the virtual hosts more directly: # for i in `awk -F: '{print $1}' /var/log/virthost-lists.out | sort -u`; do ansible $i -m shell -a 'lvs | grep proxy01.phx2' done Log into the virtual server and make sure the image is shutdown. Even in cases where the system is not working correctly it may have still have a running qemu on the physical server. It is best to confirm that the box is dead. # virsh destroy <hostname> We will be using the kpartx command to make the guest image ready for mounting. # lvs | grep <hostname> # kpartx -l /dev/mapper/<volume>-<hostname> # kpartx -a /dev/mapper/<volume>-<hostname> # vgscan # vgchange -ay /dev/mapper/<new volume-name> # mount /dev/mapper/<partition we want> /mnt Edit the files as needed. Tear down the tree. # umount /mnt # vgchange -an <volume-name> # vgscan # kpartx -d /dev/mapper/<volume>-<hostname>
https://docs.fedoraproject.org/si/infra/sysadmin_guide/virt-image/
2021-11-27T14:43:18
CC-MAIN-2021-49
1637964358189.36
[]
docs.fedoraproject.org
Scheduled Tasks for Inventory Beacon FlexNet Manager Suite 2020 R2 (On-Premises) The FlexNet Beacon installer creates two entries in the Windows Task Scheduler, to help automate the operation of the inventory beacon. Most inventory beacon activities are scheduled within the inventory beacon interface and do not use Windows Scheduled Tasks. By default, the inventory beacon tasks are configured to run frequently throughout the day. The purpose of each task is shown in the following table. FlexNet Manager Suite (On-Premises) 2020 R2
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/reference/FIB-SchedTasks.html
2021-11-27T15:08:10
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
Why Referer Tracking can be Inaccurate Referrer (originally misspelled referer:), can often be inaccurate when using this as a filter when analyzing traffic or marketing campaigns. Because this field is an optional part of the HTTP request, there can be several reasons why using this filter can produce inaccurate results. Referrer Blocking One such reason is due to referrer hiding. With the increase in privacy concerns, many servers and browsers will not send the referrer data, or can even send false data. Additionally, browsers will not send the referrer field when they are redirected using the “Refresh” field. Another reason is due to Secure (HTTPS) access. If the user connects from HTTPS to HTTP, this will also cause the referrer to not be sent. This is implemented to increase security for end users. Also, some pages that use HTML5 can add attributes (rel=”noreferrer”) which will instruct the user agent to not send a referrer. These are just some of the reasons why using referrer can be inaccurate. While they can be useful to see a general or overall view of where you're traffic is coming from, they might not be entirely correct. So, how does one get more accurate data on where a user comes from? More Accurate Results Using UTM Tags In Woopra, we recommend the use of UTM tags when linking traffic to your site. If you are running a campaign, Woopra can use these UTM tags to automatically track campaigns and allow you to filter more accurately on the traffic to your site. To get the most out of UTM tags and campaign tracking, please see our documentation on how you can set up UTM tags: Updated about 2 years ago
https://docs.woopra.com/docs/why-referer-tracking-can-be-inaccurate
2021-11-27T13:46:42
CC-MAIN-2021-49
1637964358189.36
[]
docs.woopra.com
Searching data / Building a query / Operations reference / Geolocation group / Geolocated Country (mmcountry) Geolocated Country (mmcountry) Description Geolocates an IPv4 address and returns its corresponding country code. This operation returns data for public IP addresses. If an IP is private, it will return null. Use the Geolocated Country with MaxMind GeoIP2 (mm2country) operation if you want to get the country codes of an IPv6 address (ip6 data type). How does it work in the search window? This operation needs only one argument: The data type of the new column is string. Example We want to get the country codes corresponding to the IP addresses in our clientIpAddress column, so we click Create column and select the Geolocated country operation. Select clientIpAddress as the argument and assign a name to the new column - let's call it country. You will get the following result: How does it work in LINQ? Use the operator as... and add the operation syntax to create the new column. The syntax is as follows: mmcountry(ip) Example Copy the following LINQ script and try the above example on the demo.ecommerce.data table. from demo.ecommerce.data select mmcountry(clientIpAddress) as country
https://docs.devo.com/confluence/ndt/v7.5.0/searching-data/building-a-query/operations-reference/geolocation-group/geolocated-country-mmcountry
2021-11-27T14:55:06
CC-MAIN-2021-49
1637964358189.36
[]
docs.devo.com
Blockerbugs Infrastructure SOP Blockerbugs is an app developed by Fedora QA to aid in tracking items related to release blocking and freeze exception bugs in branched Fedora releases. Contents Contact Information - Owner Fedora QA Devel #fedora-qa - Location iad2 - Servers blockerbugs01.iad2, blockerbugs02.iad2, blockerbugs01.stg.iad2 - Purpose Hosting the blocker bug tracking application for QA File Locations /etc/blockerbugs/settings.py - configuration for the app Building for Infra Do not use mock For whatever reason, the epel7-infra koji tag rejects SRPMs with the el7.centos dist tag. Make sure that you build SRPMs with: rpmbuild -bs --define='dist .el7' blockerbugs.spec Also note that this expects the release tarball to be in ~/rpmbuild/SOURCES/. Building with Koji You’ll need to ask someone who has rights to build into epel7-infra tag to make the build for you: koji build epel7-infra blockerbugs-0.4.4.11-1.el7.src.rpm Once the build is complete, it should be automatically tagged into epel7-infra-stg (after a ~15 min delay), so that you can test it on blockerbugs staging instance. Once you’ve verified it’s working well, ask someone with infra rights to move it to epel7-infra tag so that you can update it in production. Upgrading Blockerbugs is currently configured through ansible and all configuration changes need to be done through ansible. Upgrade Preparation (all upgrades) Blockerbugs is not packaged in epel, so the new build needs to exist in the infrastructure stg repo for deployment to stg or the infrastructure repo for deployments to production. See the blockerbugs documentation for instructions on building a blockerbugs RPM. Minor Upgrades (no database changes) Run the following on both blockerbugs01.iad2 and blockerbugs02.iad2 if updating in production. Update ansible with config changes, push changes to the ansible repo: roles/blockerbugs/templates/blockerbugs-settings.py.j2 Clear yum cache and update the blockerbugs RPM: yum clean expire-cache && yum update blockerbugs Restart httpd to reload the application: service httpd restart Major Upgrades (with database changes) Run the following on both blockerbugs01.phx2 and blockerbugs02.phx2 if updating in production. Update ansible with config changes, push changes to the ansible repo: roles/blockerbugs/templates/blockerbugs-settings.py.j2 Stop httpd on all relevant instances (if load balanced): service httpd stop Clear yum cache and update the blockerbugs RPM on all relevant instances: yum clean expire-cache && yum update blockerbugs Upgrade the database schema: blockerbugs upgrade_db Check the upgrade by running a manual sync to make sure that nothing unexpected went wrong: blockerbugs sync Start httpd back up: service httpd start
https://docs.fedoraproject.org/mr/infra/sysadmin_guide/blockerbugs/
2021-11-27T14:50:53
CC-MAIN-2021-49
1637964358189.36
[]
docs.fedoraproject.org
8.5.010.99 Voice Platform CTI Connector Release Notes Helpful Links Releases Info Product Documentation Genesys Products What's New This release contains the following new features and enhancements: - Two new parameters allow CTIC (ICM) to use a REFER message to transfer an ongoing call (a caller leg to CTIC is established) to a destination when CTIC detects that Cisco ICM is unavailable. Enable this functionality by setting the CTIC option [ICMC] ICMUnavailableAction to Transfer. The default is Hangup. Then use the IVR Profile option [gvp.service-parameters] cti.FailoverNumber to specify the transfer destination (a number). Resolved Issues This release contains no resolved issues. Upgrade Notes No special procedure is required to upgrade to release 8.5.010.99. This page was last edited on November 20, 2015, at 19:29.
https://docs.genesys.com/Documentation/RN/8.5.x/gvp-ctic85rn/gvp-ctic8501099
2021-11-27T13:34:10
CC-MAIN-2021-49
1637964358189.36
[]
docs.genesys.com
lightkurve.LightCurve.search_neighbors¶ - LightCurve.search_neighbors(limit: int = 10, radius: float = 3600.0, **search_criteria)[source]¶ Search the data archive at MAST for the most nearby light curves. By default, the 10 nearest neighbors located within 3600 arcseconds are returned. You can override these defaults by changing the limitand radiusparameters. If the LightCurve object is a Kepler, K2, or TESS light curve, the default behavior of this method is to only return light curves obtained during the exact same quarter, campaign, or sector. This is useful to enable coeval light curves to be inspected for spurious noise signals in common between multiple neighboring targets. You can override this default behavior by passing a mission, quarter, campaign, or sectorargument yourself. Please refer to the docstring of search_lightcurvefor a complete list of search parameters accepted. - Parameters - limitint Maximum number of results to return. - radiusfloat or astropy.units.Quantityobject Conesearch radius. If a float is given it will be assumed to be in units of arcseconds. - **search_criteriakwargs Extra criteria to be passed to search_lightcurve. - Returns - result SearchResultobject Object detailing the neighbor light curves found, sorted by distance from the current light curve.
https://docs.lightkurve.org/reference/api/lightkurve.LightCurve.search_neighbors.html
2021-11-27T15:31:11
CC-MAIN-2021-49
1637964358189.36
[]
docs.lightkurve.org
Git Revision as of 16:37, 17 May 2012, that's what Wikipedia says about Git. And it is of course all true. Unfortunately, that definition is all very clinical. That perspective is not gonna do much to help you know why you should even bother with Git. To answer that question, you need to know what Git does for you. You need to know what problems Git solves for you and how to get yourself out of a pickle when you find out you've been using your hammer all wrong. You need to be able to learn while using Git, to fix mistakes you discover along the way. Git is a tool to helping you keep track of your content. Your data, The information that drives your world.
https://docs.uabgrid.uab.edu/w/index.php?title=Git&diff=next&oldid=4101&printable=yes
2021-11-27T13:56:39
CC-MAIN-2021-49
1637964358189.36
[]
docs.uabgrid.uab.edu
Database Search Navigate to the Database Search Find your microbe of interest in a CosmosID database. Database search is located on the left hand side vertical navigation bar. Enter your Query The database search allows querying at both genus and species level. Furthermore, the database query also displays the number of strains available for both the genus and the species level search. Entering your query will produce the top 2 results followed by the tax id. Updated 4 months ago
https://docs.cosmosid.com/docs/database-search
2021-11-27T15:13:33
CC-MAIN-2021-49
1637964358189.36
[array(['https://files.readme.io/b76e533-Database_Search_2021-07-22_at_10.24.04_AM.jpeg', 'Database Search 2021-07-22 at 10.24.04 AM.jpeg'], dtype=object) array(['https://files.readme.io/b76e533-Database_Search_2021-07-22_at_10.24.04_AM.jpeg', 'Click to close...'], dtype=object) array(['https://files.readme.io/14f5867-Database_Search_2021-07-22_at_10.28.46_AM.jpeg', 'Database Search 2021-07-22 at 10.28.46 AM.jpeg'], dtype=object) array(['https://files.readme.io/14f5867-Database_Search_2021-07-22_at_10.28.46_AM.jpeg', 'Click to close...'], dtype=object) ]
docs.cosmosid.com
2.8.0¶ The Mirantis Container Cloud GA release 2.8.0: Introduces support for the Cluster release 5.8.0. - Enhancements - Addressed issues - Known issues - AWS - vSphere - OpenStack - Bare metal - Storage - IAM - LCM - Upgrade - Container Cloud web UI - Components versions - Artifacts
https://docs.mirantis.com/container-cloud/latest/release-notes/releases/2-8-0.html
2022-05-16T14:59:33
CC-MAIN-2022-21
1652662510138.6
[]
docs.mirantis.com
Retry Strategies To save resources, consider adapting to error conditions when polling query endpoints such as Book status, or with long running download endpoints such as Analytics (JSON) and Analytics (Excel). This strategy mitigates the kind of transient network fluctuations that can cause HTTP Status code errors. These intermittent interruptions are often the result of minor network hiccups. Although the interruptions are most likely small, they can still cause disruptions to your integration components. To guarantee data extraction: - Wrap all invocations in the appropriate error handler. - Implement a retry strategy for your HTTP calls with a reasonable back-off period. This ensures a high rate of consistent and successful API calls as the system usually fixes itself within a couple of retries. Updated 2 months ago Did this page help you?
https://docs.ocrolus.com/docs/retry-strategies
2022-05-16T14:54:18
CC-MAIN-2022-21
1652662510138.6
[]
docs.ocrolus.com
This section describes how to back up and restore an on-premises installation of Apigee Developer Services portal (or simply, the portal) using the Postgres pg_dump and pg_restore commands. Before you back up Before you can back up the portal, you must know the name of the portal's database. The PG_NAME property in the portal installation config file specifies the name of the portal's database. The example configuration file in the portal installation instructions uses the name "devportal". If you are unsure of the database name, check the config file, or use the following psql command to show the list of databases: psql -h localhost -d apigee -U postgres -l Where -U specifies the Postgres username used by the portal to access the database. This is the value of the DRUPAL_PG_USER property in the portal installation config file. You will be prompted for the database password. This command displays the following list of databases: Name | Owner | Encoding | Collate | Ctype | Access privileges -------------+--------+----------+-------------+-------------+--------------------- apigee | apigee | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/apigee + | | | | | apigee=CTc/apigee + | | | | | postgres=CTc/apigee devportal | apigee | UTF8 | en_US.UTF-8 | en_US.UTF-8 | newportaldb | apigee | UTF8 | en_US.UTF-8 | en_US.UTF-8 | postgres | apigee | UTF8 | en_US.UTF-8 | en_US.UTF-8 | template0 | apigee | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/apigee + | | | | | apigee=CTc/apigee template1 | apigee | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/apigee + | | | | | apigee=CTc/apigee Back up the portal To backup the portal: - Change to the Drupal directory, /opt/apigee/apigee-drupalby default: cd /opt/apigee/apigee-drupal - Back up your Drupal database instance with the pg_dumpcommand: pg_dump --dbname=portal_db --host=host_IP_address --username=drupaladmin --password --format=c > /tmp/portal.bak Where: - portal_db is the database name. This is the PG_NAMEproperty in the portal installation configuration file. If you are unsure of the database name, see Before you back up. - host_IP_address is the IP address of the portal node. - drupaladmin is the Postgres username used by the portal to access the database. You defined this with the DRUPAL_PG_USERproperty in the portal installation configuration file. When pg_dumpprompts you for the Postgres user password, use the password that you specified with the DRUPAL_PG_PASSproperty in the portal installation configuration file. The pg_dumpcommand creates a copy of the database. - Make a backup of your entire Drupal web root directory. The default webroot location is /opt/apigee/apigee-drupal/wwwroot. - Make a backup of the public files. By default, these files are located in /opt/apigee/apigee-drupal/wwwroot/sites/default/files. If that is the correct location, they will be backed up in Step 3. You must explicitly back them up if you moved them from the default location. - Make a backup of the private files in /opt/apigee/data/apigee-drupal-devportal/private. If you are unsure of the location of this directory, use the drush statuscommand to determine the location of the private file system. Restore the portal After you have backed up the portal, you can restore from your backup using the pg_restore command. To restore from the backup to an existing database, use the following command: pg_restore --clean --dbname=portal_db --host=localhost --username=apigee < /tmp/portal.bak To restore from the backup and create a new database, use the following command: pg_restore --clean --create --dbname=portal_db --host=localhost --username=apigee < /tmp/portal.bak You can also restore the backup files to the Drupal web root directory and the private files.
https://docs.apigee.com/private-cloud/v4.51.00/backup-portal?hl=pt-br
2022-05-16T15:20:35
CC-MAIN-2022-21
1652662510138.6
[]
docs.apigee.com
MBStyle Styling¶ This module allows GeoServer to use Mapbox style documents directly. A Mapbox style document is a JSON based language that defines the visual appearance of a map, what data is drawn, and the order data and styling to use when drawing. A Mapbox style document is an alternative to SLD, with different strengths and weaknesses: Both Mapbox style and SLD documents can define an entire Map, selecting what data is drawn and in what order. As both these documents define the order in which layers are drawn they can be used to define a Layer Group (using the Add Style Group link). Mapbox style documents provide less control then the GeoServer SLD vendor options or accomplish a result using a different approach. A GeoServer SLD TextSymbolizers allows a label priority used when drawing labels. This priority can even be generated on the fly using an expression. A MapBox style document producing the same effect would use several symbol layers, each drawing labels of different importance, and rely on draw order to ensure that the most important labels are drawn first (and are thus shown). The key advantage of Mapbox style documents is their compatibility with Mapbox GL JS and OpenLayers. GeoServer publishes the styles used for rendering, web mapping clients or mobile apps to make use of the same Mapbox style document used by GeoServer. Feel free to experiment with Mapbox style documents, use the GeoServer REST API to convert to SLD (complete with GeoServer vendor options). Mapbox style document support is not a part of GeoServer by default, but is available as an optional extension to install.
https://docs.geoserver.org/stable/en/user/styling/mbstyle/index.html
2022-05-16T14:27:40
CC-MAIN-2022-21
1652662510138.6
[]
docs.geoserver.org
Summary Scoping your report design requirements is imperative. A clear understanding of the business goals and your report audiences is essential in delivering an effective reporting solution. Other design aspects that you should consider include selecting the most fitting report type and factoring in user interface requirements. Need help? See our troubleshooting guide or provide specific feedback by reporting an issue.
https://docs.microsoft.com/en-us/learn/modules/power-bi-effective-requirements/8-summary
2022-05-16T14:31:34
CC-MAIN-2022-21
1652662510138.6
[]
docs.microsoft.com
Ingest: Accessing Storage Account Keys Under Home Screen search for storage accounts or select it from the azure services bar. Select the storage account that was created from the "creating a storage account doc" by searching or finding it in the list. Once you have selected the storage account on the left navigation panel under Security + networking click "Access Keys". In the access keys tab at the top click "show keys". Under key1 the value listed under "key" is your storage_key that will be used as the input for the ingest portion of the config.json file.
https://docs.scuba.io/scuba-lite/Ingest:-Accessing-Storage-Account-Keys.1845461027.html
2022-05-16T15:14:43
CC-MAIN-2022-21
1652662510138.6
[]
docs.scuba.io
The Management Pack for Google Cloud Platform supports the following services. Service Object Description CE Instance (K8s Nodes included) Persistent Disk Provides secure and customizable compute service that lets you create and run virtual machines on Google’s infrastructure Google Kubernetes Engine K8s Clusters K8s Container K8s Pods Provides managed environment for running containerized apps Big Query Big Query Dataset Big Query Table Provides data warehouse for business agility and insights Cloud VPN VPN Gateways VPN Tunnels Allows you to connect your infrastructure to Google Cloud Platform (GCP) on your terms, from anywhere. (Part of Hybrid Connectivity) VPC Network VPC Network Provides virtual network for Google Cloud resources and cloud-based services Cloud Storge Storage Buckets Provides object storage that’s secure, durable, and scalable Cloud SQL Cloud SQL Provides fully managed database for MySQL, PostgreSQL, and SQL Server Memorystore Memorystore Redis Memorystore Memcached Memcached Node Provides in-memory database for managed Redis and Memcached Cloud Spanner Cloud Spanner Provides cloud-native relational database with unlimited scale and 99.999% availability Sole-tenant Node Group Sole-tenant Node Group Provides dedicated hardware for compliance, licensing, and management Filestore Filestore Provides file storage that is highly scalable and secure Node Pool Node Pool Provides collection of CE Instances (Node) that are created by the K8 Cluster. Cloud Bigtable Cloud BigTable Cluster Cloud BigTable Cloud BigTable Table Provides cloud-native wide-column database for large-scale, low-latency workloads Firebase Realtime Database Firebase Realtime Database Provides NoSQL database for storing and syncing data in real time Firestore Database Firestore Database Provides cloud-native document database for building rich mobile, web, and IoT apps Parent topic: Google Cloud Platform
https://docs.vmware.com/en/vRealize-Operations/Cloud/com.vmware.vcom.config.doc/GUID-E7F6E8B9-3AE0-4692-820B-028430A4F1F7.html
2022-05-16T16:44:03
CC-MAIN-2022-21
1652662510138.6
[]
docs.vmware.com
The "sys_anim_init" Function #Syntax sys_anim_init(entity, index, anim, duration, loop, reverse, ease); #Description Initializes a given scripted animation for the input entity which can be performed with sys_anim_perform. As this script is run automatically by any animation scripts tailored to specific entity types, it is almost never necessary to run sys_anim_init manually. See Animations for a list of included animation scripts and how to make your own.
https://docs.xgasoft.com/vngen/reference-guide/engine/animations/sys_anim_init/
2022-05-16T14:25:12
CC-MAIN-2022-21
1652662510138.6
[]
docs.xgasoft.com
Speed Up Model Training¶ When you are limited with the resources, it becomes hard to speed up model training and reduce the training time without affecting the model’s performance. There are multiple ways you can speed up your model’s time to convergence. Training on Accelerators¶ Use when: Whenever possible! With Lightning, running on GPUs, TPUs, IPUs on multiple nodes is a simple switch of a flag. GPU Training¶ Lightning supports a variety of plugins to speed up distributed GPU training. Most notably: # run on 1 gpu trainer = Trainer(accelerator="gpu", devices=1) # train on 8 GPUs, using the DDP strategy trainer = Trainer(accelerator="gpu", devices=8, strategy="ddp") # train on multiple GPUs across nodes (uses 8 GPUs in total) trainer = Trainer(accelerator="gpu", devicesStrategy performs three GPU transfers for EVERY batch: Copy the model to the device. Copy the data to the device. Copy the outputs of each device back to the main device. Whereas DDPStrategy only performs two transfer operations, making DDP much faster than DP: Moving data to the device. Transfer and sync gradients. For more details on how to tune performance with DDP, please see the DDP Optimizations section., and our suggestions: num_workers=0means ONLY the main process will load batches (that can be a bottleneck). num_workers=1means ONLY one worker (just not the main process) will load data, but it will still be slow. The performance of high there is", ".*Consider increasing the value of the `num_workers` argument*") # or to ignore all warnings that could be false positives from pytorch_lightning.utilities.warnings import PossibleUserWarning warnings.filterwarnings("ignore", category=PossibleUserWarning) Spawn¶ When using strategy="ddp_spawn" or training on TPUs, the way multiple GPUs/TPU cores are used is by calling torch.multiprocessing .spawn() under the hood. The problem is that PyTorch has issues with num_workers>0 when using .spawn(). For this reason, we recommend you use strategy="ddp" so you can increase the num_workers, however since DDP doesn’t work in an interactive environment like IPython/Jupyter notebooks your script has to be callable like so: python my_program.py However, using strategy="ddp_spawn" enables to reduce memory usage with In-Memory Dataset and shared memory tensors. For more info, checkout Sharing Datasets Across Process Boundaries section. Persistent Workers¶ When using strategy="ddp_spawn" and num_workers>0, consider setting persistent_workers=True inside your DataLoader since it can result in data-loading bottlenecks and slowdowns. This is a limitation of Python .spawn() and PyTorch. TPU Training¶ You can set the devices trainer argument to 1, [7] (specific core) or eight cores. # train on 1 TPU core trainer = Trainer(accelerator="tpu", devices=1) # train on 7th TPU core trainer = Trainer(accelerator="tpu", devices=[7]) # train on 8 TPU cores trainer = Trainer(accelerator="tpu", devices=8) To train on more than eight cores Speed Up Model Training and Plugins guides. Early Stopping¶ Usually, long training epochs can lead to either overfitting or no major improvements in your metrics due to no limited convergence. Here EarlyStopping callback can help you stop the training entirely by monitoring a metric of your choice. You can read more about it here. Mixed Precision (16-bit) Training¶ Lower precision, such as the 16-bit floating-point, enables the training and deployment of large neural networks since they require less memory, enhance data transfer operations since they required less memory bandwidth and run match operations much faster on GPUs that support Tensor Core. upto +3X speedups on modern GPUs. Lightning offers mixed precision training for GPUs and CPUs, as well as bfloat16 mixed precision training for TPUs. # 16-bit precision trainer = Trainer(precision=16, accelerator="gpu", devices=4) Read more about mixed-precision training. it to a max number of epochs. Use the min_epochs and max_epochs Trainer flags to set the number of epochs to run. Setting min_epochs=N makes sure that the training will run for at least N epochs. Setting max_epochs=N will ensure that training won’t happen after N epochs. # interrupt fewer validation checks. You can limit validation check to only run every n epochs using the check_val_every_n_epoch Trainer flag. # default trainer = Trainer(check_val_every_n_epoch=1) # runs validation after every 7th Epoch trainer = Trainer(check_val_every_n_epoch=7) Validation Within Training Epoch¶ Use when: You have a large training dataset and want to run mid-epoch validation checks. For large datasets, it’s often desirable to check validation multiple times within a training epoch. Pass in a float to check that often within one training epoch. Pass in an int K to check every K training batch. Must use an int if using an IterableDataset. # default trainer = Trainer(val_check_interval=1.0) # check every 1/4 th of an epoch trainer = Trainer(val_check_interval=0.25) # check every 100 train batches (ie: for IterableDatasets or fixed frequency) trainer = Trainer(val_check_interval=100) on, which means all parameters from B exclusive to A will have their requires_gradattribute set to False. Restoring their original state of an advanced use case: # Scenario for a GAN with gradient accumulation every two improve performance, you can override optimizer_zero_grad(). For a more detailed explanation of. Clear Cache¶ Don’t call torch.cuda.empty_cache() unnecessarily! Every time you call this, ALL your GPUs have to wait to sync. Transferring module’s __init__ method: # bad self.t = torch.rand(2, 2, device=self.device) # good self.register_buffer("t", torch.rand(2, 2))
https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html
2022-05-16T14:50:42
CC-MAIN-2022-21
1652662510138.6
[]
pytorch-lightning.readthedocs.io
This article explains on a conceptual level how Airlock 2FA mobile-only authentication works. It covers both the app-to-app and the SDK usage cases. Mobile-only authentication is sometimes also referred to as single device authentication and refers to the Airlock 2FA authentication scheme where both the business application (e.g. banking mobile app) and the Airlock 2FA authentication functions run on the same smartphone. Mobile-only authentication uses IAM's authentication flow REST API. Goal - Understand mobile-only authentication in general. - Understand the differences between SDK-usage and app-to-app. - Understand the interaction between involved components. - Learn details about prerequisites and limitations. All following procedures are exemplary and will vary according to your setup or needs. Initial thoughts Mobile-only authentication is required if the target application - the one the user is authenticated for - is on the same mobile device as the Airlock 2FA authenticating app. There are three cases:. - The REST authentication flow is configured to support mobile-only authentication. - The user's smartphone is connected to the internet and is able to connect to the Futurae cloud. - The business app has enrolled an Airlock 2FA factor. Limitations The following limitations apply: - The IAM Loginapp web interfaces do not support browser applications with app-to-app communication. This applies to IAM version 7.6 and might be improved in future versions. However, the IAM REST API provides the necessary functions. Thus, a custom login front-end may implement the use-case.
https://docs.airlock.com/iam/7.6/data/mobileonlyau.html
2022-05-16T14:38:59
CC-MAIN-2022-21
1652662510138.6
[]
docs.airlock.com
This event occurs when an error has arisen in the connection. property OnError: TDAConnectionErrorEvent; Write the OnError event handler to respond to errors that arise with connection. Check the E parameter to get the error code. Set the Fail parameter to False to prevent an error dialog from being displayed and to raise the EAbort exception to cancel current operation. The default value of Fail is True.
https://docs.devart.com/mydac/Devart.Dac.TCustomDAConnection.OnError.htm
2022-05-16T15:23:34
CC-MAIN-2022-21
1652662510138.6
[]
docs.devart.com
What is. A StereoMode value is also available to display the old stereo 3D videos by separating cyan and red colors (AnaGlyph) Technical Details The 3D videos can be compressed in the following two ways: - Separate track for each eye. - Combine each eye tracking into a single track. The MKV file container supports both of these. For the single-track videos which are easier with the 3D material inside them, you have to set the StereoMode field which decides either the planes are assembled in the mono or left right combined track. You can use one of the following StereoMode field values: For the multiple tracks, the MKV container needs to decide the functionality of each track separately. The TrackOperation with TrackCombinePlanes are used to do the job.
https://docs.fileformat.com/video/mk3d/
2022-05-16T14:58:27
CC-MAIN-2022-21
1652662510138.6
[]
docs.fileformat.com
Use SQL Server Profiler to Create a SQL Trace Collection Set Applies to: SQL Server (all supported versions) In SQL Server you can exploit the server-side trace capabilities of SQL Server Profiler to export a trace definition that you can use to create a collection set that uses the Generic SQL Trace collector type. There are two parts to this process: Create and export a SQL Server Profiler trace. Script a new collection set based on an exported trace. The scenario for the following procedures involves collecting data about any stored procedure that requires 80 milliseconds or longer to complete. In order to complete these procedures you should be able to: Use SQL Server Profiler to create and configure a trace. Use SQL Server Management Studio to open, edit, and execute a query. Create and export a SQL Server Profiler trace In SQL Server Management Studio, open SQL Server Profiler. (On the Tools menu, click SQL Server Profiler.) In the Connect to Server dialog box, click Cancel. For this scenario, ensure that duration values are configured to display in milliseconds (the default). To do this, follow these steps: On the Tools menu, click Options. In the Display Options area, ensure that the Show values in Duration column in microseconds check box is cleared. Click OK to close the General Options dialog box. On the File menu, click New Trace. In the Connect to Server dialog box, select the server that you want to connect to, and then click Connect. The Trace Properties dialog box appears. On the General tab, do the following: In the Trace name box, type the name that you want to use for the trace. For this example, the trace name is SPgt80. In the Use the template list, select the template to use for the trace. For this example, click TSQL_SPs. On the Events Selection tab, do the following: Identify the events to use for the trace. For this example, clear all check boxes in the Events column, except for ExistingConnection and SP:Completed. In the lower-right corner, select the Show all columns check box. Click the SP:Completed row. Scroll across the row to the Duration column, and then select the Duration check box. In the lower-right corner, click Column Filters to open the Edit Filter dialog box. In the Edit Filter dialog box, do the following: In the filter list, click Duration. In the Boolean operator window, expand the Greater than or equal node, type 80 as the value, and then click OK. Click Run to start the trace. On the toolbar, click Stop Selected Trace or Pause Selected Trace. On the File menu, point to Export, point to Script Trace Definition, and then click For SQL Trace Collection Set. In the Save As dialog box, type the name that you want to use for the trace definition in the File name box, and then save it in the location that you want. For this example, the file name is the same as the trace name (SPgt80). Click OK when you receive a message that the file was successfully saved, and then close SQL Server Profiler. Script a new collection set from a SQL Server Profiler trace In SQL Server Management Studio, on the File menu, point to Open, and then click File. In the Open File dialog box, locate and then open the file that you created in the previous procedure (SPgt80). The trace information that you saved is opened in a Query window and merged into a script that you can run to create the new collection set. Scroll through the script and make the following replacements, which are noted in the script comment text: Replace SQLTrace Collection Set Name Here with the name that you want to use for the collection set. For this example, name the collection set SPROC_CollectionSet. Replace SQLTrace Collection Item Name Here with the name that you want to use for the collection item. For this example, name the collection item SPROC_Collection_Item. Click Execute to run the query and to create the collection set. In Object Explorer, verify that the collection set was created. To do this, follow these steps: Right-click Management, and then click Refresh. Expand Management, and then expand Data Collection. The SPROC_CollectionSet collection set appears at the same level as the System Data Collection Sets node. By default, the collection set is disabled. Use Object Explorer to edit the properties of SPROC_CollectionSet, such as the collection mode and upload schedule. Follow the same procedures that you would for the System Data collection sets that are provided with the data collector. Example The following code sample is the final script resulting from the steps documented in the preceding procedures. /*************************************************************/ -- SQL Trace collection set generated from SQL Server Profiler -- Date: 11/19/2007 12:55:31 AM /*************************************************************/'SPROC_CollectionSet', variables for the collection item. DECLARE @trace_definition xml; DECLARE @collection_item_id int; -- Define the trace parameters as an XML variable SELECT @trace_definition = convert(xml, N'<ns:SqlTraceCollector xmlns:ns"DataCollectorType" use_default="0"> <Events> <EventType name="Sessions"> <Event id="17" name="ExistingConnection" columnslist="1,2,14,26,3,35,12" /> </EventType> <EventType name="Stored Procedures"> <Event id="43" name="SP:Completed" columnslist="1,2,26,34,3,35,12,13,14,22" /> </EventType> </Events> <Filters> <Filter columnid="13" columnname="Duration" logical_operator="AND" comparison_operator="GE" value="8'SPROC_Collection_Item', Feedback Submit and view feedback for
https://docs.microsoft.com/en-us/sql/relational-databases/data-collection/use-sql-server-profiler-to-create-a-sql-trace-collection-set?view=sql-server-2017
2022-05-16T17:07:40
CC-MAIN-2022-21
1652662510138.6
[]
docs.microsoft.com
Storing a coverage in a JDBC database¶ Warning The screenshots on this tutorial have not yet been updated for the 2.0.x user interface. But most all the rest of the information should be valid, and the user interface is roughly the same, but a bit more easy to use. Warning The imagemosaic-jdbc module is a community extension, thus, unsupported. Introduction¶ This tutorial describes the process of storing a coverage along with its pyramids in a jdbc database. The ImageMosaic JDBC plugin is authored by Christian Mueller and is part of the geotools library. The full documentation is available in GeoTools Image Mosaic JDBC <library/coverage/jdbc/index.html>. This tutorial will show one possible scenario, explaining step by step what to do for using this module in GeoServer (since Version 1.7.2) Getting Started¶ We use postgis/postgres as database engine, a database named “gis” and start with an image from openstreetmap. We also need this utility . The best way to install with all dependencies is downloading from here Create a working directory, lets call it working ,download this image with a right mouse click (Image save as …) and save it as start_rgb.png Check your image with: gdalinfo start_rgb.png This image has 4 Bands (Red,Green,Blue,Alpha) and needs much memory. As a rule, it is better to use images with a color table. We can transform with rgb2pct (rgb2pct.py on Unix).: rgb2pct -of png start_rgb.png start.png Compare the sizes of the 2 files. Afterwards, create a world file start.wld in the working directory with the following content.: 0.0075471698 0.0000000000 0.0000000000 -0.0051020408 8.9999995849 48.9999999796 Preparing the pyramids and the tiles¶ If you are new to tiles and pyramids, take a quick look here How many pyramids are needed ?¶ Lets do a simple example. Given an image with 1024x1024 pixels and a tile size with 256x256 pixels.We can calculate in our brain that we need 16 tiles. Each pyramid reduces the number of tiles by a factor of 4. The first pyramid has 16/4 = 4 tiles, the second pyramid has only 4/4 = 1 tile. Solution: The second pyramid fits on one tile, we are finished and we need 2 pyramids. The formula for this: number of pyramids = log(pixelsize of image) / log(2) - log (pixelsize of tile) / log(2). Try it: Go to Google and enter as search term “log(1024)/log(2) - log(256)/log(2)” and look at the result. If your image is 16384 pixels , and your tile size is 512 pixels, it is log(16384)/log(2) - log(512)/log(2) = 5 If your image is 18000 pixels, the result = 5.13570929. Thake the floor and use 5 pyramids. Remember, the last pyramid reduces 4 tiles to 1 tile, so this pyramid is not important. If your image is 18000x12000 pixel, use the bigger dimension (18000) for the formula. For creating pyramids and tiles, use from the gdal project. The executeable for Windows users is gdal_retile.bat or only gdal_retile, Unix users call gdal_retile.py Create a subdirectory tiles in your working directory and execute within the working directory: gdal_retile -co "WORLDFILE=YES" -r bilinear -ps 128 128 -of PNG -levels 2 -targetDir tiles start.png What is happening ? We tell gdal_retile to create world files for our tiles (-co “WORLDFILE=YES”), use bilinear interpolation (-r bilinear), the tiles are 128x128 pixels in size (-ps 128 128) , the image format should be PNG (-of PNG), we need 2 pyramid levels (-levels 2) ,the directory for the result is tiles (-targetDir tiles) and the source image is start.png. Note A few words about the tile size. 128x128 pixel is proper for this example. Do not use such small sizes in a production environment. A size of 256x256 will reduce the number of tiles by a factor of 4, 512x512 by a factor of 16 and so on. Producing too much tiles will degrade performance on the database side (large tables) and will also raise cpu usage on the client side ( more image operations). Now you should have the following directories workingcontaining start.png, start.wldand a subdirectory tiles. working/tilescontaining many *.pngfiles and associated *.wldfiles representing the tiles of start.png working/tiles/1containing many *.pngfiles and associated *.wldfiles representing the tiles of the first pyramid working/tiles/2containing many *.pngfiles and associated *.wldfiles representing the tiles of the second pyramid Configuring the new map¶ The configuration for a map is done in a xml file. This file has 3 main parts. The connect info for the jdbc driver The mapping info for the sql tables Configuration data for the map Since the jdbc connect info and the sql mapping may be reused by more than one map, the best practice is to create xml fragments for both of them and to use xml entity references to include them into the map xml. First, find the location of the GEOSERVER_DATA_DIR. This info is contained in the log file when starting GeoServer.: ---------------------------------- - GEOSERVER_DATA_DIR: /home/mcr/geoserver-1.7.x/1.7.x/data/release ---------------------------------- Put all configuration files into the coverages subdirectory of your GeoServer data directory. The location in this example is /home/mcr/geoserver-1.7.x/1.7.x/data/release/coverages Create a file connect.postgis.xml.incwith the following content <connect> <!-- value DBCP or JNDI --> <dstype value="DBCP"/> <!-- <jndiReferenceName value=""/> --> <username value="postgres" /> <password value="postgres" /> <jdbcUrl value="jdbc:postgresql://localhost:5432/gis" /> <driverClassName value="org.postgresql.Driver"/> <maxActive value="10"/> <maxIdle value="0"/> </connect> The jdbc user is “postgres”, the password is “postgres”, maxActive and maxIdle are parameters of the apache connection pooling, jdbcUrl and driverClassName are postgres specific. The name of the database is “gis”. If you deploy GeoServer into a J2EE container capable of handling jdbc data sources, a better approach is <connect> <!-- value DBCP or JNDI --> <dstype value="JNDI"/> <jndiReferenceName value="jdbc/mydatasource"/> </connect> For this tutorial, we do not use data sources provided by a J2EE container. The next xml fragment to create is mapping.postgis.xml.inc <!-- possible values: universal,postgis,db2,mysql,oracle --> <spatialExtension name="postgis"/> <mapping> <masterTable name="mosaic" > <coverageNameAttribute name="name"/> <maxXAttribute name="maxX"/> <maxYAttribute name="maxY"/> <minXAttribute name="minX"/> <minYAttribute name="minY"/> <resXAttribute name="resX"/> <resYAttribute name="resY"/> <tileTableNameAtribute name="TileTable" /> <spatialTableNameAtribute name="SpatialTable" /> </masterTable> <tileTable> <blobAttributeName name="data" /> <keyAttributeName name="location" /> </tileTable> <spatialTable> <keyAttributeName name="location" /> <geomAttributeName name="geom" /> <tileMaxXAttribute name="maxX"/> <tileMaxYAttribute name="maxY"/> <tileMinXAttribute name="minX"/> <tileMinYAttribute name="minY"/> </spatialTable> </mapping> The first element <spatialExtension> specifies which spatial extension the module should use. “universal” means that there is no spatial db extension at all, meaning the tile grid is not stored as a geometry, using simple double values instead. This xml fragment describes 3 tables, first we need a primary table where information for each pyramid level is saved. Second and third, the attribute mappings for storing image data, envelopes and tile names are specified. To keep this tutorial simple, we will not further discuss these xml elements. After creating the sql tables things will become clear. Create the configuration xml osm.postgis.xmlfor the map (osm for “open street map”) <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE ImageMosaicJDBCConfig [ <!ENTITY mapping PUBLIC "mapping" "mapping.postgis.xml.inc"> <!ENTITY connect PUBLIC "connect" "connect.postgis.xml.inc">]> <config version="1.0"> <coverageName name="osm"/> <coordsys name="EPSG:4326"/> <!-- interpolation 1 = nearest neighbour, 2 = bilinear, 3 = bicubic --> <scaleop interpolation="1"/> <verify cardinality="false"/> &mapping; &connect; </config> This is the final xml configuration file, including our mapping and connect xml fragment. The coverage name is “osm”, CRS is EPSG:4326. <verify cardinality="false"> means no check if the number of tiles equals the number of rectangles stored in the db. (could be time consuming in case of large tile sets). This configuration is the hard stuff, now, life becomes easier :-) Using the java ddl generation utility¶ The full documentation is in GeoTools User Manual: Using the java ddl generation utility. To create the proper sql tables, we can use the java ddl generation utility. This utility is included in the gt-imagemosaic-jdbc-version.jar. Assure that this jar file is in your WEB-INF/lib directory of your GeoServer installation. Change to your working directory and do a first test: java -jar <your_geoserver_install_dir>/webapps/geoserver/WEB-INF/lib/gt-imagemosaic-jdbc-{version}.jar The reply should be: Missing cmd import | ddl Create a subdirectory sqlscripts in your working directory. Within the working directory, execute: java -jar <your_geoserver_install_dir>/webapps/geoserver/WEB-INF/lib/gt-imagemosaic-jdbc-{version}.jar ddl -config <your geoserver data dir >/coverages/osm.postgis.xml -spatialTNPrefix tileosm -pyramids 2 -statementDelim ";" -srs 4326 -targetDir sqlscripts Explanation of parameters In the directory working/sqlscripts you will find the following files after execution: createmeta.sql dropmeta.sql add_osm.sql remove_osm.sql Note IMPORTANT: Look into the files createmeta.sql and add_osm.sql and compare them with the content of mapping.postgis.xml.inc. If you understand this relationship, you understand the mapping. The generated scripts are only templates, it is up to you to modify them for better performance or other reasons. But do not break the relationship to the xml mapping fragment. Executing the DDL scripts¶ For user “postgres”, databae “gis”, execute in the following order: psql -U postgres -d gis -f createmeta.sql psql -U postgres -d gis -f add_osm.sql To clean your database, you can execute remove_osm.sql and dropmeta.sql after finishing the tutorial. Importing the image data¶ The full documentation is in GeoTools User Manual: Using the java ddl generation utility. First, the jdbc jar file has to be in the lib/ext directory of your java runtime. In my case I had to copy postgresql-8.1-407.jdbc3.jar. Change to the working directory and execute: java -jar <your_geoserver_install_dir>/webapps/geoserver/WEB-INF/lib/gt-imagemosaic-jdbc-{version}.jar import -config <your geoserver data dir>/coverages/osm.postgis.xml -spatialTNPrefix tileosm -tileTNPrefix tileosm -dir tiles -ext png This statement imports your tiles including all pyramids into your database. Configuring GeoServer¶ Start GeoServer and log in.Underyou should see If there is no line starting with “ImageMosaicJDBC”, the gt-imagemosiac-jdbc-version.jar file is not in your WEB-INF/lib folder. Go to and fill in the formular Press New and fill in the formular Press Submit. Press Apply, then Save to save your changes. Next selectand select “osm”. Press New and you will enter the Coverage Editor. Press Submit, Apply and Save. Underyou will find a new layer “topp:osm”. Select it and see the results If you think the image is stretched, you are right. The reason is that the original image is georeferenced with EPSG:900913, but there is no support for this CRS in postigs (at the time of this writing). So I used EPSG:4326. For the purpose of this tutorial, this is ok. Conclusion¶ There are a lot of other configuration possibilities for specific databases. This tutorial shows a quick cookbook to demonstrate some of the features of this module. Follow the links to the full documentation to dig deeper, especially if you are concerned about performance and database design. If there is something which is missing, proposals are welcome.
https://docs.geoserver.org/stable/en/user/tutorials/imagemosaic-jdbc/imagemosaic-jdbc_tutorial.html
2022-05-16T15:23:34
CC-MAIN-2022-21
1652662510138.6
[]
docs.geoserver.org
Logistics. But now you can easily edit your shipment parameters even if it’s already added to the list. Technical improvements - Tracking links for several packages are now sent in one email. - Refined API access rights for users with different roles. Bug fixes - Fixed an issue that duplicated some of the triggered alert events. - Fixed an issue that caused an error after following a tracking link from an email. - Fixed an issue that caused an error while trying to delete all the alerts. - Fixed an issue with a wrong error message while creating a device group.
https://docs.moeco.io/logistics/rn/3.3/
2022-05-16T15:35:42
CC-MAIN-2022-21
1652662510138.6
[]
docs.moeco.io
Price-info Widgets on Magento 1 For adding Price-info Widget to your Magento 1 Website, follow these instructions Step 1: Find view.phtml from the following location in your hosting (Magento installed Folder) and open it with your desired text-editor YOUR_MAGENTO_FOLDER/app/design/frontend/rwd/YOUR_TEMPLATE_FOLDER/template/catalog/product/view.phtml Step 2: Find "getPriceHtml" There should be only one occurrence of the "getPriceHtml" string, and it may look similar to: <?php echo $this->getPriceHtml($_product); ?> Step 3: Place the following <script> tag below the "getPriceHtml" code. Replace PLACE_YOUR_MERCHANT_ID with your unique merchant ID. This will have been provided to you in your welcome email. If you are unsure of your merchant ID, please reach out to [email protected] <script src=" echo $_product->getFinalPrice(); ?>&merchantId=PLACE_YOUR_MERCHANT_ID"></script> You may try inserting the code a few lines below the "getPriceHtml" code, or below some other elements. Try different places and view the visual appearance to find the most suitable place for your site. You may need to flush cache for the changes to take effect. Step 4: Save and you should see a working widget on your website.
https://docs.shophumm.com.au/widgets/price-info/magento_1.html
2022-05-16T15:09:52
CC-MAIN-2022-21
1652662510138.6
[]
docs.shophumm.com.au
Summary Include incubator development builds mediaCLOUD, mediaBOX and mediaSERVER The software documentation covers the MediaSignage products including the free mediaCLOUD service, the subscription based mediaCLOUD Enterprise edition as well as the mediaSERVER. Documentation is updated on regular basis in accordance with the current public release of software and hardware revisions. Additional resources Disclaimer While making every attempt to present information accurately in this publication the company disclaims liability for any loss or damage arising from its use. This publication should not be relied upon as a substitute for legal, technical or other professional advice. The publication should only be used with the supplied version of the software and hardware. Any other use may result in loss or damage to the operating system or to the device itself.
http://docs.mediasignage.com/body_index.html
2022-05-16T15:40:50
CC-MAIN-2022-21
1652662510138.6
[]
docs.mediasignage.com
9 amand close time is set as 1 pm. IF: Order Scheduling is enabled and the "Allow today order to be scheduled after" field is set to 120 Minutes Then: For Today, two Order Scheduling Slots will be displayed to the customers i.e 12:00 pm - 12:30 pm 12:30 pm - 1:00 pm
https://docs.foodomaa.com/premium-modules/order-schedule-module
2022-05-16T15:19:08
CC-MAIN-2022-21
1652662510138.6
[]
docs.foodomaa.com
Search This Document Change, delete, or reorder saved Wi-Fi networks Your BlackBerry PlayBook tablet remembers the Wi-Fi networks you connect to and automatically connects whenever you're in range. If multiple networks are available, your tablet connects to the one closest to the top of your list of saved networks. - On the status bar, tap > Wi-Fi. - In the drop-down list, tap Saved Networks. - To change options for a saved network, tap the network. - To move a saved network up or down in the list, touch and hold the network. Drag it to where you want it. - To delete a saved network, tap . Beside the network, tap . - To stop your tablet from automatically connecting to a saved network, tap the network. Clear the Enable Profile checkbox. Tap Save. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/45744/Change_reorder_delete_wifi_network_1390101_11.jsp
2013-05-18T11:42:33
CC-MAIN-2013-20
1368696382360
[]
docs.blackberry.com
What is RFC? RFC stands for Response For Class. RFC measures the complexity of the class in terms of method calls. For each class, it counts: - +1 for each method - +1 for each call of a distinct method (note that getters and setters are not considered as methods) Example: How to Hunt for Bad RFC? Add the Chidamber & Kemerer widget on your dashboard: and drill down. Labels
http://docs.codehaus.org/pages/viewpage.action?pageId=229739500
2013-05-18T11:53:29
CC-MAIN-2013-20
1368696382360
[array(['/s/fr_FR/3278/15/_/images/icons/emoticons/warning.png', None], dtype=object) ]
docs.codehaus.org
may be a bytes object specifying additional data to send(). Warning If neither cafile nor capath is specified, an HTTPS request will not do any verification of the server’s certificate.). In addition, default installed ProxyHandler makes sure the requests are handled through the proxy when they are set. The legacy urllib.urlopen function from Python 2.6 and earlier has been discontinued; may be a string..). Return the URL given in the constructor. Return the type of the URL — also known as the scheme. Return the host to which a connection will be made. Return the selector — the part of the URL that is sent to the server..'. This method is applicable only for local hostnames. When a remote hostname is given, an URLError is raised. Changed in version 3.2. python.org website uses utf-8 encoding as specified in it’s meta tag, we will use same for decoding the bytes object. >>> >>> params = urllib.parse.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0}) >>> params = params.encode('utf-8') >>> f = urllib.request.urlopen("", params) >>>. Clear the cache that may have been built up by previous calls to urlretrieve().¶.
http://docs.python.org/release/3.2.2/library/urllib.request.html
2013-05-18T11:52:13
CC-MAIN-2013-20
1368696382360
[]
docs.python.org
Document Type Dissertation Abstract The Mother Building is an architecturally-themed social experiment in most respects. It is an endeavor into understanding the mind of the architect, the creative drive and the particular aspect of motivation. Meanwhile, it is also an enterprise to reestablish high architecture as a primarily public art: to remove the more grandiose aspects of our practice from the ivory tower and back to the streets, to create a new dialogue between architect and society. In other words, how do we put together a building that acts to best facilitate the genesis of more buildings and stimulates public interest in a practice? How do we make a building that allows the people to build their city and turns the architects into their representatives…how do we turn architecture into a democratic practice? Recommended Citation Jeffrey, Richmond Downey, "Mother Building: Communal Architecture Incubator" (2009). Architecture Theses. Paper 16.
http://docs.rwu.edu/archthese/16/
2013-05-18T11:41:56
CC-MAIN-2013-20
1368696382360
[]
docs.rwu.edu
. There are no base classes. There are no implemented interfaces. There are no attributes in this class. filter(record) Determine if the specified record is to be logged. Is the specified record to be logged? Returns 0 for no, nonzero for yes. If deemed appropriate, the record may be modified in-place. There are no known subclasses.
http://docs.zope.org/zope3/Code/logging/Filter/index.html
2013-05-18T11:32:02
CC-MAIN-2013-20
1368696382360
[]
docs.zope.org
Installation and Configuration Guide Local Navigation Search This Document Remove the BlackBerry Enterprise Server software - On the taskbar, click Start > Settings > Control Panel > Add/Remove Programs. - Click BlackBerry Enterprise Server. - Click Remove. - Click Yes. Next topic: Delete registry keys Previous topic: Removing the BlackBerry Enterprise Server software Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/admin/deliverables/25747/Remove_the_BES_software_400701_11.jsp
2013-05-18T12:05:05
CC-MAIN-2013-20
1368696382360
[]
docs.blackberry.com
Development Guide Local Navigation Search This Document Elements implemented from the PAP standard The PAP standard describes the following elements. For more information about the PAP standard, visit to read WAP-247-PAP-20010429-a. Next topic: RIM-specific XML elements Previous topic: Standard messages and nonstandard messages Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/developers/deliverables/25167/PAP_messages_implemented_1254561_11.jsp
2013-05-18T12:03:35
CC-MAIN-2013-20
1368696382360
[]
docs.blackberry.com
Form validation Latest revision as of 10:41, 1 February 2013). [edit] Client-side validation .. is done via javascript while the user is filling in the form fields. It uses the HTML classes required and validate-[xxx] (with [xxx] being a joomla or custom rule; e.g. validate-numeric) More here: Client-side form validation [edit]
http://docs.joomla.org/index.php?title=Form_validation&diff=80273&oldid=75253
2013-05-18T11:27:03
CC-MAIN-2013-20
1368696382360
[]
docs.joomla.org
View tree Close tree | Preferences | | Feedback | Legislature home | Table of contents Search Up Up Ins 13.04(1) (1) Purposes. This rule is promulgated to implement, and set forth procedural requirements necessary to carry out the purpose and provisions of ss. 612.51 , 612.52 and 612.54 , Stats. Ins 13.04(2) (2) Scope. This rule shall apply to all corporations organized or operating under ch. 612 , Stats. Ins 13.04(3) (3) Undertaking to pay premiums and assessments . The undertaking to pay premiums and assessments to be signed by all prospective members shall be in form and substance substantially as follows, and may be a part of the application: UNDERTAKING TO PAY PREMIUMS AND ASSESSMENTS I, of , in consideration of insurance on my buildings and personal property, insured to myself, my heirs and assigns by the Insurance Company, bind myself, and to the extent of their interest in the property my heirs and assigns, to pay to the company the premiums for such insurance and, within the period of time stated in the notice of assessment, my share of all legal assessments, if any, levied by the company, together with all legal costs and charges incurred in legal proceedings to collect any assessment levied upon me and statutory penalties for nonpayment, according to the statutes and the terms and conditions in the policy and any renewals thereof or of the insurance thereunder. My property covered by the insurance, both personal and real, shall be liable for that share, waiving all exemptions. Dated this day of , 20 Witness Agent Applicants Ins 13.04(4) (4) Town mutual policy forms. Town mutual policy forms shall conform to ss. Ins 6.07 and 6.76 , and shall include provisions substantially as follows relating to articles of incorporation and bylaws and notice of annual meeting: Ins 13.04(4)(a) (a) ARTICLES OF INCORPORATION AND BYLAWS It is hereby mutually understood and agreed by and between this company and the insured, that this policy is made and accepted with reference to the Articles of Incorporation and Bylaws, which are hereby declared to be part of this contract. This provision applies whether or not the Articles of Incorporation and Bylaws are included in this policy. Ins 13.04(4)(b) (b) NOTICE The insured is notified that by virtue of this Policy he or she is a member of the Insurance Company, of _________________, County, Wisconsin, and that the annual meetings of said company are held in_______________, County, Wisconsin, on the (date) in (month) of each year at _____o'clock__M. Ins 13.04(5) (5) Permissible variations. Provisions of a town mutual policy may be so arranged in the policy as to provide for convenience in its preparation and issuance. Blank spaces may be changed or altered, spaces may be provided for the listing of rates and premiums for coverages insured under the policy or by riders or by endorsements attached to or printed thereon, and spaces may be utilized for reference to forms and for listing the amount of insurance, provisions as to coinsurance, provisions as to mortgage clause, descriptions and locations of the insured property and other matters advisable and necessary to indicate a delineation of the insurance effective under the contract, and other data as may be included for duplication of daily reports for office records. Ins 13.04(6) (6) Forms and endorsements. Riders, forms and endorsements may be attached to the town mutual policy to include perils in addition to fire and lightning and for other necessary purposes. Except when in contradiction with ch. 612 , Stats. , the contracts, endorsements, and other forms of town mutuals should be similar to like forms of insurers subject to chs. 631 and 632 , Stats. Ins 13.04(7) (7) Mortgagee clause. If a loss under a policy issued by a town mutual insurer is payable to a mortgagee who is not an insured, the mortgagee clause may provide: Ins 13.04(7)(a) (a) For payment by the insurer despite policy defense; or Ins 13.04(7)(b) (b) That the mortgagee is not liable for any premium or assessment, regardless of whether coverage has been extended after payment of a premium or assessment by the mortgagee. Ins 13.04 History History: Cr. Register, August, 1974, No. 224 , eff. 9-1-74; emerg. am. (4) (a) and (c), eff. 6-22-76; am. (4) (a) and (c), Register, September, 1976, No. 249 , eff. 10-1-76; am. (3), r. and recr. (4) and (5), cr. (6), Register, April, 1982, No. 316 , eff. 5-1-82; cr. (7), Register, May, 1986, No. 365 , eff. 6-1-86. Ins 13.05 Ins 13.05 Accounting records, accounting controls and reports. Ins 13.05(1) (1) Purpose. This rule is intended to implement and interpret s. 601.41 , Stats., for the purpose of setting minimum standards and techniques for accounting and reporting of data relating to company financial transactions and other operations. Ins 13.05(2) (2) Scope. This rule shall apply to all town mutual insurers organized or operating under ch. 612 , Stats. Ins 13.05(3) (3) Accounting records. The following journals, ledgers and subsidiary records or similar records from which the data indicated may be obtained shall be maintained: Ins 13.05(3)(a) (a) Policy Register: A register or other records which shall contain the policy number, policyholder's name, effective date of policy, term of policy, risk in force, amount of risk in force reinsured, premium amount, policy fee, reinsurance premium, and provision for miscellaneous data. Ins 13.05(3)(b) (b) Cash Receipts Journal: A journal which shall contain the date, payor, amount received, identification, and reference to the general ledger account and amount affected. All cash received by the company shall be recorded in the journal. Ins 13.05(3)(c) (c) Cash Disbursements Journal: A journal which shall contain the date, payee, check number, amount of check, and a reference to the general ledger account and amount affected. All cash disbursed by the company shall be recorded in the journal. Ins 13.05(3)(d) (d) General Journal: A journal for recording entries for all transactions affecting ledger items, which are not recorded in the cash receipts journal or cash disbursements journal. The general journal shall contain the date of the transaction, an explanation, the ledger account affected, and the amount of the transaction. Ins 13.05(3)(e) (e) General Ledger: A ledger which shall have an account for each asset and liability, surplus, income and expense items of the company. Each account shall contain an account title and/or number, a date for each transaction, a description, debit amounts, credit amounts and an account balance. Ins 13.05(3)(f) (f) Loss Claim Register: A register for recording all claims filed with the company. It shall list all claims in claim number order and contain the claimant's and policyholder's name, policy number, date of loss, date that loss was reported to the company, cause of the loss, estimated amount of the loss, and the date the claim was settled and the amount of loss payments, if any. Claims closed without payment should be so noted. Ins 13.05(4) (4) Accounting controls. The following minimum controls of records and data handling should be maintained: Ins 13.05(4)(a) (a) Cash Receipts: All cash receipts shall be recorded on a cash receipts journal. The cash receipts and cash funds of the company shall at all times be kept separate and distinct from any personal, agency or other funds. All cash received shall be deposited in the bank intact, in the company's name. A duplicate deposit ticket shall be retained in the company's office for each deposit. All checks in payment of premiums or received by the company for other purposes shall be endorsed for deposit immediately upon receipt. All cash receipts shall be deposited at least weekly. All cash deposits shall be prepared and made, whenever possible, by some individual other than the one who records the receipts or reconciles the bank accounts. Ins 13.05(4)(b) (b) Cash Disbursements: All disbursements except those made from the petty cash fund shall be made by check. All checks issued by the company shall be recorded in chronological and numerical order in a cash disbursements journal. Each disbursement shall be supported and explained in the records of the company. All checks used for disbursements shall be pre-numbered and properly accounted for. All checks shall be mailed or delivered immediately after being signed. All disbursements over a specified amount shall be approved by more than one officer, director or employee of the company. Whenever possible, a person other than the person maintaining the company's cash disbursement journal or reconciling the bank accounts shall sign the checks. Ins 13.05(4)(c) (c) Petty Cash Fund: A petty cash fund may be maintained for the payment of small bills or for making change. Each disbursement shall be supported by a signed voucher or receipted invoice. At any time the total of the cash, checks and paid vouchers in the fund shall exactly equal the total of the fund as originally set up. The petty cash fund shall be reimbursed at regular intervals and always on the last business day of each year. Ins 13.05(4)(d) (d) Reconciliation of Bank Accounts: Bank statements shall be obtained from each of the banks in which the company maintains checking accounts at the end of each calendar month. The balance appearing on the bank statement shall be reconciled with the cash balance appearing on the company's records at the end of each month. Whenever possible, bank reconciliations should be made or reviewed by an individual other than the individuals preparing and making bank deposits, recording income and disbursements, and individuals signing company checks. Ins 13.05(4)(e) (e) Loss Claims: All claims reported to the company shall be assigned a claim number when reported. Claims in excess of a specified amount shall be approved by more than one officer, director or employee of the company. All claims shall be adequately documented so that amounts for settlement and coverage can be verified. The claim file shall contain the reason for denial if the claim is denied. Ins 13.05(4)(f) (f) General Internal Controls: Non-negotiable evidences of company investments such as registered bonds, certificates of deposits, notes, etc., shall be maintained to ensure their safekeeping with adequate safety controls. Negotiable evidences of company investments shall be maintained in a safety deposit box in a bank, or under a safekeeping agreement with a bank or banking and trust company pursuant to s. 610.23 , Stats. Access to a company safety deposit box containing negotiable securities shall require the presence and signature of at least 2 officers, directors or employees of the company. Company accounting records shall be maintained in such detail that verification can be made to source documents supporting each transaction. Ins 13.05(5) (5) Financial statements. Financial statements shall be prepared by the secretary and treasurer of the company showing the financial condition of the company as of December 31, of each year or whenever requested by the commissioner. The report shall be prepared as prescribed by the commissioner. Ins 13.05(6) (6) Fidelity bond requirements. All insurers subject to this rule shall procure and maintain in force a fidelity bond or honesty insurance as a guaranty against financial loss caused by employee dishonesty. The bond shall cover all fraudulent or dishonest acts, including larceny, theft, embezzlement, forgery, misappropriation, wrongful abstraction or willful application, committed by employees acting alone or in collusion. The bond shall cover all officers, directors and employees having direct access to the company's assets and with responsibility for the handling and processing of income of the company and disbursements of the company. A blanket bond covering all officers, directors and employees satisfies this requirement. The minimum amount of the bond shall be determined on the basis of total admitted assets, plus gross income of the company as set forth in the following schedule: - See PDF for table Ins 13.05 History History: Cr. Register, August, 1974, No. 224 , eff. 9-1-74; reprinted to correct error, Register, March, 1980, No. 291 ; am. (3)(e), Register, April, 1982, No. 316 , eff. 5-1-82; am. (3) (a) to (f), (4) and (6), Register, July, 1991, No. 427 , eff. 8-1-91; am. (6), Register, June, 2001, No. 546 , eff. 1-1-02. Ins 13.06 Ins 13.06 Surplus requirements. Ins 13.06(1) (1) Purpose. This rule implements and interprets ss. 612.31 and 612.33 , Stats., for the purpose of setting minimum surplus requirements as a condition for the transaction of specified types of business. Ins 13.06(2) (2) Scope. This rule shall apply to all town mutual insurers subject to ch. 612 , Stats. Ins 13.06(3) (3) Nonproperty insurance. Ins 13.06(3)(a) (a) If a town mutual insurer retains any portion of a risk covered by nonproperty insurance, the town mutual shall obtain reinsurance on that nonproperty business with an insurer authorized to do business in this state. The maximum aggregate liability for incurred losses on nonproperty coverage retained by a town mutual insurer for any calendar year or contract year may not exceed the lesser of $200,000 or 20% of its surplus as of the preceding December 31. Ins 13.06(3)(b) (b) A town mutual may retain nonproperty insurance coverage not to exceed a proportional share of each limit of liability as shown in the following schedule: - See PDF for table Ins 13.06(4) (4) Surplus requirements. A town mutual insurer shall maintain a surplus of the greater of $200,000 or 20% of the net written premiums and assessments in the 12-month period ending on or not more than 60 days before the date as of which the calculation is made. Every town mutual shall achieve and maintain this minimum surplus by December 31, 2001. Ins 13.06(5) (5) Individual circumstances. The commissioner may take into consideration the experience, management and any other significant information about an individual town mutual insurer in determining whether to approve or disapprove town mutual property and nonproperty reinsurance and in setting of minimum surplus requirements. Ins 13.06 History History: Cr. Register, December, 1974, No. 228 , eff. 1-1-75; cr. (4) to (6), Register, July, 1984, No. 343 , eff. 8-1-84; am. (3) and (5), r. and recr. (6), cr. (3) (b) and (c), Register, December, 1984, No. 348 , eff. 1-1-85; r. (3) (a) and (5), renum. (3) (b) and (c) to be (3) (a) and (b), and (6) to be (5), and am. (4), Register, June, 2001, No. 546 , eff. 1-1-02; except (4), eff. 7-1-01. Ins 13.08 Ins 13.08 Valuation of liabilities. Ins 13.08(1) (1) Purpose. This rule implements and interprets s. 623.04 , Stats., for the purpose of determining liabilities for financial statements filed with the commissioner. Ins 13.08(2) (2) Scope. This rule shall apply to all town mutual insurers subject to ch. 612 , Stats. Ins 13.08(3) (3) Unearned premium reserve. The financial statements of town mutuals which charge advance premiums shall show as a liability an unearned premium reserve. The unearned premium reserve must be calculated on all advance premiums, on the original or full-term premium basis, plus all advance premiums on reinsurance assumed from other town mutual insurers, less advance premiums on risks assumed by other insurers under reinsurance contract. The minimum unearned premium reserves shall be calculated on the premiums in force as follows: Ins 13.08(3)(a) (a) One year policies or policies on which premiums are paid annually. Ins 13.08(3)(a)1. 1. 50% of the net advanced premium. Ins 13.08(3)(b) (b) Two year policies on which the entire premium is paid in advance. Ins 13.08(3)(b)1. 1. 75% on policies in first year of term. Ins 13.08(3)(b)2. 2. 25% on policies in second year of term. Ins 13.08(3)(c) (c) Three year policies on which entire premium is paid in advance. Ins 13.08(3)(c)1. 1. 83% on policies in first year of term. Ins 13.08(3)(c)2. 2. 50% on policies in second year of term. Ins 13.08(3)(c)3. 3. 17% on policies in third year of term. Ins 13.08(4) (4) The unearned premium reserve shall be the sum of the amounts as calculated above. Any other method of calculating the unearned premium reserve must be approved by the commissioner. Ins 13.08 History History: Cr. Register, December, 1974, No. 228 , eff. 1-1-75; am. (3) (intro.), Register, April, 1982, No. 316 , eff. 5-1-82; r. (3) (d) and (e), Register, June, 2001, No. 546 , eff. 1-1-02. Ins 13.09 Ins 13.09 Reinsurance. Ins 13.09(1) (1) Purpose. This rule implements and interprets s. 612.33 , Stats., for the purpose of setting rules or guidelines for permitted and prohibited reinsurance and required reinsurance. Ins 13.09(2) (2) Scope. This rule shall apply to all town mutual insurers subject to ch. 612 , Stats. Ins 13.09(3) (3) Definitions. For the purpose of this section only: Ins 13.09(3)(a) (a) ``Maximum attachment point" means the amount of losses, expressed as a percentage of net premiums written, which constitutes the limit of the town mutual's retention under the aggregate excess of loss reinsurance required by sub. (4) . Ins 13.09(3)(b) (b) ``Net premiums written" means gross premiums written less premiums ceded for reinsurance inuring to the benefit of an aggregate excess of loss reinsurance contract. Reinsurance premiums ceded for aggregate excess of loss reinsurance, reinsurance premiums paid or recovered related to coverage for other years, and dividends paid to policyholders shall not be considered in determining net premiums written. Ins 13.09(4) (4) Required reinsurance. Ins 13.09(4)(a) (a) Aggregate excess of loss reinsurance. Every town mutual shall obtain and continuously maintain unlimited aggregate excess of loss reinsurance for all risks covered by property and nonproperty insurance that is not otherwise ceded under another reinsurance contract. The aggregate excess of loss reinsurance shall provide a maximum attachment point expressed as a percentage of net premiums written, which is based on the relationship of the town mutual's prior year-end surplus to prior year-end gross premiums written, as set forth in the following schedule: - See PDF for table Ins 13.09(4)(a)2. 2. For purposes of this section 13.09, all calculations shall be based on the final annual statement filed with the commissioner. Ins 13.09(4)(a)3. 3. The aggregate excess of loss reinsurance contract shall warrant by specific reference that it complies with this section. Ins 13.09(4)(a)4. 4. Any town mutual that fails to comply, or has reason to believe that it is in imminent risk of failure to comply, with this section after its effective date shall notify the commissioner within 5 days of such failure or awareness. Ins 13.09(4)(b) (b) Reinsurance of nonproperty insurance. Any town mutual which provides nonproperty insurance coverage shall obtain reinsurance as required by s. 612.33 (2) (b) , Stats. Ins 13.09 History History: Cr. Register, December, 1974, No. 228 , eff. 1-1-75; r. and recr., Register, June, 2001, No. 546 , eff. 1-1-02. Next file: Chapter Ins 14 /code/admin_code/ins/13 true administrativecode /code/admin_code/ins/13/05/4/a administrativecode/Ins 13.05(4)(a) administrativecode/Ins 13.05?
http://docs.legis.wisconsin.gov/code/admin_code/ins/13/05/4/a
2013-05-18T12:03:30
CC-MAIN-2013-20
1368696382360
[]
docs.legis.wisconsin.gov
Help Center Local Navigation Search This Document Enroll a user in content - Change the enrollment status for a user - Change the content completion status for a user to Completed - Unenroll a user from content Next topic: Change the enrollment status for a user Previous topic: Require a user to get approval to enroll in content Was this information helpful? Send us your comments.
http://docs.blackberry.com/nl-nl/admin/deliverables/22986/Enroll_a_user_in_content_869966_11.jsp
2013-05-18T11:16:35
CC-MAIN-2013-20
1368696382360
[]
docs.blackberry.com
Applying custom module chrome From Joomla! Documentation (Difference between revisions) Revision as of 12:02, 20 January 2008 Applying custom Module chrome To define custom Module chrome in your template you need to create a file called modules.php in your template html directory. For example, this might be [path-to-Joomla!]/templates/my_template/html/modules.php. In this file you should define a function called modChrome_style where "style" is the name of your custom Module chrome. This function will take three arguments: -
http://docs.joomla.org/index.php?title=Applying_custom_module_chrome&diff=1714&oldid=1403
2013-05-18T11:51:56
CC-MAIN-2013-20
1368696382360
[]
docs.joomla.org
Web Server Information - 1 minute to read - - DarkLight Web Server Information provides information about the web server for a given website domain, including the server type, its version and operating system, the content management system (CMS) name and its version, the installed CMS plugins, versions and more. The Web Server Information adapter connection requires the following parameters: - Web Server Domain - Website URL, hostname or an IP address. - Web Server Port - Use default 443 port or specify a different port. - Choose Instance - If you are using multi-nodes, choose the Axonius node that is integrated with the adapter. By default, the 'Master' Axonius node (instance) is used. For details, see Connecting Additional Axonius Nodes Was this article helpful?
https://docs.axonius.com/docs/web-server-information
2019-11-12T01:23:24
CC-MAIN-2019-47
1573496664469.42
[array(['https://cdn.document360.io/95e0796d-2537-45b0-b972-fc0c142c6893/Images/Documentation/image%281023%29.png', 'image.png'], dtype=object) ]
docs.axonius.com
PageSplit super: UISplitViewController (on iOS) The SplitView class is a container that presents a master-detail interface. In a master-detail interface, changes in the master window drive changes in the detail window. The two windows can be arranged so that they are side-by-side, so that only one at a time is visible, or so that one only partially hides the other. You can use the SplitView class on all devices. When building your app’s user interface, the split view is typically the root container of your app. The split view has no significant appearance of its own. Most of its appearance is defined by the child windows you set.ChangeToDisplayMode(displayMode: Int) Use this event to be notified when the display mode for the PageSplit is about to change windows: List Array of windows currently managed by the navigation. var selectedWindow: Window or NavigationBar Current Window in the details view. If the Window does not exit in the view hierarchy then it is automatically added. var collapsed: Bool A Boolean value indicating whether only one of the child view controllers is displayed. (read-only) var preferredDisplayMode: SplitViewDisplayMode The preferred arrangement of the split view controller interface. var displayMode: SplitViewDisplayMode The current arrangement of the split view controller’s contents. (read-only) var preferredPrimaryColumnWidthFraction: Float The relative width of the primary view controller’s content. var minimumPrimaryColumnWidth: Float The minimum width (in points) required for the primary view controller’s content. var maximumPrimaryColumnWidth: Float The maximum width (in points) allowed for the primary view controller’s content. var primaryColumnWidth: Float The width (in points) of the primary view controller’s content. (read-only). TargetWindow.openIn(ContainerWindow);Modal(TransitionStyle: TransitionStyle, completion: Closure = null) Open window modally usign the specified transition style. func close(animated: Bool = true) Close window if modally opened. Enums StatusBarVisibility - .Default - .Hidden - .Visible StatusBarStyle - .DarkContent - .Default - .LightContent SplitViewDisplayMode - .AllVisible - .Automatic - .PrimaryHidden - .PrimaryOverlay TransitionStyle - .Cards - .CoverVertical - .CrossDissolve - .Crossfade - .Cube - .Default - .Explode - .Flip - .FlipHorizontal - .Fold - .NatGeo - .NotAnimated - .PartialCurl - .Portal - .Turn
https://docs.creolabs.com/classes/PageSplit.html
2019-11-12T02:01:23
CC-MAIN-2019-47
1573496664469.42
[array(['../documentation/docs/images/classes/pagesplit.png', None], dtype=object) ]
docs.creolabs.com
Voice CTI Blocks CTI (which stands for Computer Telephony Integration) blocks provide interfaces between Genesys Voice Platform (GVP) and Genesys Framework components and SIP Server. There are six CTI blocks: - Get Access Number Block for using Get access number to retrieve the access code (number) of a remote site from an IVR Server. - Interaction Data Block for sending attached data. Get and Put operations are supported. - Route Request Block for sending route requests. It uses the Userdata extension attribute for sending back data attached to an interaction (attached data). - Statistics Block to retrieve statistics from Stat Server via IServer. - ICM Interaction Data Block to work with a Cisco product called Intelligent Contact Management (ICM), which provides intelligent routing and Computer Telephony Integration. You can use the GVP ICM Adapter in VoiceXML applications when invoking services, responding to requests, and sharing data. - ICM Route Request Block to transfer a call to Intelligent Contact Management. Also see Working with CTI Applications. CTI Scenarios: SIPS versus CTIC Composer will generate code for both SIP Server and CTI Connection scenarios simultaneously. The code to be executed at runtime depends on which scenario is active when the voice application runs. No decision is required at design time. For more information, see the topic CTI Scenarios. Also see the VoiceXML Reference on the Genesys Voice Platform Wiki. This page was last edited on June 1, 2018, at 07:44. Feedback Comment on this article:
https://docs.genesys.com/Documentation/Composer/8.1.4/Help/CTIBlocks
2019-11-12T00:19:46
CC-MAIN-2019-47
1573496664469.42
[]
docs.genesys.com
Container control in PowerApps (experimental) Provides the ability to create hierarchy. Important This is an experimental feature. Experimental features can radically change or completely disappear at any time. For more information, read Understand experimental and preview features in PowerApps. Description The container can hold a set of controls and has its own properties. You can start with inserting a blank container, then customize it by adding controls to it, resizing it, moving it, hiding it, and making other changes. You can also start with a number of controls, select them and add them into a container through the context menu in the tree view or right click on the canvas.). Known limitations Containers do not work with canvas components or within forms. Frequently asked questions What is the difference between a container and a group? The authoring group is a lightweight concept used for moving around controls and bulk editing similar properties of controls within the group. The authoring group does not affect the layout of the app. The container control previously shipped in experimental as a replacement for the authoring group renamed as the enhanced group. It was renamed to the container control as there is value in both a lightweight authoring group and a strutured container control with additional properties. Feedback
https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/controls/control-container
2019-11-12T00:31:40
CC-MAIN-2019-47
1573496664469.42
[]
docs.microsoft.com
Read Event Occurs when an existing Microsoft Outlook item is opened for editing by the user. The Read event differs from the Open event in that Read occurs whenever the user selects the item in a view that supports in-cell editing as well as when the item is being opened in an Inspector. Subobject**_Read()** object An object that evaluates to one of the objects in the Applies To list. In Microsoft Visual Basic Scripting Edition (VBScript), use the word Item. Example This Visual Basic for Applications (VBA) example uses the Read event to increment a counter that tracks how often an item is read. Public WithEvents myItem As Outlook.MailItem Sub Initialize_handler() Set myItem = Application.ActiveExplorer.CurrentFolder.Items(1) myItem.Display End Sub Sub myItem_Read() Dim myProperty As Outlook.UserProperty Set myProperty = myItem.UserProperties("ReadCount") If (myProperty Is Nothing) Then Set myProperty = myItem.UserProperties.Add("ReadCount", olNumber) End If myProperty.Value = myProperty.Value + 2 myItem.Save | Open Event | PropertyChange Event | Reply Event | ReplyAll Event | Send Method | Using events with Automation | Write Event
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2003/aa171358%28v%3Doffice.11%29
2019-11-12T01:14:37
CC-MAIN-2019-47
1573496664469.42
[]
docs.microsoft.com
How to: Retrieve Many Objects At Once This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here. You can retrieve many objects in one query by using LoadWith. The following code uses the LoadWith method to retrieve both Customer and Order objects. using (NorthwindDbContext dbContext = new NorthwindDbContext()) { Telerik.OpenAccess.FetchOptimization.FetchStrategy fetchStrategy = new Telerik.OpenAccess.FetchOptimization.FetchStrategy(); fetchStrategy.LoadWith<Customer>(c => c.Orders); dbContext.FetchStrategy = fetchStrategy; IQueryable<Customer> query = from c in dbContext.Customers where c.Country == "Germany" select c; foreach (Customer customer in query) { Console.WriteLine(customer.CustomerID); foreach (Order order in customer.Orders) { Console.WriteLine("\t{0}", order.OrderID); } } } Using dbContext As New NorthwindDbContext() Dim fetchStrategy As New Telerik.OpenAccess.FetchOptimization.FetchStrategy() fetchStrategy.LoadWith(Of Customer)(Function(c) c.Orders) dbContext.FetchStrategy = fetchStrategy Dim query As IQueryable(Of Customer) = From c In dbContext.Customers Where c.Country = "Germany" Select c For Each customer_Renamed As Customer In query Console.WriteLine(customer_Renamed.CustomerID) For Each order_Renamed As Order In customer_Renamed.Orders Console.WriteLine(vbTab & "{0}", order_Renamed.OrderID) Next order_Renamed Next customer_Renamed End Using
http://docs.telerik.com/data-access/deprecated/developers-guide/crud-operations/defining-fetch-plans/data-access-tasks-fetch-plans-howto-retrieve-many-objects-at-once
2016-05-24T15:40:49
CC-MAIN-2016-22
1464049272349.32
[]
docs.telerik.com
Module Development PyroCMS is built to be modular, so creating modules is a pretty simple process. The core modules are stored in system/pyrocms/modules and you can install extra ones to addons/default/modules or addons/shared_addons/modules. Any module you create should go into one of those two locations, not in system/cms/modules. Each module can contain the following directories: - config/ - controllers/ - helpers/ - libraries/ - models/ - views/ - js/ - css/ - img/ If a module will need to have a front-end (something that displays to the user) then it should contain at least one controller, and that controller should be named after the module. addons/<site-ref>/modules/blog/controllers/blog.php The Module details.php FileThe Module details.php File Each module contains a details.php file which contains its name, description, version, whether it is available in the backend and/or frontend, admin menus, etc. If you set a module to backend => false then it will not show in the admin panel menu. Likewise if you set it to frontend => false it will not be available in places like Navigation where it shows a list of modules to link to. When the CP > Addons page is loaded or when PyroCMS is installed it indexes all details.php files and stores the data from the info() method in the default_modules table. If you make edits to this file the changes will not be seen until it is re-installed or you edit the table manually. One exception is the sections and shortcuts used by the admin panel. These are loaded each time they are needed so you can place permission checks around them and control the menus the way you need to. You must use $this->db->dbprefix('table_name') when running manual queries. This makes sure the module is using the correct database table as all table names are prefixed with a "site ref" which in most installations will simply be "default_". This ensures that you may easily upgrade to Professional if the need arises or that you may distribute the module for installation on both Community and Pro. $this->db->query()or similar. Active Record such as $this->db->where()and $this->db->get()add the prefix automatically. You can also manage your tables with dbforge to avoid this step as it automatically adds the prefix. If you wish to create a module that is available for use across all sites on a Multi-Site install then you can specify your own prefix before running queries to that table. Just remember to set it back in case PyroCMS makes more queries after your module. You may set the prefix using $this->db->set_dbprefix('custom'); and you may set it back by using $this->db->set_dbprefix(SITE_REF.'_'); Here is the basic structure for the details.php file: <?php defined('BASEPATH') or exit('No direct script access allowed'); class Module_Sample extends Module { public $version = '2.0'; public function info() { return array( 'name' => array( 'en' => 'Sample' ), 'description' => array( 'en' => 'This is a PyroCMS module sample.' ), 'frontend' => true, 'backend' => true, 'menu' => 'content', // You can also place modules in their top level menu. For example try: 'menu' => 'Sample', 'sections' => array( 'items' => array( //"items" will be the same in the Admin controller as protected $section filed 'name' => 'sample:items', // These are translated from your language file 'uri' => 'admin/sample', 'shortcuts' => array( 'create' => array( 'name' => 'sample:create', 'uri' => 'admin/sample/create', 'class' => 'add' ) ) ) ) ); } public function install() { $this->dbforge->drop_table('sample'); $this->db->delete('settings', array('module' => 'sample')); $sample = array( 'id' => array( 'type' => 'INT', 'constraint' => '11', 'auto_increment' => true ), 'name' => array( 'type' => 'VARCHAR', 'constraint' => '100' ), 'slug' => array( 'type' => 'VARCHAR', 'constraint' => '100' ), ); $sample_setting = array( 'slug' => 'sample_setting', 'title' => 'Sample Setting', 'description' => 'A Yes or No option for the Sample module', 'default' => '1', 'value' => '1', 'type' => 'select', 'options' => '1=Yes|0=No', 'is_required' => 1, 'is_gui' => 1, 'module' => 'sample' ); $this->dbforge->add_field($sample); $this->dbforge->add_key('id', true); // Let's try running our DB Forge Table and inserting some settings if ( ! $this->dbforge->create_table('sample') OR ! $this->db->insert('settings', $sample_setting)) { return false; } // No upload path for our module? If we can't make it then fail if ( ! is_dir($this->upload_path.'sample') AND ! @mkdir($this->upload_path.'sample',0777,true)) { return false; } // We made it! return true; } public function uninstall() { $this->dbforge->drop_table('sample'); $this->db->delete('settings', array('module' => 'sample')); // Put a check in to see if something failed, otherwise it worked return true; } public function upgrade($old_version) { // Your Upgrade Logic return true; } public function help() { // Return a string containing help info return "Here you can enter HTML with paragrpah tags or whatever you like"; // or // You could include a file and return it here. return $this->load->view('help', null, true); // loads modules/sample/views/help.php } } The array contains details that will be read and saved to the database on install. You can supply as many extra languages as you like, by default the en version of name and description will be used. This array will be available in your Public_Controller's and Admin_Controller's via $this->module_details['name']. Notice, name and description will use the active language, not return the whole array. Detail File ResourcesDetail File Resources Although it is likely that your third party module will be installed via the Add-ons section of the control panel, it is a good precaution to take note that your module may be installed when the PyroCMS installer runs. Modules that are in the shared_addons folder will be installed along with core modules during installation. Because the installer is a separate CodeIgniter application, you cannot load any module files such as configs or helpers when your module is being installed via the PyroCMS installer. Because of this, we recommend that your details.php file be independent of other configs, helpers, or other CodeIgniter-loaded resources. Public ControllersPublic Controllers In a normal CodeIgniter installation there is only one controller class. In PyroCMS there are four. Controller, MY_Controller, Admin_Controller and Public_Controller. To use one of these you can extend them like so: class News extends Public_Controller { function index() { $message = "Hello World!"; // Loads from addons/<site-ref>/modules/blog/views/view_name.php $this->template ->set('message' , $message) ->build('view_name'); } } This page will be available to anyone whether logged in or not and will use the frontend design. That means it will use the current active theme and show any login data and navigation, etc and can be viewed via "". Admin ControllersAdmin Controllers Admin controllers have a few different properties to them. It will automatically check that a user has permission to be there, and redirect them to a login page if not. This means they either need to have a user role of "admin" or be allowed specific permission to access the module. addons/<site-ref>/modules/<module-name>/controllers/admin.php class Admin extends Admin_Controller { protected $section = "item"; //This must match the name in the 'sections' field in details.php function index() { $message = "Hello logged in admin guy!"; // Loads from addons/modules/blog/views/admin/view_name.php $this->template ->set('message' , $message) ->build('admin/view_name'); } } This page can be accessed via "
http://docs.pyrocms.com/2.1/manual/index.php/developers/addons/modules/basic-structure
2016-05-24T15:32:13
CC-MAIN-2016-22
1464049272349.32
[]
docs.pyrocms.com
-For Axis2 Version 1.0 Axis2 comes with a module based on WSS4J [1] to provide WS-Security features, called "rampart". This document explains how to engage and configure rampart module. Since rampart module inserts handlers in the system specific pre-dispatch phase, it must be engaged globally. But it is possible to activate rampart module for the inflow or the outflow when required by the service or the clients. The rampart module (rampart.mar) is available with the Axis2 release.. Aegis module uses two parameters: An outflow configuration to add a timestamp, sing 1. Apache WSS4J -Home Example 1: An outflow configuration to add a timestamp, sing and encrypt the message once Example 2: An outflow configuration to sign the message twice and add a timestamp Example 3: An inflow configuration to decrypt, verify signature and validate timestamp
http://docs.huihoo.com/apache/axis/axis2-1.0-docs/xdocs/modules/wss4j/1_0/security-module.html
2008-05-16T19:56:32
crawl-001
crawl-001-011
[]
docs.huihoo.com
This document will assist you to write users settings. The following table summarizes the use of options. Please refer the ADB-How to document for the different generation modes and their descriptions. If the users want to override these settings manually, they need to use the following parameters in the command line (prefixed with -E) Note that these parameters have no relevant long names and MUST be prefixed with a -E to be processed by the code generator. For example WSDL2Java .... -Er true
http://docs.huihoo.com/apache/axis/axis2-1.0-docs/xdocs/1_0/adb/adb-codegen-integration.html
2008-05-16T19:58:15
crawl-001
crawl-001-011
[]
docs.huihoo.com
-Axis2 version 1.0 This document describes how to write Web Services and Web Service client using Axis2. It also describes how to write a custom module and engaging it within a Web Service. Samples shipped with the binary distribution of Axis2, is also discussed. It also contains a section on Advanced Topics. Pages: Content, 1, 2, 3, 4, 5 Writing Web Services by Code Generating Skeleton Web Service Clients Using Axis2 Writing Web Service Clients using Code Generation with Data Binding Support Advanced Topics Next Page Pages: Content, 1, 2, 3, 4, 5
http://docs.huihoo.com/apache/axis/axis2-1.0-docs/xdocs/1_0/userguide.html
2008-05-16T20:01:09
crawl-001
crawl-001-011
[]
docs.huihoo.com
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, one Front-Cover Texts, and one Back-Cover Text: "Nathan Boeger and Mana Tominaga wrote this book and asks for your support through donation. Contact the authors for more information" A copy of the license is included in the section entitled GNU Free Documentation License" Welcome to the FreeBSD System Programming book. Please note that this is a work in progress and feedback is appreciated! please note a few things first:
http://docs.huihoo.com/freebsd/freebsd_book/index.html
2008-05-16T20:01:45
crawl-001
crawl-001-011
[]
docs.huihoo.com
This document explains the usage of this code generator plug-in for eclipse. In other words, this document will guide you through the operations of generating a WSDL file from a java class and/or generating a java class file from a WSDL file. [Download Plugin Tool]. The easiest way to obtain the plug-in would be the binary distribution. The packages plug-in would be available from the tools section of the downloads page" directory of the source distribution) build file which will create two folders (the other one for the Service Archiver tool) and copy the necessary files to relevant folders. Then Eclipse should be configured to open the contents in a PDE project. Please go through the Eclipse documentation to learn how to open projects in the PDE format.. The provided screen shots may slightly differ with what the user would actually see but the functionality has not been changed. workspace projects by selecting the 'Browse Workspace projects only' 'checkbox.. If a message box pops up acknowledging the success, then you've successfully completed the Java2WSDL code generation.
http://docs.huihoo.com/apache/axis/axis2-1.0-docs/xdocs/tools/1_0/eclipse/wsdl2java-plugin.html
2008-05-16T20:02:17
crawl-001
crawl-001-011
[]
docs.huihoo.com
This document will describe the problems occuring in sending binary data with SOAP and how Axis2 has overcome those problems using MTOM or SOAP Message Transmission Optimization Mechanism. Despite the flexibility, interoperability and global acceptance of XML, there are times when serializing data into XML does not make sense. Web services users may need to transmit binary attachments of various sorts like images, drawings, xml docs, etc together with SOAP message. Such data are often in a particular binary format. Traditionally, two techniques have been used in dealing with opaque data in XML; Sending binary data by value is achieved by embedding opaque data (of course after some form of encoding) as element or attribute content of the XML component of data. The main advantage of this technique is that it gives applications the ability to process and describe data based and looking only on XML component of the data. XML supports opaque data as content through the use of either base64 or hexadecimal text encoding. Both these of the XML document and then embedding reference URI's to those entities as elements or attribute values. This prevents the unnecessary bloating of data and wasting of processing power. The primary obstacle for using these unparsed entities is their heavy reliance on DTDs, which impedes modularity as well, creating two data models. This scenario is like sending attachments with an e-mail message. Even though those attachments are related to the message content they are not inside the message. This causes the technologies for processing and description of data based on XML component of the data, to malfunction. One example is WS-Security. MTOM (SOAP Message Transmission Optimization Mechanism) is another specification which focuses on solving the "Attachments" problem. MTOM tries to leverage the advantages of above two techniques by trying to merge the two techniques. MTOM is actually a "by reference" method. Wire format of a MTOM optimized message is same as the Soap with Attachments message, which also makes it backward compatible with SwA endpoints. The most notable feature of MTOM is the use of XOP:Include element, which is defined in XML Binary Optimized Packaging (XOP) specification to reference the binary attachments (external unparsed general entities) of the message. With the use of this exclusive element the attached binary content logically become inline (by value) with the SOAP document even though actually it is attached separately. This merges the two realms by making it possible to work only with one data model. This allows the applications to process and describe by only looking at XML part making reliance on DTDs obsolete. On a lighter note MTOM has standardized the referencing mechanism of SwA. & MTOM (SOAP Message Transmission Optimization Mechanism). AXIOM is (and may be the first) Object Model which has the ability to hold binary data. It has been given this ability by allowing OMText "XOP" element (optimized content) to be send in a SOAP message. User can specify whether an OMText node which contains raw binary data or base64encoded binary data is qualified to be optimized or not at the construction time of that node or later. To take the optimum efficiency of MTOM a user is advised to send smaller binary attachments using base64encoding (None optimized) and larger attachments as optimized content. OMElement imageElement = fac.createOMElement("image", omNs); // Creating the Data Handler for the image. // User has the option to use a FileDataSource or a ImageDataSource // in this scenario... Image image; image = new ImageIO() .loadImage(new FileInputStream(inputImageFileName)); ImageDataSource dataSource = new ImageDataSource("test.jpg",image); DataHandler dataHandler = new DataHandler(dataSource); /_string"; OMText binaryNode = fac.createOMText(base64String,"image/jpg",true); Axis2 uses javax.activation.DataHandler to handle the binary data. All optimized binary content nodes will be serialized as Base64 Strings if "MTOM is not enabled". One can also create binary content nodes which will not be optimized at any case. They will be serialized and sendaHandler("someLocation")); OMText textData = fac.createOMText(dataHandler, false); image.addChild(textData); Set the "enableMTOM" property in the Options to true, when sending messages. ServiceClient serviceClient = new ServiceClient (); Options options = new Options(); options.setTo(targetEPR); options.setProperty(Constants.Configuration.ENABLE_MTOM, Constants.VALUE_TRUE); serviceClient .setOptions(options); When this property is set to true any SOAP envelope which contains optimizable content (OMText nodes containing binary content with optimizable flag "true") will be serialized as a MTOM optimized message. Messages will not be packaged as MTOM if they did not contain any optimizable content even though MTOM is enabled. But due considering the policy assertions, there may be a policy saying, all the request should be optimized eventhough there are any optimized contents. To support this phenomenon there is an entry called "forced mime" which has to be set as ServiceClient serviceClient = new ServiceClient (); Options options = new Options(); options.setTo(targetEPR); options.setProperty(Constants.Configuration.FORCE_MIME, Constants.VALUE_TRUE); serviceClient.setOptions(options); Axis2 serializes all binary content nodes as Base64 encoded strings regardless of they are qualified to be optimize or not, if, MTOM is *always enabled* in Axis2 when it comes to receiving messages. Axis2 will automatically identify and de-serialize any MTOM message it receives. Axis 2 server automatically identifies incoming MTOM optimized messages based on the content-type and de-serializes accordingly. User can enableMTOM in the server side for outgoing messages,. <parameter name="enableMTOM" locked="false">true</parameter> User must restart the server after setting this parameter. public class MTOMService { public OMElement mtomSample(OMElement element) throws Exception { OMElement _fileNameEle = null; OMElement _imageElement = null; for (Iterator _iterator = element.getChildElements(); _iterator.hasNext();) { OMElement _ele = (OMElement) _iterator.next(); if (_ele.getLocalName().equalsIgnoreCase("fileName")) { _fileNameEle = _ele; } if (_ele.getLocalName().equalsIgnoreCase("image")) { _imageElement = _ele; } } if (_fileNameEle == null || _imageElement == null ) { throw new AxisFault("Either Image or FileName is null"); } OMText binaryNode = (OMText) _imageElement.getFirstOMChild(); String fileName = _fileNameEle.getText(); //Extracting the data and saving DataHandler actualDH; actualDH = (DataHandler) binaryNode.getDataHandler(); Image actualObject = new ImageIO().loadImage(actualDH.getDataSource() .getInputStream()); FileOutputStream imageOutStream = new FileOutputStream(fileName); new ImageIO().saveImage("image/jpeg", actualObject, imageOutStream); //setting response OMFactory fac = OMAbstractFactory.getOMFactory(); OMNamespace ns = fac.createOMNamespace("urn://fakenamespace", "ns"); OMElement ele = fac.createOMElement("response", ns); ele.setText("Image Saved"); return ele; } } ServiceClient sender = new ServiceClient(); Options options = new Options(); options.setTo(targetEPR); // enabling MTOM options.set(Constants.Configuration.ENABLE_MTOM, Constants.VALUE_TRUE); options.setTransportInfo(Constants.TRANSPORT_HTTP, Constants.TRANSPORT_HTTP, false); options.setSoapVersionURI(SOAP12Constants.SOAP_ENVELOPE_NAMESPACE_URI);(); Image actualObject = new ImageIO().loadImage(actualDH.getDataSource() .getInputStream()); Axis2 Handles SwA messages at the inflow only. When Axis2 receives a SwA message it extracts the binary attachment parts and puts a reference to those parts in the Message Context. Users can access binary attachments using the content-id. Care should be taken to rip off the "cid" prefix when content-id is taken from the "Href" attributes. When accessing the message context from a service users need to get hold of the message context from "setOperationContext()" method from the service class.(see the following service example) Note: Axis2 supports content-id referencing only. Axis2 does not support Content Location referencing of MIME parts. public class EchoSwA { private MessageContext msgcts; public EchoSwA() { } public void setOperationContext(OperationContext oc) throws AxisFault { msgcts = oc.getMessageContext(WSDLConstants.MESSAGE_LABEL_OUT_VALUE); } public OMElement echoAttachment(OMElement omEle) { OMElement child = (OMElement) omEle.getFirstOMChild(); OMAttribute attr = child.getAttribute(new QName("href")); String contentID = attr.getAttributeValue(); Attachments attachment = (Attachments) msgcts.getProperty(MTOMConstants.ATTACHMENTS); contentID = contentID.trim(); if (contentID.substring(0, 3).equalsIgnoreCase("cid")) { contentID = contentID.substring(4); } DataHandler dataHandler = attachment.getDataHandler(contentID); OMText textNode = new OMTextImpl(dataHandler, omEle.getOMFactory()); omEle.build(); child.detach(); omEle.addChild(textNode); return omEle; } } MTOM specification is designed to be backward compatible with the SOAP with Attachments specification. Even though the representation is different, both technologies have the same wire format. We can safely assume that any SOAP with Attachments endpoint can accept a MTOM optimized messages and treat them as SOAP with Attachment messages - Any MTOM optimized message is a valid SwA message. Because of that Axis2 does not define a separate programming model or serialization for SwA. Users can use the MTOM programming model and serialization to send messages to SwA endpoints. Note : Above is tested with memory at any time. Axis2 file caching streams the incoming MIME parts directly in to files, after reading the MIME part headers. Also a user can specify a size threshold for the File caching. When this threshold value is specified, only the attachments whose size is bigger than the threshold value will get cached in files. Smaller attachments will remain in Memory. NOTE : It is a must to specify a directory to temporary store the attachments. Also care should be taken to *clean that directory* from time to time. The following parameters need to be set in Axis2.xml in order to enable file caching. <axisconfig name="AxisJava2.0"> <!-- ================================================= --> <!-- Parameters --> <!-- ================================================= --> <parameter name="cacheAttachments" locked="false">true</parameter> <parameter name="attachmentDIR" locked="false">temp directory</parameter> <parameter name="sizeThreshold" locked="false">4000</parameter> ......... ......... </axisconfig>
http://docs.huihoo.com/apache/axis/axis2-1.0-docs/xdocs/1_0/mtom-guide.html
2008-05-16T20:03:22
crawl-001
crawl-001-011
[]
docs.huihoo.com
This document explains available mechanisms to extend ADB and possibly adopt it to compile schemas to support other languages. ADB is written with future extensions in mind, with a clear and flexible way to extend or modify it's functionality. Available mechanisms to extend ADB and possibly adopt it to compile schemas to support other languages are described below. The configuration for the ADB framework is in the schema-compile.properties file found in the org.apache.axis2.databinding.schema package. This properties file has the following important properties This is the writer class. This is used by the schema compiler to write the beans and should implement the org.apache.axis2.schema.writer.BeanWriter interface. The schema compiler delegates the bean writing task to the specified instance of the BeanWriter. This specifies the template to be used in the BeanWriter. The beanWriter author is free to use any mechanism to write the classes but the default mechanism is to use a xsl template. This property may be left blank if the BeanWriter implementation does not require a template. This is the type map to be used by the schema compiler. it should be an implementation of the org.apache.axis2.schema.typemap interface. The default typemap implementation encapsulates a hashmap with type QName to Class name string mapping. The first, most simple tweak for the code generator could be to switch to plain bean generation. The default behavior of the ADB framework is to generate ADBBeans, but most users, if they want to use ADB as a standalone compiler, would love to have plain java beans. This can in fact be done by simply changing the template used. The template for plain java beans is already available in the org.apache.axis2.schema.template package. To make this work replace the /org/apache/axis2/databinding/schema/template/ADBBeanTemplate.xsl with the /org/apache/axis2/databinding/schema/template/PlainBeanTemplate.xsl in the schema-compile.properties. Congratulations! You just tweaked ADB to generate plain java beans. To generate custom formats, the templates need to be modified. The schema for the xml generated by the JavaBeanWriter is available in the source tree under the Other directory in the codegen module. Advanced users with knowledge of XSLT can easily modify the templates to generate code in their own formats. To generate code for another language, there are two main components are to be written. Implement the BeanWriter interface for this class. A nice example is the org.apache.axis2.schema.writer.JavaBeanWriter which has a lot of reusable code. In fact if the language is OOP based (such as C# or even C++), one would even be able to extend the JavaBeanWriter itself. Implement the TypeMap interface for this class. The org.apache.axis2.schema.typemap.JavaTypeMap class is a simple implementation for the typemap where the QName to class name strings are kept inside a hashmap instance. This technique is fairly sufficient and only the type names would need to change to support another language. Surprisingly this is enough to have other language support for ADB. Change the configuration and you are ready to generate code for other languages! This tweaking guide is supposed to be a simple guideline for anyone who wishes to dig deep into the mechanics of the ADB code generator. Users are free experiment with it and modify the schema compiler accordingly to their needs. Also note that the intention of this section is not to be a step by step guide to custom code generation. Anyone who wish to do so would need to dig into the code and get their hands dirty!
http://docs.huihoo.com/apache/axis/axis2-1.0-docs/xdocs/1_0/adb/adb-tweaking.html
2008-05-16T20:05:23
crawl-001
crawl-001-011
[]
docs.huihoo.com
Table of Contents Product Index.
http://docs.daz3d.com/doku.php/public/read_me/index/5917/start
2020-09-18T10:51:34
CC-MAIN-2020-40
1600400187390.18
[]
docs.daz3d.com
BarManager.About() Method Activates the About dialog. Namespace: DevExpress.XtraBars Assembly: DevExpress.XtraBars.v20.1.dll Declaration Remarks. See Also Feedback
https://docs.devexpress.com/WindowsForms/DevExpress.XtraBars.BarManager.About
2020-09-18T10:06:58
CC-MAIN-2020-40
1600400187390.18
[]
docs.devexpress.com
The elevator is a special type of transport that moves flow items up and down. It will automatically travel to the level where flow items need to be picked up or dropped off. Flow items are animated as they enter and exit the elevator. This gives a better feel for the load and unload time of the elevator. The elevator is a task executer. It implements offset travel by only traveling the z portion of the offset location. If the offset travel is for a load or unload task, then once the offsetting is finished, it will use the user-specified load/unload time to convey the flow item onto its platform, or off of its platform to the destination location. When conveying the item onto or off of the elevator, the flow item moves directly along the elevator's x-axis. Since the main distinction of an elevator is that it only moves along its z axis, this object can be used for any purpose in which you want the object to only travel along one axis. The elevator uses the standard events that are common to all task executers. See Task Executer Concepts - Events for an explanation of these events. This object uses the task executer states. See Task Executer Concepts - States for more information. The elevator uses the standard statistics that are common to all task executers. See Task Executer Concepts - Statistics for an explanation of these statistics. The elevator object has six tabs with various properties. These tabs are the standard tabs that are common to most task executers. For more information about the properties on those tabs, see:
https://docs.flexsim.com/en/19.2/Reference/3DObjects/TaskExecuters/Elevator/Elevator.html
2020-09-18T11:45:30
CC-MAIN-2020-40
1600400187390.18
[]
docs.flexsim.com
TOPICS× Creating an instance and logging on To create a new instance and Adobe Campaign database, apply the following process: - Create the connection. - Log on to create the related instance. - Create and configure the database. Only the internal identifier can carry out these operations. For more on this, refer to Internal identifier . When the Adobe Campaign console is started up, you access a login page. To create a new instance, follow the steps below: - Click the link in the top right-hand corner of the credentials fields to access the connection configuration window. This link can be either New... or an existing instance name. -.For the connection URL, only use the following characters: [a-z] , [A-Z] , [0-9] and dashes (-) or full stops. - Click Ok to confirm settings: you can now begin with the instance creation process. - In the Connection settings window, enter the internal login and its password to connect to the Adobe Campaign application server. Once connected, you access the instance creation wizard to declare a new instance - In the Name field, enter the instance name . As this name is used to generate a configuration file config- <instance> .xml and is used in the command line parameters to identify the instance, make sure you choose a short name without special characters. For example: eMarketing .The name of the instance added to the domain name must not exceed 40 characters. This lets you restrict the size of "Message-ID" headers and prevents messages from being considered as spam, particularly by tools such as SpamAssassin. - In the DNS masks fields, enter the list of DNS masks to which the instance should be attached. The Adobe Campaign server uses the hostname that appears in the HTTP requests to determine which instance to reach.The hostname is contained between the string https:// and the first slash character / of the server address.You can define a list of values separated by commas.The ? and * characters can be used as wildcards to replace one or various characters (DNS, port, etc.). For instance, the demo* value will work with "" as it will with "" and even "".Names used must be defined in your DNS. You can also inform the correspondence between a DNS name and an IP address in the c:/windows/system32/drivers/etc/hosts file in Windows and in the /etc/hosts file in Linux. You therefore must modify the connection settings to use this DNS name in order to connect to your chosen instance.The server must be identified by this name, particularly for uploading images in emails.In addition, the server must be able to connect to itself by this name, and if possible by a loopback address - 127.0.0.1 -, particularly to allow reports to be exported in PDF format. - In the Language drop-down list, select the instance language : English (US), English (UK), French, or Japanese.Differences between US English and UK English are described in this section .The instance language cannot be modified after this step. Adobe Campaign instances are not multilingual: you cannot switch the interface from a language to another. - Click Ok to confirm instance declaration. Log off and back on to declare the database.The instance can be created from the command line. For more on this, refer to Command lines .
https://docs.adobe.com/content/help/en/campaign-classic/using/installing-campaign-classic/initial-configuration/creating-an-instance-and-logging-on.html
2020-09-18T12:10:40
CC-MAIN-2020-40
1600400187390.18
[]
docs.adobe.com
ARCHER2 User Documentation¶ Warning The ARCHER2 Service is not yet available. This documentation is in development. ARCHER2 is due to commence operation in 2020, replacing the current service ARCHER. For more information on ARCHER, please visit the ARCHER web site. ARCHER2 is the next generation UK National Supercomputing Service. You can find more information on the service and the research it supports on the ARCHER2 website. The ARCHER2 Service is a world class advanced computing resource for UK researchers. ARCHER2 is provided by UKRI, EPCC, Cray (an HPE company) and the University of Edinburgh. What the documentation covers¶ This is the documentation for the ARCHER2 service and includes: - Quick Start Guide - The ARCHER2 quick start guide provides the minimum information for new users or users transferring from ARCHER. - ARCHER2 User and Best Practice Guide - Covers all aspects of use of the ARCHER2 supercomputing service. This includes fundamentals (required by all users to use the system effectively), best practice for getting the most out of ARCHER2, and more advanced technical topics. - Research Software - Information on each of the centrally-installed research software packages. - Software Libraries - Information on the centrally-installed software libraries. Most libraries work as expected so no additional notes are required however a small number require specific documentation - Data Analysis and Tools - Information on data analysis tools and other useful utilities. - Essential Skills - This section provides information and links on essential skills required to use ARCHER2 efficiently: e.g. using Linux command line, accessing help and documentation. Contributing to the documentation¶ The source for this documentation is publicly available in the ARCHER2 documentation Github repository so that anyone can contribute to improve the documentation for the service. Contributions can be in the form of improvements or addtions to the content and/or addtion of Issues providing suggestions for how it can be improved. Full details of how to contribute can be found in the README.rst file of the repository. Credits¶ This documentation draws on the Cirrus Tier-2 HPC Documentation, Sheffield Iceberg Documentation and the ARCHER National Supercomputing Service Documentation. We are also grateful to the Isambard Tier-2 HPC service for discussions on the combination of Github and Sphinx technologies. Quick start - Overview - Quickstart for users - Quickstart for developers User and best practice guide - Overview - Connecting to ARCHER2 - Data management and transfer - Software environment - Running jobs on ARCHER2 - I/O and file systems - Application development environment - Containers - Using Python - Data analysis - Debugging - Profiling - Performance tuning Research software - Overview - CASTEP - ChemShell - Code Saturne - CP2K - ELK - FEniCS - GROMACS - LAMMPS - MITgcm - Met Office Unified Model - NAMD - NEMO - NWChem - ONETEP - OpenFOAM - Quantum Espresso - VASP
https://docs.archer2.ac.uk/
2020-09-18T10:01:16
CC-MAIN-2020-40
1600400187390.18
[]
docs.archer2.ac.uk
Email domain report allows you to break down your customers into groups according to their mailbox service provider. Its purpose is to identify a problem when there is something wrong with your email campaigns, whether it is low deliverability and open rates or high bounce and complaint rates. To find out more tips for improving the effectiveness of your email campaigns, visit Email deliverability tips After following this guide, you will end up with a report providing an overview both in a table as well as in a chart, where the view is customizable. After you make the report there are two possible outcomes. Either all of the metrics are consistent across all MSPs or there are some for which the rates are disproportionately low. If it is the former, the problem with your email campaign does not lie with the MSP. However, if it is the latter you have just identified an MSP/MSPs that are causing the problem and you need to find out what is different about the particular MSP/MSPs causing the trouble. For example, in the image below, Microsoft and Google have ⅓ of the Apple and Verizon open rate. Such disproportionality is alarming and should be investigated. This is how the report will look like after successfully completing the guide. 1. Domain strip scenario For the report, you will first need to have a scenario creating a customer attribute with just the stripped domain from a full email address. This is how the scenario looks like. As you can see in the picture above, you need to select the Now trigger (green). Then, add a condition (together with connecting it to the trigger) as shown below. Lastly, select a Set attribute operator and create an attribute email_domain by typing it down and clicking on the plus sign. As a Value, insert the text below. After doing this last step, your scenario is ready to be used. {{ customer.email.split('@') | last }} Start the scenario (top-right corner) and after a while, click Stop, as this is a one-time process. However, after this initial process, you will have to alter the scenario so it always does the same process with all the new addresses. Firstly, will need to swap the Now green trigger for a Repeat trigger and set it to repeat daily. Secondly, you need to change the condition in the middle of the scenario. In addition to the existing filter, add a new one where the attribute not set, as shown in the picture below. This way, all of the new email addresses will be included and the email domain will be stripped from them on a daily basis. 2. Email domain report After you successfully created the email_domain attribute and adjusted it for continual use, you are ready to create the email domain report. To create the report, copy the conditions outlined in the pictures below. To find out how to work with Reports, visit the linked article. The grouping is set in this way. Sent emails Delivered emails Delivery rate Advanced Delivery rate strategies Read our block about Email marketing analytics about metrics, KPIs, and reports. Definition of count(customer) for the formula in "Delivery rate" Definition of count(campaign) for the formula in "Delivery rate" Dropped (soft-bounced) emails Bounced mails Open rate Definition of count(customer) for the formula in "Open rate" Definition of count(campaign) for the formula in "Open rate" Click rate Definition of count(customer) for the formula in "Click rate" Definition of count(campaign) for the formula in "Click rate" Click rate from enqueued Definition of count(customer) for the formula in "Click rate from enqueued" Definition of count(campaign) for the formula in "Click rate from enqueued" If you did everything as suggested, now you should have a working report, where you can track your campaigns based on the ISPs of the customers. Updated about a month ago
https://docs.exponea.com/docs/email-domain-report
2020-09-18T11:30:04
CC-MAIN-2020-40
1600400187390.18
[array(['https://files.readme.io/1e26459-email_domain_report.png', 'email domain report.png'], dtype=object) array(['https://files.readme.io/1e26459-email_domain_report.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/98e514b-email_domain_report_chart.png', 'email domain report chart.png'], dtype=object) array(['https://files.readme.io/98e514b-email_domain_report_chart.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3fbd89f-domain_strip_scenario.png', 'domain strip scenario.png'], dtype=object) array(['https://files.readme.io/3fbd89f-domain_strip_scenario.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/985c5a3-email_has_value_condition.png', 'email has value condition.png'], dtype=object) array(['https://files.readme.io/985c5a3-email_has_value_condition.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/0d7b1e5-set_attribute.png', 'set attribute.png'], dtype=object) array(['https://files.readme.io/0d7b1e5-set_attribute.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3442c1c-dmain_strip_trigger.png', 'dmain strip trigger.png'], dtype=object) array(['https://files.readme.io/3442c1c-dmain_strip_trigger.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/c35edc6-condition.png', 'condition.png'], dtype=object) array(['https://files.readme.io/c35edc6-condition.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3fba0c2-email_domain_report_1_arrows.png', 'email domain report 1 arrows.png'], dtype=object) array(['https://files.readme.io/3fba0c2-email_domain_report_1_arrows.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/edf2658-top20_selection.png', 'top20 selection.png'], dtype=object) array(['https://files.readme.io/edf2658-top20_selection.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/dd39a3d-1_sent_mails.png', '1 sent mails.png'], dtype=object) array(['https://files.readme.io/dd39a3d-1_sent_mails.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/4cd6202-2_delivered_mails.png', '2 delivered mails.png'], dtype=object) array(['https://files.readme.io/4cd6202-2_delivered_mails.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/a29f7e6-3_delivery_rate.png', '3 delivery rate.png'], dtype=object) array(['https://files.readme.io/a29f7e6-3_delivery_rate.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/fae6653-count_customer_delivery.png', 'count customer delivery.png'], dtype=object) array(['https://files.readme.io/fae6653-count_customer_delivery.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/2bf6b6c-count_campaign_delivery.png', 'count campaign delivery.png'], dtype=object) array(['https://files.readme.io/2bf6b6c-count_campaign_delivery.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/9cdfb07-4_dropped_mails.png', '4 dropped mails.png'], dtype=object) array(['https://files.readme.io/9cdfb07-4_dropped_mails.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/9ef3b6d-5_bounced_mails.png', '5 bounced mails.png'], dtype=object) array(['https://files.readme.io/9ef3b6d-5_bounced_mails.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/62052c4-6_open_rate.png', '6 open rate.png'], dtype=object) array(['https://files.readme.io/62052c4-6_open_rate.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/8e279e8-count_customer_open.png', 'count customer open.png'], dtype=object) array(['https://files.readme.io/8e279e8-count_customer_open.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/a9e2b55-count_campaign_open.png', 'count campaign open.png'], dtype=object) array(['https://files.readme.io/a9e2b55-count_campaign_open.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/443c04f-7_click_rate.png', '7 click rate.png'], dtype=object) array(['https://files.readme.io/443c04f-7_click_rate.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/6720fcc-count_customer_click.png', 'count customer click.png'], dtype=object) array(['https://files.readme.io/6720fcc-count_customer_click.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/1b5dfe1-count_campaign_click.png', 'count campaign click.png'], dtype=object) array(['https://files.readme.io/1b5dfe1-count_campaign_click.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/e1a9ddb-8_click_rate_from_enqueued.png', '8 click rate from enqueued.png'], dtype=object) array(['https://files.readme.io/e1a9ddb-8_click_rate_from_enqueued.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3fa06df-count_customer_enqueued.png', 'count customer enqueued.png'], dtype=object) array(['https://files.readme.io/3fa06df-count_customer_enqueued.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/daba1f2-count_campaign_enqueued.png', 'count campaign enqueued.png'], dtype=object) array(['https://files.readme.io/daba1f2-count_campaign_enqueued.png', 'Click to close...'], dtype=object) ]
docs.exponea.com
Transferring Shell Objects with Drag-and-Drop and the Clipboard Many applications allow users to transfer data to another application by dragging and dropping the data with the mouse, or by using the Clipboard. Among the many types of data that can be transferred are Shell objects such as files or folders. Shell data transfer can take place between two applications, but users can also transfer Shell data to or from the desktop or Windows Explorer. Although files are the most commonly transferred Shell object, Shell data transfer can involve any of the variety of objects found in the Shell namespace. For instance, your application might need to transfer a file to a virtual folder such as the Recycle Bin, or accept an object from a non-Microsoft namespace extension. If you are implementing a namespace extension, it must be able to behave properly as a drop source and target. This document discusses how applications can implement drag-and-drop and Clipboard data transfers with Shell objects. How Drag-and-Drop Works with Shell Objects Applications often need to provide users with a way to transfer Shell data. Some examples are: - Dragging a file from Windows Explorer or the desktop and dropping it on an application. - Copying a file to the Clipboard in Windows Explorer and pasting it into an application. - Dragging a file from an application to the Recycle Bin. For a detailed discussion of how to handle these and other scenarios, see Handling Shell Data Transfer Scenarios. This document focuses on the general principles behind Shell data transfer. Windows provides two standard ways for applications to transfer Shell data: - A user cuts or copies Shell data, such as one or more files, to the Clipboard. The other application retrieves the data from the Clipboard. - A user drags an icon that represents the data from the source application and drops the icon on a window owned by the target. In both cases, the transferred data is contained in a data object. Data objects are Component Object Model (COM) objects that expose the IDataObject interface. Schematically, there are three essential steps that all Shell data transfers must follow: - The source creates a data object that represents the data that is to be transferred. - The target receives a pointer to the data object's IDataObject interface. - The target calls the IDataObject interface to extract the data from it. The difference between Clipboard and drag-and-drop data transfers lies primarily in how the IDataObject pointer is transferred from the source to the target. Clipboard Data Transfers The Clipboard is the simplest way to transfer Shell data. The basic procedure is similar to standard Clipboard data transfers. However, because you are transferring a pointer to a data object, not the data itself, you must use the OLE clipboard API instead of the standard clipboard API. The following procedure outlines how to use the OLE clipboard API to transfer Shell data with the Clipboard: - The data source creates a data object to contain the data. - The data source calls OleSetClipboard, which places a pointer to the data object's IDataObject interface on the Clipboard. - The target calls OleGetClipboard to retrieve the pointer to the data object's IDataObject interface. - The target extracts the data by calling the IDataObject::GetData method. - With some Shell data transfers, the target might also need to call the data object's IDataObject::SetData method to provide feedback to the data object on the outcome of the data transfer. See Handling Optimized Move Operations for an example of this type of operation. Drag-and-Drop Data Transfers While somewhat more complex to implement, drag-and-drop data transfer has some significant advantages over the Clipboard: - Drag-and-drop transfers can be done with a simple mouse movement, making operation more flexible and intuitive to use than the Clipboard. - Drag-and-drop provides the user with a visual representation of the operation. The user can follow the icon as it moves from source to target. - Drag-and-drop notifies the target when the data is available. Drag-and-drop operations also use data objects to transfer data. However, the drop source must provide functionality beyond that required for Clipboard transfers: - The drop source must also create an object that exposes an IDropSource interface. The system uses IDropSource to communicate with the source while the operation is in progress. - The drag-and-drop data object is responsible for tracking cursor movement and displaying an icon to represent the data object. Drop targets must also provide more functionality than is needed to handle Clipboard transfers: - The drop target must expose an IDropTarget interface. When the cursor is over a target window, the system uses IDropTarget to provide the target with information such as the cursor position, and to notify it when the data is dropped. - The drop target must register itself with the system by calling RegisterDragDrop. This function provides the system with the handle to a target window and a pointer to the target application's IDropTarget interface. Note For drag-and-drop operations, your application must initialize COM with OleInitialize, not CoInitialize. The following procedure outlines the essential steps that are typically used to transfer Shell data with drag-and-drop: - The target calls RegisterDragDrop to give the system a pointer to its IDropTarget interface and register a window as a drop target. - When the user starts a drag-and-drop operation, the source creates a data object and initiates a drag loop by calling DoDragDrop. - When the cursor is over the target window, the system notifies the target by calling one of the target's IDropTarget methods. The system calls IDropTarget::DragEnter when the cursor enters the target window, and IDropTarget::DragOver as the cursor passes over the target window. Both methods provide the drop target with the current cursor position and the state of keyboard modifier keys such as CTRL or ALT. When the cursor leaves the target window, the system notifies the target by calling IDropTarget::DragLeave. When any of these methods return, the system calls the IDropSource interface to pass the return value to the source. - When the user releases the mouse button to drop the data, the system calls the target's IDropTarget::Drop method. Among the method's parameters is a pointer to the data object's IDataObject interface. - The target calls the data object's IDataObject::GetData method to extract the data. - With some Shell data transfers, the target might also need to call the data object's IDataObject::SetData method to provide feedback to the source on the outcome of the data transfer. - When the target is finished with the data object, it returns from IDropTarget::Drop. The system returns the source's DoDragDrop call to notify the source that the data transfer is complete. - Depending on the particular data transfer scenario, the source might need to take additional action based on the value returned by DoDragDrop and the values that are passed to the data object by the target. For instance, when a file is moved, the source must check these values to determine whether it must delete the original file. - The source releases the data object. While the procedures outlined above provide a good general model for Shell data transfer, there are many different types of data that can be contained in a Shell data object. There are also a number of different data transfer scenarios that your application might need to handle. Each data type and scenario requires a somewhat different approach to three key steps in the procedure: - How a source constructs a data object to contain the Shell data. - How a target extracts Shell data from the data object. - How the source completes the data transfer operation. The Shell Data Object provides a general discussion of how a source constructs a Shell data object, and how that data object can be handled by the target. Handling Shell Data Transfer Scenarios discusses in detail how to handle a number of common Shell data transfer scenarios.
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/legacy/bb776905(v=vs.85)
2020-09-18T11:27:58
CC-MAIN-2020-40
1600400187390.18
[]
docs.microsoft.com
apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: {} As an administrator, you can turn on features that are Technology Preview features. You can use the FeatureGates Custom Resource to toggle on and off Technology Preview features throughout your cluster. This allows you, for example, to ensure that Technology Preview features are off for production clusters while leaving the features on for test clusters where you can fully test them. The following features are affected by FeatureGates: You can enable these features by editing the Feature Gate Custom Resource. Turning on these features cannot be undone and prevents the ability to upgrade your cluster. The LocalStorageCapacityIsolation cannot be enabled. You can turn Technology Preview features on and off for all nodes in the cluster by editing the FeatureGates Custom Resource, named cluster, in the openshift-config project. The following Technology Preview features are enabled by feature gates: RotateKubeletServerCertificate SupportPodPidsLimit To turn on the Technology Preview features for the entire cluster: Create the FeatureGates instance: Switch to the Administration → Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate. On the Custom Resource Definitions page, click the Actions Menu and select View Instances. On the Feature Gates page, click Create Feature Gates. Replace the code with following sample: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: {} Click Create. To turn on the Technology Preview features, change the spec parameter to: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade (1)
https://docs.openshift.com/container-platform/4.3/nodes/clusters/nodes-cluster-enabling-features.html
2020-09-18T11:54:15
CC-MAIN-2020-40
1600400187390.18
[]
docs.openshift.com
[−][src]Crate winit simplest retrieved from the EventsLoop the window was created with. There are two ways to do so. The first is to call events_loop.poll_events(...), which will retrieve all the events pending on the windows and immediately return after no new event is available. You usually want to use this method in application that render continuously on the screen, such as video games. use winit::{Event, WindowEvent}; use winit::dpi::LogicalSize; loop { events_loop.poll_events(|event| { match event { Event::WindowEvent { event: WindowEvent::Resized(LogicalSize { width, height }), .. } => { println!("The window was resized to {}x{}", width, height); }, _ => () } }); } The second way is to call events_loop.run_forever(...). As its name tells, it will run forever unless it is stopped by returning ControlFlow::Break. use winit::{ControlFlow, Event, WindowEvent}; events_loop.run_forever(|event| { match event { Event::WindowEvent { event: WindowEvent::CloseRequested, .. } => { println!("The close button was pressed; os module for that), which in turn allows you to create an OpenGL/Vulkan/DirectX/Metal/etc. context that will draw on the window.
https://docs.rs/winit/0.19.5/winit/
2020-09-18T09:53:40
CC-MAIN-2020-40
1600400187390.18
[]
docs.rs
Event Context Data is sent with every event, and is either predefined or custom: Predefined data is data that Sentry recognizes. This data enhances your ability to understand and investigate the source and impact of the data when viewing it in sentry.io. For example, the concept of user helps sentry.io display the number of unique users affected by an event. Custom data is arbitrary structured or unstructured extra data you can attach to your event. Regardless of whether the data sent to Sentry is predefined or custom, additional data can take two forms, tags and context:. Context includes additional diagnostic information attached to an event. By default, contexts are not searchable, but for convenience Sentry turns information in some predefined contexts into tags, making them searchable. Automatic Instrumentation Certain data is sent to Sentry automatically. This section explains the predefined data, some of which is simply the type of device or browser being used at the time of the event. Other predefined data you can modify, such as the level of the event. The final type of predefined data, environment and release, affect the UI experience or enable features within the product to help better identify issues in your application. Predefined Data Sentry turns additional, predefined data or specific attributes on the data into tags, which you can use for a variety of purposes, such as searching in the web UI for the event or delving deeper into your application. For example, you can use tags such as level or user.email to surface particular errors. You can also enable Sentry to track releases, which unlocks features that help you delve more deeply into the differences between deployed releases. If Sentry captures some predefined data but doesn’t expose it as a tag, you can set a custom tag for it. request, device, OS, runtime, app, browser, GPU, logger, and monitor are the most typical predefined data sent with an event. In addition, the following are sent, and can be modified for your team's use: level - Defines the severity of an event. The level can be set to one of five values, which are, in order of severity: fatal, error, warning, info, and debug.error. Learn how to set the level in Set the Level user - Providing user information to Sentry helps you evaluate the number of users affecting an issue and evaluate the quality of the application. Learn how to capture user information in Capture the User fingerprint - Sentry uses one or more fingerprints to determine how to group errors into issues. Learn more about Sentry's approach to grouping algorithms in Grouping Events into Issues. Learn how to override the default group in very advanced use cases in Modify the Default Fingerprint environment - Environments help you better filter issues, releases, and user feedback in the Issue Details page of the web UI. Learn how to set and manage environments release - A release is a version of your code that you deploy to an environment. When enabled, releases also help you determine regressions between releases and their potential source as discussed in the releases documentation. For JavaScript developers, a release is also used for applying [source maps] to minified JavaScript to view original, untransformed source code. - Package: - nuget:Sentry.Extensions.Logging - Version: - 2.1.6 - Repository: -
https://docs.sentry.io/platforms/dotnet/guides/extensions-logging/enriching-error-data/additional-data/
2020-09-18T11:17:32
CC-MAIN-2020-40
1600400187390.18
[]
docs.sentry.io
Actions Reference This section consists of the following subsections: AboutAction Opens the About JChem for Excel window to show the exact version number. AddEditStructureSingleCellAction Opens the structure editor to add a new structure to an empty Excel cell or to edit the existing structure in the Excel cell. For more information, see Add a Structure to a Cell and Edit a Structure in a Cell. AddEditStructureTaskPaneAction Opens the Task Pane Editor to add a new structure to an empty Excel cell or to edit the existing structure in the Excel cell. ClearFilterAction Clears the actual structure filtering and displays the hidden rows with structures again. For more information, see Structure Filter. DecreaseStructureSizeAction Decreases the size of the structures by 10%. It resizes both the column width and row height. For more information, see Resize Structures. EmbedInJCSMILESAction. HelpAction Opens JChem for Excel User Guide. IncreaseStructureSizeAction Increases the size of the structures by 10%. It resizes both the column width and row height. For more information, see Resize Structures. LicenseAction Starts the ChemAxon License Manager to manage ChemAxon licenses. For more information, see License Management. MoleculeWorksheetFilterAction. OpenStructureSingleCellAction Opens a single structure from a wide variety of file formats. The following file formats can be opened: CDX, MOL, MRV, PDB. For more information, see Insert Single Structures. OptionsAction Opens the Options form, where default display settings and application behavior can be defined. The settings specified here are saved for the next sessions. For more information, see Options in JChem for Excel. RedrawStructuresAction Redraws structures if the structures are hidden from the worksheet because of any reason. RemoveHitColoringAction Removes hit coloring from the filtered structures. For more information, see Structure Filter. ResizeStructuresToDefaultInSelectionAction Resizes structures to the default structure size in a selected area. This action is not available in the JChem for Excel ribbon/context menu/menu/toolbar by default but it is possible to make it available by customizing the ribbon/context menu/menu/toolbar. RGroupDecompositionAction. SaveAsStructureSingleCellAction Saves a single structure to a wide variety of file formats. The following file formats can be created: CDX MOL MRV PDB For more information, see Save Single Structure to a File. ShowHideStructuresAction Displays and hides the structures. The structures are visible by default when adding them to JChem for Excel. For more information, see Show and Hide Structures. ShowHideStructuresActionforIDs This action went on the Ribbon to serve the JCIDSysstructure function by switching between the formulas. For more information regarding the JCIDSysstructure function, see JCIDSYSStructure. UnembedFromJCSMILESAction Displays structures again after an EmbedInJCSMILESAction. This action is not available on the ribbon as a button by default. It can be added easily by customizing the ribbon. For more information, see Customizing the Ribbon.
https://docs.chemaxon.com/display/docs/Actions_Reference.html
2020-09-18T10:23:42
CC-MAIN-2020-40
1600400187390.18
[]
docs.chemaxon.com
The ComputeVertexColors command evaluates the color of the texture at each texture coordinate (u,v) and sets the vertex color to the corresponding color. The color of a mesh that is derived from a texture becomes part of the geometry. Material textures can be turned off or changed, but vertex colors cannot. Select a mesh with a texture map assigned. You will not see any difference on the mesh object in the Rendered display mode because the texture in the current material overrides the vertex colors. Use materials and textures Rhinoceros 6 © 2010-2020 Robert McNeel & Associates. 12-Sep-2020
https://docs.mcneel.com/rhino/6/help/en-us/commands/computevertexcolors.htm
2020-09-18T11:25:37
CC-MAIN-2020-40
1600400187390.18
[]
docs.mcneel.com