Handling Transactions with EJBs This chapter describes the transaction support built into the Enterprise JavaBean programming model. It begins by introducing the EJB transaction model and explains the notion of container-managed and bean-managed transactions. Then it explains the semantics of all transaction attributes and the use of the Java Transaction API for bean-managed transactions. Finally, it discusses the restrictions on various combinations of EJB types and transaction attributes. Understanding the Transaction Model Specifying Transaction Attributes in an EJB Using Bean Managed Transactions Restrictions on Transaction Attributes Setting Isolation Levels import javax.transaction.UserTransaction; ... EJBContext ejbContext = ...; ... UserTransaction tx = ejbContext.getUserTransaction(); tx.begin(); ... // do work tx.commit(); If you use the TX_BEAN_MANAGED transaction attribute anywhere in a bean, whether at the bean-wide level or for a method, all other methods in that bean must also employ the TX_BEAN_MANAGED transaction attribute. You cannot mix this attribute with other attributes at the method level. The TX_BEAN_MANAGED and TX_NOT_SUPPORTED attributes are not supported for entity beans.
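To make the inline snippet above concrete, here is a minimal sketch of a bean-managed transaction that rolls back on failure. This is an illustration only: the bean and method names are made up, and a deployment descriptor marking the bean as TX_BEAN_MANAGED is assumed.

```java
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.transaction.UserTransaction;

// Illustrative TX_BEAN_MANAGED session bean (names are hypothetical).
public class OrderManagerBean implements SessionBean {
    private SessionContext ctx;

    public void setSessionContext(SessionContext ctx) {
        this.ctx = ctx;
    }

    public void placeOrder(String orderId) throws Exception {
        // Obtain the UserTransaction from the EJB context, as in the snippet above.
        UserTransaction tx = ctx.getUserTransaction();
        tx.begin();
        try {
            // ... do transactional work (update entity beans, enqueue messages, etc.) ...
            tx.commit();
        } catch (Exception e) {
            // Undo any partial work before propagating the error.
            tx.rollback();
            throw e;
        }
    }

    // Empty lifecycle callbacks required by the SessionBean contract.
    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}
```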
http://docs.sun.com/source/816-6104-10/ptrans.htm
# update the server sudo apt-get update # GeoMoose Runtime Deps sudo apt-get install -y \ apache2 libapache2-mod-php5 \ mapserver-bin cgi-mapserver gdal-bin \ php5-mapscript php5-sqlite php5-gd php5-curl # GeoMoose Build Deps sudo apt-get install -y \ git-core \ default-jre \ python-sphinx make texlive texlive-latex-extra \ naturaldocs sudo mkdir /srv/geomoose sudo chown ubuntu:ubuntu /srv/geomoose git clone --recursive /srv/geomoose By default this installs the latest development version of GeoMoose, which is a work in progress. It may be unstable and generally inappropriate for daily use. To install a released version, switch to the appropriate branch or tag and update the submodules. Released branches are named rx.y where x and y are the major and minor version numbers. Released branches always point to the latest bugfix release for that branch. Tags point to a specific release and are formatted as rx.y.z where x and y are as with branches and z is the patch level. e.g.: cd /srv/geomoose git checkout r2.9.0 git submodule update --init --recursive As of this writing 2.9.0 is the latest version of GeoMoose. Why do this with Git? Using Git to control versions makes upgrades much easier. It can allow you to try different versions of GeoMoose, including the latest development versions, without having to download multiple ZIP files or create multiple directories. It is necessary to rebuild the JavaScript code any time the code changes so that the optimized build (geomoose.html) contains the current code. This could occur if you make changes yourself, or if you switch to a different branch or tag. cd /srv/geomoose/js/libs ./build_js.sh Don’t be fooled! References to geomoose2.6 will appear while building GeoMoose. This is normal: all versions past 2.6.0 show these messages, and the version of GeoMoose you selected is still the version being built. Apache will be used to serve GeoMoose, but Apache needs to have files pointed in the right places. First, we need to normalize the /srv/geomoose directory: cd /srv/geomoose/js ln -s ../services/php . ln -s geomoose.html index.html sudo ln -s /srv/geomoose/js /var/www/html/geomoose sudo ln -s /srv/geomoose/docs/build/html /var/www/html/geomoose-docs GeoMoose uses two files to describe where local resources are located. The first is local_settings.ini. This uses the familiar .ini file format to point to temporary directories, MapServer, and the directory where MapFiles are located. These next steps can be intimidating to those unfamiliar with the command prompt. We will be using the nano text editor to edit these files. nano does not need to be scary! You can use arrow keys to navigate and it uses the CTRL key with a basic set of commands to save and exit. Open local_settings.ini in the nano text editor nano /srv/geomoose/conf/local_settings.ini Type or copy/paste the following lines into that file: [paths] root=/srv/geomoose/maps/ mapserver_url=/cgi-bin/mapserv temp=/tmp/ After those lines are in the file, use the CTRL key and the X key at the same time. When prompted, type Y to save your changes, and then ENTER to save the file. This file is used to tell MapServer where to save data it uses temporarily to render images. Open temp_directory.map in nano: nano /srv/geomoose/maps/temp_directory.map Change this line: IMAGEPATH "/tmp/www/" To this line: IMAGEPATH "/tmp/" After that line is in the file, use the CTRL key and the X key at the same time. When prompted, type Y to save your changes, and then ENTER to save the file.
https://docs.geomoose.org/2.9/workshops/umgeocon2016/ex1.html
In this document This document describes how to install and run Snort with SmartNICs and Napatech Software Suite (also referred to as 3GD), and how the multi-CPU distribution functionality improves performance in an IDS scenario. The installation process is described using a source-based installation of Snort on CentOS as an example. You must adapt the installation process for package-based installation, for other Linux distributions, and for FreeBSD.
https://docs.napatech.com/r/tV_BBTbrxypvxa7V2iotHg/5zxTVS7XC3VNLOpTW2hmPA
In this document This troubleshooting guide can help you diagnose and correct common issues. Purpose The purpose of this application note is to provide users with a basic troubleshooting guide. This document only applies to functionality provided by Napatech Software Suite (also referred to as 3GD). Audience The intended target audience for this document is SmartNIC end users including system integrators. Further information Some Napatech tools listed in this document may not be available in the Napatech Software Suite package. Please note that how to download or how to install the listed tools is not described in this document. Please refer to DN-0449 for more information about Napatech tools. Please contact Napatech Technical Support if you have any further questions regarding the listed tools in this document.
https://docs.napatech.com/r/uEYdApEFrM979dO1sXspfg/gGX57pbgVRuDmQImkeqqWQ
3.4. Classifications, Products, Components, Versions, and Milestones¶ Bugs in Bugzilla are classified into one of a set of admin-defined Components. Components are themselves each part of a single Product. Optionally, Products can be part of a single Classification, adding a third level to the hierarchy. 3.4.1. Classifications¶ Classifications are used to group several related products into one distinct entity. For example, if a company makes computer games, they could have a classification of “Games”, and a separate product for each game. This company might also have a Common classification, containing products representing units of technology used in multiple games, and perhaps an Other classification containing a few special products that represent items that are not actually shipping products (for example, “Website”, or “Administration”). The classifications layer is disabled by default; it can be turned on or off using the useclassification parameter in the Bug Fields section of Parameters. 3.4.2. Products¶ Products usually represent real-world shipping products. Many of Bugzilla’s settings are configurable on a per-product basis. When creating or editing products the following options are available: - Product - The name of the product - Description - A brief description of the product - Open for bug entry - Deselect this box to prevent new bugs from being entered against this product. - Enable the UNCONFIRMED status in this product - Select this option if you want to use the UNCONFIRMED status (see Workflow) - Default milestone - Select the default milestone for this product. - Version - Specify the default version for this product. - Create chart datasets for this product - Select to make chart datasets available for this product. It is compulsory to create at least one component in a product, and so you will be asked for the details of that too. When editing a product you can change all of the above, and there is also a link to edit Group Access Controls; see Assigning Group Controls to Products. 3.4.2.1. Creating New Products¶ To create a new product: - Select Administration from the footer and then choose Products from the main administration page. - Select the Add link in the bottom right. - Enter the details as outlined above. 3.4.2.2. Editing Products¶ Editing a product lets you change the properties defined when the product was created. Click on the product name to edit these properties, and to access links to other product attributes such as the product’s components, versions, milestones, and group access controls. 3.4.2.3. Adding or Editing Components, Versions and Target Milestones¶ To add new or edit existing Components, Versions, or Target Milestones in a Product, select the “Edit Components”, “Edit Versions”, or “Edit Milestones” links from the “Edit Product” page. A table of existing Components, Versions, or Milestones is displayed. Click on an item name to edit the properties of that item. Below the table is a link to add a new Component, Version, or Milestone. For more information on components, see Components. For more information on versions, see Versions. For more information on milestones, see Milestones. 3.4.2.4. Assigning Group Controls to Products¶ Groups are created and edited as described in Groups and Security. A group control on a product can be shown (i.e. bugs in this product can be associated with this group), default (i.e. bugs in this product are in this group by default), or mandatory (i.e. bugs in this product are always restricted to this group). See Common Applications of Group Controls for examples of product and group relationships. Note Products and Groups are not limited to a one-to-one relationship. 
Multiple groups can be associated with the same product, and groups can be associated with more than one product. 3.4.2.5. Common Applications of Group Controls¶ Suppose there is a product called “Bar”. You would like to make it so that only users in the group “Foo” can enter bugs in the “Bar” product. Additionally, bugs filed in product “Bar” must be visible only to users in “Foo” (plus, by default, the reporter, assignee, and CC list of each bug) at all times. Furthermore, only members of group “Foo” should be able to edit bugs filed against product “Bar”, even if other users could see the bug. This arrangement would be achieved by the following: Product Bar: foo: ENTRY, MANDATORY/MANDATORY, CANEDIT Perhaps such strict restrictions are not needed for product “Bar”. Instead, you would like to make it so that only members of group “Foo” can enter bugs in product “Bar”, but bugs in “Bar” are not required to be restricted in visibility to people in “Foo”. Anyone with permission to edit a particular bug in product “Bar” can put the bug in group “Foo”, even if they themselves are not in “Foo”. Furthermore, anyone in group “Foo” can edit all aspects of the components of product “Bar”, can confirm bugs in product “Bar”, and can edit all fields of any bug in product “Bar”. That would be done like this: Product Bar: foo: ENTRY, SHOWN/SHOWN, EDITCOMPONENTS, CANCONFIRM, EDITBUGS General User Access With Security Group¶ To permit any user to file bugs against “Product A”, and to permit any user to submit those bugs into a group called “Security”: Product A: security: SHOWN/SHOWN General User Access With A Security Product¶ To permit any user to file bugs against a product called “Security” while keeping those bugs from becoming visible to anyone outside the group “SecurityWorkers” (unless a member of the “SecurityWorkers” group removes that restriction): Product Security: securityworkers: DEFAULT/MANDATORY Product Isolation With a Common Group¶: Product A: AccessA: ENTRY, MANDATORY/MANDATORY Product B: AccessB: ENTRY, MANDATORY/MANDATORY: Product A: AccessA: ENTRY, MANDATORY/MANDATORY Support: SHOWN/NA Product B: AccessB: ENTRY, MANDATORY/MANDATORY Support: SHOWN/NA Product Common: Support: ENTRY, DEFAULT/MANDATORY, CANEDIT Make a Product Read Only¶: Product A: ReadOnly: ENTRY, NA/NA, CANEDIT Note For more information on Groups outside of how they relate to products, see Groups and Security. 3.4.3. Components¶ To create a new Component: - Select the Edit Components link from the Edit Product page. - Select the Add link in the bottom right. - Fill out the Component field, a short Description, the Default Assignee, Default CC List, and Default QA Contact (if enabled). The Component Description field may contain a limited subset of HTML tags. The Default Assignee field must be a login name already existing in the Bugzilla database. 3.4.4. Versions¶ 3.4.5. Milestones¶ Milestones are “targets” that you plan to get a bug fixed by. For example, if you have a bug that you plan to fix for your 3.0 release, it would be assigned the milestone of 3.0. Note Milestone options will only appear for a Product if you turned on the usetargetmilestone parameter in the “Bug Fields” tab of the Parameters page. This documentation undoubtedly has bugs; if you find some, please file them here.
https://bmo.readthedocs.io/en/latest/administering/categorization.html
CIDER nREPL Overview cider-nrepl aims to extend the base functionality of an nREPL server to cater to the needs of Clojure(Script) programming environments. It provides nREPL ops for common operations like: code completion source and documentation lookup profiling debugging code reloading find references running tests filtering stacktraces The ultimate goal of cider-nrepl is to provide a solid foundation for nREPL clients, so they don’t have to reinvent the wheel all the time. Despite its name, cider-nrepl is editor-agnostic and is leveraged by several other Clojure editors besides CIDER (e.g. vim-fireplace, iced-vim, Calva, CCW). While the project is officially a part of CIDER, its development is a joint venture between all interested tool authors. Design This section documents some of the major design decisions in cider-nrepl. While in essence it’s just a collection of nREPL middleware, we had to make a few important design decisions here and there that influenced the code base and the usability of cider-nrepl in various ways. REPL-powered All of the middleware that are currently part of cider-nrepl rely on REPL state introspection to perform their work. While we might leverage static code analysis down the road for some tasks, cider-nrepl will always be a REPL-first tool. Editor Agnostic Although those middlewares were created for use with CIDER, almost all of them are extremely generic and can be leveraged from other editors. Projects like vim-fireplace and vim-replant are making use of cider-nrepl already. Reusable Core Logic cider-nrepl tries to have as little logic as possible and mostly provides thin wrappers over existing libraries (e.g. compliment, cljfmt, etc). Much of its core functionality lives in orchard, so that eventually it can be used by non-nREPL clients (e.g. Socket REPL clients). Very simply put - there’s very little code in cider-nrepl that’s not simply wrapping code from other libraries in nREPL operations. The primary reason for this is our desire to eventually provide support for non-nREPL REPLs in CIDER, but this also means that other editors can directly leverage the work we’ve done so far. ClojureScript Support We want cider-nrepl to offer feature parity between Clojure and ClojureScript, but we’re not quite there yet and many features right now are Clojure-only. We’d really appreciate all the help we can get from ClojureScript hackers to make this a reality. Isolated Runtime Dependencies. It’s a bit ugly and painful, but it gets the job done. If someone has better ideas how to isolate our runtime dependencies - we’re all ears! Deferred Middleware Loading To improve the startup time of the nREPL server, all of cider-nrepl’s middlewares are loaded for real only when needed. We’d love to bring the support for deferred middleware loading straight to nREPL down the road.
https://docs.cider.mx/cider-nrepl/0.26/index.html
Configure ServiceNow Users can securely log on to ServiceNow using their enterprise credentials. To configure ServiceNow for SSO through SAML, perform the following: In a browser, type https://<your-organization>.service-now.com/ and press Enter. For example, if the URL you use to access ServiceNow is https://myserver.service-now.com/, then you must replace <your-organization> with myserver. NOTE: Ensure that the following details are provided in the Citrix Gateway service user interface when adding the ServiceNow app. Assertion URL: https://<your-organization>.service-now.com/navpage.do Relay State: https://<your-organization>.service-now.com/ Audience: https://<your-organization>.service-now.com/ Name ID Format: Select “Email Address” Name ID: Select “User Principal Name (UPN)” The Name ID format and Name ID attributes depend on the method of authentication chosen for ServiceNow. Log on to your ServiceNow account as an administrator. In the upper-left corner, using the Filter Navigator, search for plug-ins, and click Plugins in the search results. In the right pane, in the System Plugins section, search for integration. In the search results, right-click Integration - Multiple Provider Single Sign-On Installer and click Activate/Upgrade. Click Activate. A progress bar indicates the completion of the activation process. In the left pane, scroll down to the Multi-Provider SSO section and click Multi-Provider SSO > Identity Providers. In the right pane, click New. Click SAML. If you have the metadata URL, in the Identity Provider New Record section, in the Import Identity Provider Metadata pop-up window, click URL, enter the metadata URL, and click Import. The values for the Identity Provider record fields are automatically populated. If you have the metadata XML file, click XML. Copy the Identity Provider Metadata XML data and paste it in the box. Click Import. The values for the Identity Provider record fields are automatically populated. You can update the values if necessary. Important: Citrix recommends that you import the metadata XML file instead of configuring it manually. You can import the metadata XML file from the Citrix Cloud wizard (Citrix Gateway Service > Add a Web/SaaS App > Single sign on > SAML Metadata). While configuring the parameters in the Advanced tab, ensure that the User Field value matches the value that is configured for the Name ID field in the Citrix Gateway service user interface. Click Submit. In the left pane, click x509 Certificate to upload the x509 certificate. In the right pane, click New. In the X.509 Certificate New record section, specify the following information: Name – type a certificate name. Format – click the appropriate format: for example PEM. Expiration notification – select the check box. Type – click the appropriate type. Notify on expiration – click the Add me icon to get notified. Click the Unlock Notify on expiration to add more users. Active – select the check box. Short Description – type a description for the certificate. PEM Certificate – paste the PEM certificate. Download the certificate from the Citrix Cloud wizard (Citrix Gateway Service > Add a Web/SaaS App > Single sign on > Certificate). Copy the text from —–BEGIN CERTIFICATE—– to —–END CERTIFICATE—– Paste the text in a text editor and save the file in an appropriate format such as <your organization name>. Note: If you have used an XML file to configure the IdP, you do not have to configure the certificate. Click Submit. In the left pane, click Identity Providers. Click the Identity Provider that you have added. 
On the Identity Provider details page, scroll down to the Related Links section. In the X.509 Certificate row, search for the X.509 certificate, and add the appropriate certificate for the identity provider by clicking Edit. To add a new x.509 certificate, click New; to add or remove certificates, click Edit. Click Update on the identity provider details page to save the changes. To obtain metadata to be used for IdP configuration, click Generate Metadata. Note: You must click Generate Metadata to complete the updates. The service provider metadata appears in a new window. You can use the metadata to validate the entities across both the SP and IdP.
https://docs.citrix.com/en-us/citrix-gateway-service/saas-apps-templates/citrix-gateway-servicenow-saas.html
messages.log Background Information InterSystems IRIS reports general messages, system errors, certain operating system errors, and network errors through an operator console facility. On Windows, there is no operator console as such and all console messages are sent to the messages.log file in the InterSystems IRIS system manager directory (install-dir/mgr). For InterSystems IRIS systems on UNIX® platforms, you can send operator console messages to the messages.log file or the console terminal. For more information, see “Monitoring Log Files” in the chapter “Monitoring InterSystems IRIS Using the Management Portal” in the Monitoring Guide. The directory location of this file is configurable; see the ConsoleFile entry in the Configuration Parameter File Reference. Available Tools Provides the WriteToConsoleLog() method, which you can use to write to the messages.log file. Availability: All namespaces. Provides the ModifyConsoleFile() method. Availability: All namespaces. Enables you to set up structured logging, which writes the same messages seen in messages.log.
https://docs.intersystems.com/irisforhealthlatest/csp/docbook/Doc.View.cls?KEY=ITECHREF_cconsole_log
that was a KERNEL_MODE_HEAP_CORRUPTION, no 3rd party drivers on the call stack. You should try to uninstall this 3rd party software and try to reproduce. (some Chinese antivirus?) C:\Windows\system32\drivers\AliPaladin64.sys Fri Sep 7 04:14:27 2018 C:\Windows\System32\drivers\Tppwr64v.sys Thu Feb 16 05:57:36 2017 C:\Windows\system32\drivers\TsQBDrv.sys Thu Nov 22 05:27:13 2018 C:\Windows\system32\drivers\xlwfp.sys Fri Mar 23 03:36:33 2018 Multiple instances of the Lenovo Solution Center drivers were unloaded, so you could uninstall that, too. If the crashes continue: enable driver verifier. Thank you very much for your suggestions, but my computer is a brand new computer produced in July 2019. Why does the record show drivers from 2017 and 2018? These three drivers are rogue applications from three Chinese companies. I understand that they will always occupy memory in the background (even if the application is closed). I have found a way to uninstall them through the registry. I hope to avoid the next blue screen. Thank you very much for helping me analyze the dump file; you helped me find the key point of this failure. Hi MinghaoFu-1981 I've worked as a Windows Support Engineer on behalf of Microsoft and I can help analyze the dump file generated during the blue screen to try to find its root cause. The default location of the dump file is %SystemRoot%\memory.dmp. Can you check if the file is there? If so, please upload it to OneDrive so I can download and proceed. Thanks! s!AoQSI4tAXyBywYcgtreqvUaR2ca7JA these are the dump files Please see the attached pdf. I noticed that you are using Windows 18362 (1903). There are many known issues that can cause this behavior. To avoid these known issues, I suggest updating to 18363 (1909) and applying Windows Update to continue. Can you follow these steps and verify if the issue persists? Thanks 8511-bugcheck-analysis.pdf My computer has already been updated to 1909. In the past year, my computer has shown a blue screen twice. The problem you helped me analyze is heap corruption. How can I prevent this from happening again? Thank you very much for your help Share the memory dump files
https://docs.microsoft.com/en-us/answers/questions/28121/windows10-blue-screen-problem-hope-to-help-solve-i.html?sort=oldest
Every Stargaze user is associated with their own creator coin. A creator coin is minted on a bonding curve (with STARS reserves) that favors early buyers. STARS reserves used for buying creator coins are staked on the network, earning yield for the creator. The creator coin curve is a power function of the form: x is the reserve asset (STARS), while y is the price of the creator coin. m and n control the incline of the curve, and c lowers and lifts the curve. Stargaze will use the following constants for creator coins: m = 1/400, n = 2, c = 0 This generates the following curve: To calculate the buying price for new creator coins, or for selling them, we have to calculate the area under the curve, or the integral. This formula is: r is the reserve balance (amount of STARS) required to be in place for the creator coin's supply to be x. Buying a creator coin is the crypto-native way of "following" a user. In the Stargaze web app, users whose creator coins you own will show up higher in your feed. This gives users control over constructing their own feed algorithm, instead of an opaque algorithm used in legacy social media. Whenever a user earns rewards from their posts being curated, it gets added to their creator coin bonding curve, increasing the coin price. Creator coins are only for registered Stargaze users and are opt-in. A user's Twitter account has to be verified in order for them to mint creator coins. Creator coins will be interoperable as ERC-20s in Ethereum, transferable over Gravity Bridge. Creator coins are implemented as staking derivatives. This enables users to earn staking yield on their coins. They can also earn DeFi yield by transferring to other protocols in the Cosmos and Ethereum ecosystems.
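For reference, a plausible reconstruction of the two formulas referenced above (the price curve and its integral), written in LaTeX. This is an assumption based on the standard power-function bonding-curve form and the constants given; the text describes x both as the reserve and as the supply, and the reconstruction below takes x as the coin supply, which is the reading consistent with the integral formula.

```latex
% Price curve: y is the coin price, x the coin supply;
% m and n control the incline, c shifts the curve up or down.
y = m x^{n} + c
% Reserve balance: the area under the curve up to supply x gives the
% STARS reserve r required for the supply to be x.
r = \int_{0}^{x} \left( m t^{n} + c \right) \, dt = \frac{m}{n+1} \, x^{n+1} + c x
% With Stargaze's constants m = 1/400, n = 2, c = 0:
y = \frac{x^{2}}{400}, \qquad r = \frac{x^{3}}{1200}
```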
https://docs.stargaze.fi/protocol/creator-coins
Export Substance Alchemist exports your materials as Substance files (.SBS and .SBSAR) or as bitmap textures. Those textures are created from each channel of your materials. They can be mixed based on specific presets. Multiple presets are shipped by default and it is also possible to import new ones. See the following pages for more information:
https://docs.substance3d.com/sadoc/export-188976202.html
# Networking The last step before proceeding with the configuration of PlexusLab is the management of the connection to the I/O devices. Make sure you have at least one I/O device for which you know the IP address before proceeding with the setup via the platform. Each device of the Plexus POD series offers the possibility to manually set its own IP; refer to the POD user manual for more information on this. It is good practice to assign a subnet to the section that connects the PlexusLab web server with the I/O devices, as in the following examples. Tip More complex networks, composed partly or entirely of I/O devices outside the Plexus POD series, are listed among the advanced examples; in these cases it is necessary to refer to the chapter on configuring the ModBus API. # Basic configuration examples Below are some examples of networks made up of a PlexusLab Web Server and different combinations of connected I/O. # 1 Plexus POD Web Server with PlexusLab + 1 Plexus POD. A configuration of this type allows simple monitoring of your system thanks to the inputs and outputs provided by the Plexus POD. Plexus POD device's address: 192.168.2.200 # 2 or more Plexus PODs Web Server with PlexusLab and 2 (or more) Plexus PODs. A configuration of this type allows you to extend the number of sensors and actuators connected to the system thanks to the use of more than one Plexus POD. Plexus POD devices' addresses: 192.168.2.200, 192.168.2.201, ... # Multi-server configuration examples Below are some examples of networks made up of more than one PlexusLab Web Server and different combinations of connected I/O. # 2 PlexusLab, autonomous 2 Web Servers with PlexusLab, each with its associated PODs (as seen for the basic network configurations), without any virtual link between the two. This configuration is in effect a basic configuration, replicated on more than one server. PlexusLab address 2: 192.168.1.101/plexus/ # 2 PlexusLab, including 1 bridge 2 Web Servers with PlexusLab, one of which is in Bridge mode, plus the related associated PODs (as seen for the basic network configurations). A configuration of this type, once the setup on both Web Servers is complete, allows you to monitor the entire system from the PlexusLab platform installed on the "master" machine. The section on adding I/O devices explains in detail how to add a Plexus Bridge. PlexusLab address 2: 192.168.1.101/plexus/
https://docs.v1.plexus-automation.com/en/setup/networking.html
iOSMobileTableDataSource.RowCount From Xojo Documentation Method iOSMobileTableDataSource.RowCount(table As iOSMobileTable,
https://docs.xojo.com/iOSMobileTableDataSource.RowCount
Utilities¶ Sparse utilities¶ A chainer.Variable can be converted into a sparse matrix in e.g. COO (Coordinate list) format. A sparse matrix stores the same data as the original object but with a different internal representation, optimized for efficient operations on sparse data, i.e. data with many zero elements. The following is a list of supported sparse matrix formats and utilities for converting between a chainer.Variable and these representations. Note Please be aware that only certain functions accept sparse matrices as inputs, such as chainer.functions.sparse_matmul().
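As a minimal sketch of converting a dense array and using it in a sparse operation: chainer.functions.sparse_matmul() is named in the note above, while the to_coo conversion helper and the example shapes are assumptions based on Chainer's utility module rather than on this page.

```python
import numpy as np
import chainer
import chainer.functions as F
from chainer.utils import to_coo  # conversion helper; name assumed, not taken from this page

# A dense matrix with many zero elements.
dense = np.array([[0, 2, 0],
                  [1, 0, 0]], dtype=np.float32)

# Convert to the COO (Coordinate list) sparse representation,
# which stores only the non-zero entries.
sparse = to_coo(dense)

# Only certain functions accept sparse inputs, e.g. sparse_matmul:
x = chainer.Variable(np.ones((3, 4), dtype=np.float32))
y = F.sparse_matmul(sparse, x)  # (2, 3) sparse times (3, 4) dense -> (2, 4)
print(y.shape)                  # (2, 4)
```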
https://docs.chainer.org/en/latest/reference/util.html
Auto-scaling in private cloud environments This section describes how Virtual Warehouse auto-scaling works in Cloudera Data Warehouse (CDW) Private Cloud. Auto-scaling enables both scaling up and scaling down of Virtual Warehouse instances so they can meet your varying workload demands and free up resources on the OpenShift cluster for use by other workloads. The following topics describe how auto-scaling works in Hive and Impala Virtual Warehouses in CDW Private Cloud.
https://docs.cloudera.com/data-warehouse/1.2/auto-scaling/topics/dw-private-cloud-autoscaling-overview.html
To navigate your wireless RAN, review these topics. The Uhana by VMware dashboard displays a map and related KPI charts. The Uhana by VMware platform contains a framework for dynamically defining and computing KPIs in a real-time system. The Uhana by VMware dashboard figure represents a dashboard and its parts. See the Dashboard Elements table for callout descriptions. Dashboard Elements Table This section describes how to manipulate the dashboard to view data in the platform. Use the Selection Panel to view data. The following tables provide the data charts available for common data consumption. To get to root causes and remediations, use the Focus menu. Where possible, related terminology can be found in the online glossary of terms, Terminology.
https://docs.vmware.com/en/Uhana-by-VMware/1.0/uhana_piran_1_3/GUID-explore_charts.html
APP_Addons_List_Table::__construct() Method: Constructor. Source: includes/admin/addons-mp/addons-mp-class.php:53 Method: Outputs all the Add-ons page content. Source: includes/admin/addons-mp/addons-mp-class.php:216 Method: Outputs the pagination bar. Source: includes/admin/addons-mp/addons-mp-class.php:244 Method: Fetches the add-ons filters from cache (if not expired) or from the marketplace REST API, directly. Source: includes/admin/addons-mp/addons-mp-class.php:177 Method: Fetches the add-ons from cache (if not expired) or from the marketplace REST API, directly. Source: includes/admin/addons-mp/addons-mp-class.php:119 Method: Retrieves all the add-ons from an RSS list as an array of objects. Source: includes/admin/addons-mp/addons-mp-class.php:422 Method: Retrieves a list of all the available Add-ons filters. Source: includes/admin/addons-mp/addons-mp-class.php:566 Method: Outputs the Add-ons filters. Source: includes/admin/addons-mp/addons-mp-class.php:151 Method: Retrieve a list of CSS classes to be used on the table listing. Source: includes/admin/addons-mp/addons-mp-class.php:278 Method: Retrieves available tabs array Source: includes/admin/addons-mp/addons-mp-class.php:95
https://docs.arthemes.org/jobroller/reference/methods/
TagResource Tags the specified resource in AWS Audit Manager. Request Syntax POST /tags/resourceArn HTTP/1.1 Content-type: application/json { "tags": { "string" : "string" } } URI Request Parameters The request uses the following URI parameters. - resourceArn The Amazon Resource Name (ARN) of the specified resource. Length Constraints: Minimum length of 20. Maximum length of 2048. Pattern: ^arn:.*:auditmanager:.* Required: Yes Request Body The request accepts the following data in JSON format. - tags The tags to be associated with the resource. Required: Yes Response Syntax HTTP/1.1 200 Response Elements If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body. Errors For information about the errors that are common to all actions, see Common Errors.
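To make the syntax above concrete, here is a hypothetical example request that tags an assessment; the ARN, tag keys, and tag values are made up for illustration, and in a real request the ARN would be URL-encoded when placed in the path.

```
POST /tags/arn:aws:auditmanager:us-east-1:111122223333:assessment/a1b2c3d4 HTTP/1.1
Content-type: application/json

{
  "tags": {
    "project": "compliance",
    "environment": "production"
  }
}
```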
https://docs.aws.amazon.com/audit-manager/latest/APIReference/API_TagResource.html
Obsolete As of CRYENGINE 3.6.7, we’re no longer building some cache files for packaged builds. The functionality to do so in the engine is still present, though it will likely be removed at a later date. We anticipate this will not only improve performance on current platforms, but also simplify the build process greatly and remove some post-build issues, such as having to modify the startup.pak in order to remove the shipped sample levels which had file content inside. The reason for this change is that these cache files were built with 7th Generation consoles (Xbox 360 / Playstation 3) in mind and, specifically, optical disk access performance. With the Xbox One and Playstation 4 requiring all apps to be installed onto the hard disk, these cache files no longer offer performance benefits. Among the files/folders affected are: - GameSDK/_FastLoad/startup.pak - GameSDK/_LevelCache/*.* - GameSDK/modes/*.* Overview These list files are created for the build process to read from and obtain assets for packaging inside dedicated .pak files, which are then used by the engine for pre-loading. This helps speed up menu loading and performance greatly. The "sequence" version of the resource file is the exact sequence in which the engine loaded each asset, rather than sorted by name. This file is provided for debugging purposes and is not required for the build process. See the shipped .xml build scripts inside the Bin32/rc/ folder for references and more information on building these pak files in the SDK. It is recommended to update these files frequently, as assets change and these lists can become out-of-date, making them less effective and sometimes causing invalid warnings in console. MP Menu pak - Run the PC version. - Set g_FEMenuCacheSaveList=1 in the console on the main menu before choosing the multiplayer option. - Enter the multiplayer menu. - The output file is: GameSDK\Levels\Multiplayer\mpmenu_list.txt - Commit this file to Perforce. - The build system will automatically build a new version of the MP menu pak for the build. GameSDK\modes\mpmenumodels.pak Game Mode switcher pak - Run the PC version. - Set sys_PakSaveLevelResourceList=1 in the console on the main menu. - Enter the multiplayer menu. - Enter the singleplayer menu. - The output files are: GameSDK\gamemodeswitch_sp\auto_resourcelist.txt GameSDK\gamemodeswitch_mp\auto_resourcelist.txt - Important: Edit these files by hand to remove any files from the Localization folder, otherwise localization will break! - Copy/rename them to: GameSDK\Levels\Multiplayer\gamemodeswitch_sp_list.txt GameSDK\Levels\Multiplayer\gamemodeswitch_mp_list.txt - Commit them to Perforce. - The build system will automatically build a new version of the game mode switch pak for the build: GameSDK\modes\gamemodeswitch.pak Menu Common Paks The creation of these files is a bit more hands-on, and requires you to traverse all of the menu options/functions to get the most out of the process. - Run the PC version. - Set sys_PakSaveMenuCommonResourceList=1 in the console on the main menu. - Enter all menus and activate all options. - Ensure that you traverse the Singleplayer menu, then the Multiplayer menu and then back to the Singleplayer menu (so that the MP content is fully written). - The output files are: GameSDK\menucommon_sp\auto_resourcelist.txt GameSDK\menucommon_mp\auto_resourcelist.txt - Copy/rename them to: GameSDK\Levels\Multiplayer\menucommon_sp_list.txt GameSDK\Levels\Multiplayer\menucommon_mp_list.txt - Commit them to Perforce. 
- The build system will automatically build a new version of the menu common paks for the build: GameSDK\modes\menucommon_sp.pak GameSDK\modes\menucommon_mp.pak Startup Pak The final piece of the process is the Startup.pak file, located in the GameSDK\_FastLoad\ folder. - Edit the system.cfg file located in the SDK root folder: - Add this line at the bottom: sys_PakSaveFastLoadResourceList=1 - Run the PC version. - The output file is: GameSDK\auto_resourcelist.txt - Copy/rename the file to: GameSDK\Levels\Multiplayer\startup_list.txt - Some entries may have double slashes (//), manually edit these to only have one. - Commit this file to Perforce. - The build system will automatically build a new version of the startup pak for the build. GameSDK\_FastLoad\startup.pak
https://docs.cryengine.com/display/SDKDOC4/Creating+Resource+Lists+and+Build+Paks
% - %ConstructClone() - %NormalizeObject() - %Prepare() - %PrepareMetaData() - %SerializeObject() - %ValidateObject(). For %ResultSet.SQL, the arguments are: This method returns an instance of %Library.IResultSet in the generic case. In the case of %ResultSet.SQL, an instance of %ResultSet.SQL is returned. This method constructs. This method is not meant to be called directly. It is called by %Save and by %GetSwizzleObject. .unused is not used. checkserial will force the checking of any serial properties by calling their %ValidateObject methods after swizzling this property. Returns a %Status value indicating success or failure. Inherited Members Inherited Properties Inherited Methods - %ClassIsLatestVersion() - %ClassName() - %CreateSnapshot() - %DispatchClassMethod() - %DispatchGetModified() - %DispatchGetProperty() - %DispatchMethod() - %DispatchSetModified() - %DispatchSetMultidimProperty() - %Display() - %DisplayFormatted() - %Extends() - %Get() - %GetData() - %GetMetadata() - %GetParameter() - %IsA() - %IsModified() - %New() - %ObjectModified() - %OriginalNamespace() - %PackageName() - %Print() - %ResultColumnCountGet() - %SendDelimitedRows() - %SetModified() - Fetch() - FetchRows() - GetInfo() - GetODBCInfo()
https://docs.intersystems.com/irisforhealthlatest/csp/documatic/%25CSP.Documatic.cls?&LIBRARY=%25SYS&CLASSNAME=%25ResultSet.SQL
The goal of the TRUNCATE TABLE statement is to remove all records from a table. Since this is not possible in a partitioned stored procedure, VoltDB does not allow TRUNCATE TABLE statements within partitioned stored procedures. You can perform TRUNCATE TABLE statements in ad hoc queries or multi-partition procedures only.
https://docs.voltdb.com/UsingVoltDB/sqlref_truncate.php
Inspire In the Inspire Lab, you can generate color variations of a material from an image. On the Color Extraction panel, you can import an image and extract its colors according to some parameters. On the Material Variation panel, you can import a material or a collection, and it will generate color variations of it using colors extracted from the image. You need to have one image and one material (or collection) to start generating variations.
https://docs.substance3d.com/sadoc/inspire-172823645.html
Is the game running fullscreen? It is possible to toggle fullscreen mode by changing this property: using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void Example() { Screen.fullScreen = !Screen.fullScreen; } } A fullscreen switch does not happen immediately; it will actually happen when the current frame is finished. See Also: SetResolution.
https://docs.unity3d.com/kr/2017.1/ScriptReference/Screen-fullScreen.html
DataPath¶ The datapath module is an interface for building ERMrest “data paths” and retrieving data from ERMrest catalogs. It also supports data manipulation (insert, update, delete). In its present form, the module provides a limited programmatic interface to ERMrest. Features¶ - Build ERMrest “data path” URLs with a Pythonic interface - Covers the essentials for data retrieval: link tables, filter on attributes, select attributes, alias tables - Retrieve entity sets; all or limited numbers of entities - Fetch computed aggregates or grouped aggregates - Convert entity sets to Pandas DataFrames - Insert and update entities of a table - Delete entities identified by a (potentially, complex) data path Limitations¶ - Only supports the application/json content type (i.e., the protocol could be made more efficient). - The ResultSet interface is a thin wrapper over a dictionary of a list of results. - Many user errors are caught by Python assert statements rather than checking for “invalid parameters” and throwing custom Exception objects. Tutorials¶ See the Jupyter Notebook tutorials in the docs/ folder. - Example 1: basic schema inspection - Example 2: basic data retrieval - Example 3: building simple data paths - Example 4: slightly more advanced topics - Data Update Example: examples of insert, update, and delete Now, get started! ERMrest Model Management¶ The core.ermrest_model module provides an interface for managing the model (schema definitions) of an ERMrest catalog. This library provides an (incomplete) set of helper routines for common model management idioms. For some advanced scenarios supported by the server but not yet supported in this library, a client may need to resort to direct usage of the low-level deriva.core.ermrest_catalog.ErmrestCatalog HTTP access layer. Features¶ - Obtain an object hierarchy mirroring the model of a catalog or catalog snapshot. - Discover names of schemas, tables, and columns as well as definitions where applicable. - Discover names and definitions of key and foreign key constraints. - Discover annotations on catalog and model elements. - Discover policies on catalog and model elements (if sufficiently privileged). - Create model elements - Create new schemas. - Create new tables. - Create new columns on existing tables. - Create new key constraints over existing columns. - Create new foreign key constraints over existing columns and key constraints. - Reconfigure model elements - Change comment string on schema, table, column, key, or foreign key constraints. - Change acls on schema, table, column, and foreign key constraints. - Change acl_bindings on table, column, and foreign key constraints. - Change annotations on catalog, schema, table, column, key, or foreign key constraints. - Drop model elements - Drop schemas. - Drop tables. - Drop columns. - Drop key constraints. - Drop foreign key constraints. - Alter model elements - Rename a schema. - Rename a table or move the table between schemas. - Rename a column, or change column storage type, default value, or null-ok status. - Rename a key constraint. - Rename a foreign key constraint. Limitations¶ Because the model management interface mirrors a complex remote catalog model with a hierarchy of local objects, it is possible for the local objects to get out of synchronization with the remote catalog and either represent model elements which no longer exist or lack model elements recently added. 
The provided management methods can incrementally update the local representation with changes made to the server by the calling client. However, if other clients make concurrent changes, it is likely that the local representation will diverge. The only robust solution to this problem is for the caller to discard its model representation, reconstruct it to match the latest server state, and retry whatever changes are intended. Examples¶ For the following examples, we assume this common setup: from deriva.core import ErmrestCatalog import deriva.core.ermrest_model as em from deriva.core.ermrest_model import builtin_types as typ catalog = ErmrestCatalog(...) model_root = catalog.getCatalogModel() Also, when examples show keyword arguments, they illustrate a typical override value. If omitted, a default value will apply. Many parts of the model definition are immutable once set, but in general comment, acl, acl_binding, and annotation attributes can be modified after the fact through configuration management APIs. Add Table to Schema¶ To create a new table, you build a table definition document and pass it to the table-creation method on the object representing an existing schema. The various classes involved include class-methods define(...) to construct the constituent parts of the table definition: column_defs = [ em.Column.define("Col1", typ.text), em.Column.define("Col2", typ.int8), ] key_defs = [ em.Key.define( ["Col1"], # this is a list to allow for compound keys constraint_names=[ [schema_name, "My New Table_Col1_key"] ], comment="Col1 text values must be distinct.", annotations={}, ) ] fkey_defs = [ em.ForeignKey.define( ["Col2"], # this is a list to allow for compound foreign keys "Foreign Schema", "Referenced Table", ["Referenced Column"], # this is a list to allow for compound keys on_update='CASCADE', on_delete='SET NULL', constraint_names=[ [schema_name, "My New Table_Col2_fkey"] ], comment="Col2 must be a valid reference value from the domain table.", acls={}, acl_bindings={}, annotations={}, ) ] table_def = em.Table.define( "My New Table", column_defs, key_defs=key_defs, fkey_defs=fkey_defs, comment="My new entity type.", acls={}, acl_bindings={}, annotations={}, provide_system=True, ) schema = model_root.schemas[schema_name] new_table = schema.create_table(table_def) By default, create_table(...) will add system columns to the table definition, so the caller does not need to reconstruct these standard elements of the column definitions nor the RID key definition. Add a Vocabulary Term Table¶ A vocabulary term table is often useful to track a controlled vocabulary used as a domain table for foreign key values used in science data columns. A simple vocabulary term table can be created with a helper function: schema = model_root.schemas[schema_name] new_vocab_table = schema.create_table( Table.define_vocabulary( "My Vocabulary", "MYPROJECT:{RID}", "{RID}" ) ) The Table.define_vocabulary() method is a convenience wrapper around Table.define() to automatically generate core vocabulary table structures. It accepts other table definition parameters which a sophisticated caller can use to override or extend these core structures. Add Column to Table¶ To create a new column, you build a column definition document and pass it to the column-creation method on the object representing an existing table. 
column_def = em.Column.define( "My New Column", typ.text, nullok=False, comment="A string representing my new stuff.", annotations={}, acls={}, acl_bindings={}, ) table = model_root.table(schema_name, table_name) new_column = table.create_column(column_def) The same pattern can be used to add a key or foreign key to an existing table via table.create_key(key_def) or table.create_fkey(fkey_def), respectively. Similarly, a schema can be added to a model with model.create_schema(schema_def). Remove a Column from a Table¶ To remove or “drop” a column, you invoke the drop() method on the column object itself: table = model_root.table(schema_name, table_name) column = table.column_definitions[column_name] column.drop() The same pattern can be used to remove a key or foreign key from a table via key.drop() or foreign_key.drop(), respectively. Similarly, a schema or table can be removed with schema.drop() or table.drop(), respectively. Alter a Table¶ To alter certain aspects of an existing table, you invoke the alter() method with optional keyword arguments for the aspects you wish to change. The default for omitted keyword arguments is a special nochange value which means to keep that aspect as it is currently defined: table = model_root.table(orig_schema_name, orig_table_name) table.alter( schema_name=destination_schema_name, table_name=new_table_name ) The schema_name argument allows you to relocate an existing table from an original schema to a destination schema, where both named schemas already exist in the model. This also relocates key or foreign key constraints in the table at the same time. The table_name argument allows you to revise the name of an existing table in the model, while preserving other aspects of the table definition, content, and content history. The same pattern can be used to alter schemas, columns, keys, and foreign keys: schema.alter(schema_name=new_schema_name) column.alter( name=new_column_name, type=new_column_type_obj, nullok=new_nullok_value, default=new_default_value ) key.alter(constraint_name=new_unqualified_name_str) foreign_key.alter( constraint_name=new_unqualified_name_str, on_update=new_action_string, on_delete=new_action_string ) The key and foreign key alterations accept only the unqualified constraint name string, because it is not possible to change the schema qualification other than by relocating the parent table to a different schema. The foreign key alteration also supports changes to the on_update and on_delete action, e.g. NO ACTION, SET NULL, or CASCADE. As a convenience, there are also optional alter() arguments to reconfigure comment, acls, acl_bindings if they exist in the define() method for the same class of object. They are omitted from the preceding examples for the sake of brevity. These arguments allow similar effect to mutating the local configuration fields and then invoking the apply() method to send them to the server, except that configuration changes included in an alter() request will happen atomically with respect to the other indicated alterations. ErmrestCatalog¶ The deriva.core.ermrest_catalog.ErmrestCatalog class provides HTTP bindings to an ERMrest catalog as a thin wrapper around the Python Requests library. Likewise, the deriva.core.ermrest_catalog.ErmrestSnapshot class provides HTTP bindings to an ERMrest catalog snapshot. While catalogs permit mutation of stored content, a snapshot is mostly read-only and only permits retrieval of content representing the state of the catalog at a specific time in the past. 
Instances of ErmrestCatalog or ErmrestSnapshot represent a particular remote catalog or catalog snapshot, respectively. They allow the client to perform HTTP requests against individual ERMrest resources, but require clients to know how to formulate those requests in terms of URL paths and resource representations. Other, higher-level client APIs are layered on top of this implementation class and exposed via factory-like methods integrated into each catalog instance. Catalog Binding¶ A catalog is bound using the class constructor, given parameters necessary for binding: from deriva.core.ermrest_catalog import ErmrestCatalog from deriva.core import get_credential scheme = "https" server = "myserver.example.com" catalog_id = "1" credentials = get_credential(server) catalog = ErmrestCatalog(scheme, server, catalog_id, credentials=credentials) Client Credentials¶ In the preceding example, a credential is obtained from the filesystem assuming that the user has activated the deriva-auth authentication agent prior to executing this code. For catalogs allowing anonymous access, the optional credentials parameter can be omitted to establish an anonymous binding. The same client credentials (or anonymous access) are applied to all HTTP operations performed by the subsequent calls to the catalog object’s methods. If a calling program wishes to perform a mixture of requests with different credentials, it should create multiple catalog objects and choose the appropriate object for each request scenario. High-Level API Factories¶ Several optional access APIs are layered on top of ErmrestCatalog and/or ErmrestSnapshot and may be accessed by invoking convenient factory methods on a catalog or snapshot object: catalog_snapshot = catalog.latest_snapshot() ErmrestSnapshot binding for the latest known revision of the catalog path_builder = catalog.getPathBuilder() deriva.core.datapath.Catalog path builder for the catalog (or snapshot) - Allows higher-level data access idioms as described previously. model_root = catalog.getCatalogModel() deriva.core.ermrest_model.Model object for the catalog (or snapshot) - The model_root object roots a tree of objects isomorphic to the catalog model, organizing model definitions according to each part of the model. - Allows inspection of catalog/snapshot models (schemas, tables, columns, constraints) - Allows inspection of catalog/snapshot annotations and policies. - Allows configuration field mutation to draft a new configuration objective. - Draft changes are applied with model_root.apply() - Many model management idioms are exposed as methods on individual objects in the model hierarchy. Low-Level HTTP Methods¶ When the client understands the URL structuring conventions of ERMrest, they can use basic Python Requests idioms on a catalog instance: - resp = catalog.get(path) - resp = catalog.delete(path) - resp = catalog.put(path, json=data) - resp = catalog.post(path, json=data) Unlike Python Requests, the path argument to each of these methods should exclude the static prefix of the catalog itself. For example, assuming catalog has been bound as in the constructor example above, an attempt to access table content would call catalog.get('/entity/MyTable') and the catalog binding would prepend the complete catalog prefix. The json input to the catalog.put and catalog.post methods behaves just as in Python Requests. The data is supplied as native Python lists, dictionaries, numbers, strings, and booleans. 
The method implicitly serializes the data to JSON format and sets the appropriate Content-Type header to inform the server we are sending JSON content. All of these HTTP methods return a requests.Response object which must be further interrogated to determine request status or to retrieve any content produced by the server: - resp.status_code: the HTTP response status code - resp.raise_for_status(): raise a Python exception for non-success codes - resp.json(): deserialize JSON content from server response - resp.headers: a dictionary of HTTP headers from the server response Low-level usage errors may raise exceptions directly from the HTTP methods. However, normal server-indicated errors will produce a response object and the caller must interrogate the status_code field or use the raise_for_status() helper to determine whether the request was successful. HTTP Caching¶ By default, the catalog binding uses HTTP caching for the catalog.get method: it will store previous responses, include appropriate If-None-Match headers in the new HTTP GET request, detect 304 Not Modified responses indicating that cached content is valid, and return the cached content to the caller. This mechanism can be disabled by specifying caching=False in the ErmrestCatalog constructor call.
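Putting the low-level pieces above together, here is a minimal sketch of a GET followed by a POST against a hypothetical MyTable; the server name, table, and row values are made up for illustration.

```python
from deriva.core import ErmrestCatalog, get_credential

server = "myserver.example.com"
catalog = ErmrestCatalog("https", server, "1", credentials=get_credential(server))

# GET: fetch rows; the binding prepends the catalog prefix and applies HTTP caching.
resp = catalog.get("/entity/MyTable")
resp.raise_for_status()   # raise a Python exception for non-success codes
rows = resp.json()        # deserialize the JSON content from the server response

# POST: insert a row; native Python data is serialized to JSON and the
# appropriate Content-Type header is set automatically.
resp = catalog.post("/entity/MyTable", json=[{"Col1": "hello", "Col2": 42}])
resp.raise_for_status()
print(resp.status_code, len(rows))
```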
http://docs.derivacloud.org/deriva-py/README.html
2021-05-06T08:47:22
CC-MAIN-2021-21
1620243988753.91
[]
docs.derivacloud.org
User can edit his phone number by clicking on change. Type the new number and click on send code. User will receive a verification code on the phone number he entered. The user then enters the code and clicks verify. To change clinic information, click edit. Add the desired information and click Save.
Clinic's Settings
To add images to the clinic, follow the steps below. For more information about imaging click here.
Add a new user. Fill the following fields. Note that an invitation will be sent to the new user's email. Status is pending until the new user accepts the invitation. The following is the email sent to the new user Alex. Alex accepts the invitation. Alex will be asked to insert a password for his account. Alex will also be asked to enter his phone number to send a verification code to his number. Verification code entered. Alex can now sign in. Alex is a user now. Alex's status is now active.
User can design his patient file. Simply drag and drop the required field.
Click to add a new doctor. The following screen appears, user can fill Doctor related fields. Save the new doctor, it will appear in the doctors list. You can add a doctor directly by inserting his id from the mobile app. After inserting the id, you can add the doctor as a user (an invitation will be sent to his email for confirmation). Doctor added.
Settings
User can set the working days, working hours, slot duration... The following settings will appear on the calendar as follows.
Calendar
User can disable taking appointments on off-shift days. Step 1: Select the off-shift day. Step 2: Click Set off-shift Schedule. Step 3: Set the off duration from...To. Off-shift set. The following off-shift day will appear on the calendar as follows.
Click to add a new Department. The following screen appears, user can fill Department related fields. Save the new department, it will appear in the departments list. Add a section in the new department. You can set it as the default section. Click on opening hours so you can add an opening schedule. Set your opening days, opening time and save your opening hours.
Click Add Role. The following screen appears, user can fill Role related fields. Save the new role, it will appear in the roles list. Now click on pages security to specify which pages this new role has access to. The following screen appears, user can select the desired pages.
Clinic information is displayed in charts.
Click to add a new Card. The following screen appears, user can fill Card related fields. Pay => Successful transaction. User can change the default card to use. The following screen containing previous orders appears.
Subscription information
Upgrade: User will be directed to upgrade package. View Payments: User will be directed to Orders History. Request more: User can change the quantity he wants to add and pay for it after reading and understanding the terms. Each package differs from the other by adding more functionality to the software. The price differs as well, user can click upgrade to choose another package. The following screen appears, user must read and understand the upgrading terms.
Enter old and new password. Save new password.
http://docs.imhotep-med.com/account.aspx
2021-05-06T09:22:08
CC-MAIN-2021-21
1620243988753.91
[]
docs.imhotep-med.com
Release Notes Current Build 2021.12380 - Apr 08 2021 - Jump to this build's Release Notes - Download Here See our Official Announcement for an overview of new features. Official Build - 2021.10000 Series[edit] 2021.10330 - Feb 10 2021 New Features[edit] Release Highlights[edit] New Operators[edit] -[edit] -[edit] -[edit] -[edit] -[edit] -[edit] - Text TOP - Added Slug support which is a high quality GPU-based scalable font rendering library. This is available by setting 'Display Method' = Scalable. This supports .ttf and .ttc font files or any installed font on the system. 3D Viewer Camera Controls[edit] -[edit] -[edit] -[edit] -[edit] -[edit] - parameters to exclude pixels with NaN values, or to exclude pixels with a zero in a selected mask channel. TOPs that can be used for point clouds (Point File In TOP, Math TOP, Point Transform TOP) and Geometry COMP instancing with TOPs use NaN (not a number) or 0 in an alpha channel to indicate that the pixel is to be ignored. - TOP to CHOP - Added 'Output as Single Channel Set' parameter to output a single channel set rather than outputting separate channels for each row when using 'Crop' modes Rows and Column or Full Image. This is much faster than before. SOPs[edit] - CPlusPlus SOP - Added volume/intersect functions. - Extrude SOP - Added option to compute normals for the extruded geometry. DATs[edit] - OP Find DAT - Added options for Parent and Global OP Shortcuts. - Parameter Execute DAT - New 'On Values Changed' callback that includes all values that changed during the frame in a single list. - Render Pick DAT - New pick()method to perform a pick using python. MATs[edit] - Phong MAT - (And all other MATs) now have expanded blending functionality. There is a new 'Blend Operation' menu to select from Add, Subtract, Reverse Subtract, Minimum, and Maximum operations. In addition there is a separate operation menu just for alpha. Furthermore new options for Constant Color/Constant Alpha are found in the 'Source Color'/'Destination Color' menus which enable 'Blend Constant Color' parameters when selected. - Phong MAT, PBR MAT - Can now output World space position, normals, and shadowed area strength to extra color buffers. Color Channel Selection Improvements[edit] - Analyze TOP - Added option to find the pixel with the largest RG or B value: (RGBA Maximum). - Lookup TOP - Added option to use the largest of the 4 RGBA color channels (RGBA Maximum) as the index of the input image. - Lookup TOP - Added more options for selecting the index from the source image. - Normal Map TOP / Blob Track TOP / Edge TOP / Emboss TOP / Luma Blur TOP / Luma Level TOP / Matte TOP // Monochrome TOP / Threshold TOP / Leuze ROD4 CHOP / Nvidia Flow Emitter COMP - Added new Maximum RGB/RGBA color channel selection options. Look At Improvements[edit] - Standardization of Look At - All OPs that do 3D transforms now have the same capability to make a chosen axis (+ or -X, Y or Z) point to the origin of another object, with control of an up-direction. - Geometry COMP - 'Rotate To/Look At' now has a control for Forward Direction. This controls what axis will be rotated to point towards the desired target. - Transform CHOP - Added 'Look At' parameters to create a rotation that aligns a given axis towards another object. - Transform XYZ CHOP - Added 'Look At' parameters to rotate the input points to face another point. - Transform SOP - Added 'Forward' parameter to determine which axis is pointed towards the Look At object. New Python[edit] -[edit] -. 
- TDAbleton updates - Ableton console now scrolls infinitely like textport abletonMIDI component - Added "split touching notes" feature to separate incoming MIDI notes abletonSong component - Added time signature controls to abletonTrack, abletonChain and abletonClipSlot components - Added Color and Clip Color controls, see BACKWARD COMPATIBILITY WARNING - clip data channels now include the clip number in their names. Bug Fixes and Improvements[edit] COMPs[edit] -[edit] -. - Screen Grab TOP - BACKWARDS COMPATIBILTY WARNING Capture area is now scaled to fit the output resolution rather than cropped when using GPU to GPU capture. Allows capture of windows that are larger than 1280x1280 with a Non-Commercial license. - SSAO TOP - Now works correctly with orthographic cameras. - In TOP - Added support for http and https URLs that don't end with exactly .m3u8. - Video Stream Out TOP - Added control for sending silent audio when no audio output is provided. Some services such as YouTube require audio data always. Other's do not support MP3 so no audio stream should be sent. -[edit] -[edit] - Alembic SOP - Fixed a bug with 'Unload' parameter not freeing write access to Alembic file. - Alembic SOP / Import Select SOP - 'Direct To GPU' bug fix, deactivating/reactivating viewer would sometimes clear the imported SOP data. - Alembic SOP - Don't automatically triangulate when 'Direct to GPU' disabled. -[edit] -[edit] -[edit] - Fixed the long standing performance issue where snippets would keep cooking after you open OP Snippets dialog once, even if you closed the window. - Many existing Operator Snippets have been updated, in addition there are new Snippets for the following Other Improvements[edit] - Export Movie Dialog now has options for 'Audio Codec' and 'Audio Bitrate'. - Unicode - Unicode labels for Custom Parameters and Parameter Pages now work correctly. Names (parameter tokens) still need to be english-alphanumeric. - Unity home option for single sample CHOPs in RMB menu (shortcut key 'f') will set the channel heights to minimum height for readable text used in names and values. - Parameter popup info now includes information about when it was last set by a script. - Tweaked ctrl-arrow behavior when moving around text to be more Python aware. -[edit] - Palette - Fixed issue with dragging components from user folders with unicode characters. - Custom Operators - Fixed crash that can occur if getInfoDATEntries() doesn't fill in some entries. - Custom Operators - Calls to the plugins are now wrapped in try/catch clauses to avoid crashing if there are bugs in the plugin code. Will emit a node error instead. - Matrix Class - BACKWARD COMPATIBILITY WARNING When building a Matrix from a Table DAT, fix the matrix being incorrectly transposed. - issue where TD would only show a black screen when Antialiasing was forced on in the Nvidia Control Panel. - Fixed crash when PYTHONPATH set to alternative python installation in some cases, and any existing PYTHONPATH now moved to end of sys.paths to avoid 3rd party startup issues. -. - Crash when attaching input to Sequencer CHOP. - MacOS Movie File In and Movie File Out crashes that could occur. - Fixed a crash when trying to render some unicode newline characters. - Fixed crash on self deleting 'Help DAT' in panel. - Fixed crash that could occur on loading an empty file into a Table DAT. - Fixed crash with odd copy case in COMP.copy() method. - Fixed crash that could occur when entering/exiting Perform Mode. SDK and API Updates[edit] -[edit] -. 
Build 2021.12380 - Apr 08, 2021[edit] New Features[edit] -[edit] -[edit] - Palette:camera - Added right-click menu for setting navigation mode and choosing view presets. - Widgets - New gadget folder with NavBar gadget. - Updates to Palette:moviePlayer and Widgets. Bug Fixes and Improvements[edit] -[edit] New Features[edit] -[edit] -[edit] - TDAbleton - Added new tdAbletonPackageBeta which has been updated to add support for Ableton Live 11. - Palette:reproject -[edit] -. Experimental Builds 2020.40000 - Jan 06, 2021[edit] For experimental release notes in this branch refer to Experimental 2020.40000 Release Notes Official Builds 2020.20000 and earlier - Dec 23, 2020[edit] For earlier Official Build release notes refer to Official 2020.20000 Release Notes
https://docs.derivative.ca/index.php?title=Release_Notes&oldid=21530
2021-05-06T10:35:06
CC-MAIN-2021-21
1620243988753.91
[]
docs.derivative.ca
# Pending State
Learn how Ethermint handles pending state queries.
# Pre-requisite Readings
# Ethermint vs Ethereum
In Ethereum, pending blocks are generated as they are queued for production by miners. These pending blocks include pending transactions that are picked out by miners, based on the highest reward paid in gas. This mechanism exists as block finality is not possible on the Ethereum network. Blocks are committed with probabilistic finality, which means that transactions and blocks become less likely to be reverted as more time (and blocks) passes.
Ethermint is designed quite differently on this front, as there is no concept of a "pending state". Ethermint uses Tendermint Core BFT consensus, which provides instant finality for transactions. For this reason, Ethermint does not require a pending state mechanism, as most (if not all) of the transactions will be committed to the next block (avg. block time on Cosmos chains is ~8s). However, this causes a few hiccups in terms of the Ethereum Web3-compatible queries that can be made to pending state.
Another significant difference from Ethereum is that blocks are produced by validators or block producers, who include transactions from their local mempool into blocks in a first-in-first-out (FIFO) fashion. Transactions on Ethermint cannot be ordered or cherry-picked out of the Tendermint node mempool.
# Pending State Queries
Ethermint will make queries which account for any unconfirmed transactions present in a node's transaction mempool. A pending state query will be subjective and the query will be made on the target node's mempool. Thus, the pending state will not be the same for the same query to two different nodes.
# RPC Calls on Pending Transactions
- eth_getBalance
- eth_getTransactionCount
- eth_getBlockTransactionCountByNumber
- eth_getBlockByNumber
- eth_getTransactionByHash
- eth_getTransactionByBlockNumberAndIndex
- eth_sendTransaction
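To make the node-subjectivity of these calls concrete, here is a small sketch (not taken from the Ethermint docs) that asks a single node for an account's transaction count including its own pending mempool entries; the RPC endpoint and address below are placeholders:

    import requests

    rpc_url = "http://localhost:8545"   # assumed local JSON-RPC endpoint
    payload = {
        "jsonrpc": "2.0",
        "method": "eth_getTransactionCount",
        "params": ["0x0000000000000000000000000000000000000000", "pending"],
        "id": 1,
    }
    resp = requests.post(rpc_url, json=payload)
    resp.raise_for_status()
    # Hex-encoded nonce; includes unconfirmed transactions in this node's mempool only.
    print(resp.json().get("result"))

Running the same request against a different node may return a different result, since each node answers from its own mempool.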
https://docs.ethermint.zone/core/pending_state.html
2021-05-06T10:07:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.ethermint.zone
Creating a Struct Port
Use the Create Struct Port wizard to convert data that passes through one or more ports to struct data.
In the transformation, select one or more ports that you want to convert as elements of the struct data. The ports you select also determine the elements of the complex data type definition.
Right-click the selected ports, and select Hierarchical Conversions > Create Struct Port. The Create Struct Port wizard appears with the list of ports that you selected.
Optionally, in the Name box, change the name of the complex data type definition. For example, typedef_address.
Optionally, click Choose to select other ports in the transformation.
Click Finish.
You can see the following changes in the mapping:
- The mapping contains a new Expression transformation Create_Struct with a struct output port and a dynamic port with ports from the upstream transformation.
- The type definition library contains the new complex data type definition.
- The struct output port references the complex data type definition.
- The struct output port contains an expression with the STRUCT_AS function. For example, STRUCT_AS(:Type.Type_Definition_Library.typedef_address,city,state,zip)
Updated November 09, 2018
https://docs.informatica.com/data-engineering/data-engineering-integration/10-2-hotfix-1/big-data-management-user-guide/processing-hierarchical-data-on-the-spark-engine/hierarchical-data-conversion/convert-relational-or-hierarchical-data-to-struct-data/creating-a-struct-port.html
2021-05-06T10:29:05
CC-MAIN-2021-21
1620243988753.91
[]
docs.informatica.com
Fills the diagonal elements {a_ii} with the values from the given sequence. If k != 0, fills the off-diagonal elements {a_{i,i+k}} instead. values may have any length. If the diagonal is longer than values, then the remaining diagonal entries will not be set. If values is longer than the diagonal, then the remaining values are ignored.
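A short sketch of the behaviour described above, assuming NumPy and SciPy are installed; the matrix and values are arbitrary:

    import numpy as np
    from scipy.sparse import csr_matrix

    A = csr_matrix(np.eye(4))   # 4x4 identity, so the main diagonal entries already exist
    A.setdiag([10, 20, 30])     # shorter than the diagonal: only the first three entries are set
    print(A.toarray())
    # [[10.  0.  0.  0.]
    #  [ 0. 20.  0.  0.]
    #  [ 0.  0. 30.  0.]
    #  [ 0.  0.  0.  1.]]

The fourth diagonal entry keeps its previous value, illustrating that remaining diagonal entries are not set when values is shorter than the diagonal.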
http://docs.scipy.org/doc/scipy-0.10.1/reference/generated/scipy.sparse.csr_matrix.setdiag.html
2016-06-25T05:10:33
CC-MAIN-2016-26
1466783392099.27
[]
docs.scipy.org
Onboard an Azure Account Follow these steps to onboard an Azure account to Cloudneeti. Step-1 : Before you configure the Cloudneeti on to your Azure Subscription, you need an active Azure AD account in the Global Administrator role 1.1 Login into the Azure Portal, with User with Global Admin Role Step-2 : Create an Active Directory Application 2.1 In the Azure Portal, click Azure Active Directory in the sidebar then select App Registrations 2.2 Click on New application registration button. Enter the Name for example "Cloudneeti" Select the Application Type as "Web App/API" Enter the Sign-on URL as provided . e.g. "" 2.3 Click Create 2.4 Click on the registered application "Cloudneeti" 2.5 Click Settings 2.6 Click Save - Note : For Auto Deployment download the script : Create AD Application. You can find the instructions here. Step-3 : Generate the Application Key - In App Registration blade, Click on the newly registered application if you had given the name "Cloudneeti" then click on the same - Click Settings - Click Keys - Enter a new description, Select a Expires value from the drop down and Click Save - The key value is generated, Copy the same for your record along with Application ID Step-4 : Configure Azure Active Directory application permissions 4.1 In Settings Preview, click Required permissions 4.2 Click +Add & Select an API : In this step you will modify - Windows Azure Active Directory - Microsoft Graph 4.3 Select the Windows Azure Active Directory API 4.3.1. Select the following application permissions - Manage apps that this app creates or owns - Read all hidden memberships - Read directory data - 4.3.2. Select the following delegated permissions - Access the directory as the signed-in user - Read hidden memberships - Read Directory data 4.4 Select the Microsoft Graph API 4.4.1. Select the following application permissions - Read all usage reports - Read all identity risky user information - Read all hidden memberships - Read directory data - Read all groups - Read all users' full profiles - Read all identity risk event information - Read your organization’s security events - 4.4.2. Select the following delegated permissions - Read user devices Step-5 : Grant Permissions to enable the configurations - Click on Grant Permissions to enable the configurations Step-6 : Authorize Application ID to access your Azure Subscription resources - In the Azure Portal. - Select All services and Subscriptions - Select the particular subscription to assign the application to. - Select Access control (IAM). - Select Add. - To allow the application to call Azure API, assign the Reader and Backup Reader role. By default, Azure AD applications aren't displayed in the available options. To find your application, search for the name. If you had given the name "Cloudneeti” then search for same and select it. - Select Save to finish assigning the role. - Role assignment is automated by Assign-RolesToServicePrincipal.ps1script. You can follow the instructions given in link.
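As a hedged illustration of how the Application ID and key created in steps 2-3 can be exercised, the sketch below requests a service-to-service token from Azure AD using the client-credentials flow. Every identifier is a placeholder, and the endpoint and field names assume the classic Azure AD v1.0 token endpoint:

    import requests

    tenant_id = "<your Azure AD tenant id>"        # placeholder
    client_id = "<Application ID from step 2>"     # placeholder
    client_secret = "<key value from step 3>"      # placeholder

    token_url = "https://login.microsoftonline.com/%s/oauth2/token" % tenant_id
    payload = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://management.azure.com/",
    }
    resp = requests.post(token_url, data=payload)
    resp.raise_for_status()
    access_token = resp.json()["access_token"]     # bearer token for Azure Resource Manager calls

A successful response confirms that the registered application and its key are valid; the returned token is what services such as Cloudneeti use to read your subscription resources.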
https://docs.cloudneeti.com/saas/onboard-azure-account.html
2019-03-18T18:06:58
CC-MAIN-2019-13
1552912201521.60
[]
docs.cloudneeti.com
14.1.4 Resource Set
RESQML is a set of XML schemas (XSD files) freely available to download and use from the Energistics website. To download the latest version of RESQML, go to the Energistics website. The standard includes the resources listed in the following table and the table in Section 14.1.4.1 Energistics Resource Set.
http://docs.energistics.org/RESQML/RESQML_TOPICS/RESQML-500-006-0-R-sv2010.html
2019-03-18T17:28:01
CC-MAIN-2019-13
1552912201521.60
[]
docs.energistics.org
Integrating OpenStack with SAML-based Identity Management Solutions using OpenStack CLI This tutorial describes how to utilize the OpenStack CLI (Command Line Interface) clients with SAML single-sign on user credentials. Platform9 managed OpenStack supports integration with federated identity management systems that implement the Security Assertion Markup Language (SAML) standard. OpenStack's CLI tools provide authentication plugins which enable authentication against SAML identity providers (IdP) supporting the Enhanced Client or Proxy (ECP) standard. Prerequisites Before we begin you must have the following installed & configured. Step 1: Create an OpenStack RC file The OpenStack RC file captures the configuration parameters necessary for the OpenStack CLIs to communicate with the REST API endpoints exposed by your OpenStack services. An example file for Platform9 managed OpenStack is below. export OS_AUTH_URL=">/keystone/v3" export OS_REGION_NAME="<region>" export OS_USERNAME="<idp username>" export OS_PASSWORD="<idp password>" export OS_TENANT_NAME="<tenant>" export OS_PROJECT_DOMAIN_ID=${OS_PROJECT_DOMAIN_ID:-"default"} export OS_IDENTITY_API_VERSION=3 export OS_IDENTITY_PROVIDER=${OS_IDENTITY_PROVIDER:-"IDP1"} export OS_PROTOCOL=saml2 export OS_AUTH_TYPE="<plugin_name>" Copy and save this into a new file (e.g., openstack.rc). (Remember to secure the file since it contains the password to login to your private cloud.) Step 2: Select your authentication plugin OS_AUTH_TYPE is the name of the driver plugin you are using for authentication. The SAML authentication plugin bundled with the OpenStack CLI is called v3samlpassword works with identity providers supporting SAML ECP. Skip to Step 3 if your IdP supports ECP. The SAML ECP standard is relatively new, and has yet to see major adoption amongst many commercial SSO providers. To help bridge this gap, Platform9 has written Keystone authentication plugins which add support for the following identity providers: Detailed information about these plugins & installation instructions may be found on GitHub at github.com/platform9/pf9-saml-auth. If you require these plugins, they can easily be installed using Python Pip. Simply run: pip install pf9-saml-auth Additional requirements for AD FS auth plugin The AD FS authentication plugin utilizes WS-Federation / WS-Trust 1.3 to obtain a SAML 1.0 assertion. Both AD FS & Platform9 utilize different endpoints when receiving WS-Fed assertions. You must manually specify these endpoints before utilizing the plugin. For example, on Platform9 these would be: Identity Provider URL: HOSTNAME/adfs/services/trust/13/usernamemixed Service Provider Endpoint: hostname/Shibboleth.sso/ADFS Service Provider Entity ID: hostname/keystone Once you have this information you will need to provide them to the AD FS authentication plugin as either arguments to the OpenStack CLI utility, or environment variables in your OpenStack RC file. CLI arguments openstack --os-auth-type v3pf9samladfs \ --os-identity-provider-url \ --os-service-provider-endpoint \ --os-service-provider-entity-id Environment variables export OS_IDENTITY_PROVIDER_URL="<idp url>" export OS_SERVICE_PROVIDER_ENDPOINT="<sp endpoint>" export OS_SERVICE_PROVIDER_ENTITY_ID="<sp entity id>" Additional requirements for OneLogin auth plugin Platform9's OneLogin authentication plugin leverages the OneLogin API to programmatically authenticate a user, and obtain a SAML assertion. 
OneLogin requires users to first authenticate with their API & obtain an OAuth token before generating a SAML assertion (or issuing any API call). You must first obtain API credentials from your OneLogin administrator before you may utilize this authentication plugin. Refer to OneLogin's Working with API credentials documentation for more information on creating the necessary API credentials. Once you have these credentials you will need to provide them to the OneLogin authentication plugin as either arguments to the OpenStack CLI utility, or environment variables in your OpenStack RC file.
CLI arguments
openstack --os-auth-type v3pf9samlonelogin \
  --os-onelogin-client-id \
  --os-onelogin-client-secret
Environment variables
export OS_ONELOGIN_CLIENT_ID="<client id>"
export OS_ONELOGIN_CLIENT_SECRET="<client secret>"
Step 3: Authenticate & access the OpenStack CLI
Once you have selected your authentication plugin, and updated your OpenStack RC with the necessary authentication parameters, you are ready to use the CLI with SAML authentication.
# Source your authentication credentials
$ source openstack.rc
# Execute the OpenStack interactive CLI
$ openstack
Then, execute one of the available CLI commands such as "server list". The OpenStack client will attempt to authenticate with your IdP using the supplied credentials, and obtain a SAML assertion. If successful, it will pass this assertion to OpenStack which will issue a Keystone token, and then fulfill your API request.
# List available servers
(openstack) server list
Conclusion
You have now successfully configured the OpenStack client to authenticate to your cloud using SAML authentication. If you experience issues using the Platform9-developed SAML auth drivers, contact us at [email protected].
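The same sourced environment can often be consumed programmatically as well. The following is a minimal sketch, assuming the openstacksdk package is installed and able to load the authentication plugin named in OS_AUTH_TYPE (for example, one installed via pip install pf9-saml-auth); it is an illustration, not part of the Platform9 instructions:

    import openstack

    # Builds a connection from the OS_* environment variables sourced from openstack.rc.
    conn = openstack.connect()

    # Roughly equivalent to "openstack server list" for the current project.
    for server in conn.compute.servers():
        print(server.name)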
https://docs.platform9.com/support/using-openstack-cli-saml-authentication/
2019-03-18T18:05:13
CC-MAIN-2019-13
1552912201521.60
[]
docs.platform9.com
Managing Roles of Additional Users
Operator: <role>
XML Schema: role.xsd
Plesk version: Plesk 10.0 - Plesk Onyx 17.5
XML API version: 1.6.3.0 - 1.6.9.0
Plesk user: Administrator, customer
Description
The Administrator, a reseller, or a customer can own additional user accounts. These accounts are eligible to access all the subscriptions of their owner. They play different roles depending on their permission level: monitor accounting information, manage applications on sites, etc. The roles are assigned to additional users on creation and can be updated later on.
This operator lets you manage roles of additional users. To learn how to manage additional users, refer to Managing Additional Users.
https://docs.plesk.com/en-US/onyx/api-rpc/about-xml-api/reference/managing-roles-of-additional-users.66647/
2019-03-18T17:48:10
CC-MAIN-2019-13
1552912201521.60
[array(['/en-US/onyx/api-rpc/images/66656.png', 'role-main.gif'], dtype=object) ]
docs.plesk.com
SURAgrid All-Hands Meeting Spring 2014 From SURAgrid The meeting was held in the SURA Washington, DC offices on 21-22 April 2014. 6 people from 6 institutions attended. Others joined on Webex and the audio bridge. The link below is a copy of the presentation slides, site reports, and summary notes from the round table discussion. - File:Program.pdf State of SURAgrid; site reports, Jim Lupo, SURAgrid Chair, LSU Presentations - File:SJ-AllHands-2014.pdf - VO Status & Workflow Options, Steve Johnson, Texas A&M University - File:Sharing Information.pdf - Sharing Training Source Materials, Jim Lupo, LSU - OSG Connect, Phil Smith, Texas Tech University
https://docs.uabgrid.uab.edu/sgw/index.php?title=SURAgrid_All-Hands_Meeting_Spring_2014&printable=yes
2019-03-18T17:21:58
CC-MAIN-2019-13
1552912201521.60
[]
docs.uabgrid.uab.edu
Interaction Server Configuration Interaction Server is required from iWD 8.1. If you are an existing eServices customer, and Interaction Server and its databases are already installed and configured for your environment, you can proceed with installing the iWD Runtime Node (Windows). Otherwise, please install Interaction Server by using the procedures in the e-Services Deployment Guide. A Multimedia Switching Office and Multimedia Switch must be created in Genesys Configuration Database, to support Stat Server and URS operations. Refer to the eServices 8.1 Deployment Guide for more details on these topics. completed-queues There is a specific Interaction Server configuration option named completed-queues that specifies a list of queues for completed interactions. When an interaction is placed into one of these queues, the CompletedAt timestamp is set for that interaction. This is also the timestamp that will be used to calculate the Age of the interaction that is displayed on the Global Task List. This option, if it is not already present, will be added for you automatically by using the Configure Ixn Custom Properties feature of iWD Manager. However, this will only add the iWD_Completed queue to the option. You might want to add other queues to this option, based on how you want this Age to be calculated. For example, you may wish to set it to: iWD_Completed, iWD_Canceled, iWD_Rejected - Section: settings - Option name: completed-queues - Valid values: comma-separated list of queue names. Feedback Comment on this article:
https://docs.genesys.com/Documentation/IWD/8.5.0/Dep/IWDIXServerInstall
2019-03-18T17:26:37
CC-MAIN-2019-13
1552912201521.60
[]
docs.genesys.com
JBoss.orgCommunity Documentation Version: 3.3.0.GA BIRT plugin You can find more detailed information on the BIRT plugin, its report types and anatomy on the BIRT Homepage. To understand the basic BIRT concepts and to know how to create a basic BIRT report, refer to the Eclipse BIRT Tutorials. What extensions JBoss Tools provides for Eclipse BIRT you'll find out in the next sections. The key feature of JBoss BIRT Integration is the JBoss BIRT Integration Framework, which allows to integrate a BIRT report into Seam/JSF container. The framework API reference is in the JBoss BIRT Integraion Framework API Reference chapter of the guide. This guide also covers functionality of JBoss Tools module which assists in integration with BIRT. The integration plug-in allows you to visually configure Hibernate Data Source (specify a Hibernate configuration or JNDI URL), compose HQL queries with syntax-highlighting, content-assist, formatting as well as other functionalities available in the HQL editor. To enable JBoss Tools integration with BIRT you are intended to have the next: Eclipse with JBoss Tools installed (how to install JBoss Tools on Eclipse, what dependences and versions requirements are needed reed in the JBoss Tools Installation section) BIRT Report Designer (BIRT Report Designer 2.3.2 you can download from Eclipse downloads site) BIRT Web Tools Integration ( BIRT WTP Integration 2.3.2 you can download from Eclipse downloads site) Versions of BIRT framework and BIRT WTP integration should be no less than RC4 in order to the BIRT facet works correctly. The plugin. When Eclipse is first started with the JBoss Tools plugins installed, you may be prompted to allow or disallow anonymous statistics to be sent to the JBoss development team. You can find more information on the data that is sent in the Getting Started Guide. Click the to allow the statistics to be sent, or click the button if you prefer not to send this data. The plugin is now installed and ready to use. In this chapter of the guide you will find information on the tasks that you can perform integrating BIRT. The required version of BIRT is 2.3.2 or greater. This section discusses the process of integrating BIRT into a Seam web project. To follow this guide you will need to have the Seam runtime and JBoss Application Server downloaded and extracted on your hard drive. You can download Seam from the Seam Framework web page and JBoss Application Server from JBoss Application Server official site. JBoss Seam 2.2.1 GA and JBoss Application Server 5.1.0 GA were used in the examples presented in this guide. It is recommended that you open the Seam Perspective by selecting→ → → . This perspective provides convenient access to all the Seam tools. To create a new Seam Web project select→ → . If the Seam Perspective is not active, select → → → → . On the first wizard page enter the Project name, specify the Target runtime and Target server. We recommend to use the JBoss AS server and runtime environment to ensure best performance. In the Configuration group select the Seam framework version you are planning to use in your application. In this guide we used Seam 2.2. Click the Birt Reporting Runtime Component facet by checking the appropriate option.button and enable the Alternatively you can select the JBoss BIRT Integration Web Project configuration option from the drop-down list in the Configuration group. You may leave the next two pages with default values; just click thebutton to proceed. 
On the Birt Configuration page you can modify the BIRT deployment settings. These settings can also be edited afterwards in the web.xml file included in the generated project. Keep the default values for now. You can also leave the default options on the JSF Capabilities page. On the Seam Facet page you should specify the Seam runtime and Connection profile. Please note that the Seam runtime must be the same version you initially specified in the project settings (See Figure 3.1, “Creating Seam Web Project”). When creating a Seam project with BIRT capabilities you can use the BIRT Classic Models Sample Database connection profile to work with the BIRT sample database. For more details on how to configure database connection for a Seam project please read the Configure Seam Facet Settings chapter of Seam Dev Tools Reference Guide. Click thebutton to create the project with BIRT functionality enabled. In the previous section you have created a Seam project with BIRT capabilities. Now you can create a simple kick start project to see that everything is configured correctly. Now create a BIRT report file and insert test data into the file. Name the report file helloBirt.rptdesign in the WebContent folder. The report should print the data from the CLASSICMODELS.CUSTOMERS table of the BIRT Classic Models Sample Database, namely: Customer number ( CLASSICMODELS.CUSTOMERS.CUSTOMERNAME) Contact person first name ( CLASSICMODELS.CUSTOMERS.CONTACTFIRSTNAME) Contact person last name ( CLASSICMODELS.CUSTOMERS.CONTACTLASTNAME) Contact person phone number ( CLASSICMODELS.CUSTOMERS.PHONE) The title of the report should be set via reportTitle parameter. As this guide is primarily focused on the BIRT integration and not the BIRT technology itself, the steps required to make the report will not be shown. For more information on creating a BIRT report file please read the BIRT documentation. When you are done with the helloBirt.rptdesign file, you should create a .xhtml file that will contain the BIRT report you have just created. The JBoss BIRT Integration framework provides 2 components represented as <b:birt> and <b:param> tags. The jboss-seam-birt.jar library implements the functionality of the components. To find more information about the framework pleas read the JBoss BIRT Integraion Framework API Reference chapter. To use that tags on the page you need to declare the tag library and define the name space like this: xmlns:b="" The <b:birt> is a container for a BIRT report, that helps you integrate the report into Seam environment. You can manage the properties of the report using the attributes of the <b:birt> tag. The <b:param> tag describes report parameters. To set a parameter you need to specify it's name the value you want to pass. You can use EL expressions to bind the representation layer with back-end logic. Create the helloBirt.xhtml file in the WebContent folder with the following content: <!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <ui:composition <ui:define <rich:panel> <b:birt <b:param </b:birt> </rich:panel> </ui:define> </ui:composition> From this listing above you see that the title of the report is set via <b:param> by setting the parameter name and defining the value attribute with the Customers Contacts value. We have created a Seam project and inserted the helloBirt report into the helloBirt.xhtml view file. To see that the application works correctly and as you expect, you need to launch it on the server. 
In the Servers view (if it is not open select → → → → ), select the server the application is deployed to and hit the button. When the server is started, open your favorite browser and point it to. The JBoss BIRT Integration feature includes the Hibernate ODA Data Source which is completely integrated with Hibernate Tools. You can use it the way as you would use any of BIRT ODA drivers. First, you need to reverse engineer from the database to generate Seam entities. You can perform this operation going to Seam perspective. More details on the Seam Generate Entities please read Seam Developers Tools Reference guide). In this guide we will use the Employees table of the DATAMODELS database, which can be downloaded from the Getting Started Guide.→ → in the Before performing Seam Generate Entities, you should have a connection profile adjusted and connected to a database. For information on how to do this see the CRUD Database Application chapter of the Seam Developer Tools Reference guide. Next you should create a new BIRT report file (Employees table. Call the file employees.rptdesign, and save it in the WebContent folder. Now switch to the BIRT Report Design perspective. In the Data Explorer view right-click the Data Source node and choose . The wizard will prompt you to select data source type. Choose Hibernate Data Source and give it a meaningful name, for instance HibernateDataSource. Click the button to proceed. On the next wizard's dialog you can leave the everything with default values, click thebutton to verify that the connection is established successfully. The Hibernate Data Source enables you to specify a Hibernate Configuration or JNDI URL. Click the New Data Source wizard.button to complete Now you need to configure a new Hibernate ODA data set. Launch the New Data Set wizard. In the Data Explorer View right-click the Data Set node and select . Select HibernateDataSource as target data source and type in the new data set name. Call it HibernateDataSet. The next dialog of the wizard will help you compose a query for the new data set. We will make a report that will print all employees in the database who has Sales Rep job title. select jobtitle, firstname, lastname, email from Employees as employees where employees.jobtitle = 'Sales Rep' To validate the entered query you can press thebutton. All the HQL features like syntax highlighting, content assist, formatting, drag-and-drop, etc., are available to facilitate query composing. Clicking the Edit Data Set dialog where you can adjust the parameters of the data set and preview the resulted set. If everything looks good, click the button to generate a new data set.button will call the Now you can insert the data set items of HibernateDataSet into the employees.rptdesign file. If you don't know how to do this we suggest that you refer to the Eclipse BIRT Tutorial. You can also use parameters in the query to add dynamics to your report. In the previous example we hard coded the selection criterion in the where clause. To specify the job title on-the-fly your query should look like this: select jobtitle,firstname, lastname,email from Employees as employees where employees.jobtitle = ? The question mark represents a data set input parameter, which is not the same as a report parameter. Now you need to define an new report parameter to pass the data to the report, call it JobTitle. The dataset parameter can be linked to a report parameter. 
In the Data Explorer view click the Data Set node to open it and right-click on the data set you created previously (in our case it is HibernateDataSet), choose Edit and navigate to the Parameters section. Declare a new data set parameter, name it jobtitle and map it to the already existing JobTitle report parameter. You report is ready, you can view it by clicking on the Preview tab of the BIRT Report Designer editor. You will be prompted to assign a value to the report parameter. For instance you can enter "Sales Rep". Section 3.1, “Adding BIRT Functionality to Standard Seam Web Project” and Section 3.2, “Using Hibernate ODA Data Source” describe how to integrate a BIRT report into a Seam web project and how to use a Hibernate data source to generate a dynamic report. In this section we will create a Seam web project that can make a dynamic report using the parameters that are defined on a web page. We will use the PRODUCTS table of Classic Models Inc. Sample Database for the purpose of this demo project. The demo application will generate a report about the company's products, and allow the user to specify how the report will be sorted. To begin with, we need to generate Seam entities like we did in the previous Section 3.1, “Adding BIRT Functionality to Standard Seam Web Project”. The next step is to create a Java class that will store the sortOrder variable and its assessors. The variable will be required to pass dynamic data to the report via report parameters; therefore it has to be of session scope. The code below shows a simple JavaBean class called ReportJB. import java.io.Serializable; import org.jboss.seam.ScopeType; import org.jboss.seam.annotations.Name; import org.jboss.seam.annotations.Scope; @Name("ReportJB") @Scope(ScopeType.SESSION) public class ReportJB implements Serializable { private static final long serialVersionUID = 1L; protected String sortOrder = "buyprice"; public String getSortOrder() { return sortOrder; } public void setSortOrder(String value) { sortOrder = value; } public ReportJB() { } } The report will print the data from the Products table. Create a new report file file called ProductsReport.rptdesign in the WebContent folder. You can use either the BIRT JDBC Data Source or Hibernate Data Source data source to create the data set for this project. If you want to use the latter please read the previous Section 3.2, “Using Hibernate ODA Data Source”. The data set should have at least the following data set items: product vendor, product name, quantity in stock and buy price. The data is retrieved from the database with this query : SELECT productvendor, productname, quantityinstock, buyprice FROM CLASSICMODELS.PRODUCTS as products Make a table in the report and put each data set item into a column. As it was stated in the beginning of the chapter the report will be dynamic, therefore you need to declare a report parameter first. Call this parameter sortOrder and to add the parameter to the query. BIRT offers rich JavaScript API, so you can modify the query programmatically like this (the xml-property tag shown below should already be present in the report): <xml-property< ![CDATA[ SELECT productvendor, productname, quantityinstock, buyprice FROM CLASSICMODELS.PRODUCTS as products ]]> </xml-property> <method name="beforeOpen"> <![CDATA[ queryString = " ORDER BY products."+reportContext.getParameterValue("sortOrder")+" "+"DESC"; this.queryText = this.queryText+queryString; ]]> </method> The report is ready. You can preview it to make sure it works properly. 
To set the report parameter you should create an XHTML page, call it ProductForm.xhtml, and place it in the WebContent folder. On the page you can set the value of the sortOrder Java bean variable and click the button to open another view page that will display the resulted report. The source code of the ProductForm.xhtml should be the following: <!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <ui:composition <ui:define <rich:panel> <f:facetBIRT Report Generator</f:facet> <a4j:form <table> <tr> <td>Select sort order criterion:</td> <td><h:selectOneMenu <!-- Bind to your Java Bean --> <f:selectItem <f:selectItem </h:selectOneMenu> </td> </tr> </table> </a4j:form> <s:button <!-- If the sertOrder variable is not set the button won't work --> </rich:panel> </ui:define> </ui:composition> The logic of the file is quite simple: when the sort order criterion is selected the value of ReportJB.sortOrder is set automatically via Ajax, and the report is ready to be generated. Now you need to create the web page that will print the report. Name the file ProductsReport.xhtml. The file to output the report should have the following content: <!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <ui:composition <ui:define <rich:panel> <f:facetProducts Report</f:facet> <b:birt <b:param </b:birt> </rich:panel> </ui:define> </ui:composition> As you know from Section 3.1, “Adding BIRT Functionality to Standard Seam Web Project”, before using the BIRT Integration framework tags on the page you need to declare the tag library and specify the name space with this line: xmlns:b="" To set the sortOrder report parameter add this line: <b:param We bound the sortOrder report parameter to Java Bean variable value="#{ReportJB.sortOrder}" using EL expression, with the ReportJB.sortOrder variable having its value assigned in the ProductsForm.xhtml file. By default if you embed a report into HTML page the HTML-format report contains the <html>, <head>, <body> etc., tags. However if your HTML page already has those tags, you can rid of them using the embeddable="true" attribute of the <b:birt> component. Deploy the project onto the server and open your browser to see the report is successfully generated. You should navigate to to select the criterion and press the button. You will be redirected to. Thus, a Seam project that includes the BIRT facet can be deployed as any project. If you define the Hibernate ODA driver, the JBoss BIRT engine will use JNDI URL that has to be bound to either Hibernate Session Factory or Hibernate Entity Manager Factory. If you don't specify the JNDI URL property, our engine will try the following JNDI URLs: java:/<project_name> java:/<project_name>EntityManagerFactory When creating a Seam EAR project, Hibernate Entity Manager Factory is bound to java:/{projectName}EntityManagerFactory. All you need to do is to use the Hibernate Configuration created automatically. You can use default values for the Hibernate Configuration and JNDI URL within the BIRT Hibernate Data Source. When using a Seam WAR project, neither HSF nor HEMF are bound to JNDI by default. You have to do this manually. For instance, HSF can be bound to JNDI by adding the following property to the persistence.xml file: <property name="hibernate.session_factory_name" value="java:/projectname"/> And you can use java:/projectname as the JNDI URL property when creating a BIRT Hibernate Data Source. 
If you want to test this feature using PDE Runtime, you need to add osgi.dev=bin to the WebContent/WEB-INF/platform/configuration/config.ini file. In conclusion, the main goal of this document is to describe the full feature set that JBoss BIRT Tools provide. If you have any questions, comments or suggestions on the topic, please feel free to ask in the JBoss Tools Forum. You can also influence on how you want to see JBoss Tools docs in future leaving your vote on the article Overview of the improvements required by JBossTools/JBDS Docs users. The <b:birt> component servers to integrate a BIRT report into Seam or JSF container. The <b:birt> tag recognizes most of the parameters described on the BIRT Report Viewer Parameters page, though it has attributes of its own. You can find additional JBoss Developer Studio documentation at RedHat documentation website. The latest documentation builds are available through the JBoss Tools Nightly Docs Builds.
http://docs.jboss.org/tools/4.1.0.Final/en/jboss_birt_plugin_ref_guide/html_single/index.html
2019-03-18T18:34:18
CC-MAIN-2019-13
1552912201521.60
[]
docs.jboss.org
VMware vRealize® Automation ™ despite underlying heterogenous infrastructure. Providing On-Demand Services to Users
https://docs.vmware.com/en/vRealize-Automation/index.html
2017-07-20T20:27:20
CC-MAIN-2017-30
1500549423486.26
[array(['images/GUID-FE80F3D4-9CD0-4111-BF63-5E545D2A8730-low.png', 'Diagram of delivering IT and cloud services to users.'], dtype=object) ]
docs.vmware.com
source code
Master-less rospy node API. The ronin API is more handle-based than the rospy API. This is in order to provide better forwards compatibility for ronin as changes to rospy are made to enable this functionality more gracefully.
Here is an example ronin-based talker::

    import ronin                      # assumed import for the ronin module
    from std_msgs.msg import String   # assumed import for the String message type

    n = ronin.init_node('talker', anonymous=True)
    pub = n.Publisher('chatter', String)
    r = n.Rate(10)  # 10hz
    while not n.is_shutdown():
        msg = "hello world %s" % n.get_time()
        n.loginfo(msg)
        pub.publish(msg)
        r.sleep()

Authors: Ken Conley and Blaise Gassend
http://docs.ros.org/diamondback/api/ronin/html/index.html
2017-07-20T20:31:19
CC-MAIN-2017-30
1500549423486.26
[]
docs.ros.org
Need an Appointment? We strive to see new patients within two weeks. For general information or to make an appointment at any one of our locations, call us at 850-474-8121 or fax 850-474-8096. Main Office 8333 N. Davis Hwy. Building 1, 2nd Floor Pensacola, FL 32514 After Hours To reach one of our on-call physicians, call 850-474-8000. For additional contact information for specific locations, please visit our office locations page where you will find the address and phone number.
https://kidney-docs.com/contact-us/
2017-07-20T20:24:25
CC-MAIN-2017-30
1500549423486.26
[]
kidney-docs.com
Introduction
Purpose

    >>> import pyperclip
    >>> pyperclip.copy('Hello world!')
    >>> pyperclip.paste()
    'Hello world!'

Not Implemented Error
You may get an error message that says: "Pyperclip could not find a copy/paste mechanism for your system. Please see for how to fix this." In order to work equally well on Windows, Mac, and Linux, Pyperclip uses various mechanisms to do this. Currently, this error should only appear on Linux (not Windows or Mac). You can fix this by installing one of the copy/paste mechanisms:
- sudo apt-get install xsel to install the xsel utility.
- sudo apt-get install xclip to install the xclip utility.
- pip install gtk to install the gtk Python module.
- pip install PyQt4 to install the PyQt4 Python module.
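If you prefer to handle the missing-mechanism case in code rather than letting the error propagate, a small sketch like the following works, assuming the installed pyperclip version exposes the PyperclipException class:

    import pyperclip

    try:
        pyperclip.copy('Hello world!')
        print(pyperclip.paste())
    except pyperclip.PyperclipException as err:
        # Raised on Linux when none of xsel, xclip, gtk, or PyQt4 is available.
        print('Clipboard support is unavailable:', err)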
http://pyperclip.readthedocs.io/en/latest/introduction.html
2017-07-20T20:24:20
CC-MAIN-2017-30
1500549423486.26
[]
pyperclip.readthedocs.io
Returns space usage information for each file in the database.
Note: To call this from Azure SQL Data Warehouse or Parallel Data Warehouse, use the name sys.dm_pdw_nodes_db_file_space_usage.
Remarks
Permissions
On SQL Server requires VIEW SERVER STATE permission on the server. On SQL Database Premium Tiers requires the VIEW DATABASE STATE permission in the database. On SQL Database Standard and Basic Tiers requires the SQL Database admin account.
Examples
Determining the Amount of Free Space in tempdb
The following query returns the total number of free pages and total free space in megabytes (MB) available in all files in tempdb.
USE tempdb;
GO
SELECT SUM(unallocated_extent_page_count) AS [free pages],
(SUM(unallocated_extent_page_count)*1.0/128) AS [free space in MB]
FROM sys.dm_db_file_space_usage;
Determining the Amount of Space Used by User Objects
The following query returns the total number of pages used by user objects and the total space used by user objects in tempdb.
USE tempdb;
GO
SELECT SUM(user_object_reserved_page_count) AS [user object pages used],
(SUM(user_object_reserved_page_count)*1.0/128) AS [user object space in MB]
FROM sys.dm_db_file_space_usage;
See Also
Dynamic Management Views and Functions (Transact-SQL)
Database Related Dynamic Management Views (Transact-SQL)
sys.dm_db_task_space_usage (Transact-SQL)
sys.dm_db_session_space_usage (Transact-SQL)
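The free-space query shown above can also be read from client code. Here is a hedged Python sketch using pyodbc; the driver name and connection details are placeholders for your environment:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"   # placeholder driver name
        "SERVER=localhost;DATABASE=tempdb;Trusted_Connection=yes"
    )
    cursor = conn.cursor()
    cursor.execute(
        "SELECT SUM(unallocated_extent_page_count) AS free_pages, "
        "SUM(unallocated_extent_page_count) * 1.0 / 128 AS free_space_mb "
        "FROM sys.dm_db_file_space_usage;"
    )
    free_pages, free_space_mb = cursor.fetchone()
    print("tempdb free pages:", free_pages, "free space (MB):", free_space_mb)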
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-file-space-usage-transact-sql
2017-07-20T20:36:18
CC-MAIN-2017-30
1500549423486.26
[array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/yes.png', 'yes'], dtype=object)]
docs.microsoft.com
I followed all of your instructions and still don’t get form messages! What can i do? If you followed the instruction of the guide and the form still doesn’t work, please try the following: _ Install Postman SMTP plugin (even if you have already an SMTP plugin). If it does not help: _ Install Contact Form 7 for Database to check if at least the emails are sent or not. _ Use another email address (it is possible that your email is considered as spam) _ Change the email subject (try it longer and shorter) _If it does not help, try to check if it is not linked to a plugin conflict: Deactivate all your plugins besides Elementor and Elementor Pro and check if the emails are sent properly. NB: When the page is refreshed after filling a form this is generally due to a plugin conflict.
http://docs.elementor.com/article/202-i-followed-all-of-your-instructions-and-still-don-t-get-form-messages-what-can-i-do
2017-07-20T20:34:04
CC-MAIN-2017-30
1500549423486.26
[]
docs.elementor.com
New in version 2.1.
- boto
- python >= 2.6

# Completely overrides the VPC DHCP options associated with VPC vpc-123456 and deletes any existing
# DHCP option set that may have been attached to that VPC.
- ec2_vpc_dhcp_options:
    domain_name: "foo.example.com"
    region: us-east-1
    dns_servers:
      - 10.0.0.1
      - 10.0.1.1
    ntp_servers:
      - 10.0.0.2
      - 10.0.1.2
    netbios_name_servers:
      - 10.0.0.1
      - 10.0.1.1
    netbios_node_type: 2
    vpc_id: vpc-123456
    delete_old: True
    inherit_existing: False

# Ensure the DHCP option set for the VPC has 10.0.0.4 and 10.0.1.4 as the specified DNS servers, but
# keep any other existing settings. Also, keep the old DHCP option set around.
- ec2_vpc_dhcp_options:
    region: us-east-1
    dns_servers:
      - "{{groups['dns-primary']}}"
      - "{{groups['dns-secondary']}}"
    vpc_id: vpc-123456
    inherit_existing: True
    delete_old: False

## Create a DHCP option set with 4.4.4.4 and 8.8.8.8 as the specified DNS servers, with tags
## but do not assign to a VPC
- ec2_vpc_dhcp_options:
    region: us-east-1
    dns_servers:
      - 4.4.4.4
      - 8.8.8.8
    tags:
      Name: google servers
      Environment: Test

## Delete a DHCP options set that matches the tags and options specified
- ec2_vpc_dhcp_options:
    region: us-east-1
    dns_servers:
      - 4.4.4.4
      - 8.8.8.8
    tags:
      Name: google servers
      Environment: Test
    state: absent

## Associate a DHCP options set with a VPC by ID
- ec2_vpc_dhcp_options:
    region: us-east-1
    dhcp_options_id: dopt-12345678
    vpc_id: vpc-123456
http://docs.ansible.com/ansible/latest/ec2_vpc_dhcp_options_module.html
2017-07-20T20:33:48
CC-MAIN-2017-30
1500549423486.26
[]
docs.ansible.com
Introduction¶_ds3231_ds3231.DS32311
https://circuitpython.readthedocs.io/projects/ds3231/en/latest/
2017-09-19T13:25:20
CC-MAIN-2017-39
1505818685698.18
[array(['_images/3013-01.jpg', '_images/3013-01.jpg'], dtype=object)]
circuitpython.readthedocs.io
What is Chronopay? Whether you are setting up a new e-business or adding e-commerce to an existing business, ChronoPay can provide you with secure, flexible and cost effective online payment processing solutions. Chronopay Settings - Product ID - Product Name – It is displayed on the ChronoPay secure processing page. - Accepted Currency - Processing URL - Return URL - Security Key - Debug Mode - Under form fields link your Checkout fields to PayPal fields - Save changes by clicking Update with the payment options dialog
http://docs.wpecommerce.org/chronopay-2/
2017-09-19T13:21:01
CC-MAIN-2017-39
1505818685698.18
[]
docs.wpecommerce.org
2. Installation
The ways described here to install privacyIDEA are:
- the installation via the Python Package Index, which can be used on any Linux distribution,
- ready-made Ubuntu packages for Ubuntu 14.04 LTS, and
- ready-made Debian packages for Debian Wheezy.
If you want to upgrade from a privacyIDEA 1.5 installation, please read Upgrading.
privacyIDEA needs Python 2.7 to run properly!
After installation you might want to take a look at First Steps.
http://privacyidea.readthedocs.io/en/latest/installation/index.html
2017-09-19T13:33:16
CC-MAIN-2017-39
1505818685698.18
[]
privacyidea.readthedocs.io
Trace: • ESP32 series module topic ESP32 series module topic ESP32 series module topic Place an order:Ai-Thinker official Alibaba shop Overview ESP32 series modules are a series of low-power UART-WiFi chip modules based on Espressif's ESP32 developed by Shenzhen Ai-Thinker Technology Co., Ltd., which can be easily developed for secondary development and access cloud services. Realize mobile phone 3/4G global control anytime, anywhere, and accelerate product prototype design. ESP32 series module The core processor of the ESP32 series module ESP32 integrates the industry-leading core processor of this module in a smaller package. The ESP32 has two built-in low-power Xtensa®32-bit LX6 MCUs. The main frequency supports 80 MHz, 160 MHz and 240 MHz. Support the secondary development of RTOS operating system, integrate Wi-Fi MAC/BB/RF/PA/LNA, and onboard antenna. Support standard IEEE802.11 b/g/n protocol, complete TCP/IP protocol stack and Bluetooth protocol stack. Users can use this module to add networking functions to existing equipment, or build an independent network controller. ESP32 is a high-performance wireless SoC WiFi+Ble solution chip, which provides maximum practicability at the lowest cost and unlimited possibilities for Wi-Fi/Ble functions to be embedded in other systems. ESP32S2 series module rate, and ESP32-S2 Beta chip is different from the final ESP32-S2, Beta chip is a limited engineering sample, so not all functions are available. Our company does not sell ESP32-S2 Beta chip modules! Resources ESP32-S Development Board 12K Development Board (ESP32-S2) Camera Development Board (ESP32-CAM)
https://docs.ai-thinker.com/en/esp32
2021-04-10T15:02:17
CC-MAIN-2021-17
1618038057142.4
[]
docs.ai-thinker.com
Business Network Operator project planning When planning a Corda deployment as a Business Network Operator, there are several considerations: - Deployment environments - Notary compatibility - HSM compatibility - Database compatibility - Corda Enterprise Network Manager deployment. Deployment environments Business Network Operators will need several deployments of Corda Enterprise, at least including: - A development environment including minimal. Node sizing and databases When defining the requirements of a node, it is important to define the resources that the node will require. While every Corda deployment will have different requirements - depending on the CorDapps and business model of the parties - the following table gives approximate sizings for typical node deployments. All Corda Nodes have a database. A range of third-party databases are supported by Corda, shown in the following table:
https://docs.corda.net/docs/corda-enterprise/4.6/operations/project-planner/network-operators.html
2021-04-10T14:32:09
CC-MAIN-2021-17
1618038057142.4
[]
docs.corda.net
4.5.3 If your project already includes the silverstripe/mimevalidator module, you do not need to do anything. To check if the silverstripe/mimevalidator module is installed in your project, run this command from your project root. composer show silverstripe/mimevalidator If you get an error, the module is not installed. Upgrading to silverstripe/recipe-cms 4.5.3 will NOT automatically install silverstripe/mimevalidator. You need to manually install the module silverstripe/mimevalidator. To add silverstripe/mimevalidator to your project, run this command. composer require silverstripe/mimevalidator After installing the mimevalidator module, you need to enable it by adding this code snippet to your YML configuration. SilverStripe\Core\Injector\Injector: SilverStripe\Assets\Upload_Validator: class: SilverStripe\MimeValidator\MimeUploadValidator If your project overrides the defaults allowed file types, it's important that you take the time to review your configuration and adjust it as need be to work with silverstripe/mimevalidator. Read the Allowed file types documentation for more details on controling the type of files that can be stored in your Silverstrip CMS Project. Special consideration when upgrading Userforms The silverstripe/userforms module now also includes silverstripe/mimevalidator in its dependencies. Upgrading to the following versions of userforms will automatically install silverstripe/mimevalidator: - 5.4.3 or later - 5.5.3 or later - 5.6.0 or later (requires CMS 4.6.0) Userforms that include a file upload field will automatically use the MimeUploadValidator. Beware that this will NOT change the default upload validator for other file upload fields in the CMS. You'll need to update your YML configuration for the MimeUploadValidator to be used everywhere..5. Change Log Security - 2020-05-13 cce2b1630 Remove/deprecate unused controllers that can potentially give away some information about the underlying project. (Maxime Rainville) - See cve-2020-6164 - 2020-05-11 8518987cb Stop honouring X-HTTP-Method-Override header, X-Original-Url header and _method POST variable. Add SS_HTTPRequest::setHttpMethod() (Maxime Rainville) - See cve-2019-19326 - 2020-02-17 d3968ad Move the query resolution after the DataListQuery has been altered (Maxime Rainville) - See cve-2020-6165 - 2020-02-11 107e6c9 Ensure canView() check is run on items (Steve Boyd) - See cve-2020-6165 API Changes Features and Enhancements Bugfixes - 2020-07-09 b780c4f50 Tweak DBHTMLText::Plain to avoid treating some chinese characters as line breaks. (Maxime Rainville) - 2020-06-23 e033f26 Fix external link setting text to undefinedon text (#1059) (Andre Kiste) - 2020-06-01 3df2222 Prevent react-selectable from interfering with pagination -19 b9de9e6 Remove direct descendant selector to apply correct margins (Sacha Jud-04 12ea7cd Create NormaliseAccessMigrationHelper to fix files affected by CVE-2019-12245 (Maxime Rainville) - 2020-02-24 bba0f2f72 Fixed issue where TimeField_Readonly would only show "(not set)" instead of the value (UndefinedOffset) --05 c92e3b9d Prioritise same-level pages in OldPageRedirector (Klemen Dolinšek) - 2019-10-17 b62288cc9 Disabled the UpgradeBootstrap upgrader doctor task (Maxime Rainville) 2019-09-02 6d8a4bc Make AbsoluteLink work with manipulated images (fixes #322) (Loz Calver)
https://docs.silverstripe.org/en/4/changelogs/4.5.3/
2021-04-10T14:07:34
CC-MAIN-2021-17
1618038057142.4
[]
docs.silverstripe.org
In version 7 and later, the <write> statement (here) also supports the output flag as used with capture, and the flags option was also added. Also in version 7, the skiponfail option was added: if set, the <write> block will be skipped if the file fails to be opened (as in previous versions).
https://docs.thunderstone.com/site/vortexman/write_output_flags_skiponfail.html
2021-04-10T14:30:40
CC-MAIN-2021-17
1618038057142.4
[]
docs.thunderstone.com
Message replication tasks and applications
As explained in the message replication and cross-region federation article, replication of message sequences between pairs of Service Bus entities and between Service Bus and other message sources and targets generally leans on Azure Functions. Azure Functions is a scalable and reliable execution environment for configuring and running serverless applications, including message replication and federation tasks.
In this overview, you will learn about Azure Functions' built-in capabilities for such applications, about code blocks that you can adapt and modify for transformation tasks, and about how to configure an Azure Functions application such that it integrates ideally with Service Bus and other Azure Messaging services. For many details, this article will point to the Azure Functions documentation.
What is a replication task?
A replication task receives events from a source and forwards them to a target. Most replication tasks forward events unchanged and at most perform mapping between metadata structures if the source and target protocols differ. Replication tasks are generally stateless, meaning that they do not share state or other side effects across sequential or parallel executions of a task. That is also true for batching and chaining, which can both be implemented on top of the existing state of a stream. This makes replication tasks different from aggregation tasks, which are generally stateful and are the domain of analytics frameworks and services like Azure Stream Analytics.
Replication applications and tasks in Azure Functions
In Azure Functions, a replication task is implemented using a trigger that acquires one or more input messages from a configured source and an output binding that forwards messages copied from the source to a configured target. Replication tasks are deployed into the replication application through the same deployment methods as any other Azure Functions application, and you can configure multiple tasks into the same application. With Azure Functions Premium, multiple replication applications can share the same underlying resource pool, called an App Service Plan. That means you can easily collocate replication tasks written in .NET with replication tasks written in Java, for instance. That matters if you want to take advantage of specific libraries such as Apache Camel that are only available for Java, and if those are the best option for a particular integration path, even though you would commonly prefer a different language and runtime for your other replication tasks. Whenever available, you should prefer the batch-oriented triggers over triggers that deliver individual events or messages, and you should always obtain the complete event or message structure rather than rely on Azure Functions' parameter binding expressions. The name of the function should reflect the pair of source and target you are connecting, and you should prefix references to connection strings or other configuration elements in the application configuration files with that name. 
Data and metadata mapping Once you've decided on a pair of input trigger and output binding, you will have to perform some mapping between the different event or message types, unless the type of your trigger and the output is the same. For simple replication tasks that copy messages between Event Hubs and Service Bus, you do not have to write your own code, but can lean on a utility library that is provided with the replication samples. Retry policy To avoid data loss during availability event on either side of a replication function, you need to configure the retry policy to be robust. Refer to the Azure Functions documentation on retries to configure the retry policy. The policy settings chosen for the example projects in the sample repository configure an exponential backoff strategy with retry intervals from 5 seconds to 15 minutes with infinite retries to avoid data loss. For Service Bus, review the "using retry support on top of trigger resilience" section to understand the interaction of triggers and the maximum delivery count defined for the queue. Setting up a replication application host A replication application is an execution host for one or more replication tasks. It's an Azure Functions application that is configured to run either on the consumption plan or (recommended) on an Azure Functions Premium plan. All replication applications must run under a system- or user-assigned managed identity. The linked Azure Resource Manager (ARM) templates create and configure a replication application with: - an Azure Storage account for tracking the replication progress and for logs, - a system-assigned managed identity, and - Azure Monitoring and Application Insights integration for monitoring. Replication applications that must access Event Hubs bound to an Azure virtual network (VNet) must use the Azure Functions Premium plan and be configured to attach to the same VNet, which is also one of the available options. Examples The samples repository contains several examples of replication tasks that copy events between Event Hubs and/or between Service Bus entities. For copying event between Event Hubs, you use an Event Hub Trigger with an Event Hub output binding: [FunctionName("telemetry")] [ExponentialBackoffRetry(-1, "00:00:05", "00:05:00")] public static Task Telemetry( [EventHubTrigger("telemetry", ConsumerGroup = "$USER_FUNCTIONS_APP_NAME.telemetry", Connection = "telemetry-source-connection")] EventData[] input, [EventHub("telemetry-copy", Connection = "telemetry-target-connection")] EventHubClient outputClient, ILogger log) { return EventHubReplicationTasks.ForwardToEventHub(input, outputClient, log); } For copying messages between Service Bus entities, you use the Service Bus trigger and output binding: [FunctionName("jobs-transfer")] [ExponentialBackoffRetry(-1, "00:00:05", "00:05:00")] public static Task JobsTransfer( [ServiceBusTrigger("jobs-transfer", Connection = "jobs-transfer-source-connection")] Message[] input, [ServiceBus("jobs", Connection = "jobs-target-connection")] IAsyncCollector<Message> output, ILogger log) { return ServiceBusReplicationTasks.ForwardToServiceBus(input, output, log); } The helper methods can make it easy to replicate between Event Hubs and Service Bus: Monitoring To learn how you can monitor your replication app, please refer to the monitoring section of the Azure Functions documentation. 
A particularly useful visual tool for monitoring replication tasks is the Application Insights Application Map, which is automatically generated from the captured monitoring information and allows exploring the reliability and performance of the replication task source and target transfers. For immediate diagnostic insights, you can work with the Live Metrics portal tool, which provides low latency visualization of log details.
https://docs.azure.cn/zh-cn/service-bus-messaging/service-bus-federation-replicator-functions
2021-04-10T14:34:35
CC-MAIN-2021-17
1618038057142.4
[]
docs.azure.cn
SDK Release 4.1.0 Note: This version of the SDK is now deprecated. You can find the full documentation for this version in the legacy documentation 4.1.1 - Fixed the method for signing using an ethereum address. - Performance improvements. - Ambient/sky improvements in preview. - Added joystick in mobile view. 4.1.0 A few packages have been rebranded: - metaverse-api is now decentraland-api - metaverse-rpc is now decentraland-rpc - metaverse-compiler is now decentraland-compiler When migrating a scene to 4.1.0, keep in mind that the first lines of the file that import from metaverse-api must be changed to import from decentraland-api. import * as DCL from "decentraland-api" import { Vector3Component } from "decentraland-api" - The new onClickhandler can be added to any entity to handle click events in the same way that React handles clicks. This can greatly simplify scene code, for example: Old way import * as DCL from "decentraland-api" export default class InteractiveCubeScene extends DCL.ScriptableScene { state = { size: 1, } sceneDidMount() { this.eventSubscriber.on("interactiveBox_click", async () => { this.resizeBox() }) } resizeBox = () => { this.setState({ size: Math.random() * 3 }) } async render() { return ( <scene> <box id="interactiveBox" withCollisions scale={this.state.size} position={{ x: 5, y: 1, z: 5 }} /> </scene> ) } } New way import * as DCL from "decentraland-api" export default class InteractiveCubeScene extends DCL.ScriptableScene { state = { size: 1, } resizeBox = () => { this.setState({ size: Math.random() * 3 }) } async render() { return ( <scene> <box onClick={this.resizeBox} withCollisions scale={this.state.size} position={{ x: 5, y: 1, z: 5 }} /> </scene> ) } } Note that the new way saves you from having to create and subscribe to a click event, and attaching and ID to every element that needs to handle a click. Using this handler, the entity doesn’t require an ID to be clicked. All you need to do is pass a function through an onClick JSX attribute and enjoy! - The parcel limits are now inclusive. Before, entities couldn’t reach the border of the scene’s parcels, you needed to limit positions to something like { x: 9.9, y:1, z: 9.9} in a 1 parcel scene. Now you can position things up to the very limit of the parcels, so on a 1 parcel scene entities can reach { x: 10, y:1, z: 10} Static scenes have better performance A bug was fixed where an entity’s lookAtvalue couldn’t be the same as the value for position. This was problematic in scenarios where you need a character to move slowly towards a position (with a transition) and look in that direction as it does. Now this scenario is fully supported. Preview scenes have a new lighting configuration. Previous lighting conditions were too bright and didn’t allow the geometry of certain shapes to be seen clearly. Migrate a scene to 4.1.0 To migrate a scene built with an earlier version to 4.1.0, follow these steps: - Delete the file package-lock.json - Delete the folder node_modules - In scene.tsx, change all imports from metaverse-api to decentraland-api. For example: import * as DCL from "decentraland-api" - Modify the package.json file to change the following: - Change metaverse-compilerinto decentraland-compiler. - Change metaverse-apiinto decentraland-api. - Add "decentraland": "latest"in devDependencies. 
The file should look something like this: { "name": "dcl-project", "version": "1.0.0", "description": "My new Decentraland project", "scripts": { "start": "dcl start", "build": "decentraland-compiler build.json", "watch": "decentraland-compiler build.json --watch" }, "author": "", "license": "MIT", "devDependencies": { "decentraland-api": "latest", "decentraland": "latest" } } - Run npm installor dcl startto build new versions of package-lock.json and node_modules based on the dependencies of the new version. - If your scene included any special dependencies, like Babylon or Axios, install them again with npm.
https://docs.decentraland.org/releases/sdk/4.1.0/
2021-04-10T14:28:08
CC-MAIN-2021-17
1618038057142.4
[]
docs.decentraland.org
Plugin Installation
Estimated reading: 1 minute
After activating the Zix
https://docs.droitthemes.com/docs/zix-wordpress-theme/getting-started/plugin-installation/
2021-04-10T14:00:21
CC-MAIN-2021-17
1618038057142.4
[array(['https://docs.droitthemes.com/wp-content/themes/ddoc/assets/images/Still_Stuck.png', 'Still_Stuck'], dtype=object) ]
docs.droitthemes.com
Postgres <table_name> or Postgres.
https://docs.greenplum.org/6-13/admin_guide/query/topics/query-piv-opt-root-partition.html
2021-04-10T14:26:21
CC-MAIN-2021-17
1618038057142.4
[]
docs.greenplum.org
Basic usage GroupDocs Signature library provides ability to manipulate with different electronic signature types such as Text, Image, Digital, Barcode, QR-code, Stamp, Form Field, Metadata. These e-signatures could be added to document, updated, deleted, verified or searched inside already signed documents. Our product also provides information about document type and structure - file type, size, pages count, etc. and generates document pages preview based on provided options. Here are main GroupDocs Signature API concepts: - Signature is the main class that contains all required methods for manipulating with document e-signatures. - Most part of methods expects different options to eSign document, verify and search electronic signatures inside document. - Signature class implements IDisposable interface to correctly release used resources - like safely closing document streams when all operations completed. Referencing required namespaces The following code shows how to include required namespace for all code examples. using GroupDocs.Signature; using GroupDocs.Signature.Domain; using GroupDocs.Signature.Options; using GroupDocs.Signature.Domain.Extensions; Signature object definition The following code shows most used code pattern to define Signature object and call its methods. // Sign document with text signature. using (Signature signature = new Signature("sample.docx")) { TextSignOptions textSignOptions = new TextSignOptions("John Smith"); signature.Sign("SampleSigned.docx", textSignOptions); } Please check detailed examples of how to eSign documents, search and verify document signatures in the following guides:
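As a rough illustration of the verification flow referred to above, the following C# sketch assumes a TextVerifyOptions class and a Signature.Verify method returning a VerificationResult; check the linked guides for the authoritative API before relying on these names.

using System;
using GroupDocs.Signature;
using GroupDocs.Signature.Domain;
using GroupDocs.Signature.Options;

// Verify that a previously signed document still contains the expected text signature.
using (Signature signature = new Signature("SampleSigned.docx"))
{
    TextVerifyOptions options = new TextVerifyOptions
    {
        Text = "John Smith"
    };
    VerificationResult result = signature.Verify(options);
    Console.WriteLine(result.IsValid
        ? "The expected text signature was found."
        : "Verification failed.");
}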
https://docs.groupdocs.com/signature/net/basic-usage/
2021-04-10T14:20:58
CC-MAIN-2021-17
1618038057142.4
[]
docs.groupdocs.com
Apps in Humanitec are made up of one or more related Workloads running in a single Kubernetes namespace. An App has one or more Environments associated with it. Apps can be deployed into these Environments. Developers in Humanitec can work on one or more Apps. Application Configuration refers to all the configuration details needed to deploy an App. It includes Environment Variables, Secrets, and Container Configurations. Humanitec is made for containerized Apps. Humanitec tracks new Container Images as they are built by hooking into the end of your Continuous Integration (CI) pipelines. This allows Container Images to be associated with source control metadata such as the commit that it was built from or what branch it came from. Container Images you build can be stored in Humanitec’s hosted registry or in your own private registry. Deployment Sets contain all the non-environment-specific configuration for an App. A Draft is a version of an App that is not yet deployed. Each App in Humanitec has one or more Environments. An Environment is an independent space that an App can be deployed into. Different versions of the same App can be deployed into many Environments at the same time. Environments can be configured to be either independent namespaces in the same cluster or each Environment is deployed in its own cluster. (It is possible to have some Environments, e.g. Production and Staging, deploy to their own unique clusters while at the same time deploying all development environments as independent namespaces in the same cluster.) Environments can also include environment-specific External Resources. Environment Types define which infrastructure to use for related Environments. Environment Types are created by the DevOps team and can be selected by the developers when creating a new Environment. The default Environment Type in Humanitec is called development. You can create as many own Environment Types as you want and name them depending on the naming used in your organization (e.g., QA, test-feature-branch). They are matched with Resources using Resource Definitions. Environment Variables externalize application configuration. An externalized application configuration allows for the ad-hoc creation of new Environments whenever needed. If Environment Variables contain sensitive information (e.g., passwords) they can be stored as Secrets and are only decrypted during deployment time. Humanitec can also manage and provision dependencies that are external to the cluster. These resources are called External Resources. Examples include: PostgreSQL databases provided via a managed service such as Google CloudSQL, DNS Names provisioned using Cloudflare Managed DNS, or S3 buckets provided by Amazon S3. External Resources can either be Dynamic or Static. Dynamic Resources are created and destroyed on demand when a new Deployment defines a dependency on a resource. For example, can be created when a new Workload is added to an App or an existing App is deployed into a new Environment; or destroyed when an Environment is deleted. Static Resources simply map to an existing resource that is managed outside of Humanitec. For example, the production database instance might be managed on a dedicated physical server by the SRE team but should be used by an app deployed to the “Production” Environment in Humanitec. Both, Dynamic and Static Resources, can be configured to resolve based on criteria such as a particular Environment. 
For example, databases for all development Environments might be dynamically provisioned from a single Google CloudSQL instance, whereas the production databases should use pre-provisioned databases running on dedicated physical on-premise servers.
Humanitec IDs are used throughout the platform to identify objects. They have the following requirements:
- Can only contain lowercase letters, numbers and dashes '-'
- Must be at least 3 characters long
- Cannot start or end with a dash '-'
Examples of valid IDs:
- my-organization
- 2021-02-01--temp-environment
Examples of invalid IDs:
- My-Organization - contains characters other than lowercase letters, numbers or dashes
- a - fewer than 3 characters
- test-env- - starts or ends in a dash
Manifests refer to Kubernetes Manifests. In general Humanitec manages the creation of Manifests as part of the deployment process. It is possible to export Manifests out of Humanitec for a particular deployment.
Resource Drivers are used to control External Resources. Humanitec provides a growing set of Resource Drivers.
Workload refers to the Kubernetes definition of "Workload". In general, it represents a set of pods with its controller specified by a Kubernetes Workload Resource (e.g., Deployment, StatefulSet, or Job). The type of controller used is managed by the Workload Profile, which can be defined by your DevOps team. The default profile used in Humanitec is a Deployment.
A Workload Profile defines the structure of the Kubernetes Manifests that will be generated for a particular Workload. The Workload Profile also defines additional parameters that can be set by the developer through a Deployment Set.
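The ID rules listed above are easy to capture in a small validation helper. The following Python sketch is purely illustrative and is not part of any Humanitec SDK:

import re

# Humanitec ID rules as described above: lowercase letters, digits and dashes only,
# at least 3 characters, no leading or trailing dash.
ID_PATTERN = re.compile(r"^[a-z0-9][a-z0-9-]+[a-z0-9]$")

def is_valid_humanitec_id(candidate: str) -> bool:
    return bool(ID_PATTERN.match(candidate))

assert is_valid_humanitec_id("my-organization")
assert is_valid_humanitec_id("2021-02-01--temp-environment")
assert not is_valid_humanitec_id("My-Organization")  # uppercase not allowed
assert not is_valid_humanitec_id("a")                # fewer than 3 characters
assert not is_valid_humanitec_id("test-env-")        # trailing dash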
https://docs.humanitec.com/concepts/
2021-04-10T14:40:38
CC-MAIN-2021-17
1618038057142.4
[]
docs.humanitec.com
- Application error rate example (v2) - Application reporting and health status (v2) - Average response time examples (v2) - Change the alias for your application (v2) - Get average CPU usage per host for an app - Get average throughput for an app (v2) - Get host memory used for an application - Get web transaction time data (v2) - Getting Apdex data for apps or browsers (v2) - List an app's host IDs and instance IDs - List your app ID and metric timeslice data (v2) - Summary data examples (v2)
https://docs.newrelic.com/docs/apis/rest-api-v2/application-examples-v2/
2021-04-10T15:27:22
CC-MAIN-2021-17
1618038057142.4
[]
docs.newrelic.com
New to Telerik Reporting? Download free 30-day trial How to Add LocationMapSeries to the Map Item The LocationMapSeries are used when the data points have a single coordinate pair, obtained directly from the data set or by using a Location provider. Adding a LocationMapSeries instance to the map To add new PointMap, PieMap or a ColumnMap series to the map follow these steps: Open Series collection editor and Add new PointMapSeries, PieMapSeries or ColumnMapSeries item. Set the GeoLocationGroup to an existing GeoLocationMapGroup instance or create a new one from scratch. Set the SeriesGroup to an existing MapGroup instance or create a new one from scratch. If you are creating a PointMapSeries, you can define a SeriesGroup by which your data will be grouped. This might come handy if you want to have a different color for every data point in your series. If you are creating a PieMapSeries or a ColumnMapSeries, you need to define an additional child group, which will be used to determine how the data will be grouped for every data point. The color and count of the pie sectors (or columns when creating a ColumnMapSeries) will be determined by the last child group of defined SeriesGroups. In the most cases you would create one series group without grouping (which will result in a static group) and add one child group, with Groupings set to the field you would like to group by. Set Size to an expression that will be used to determine the pie sector or the column size. When all the properties are set, the LocationMapSeries instance should look similar to the following one in the Property Grid:
https://docs.telerik.com/reporting/maphowtoaddlocationmapseriestothemapitem
2021-04-10T13:51:34
CC-MAIN-2021-17
1618038057142.4
[array(['/reporting/media/Map_AddLocationMapSeries.png', 'Map Add Location Map Series'], dtype=object) ]
docs.telerik.com
- Variables work in !path (test) - Variables work in links (test) - Wiki Import Support Unicode (test) - Rick Mugridge's FitLibrary is supported. Downloaded separately. .FitNesse.FitLibraryUserGuide[?] - Note that most classes have been moved from the fit package into the fitlibrary package. - .NET TestRunner .FitNesse.DotNet.DotNetTestRunner[?] - Enhanced variable lookup so that if it can't find a variable defined on the current page or any ancestor pages FitNesse will look at system properties (test). - Added -R switch to !contents widget to display all of the descendent hierarchy (test). - Added -seamless option to the !include widget. This allows you to include pages without the rendered wiki page informing the viewer of the include (test). - Improved navigation, organizing left nav into blocks of test related actions (Test, Suite), other page related actions (Edit, Properties, etc) and navigation to other parts of the wiki. - Page title is no longer a link. It's been replaced by a WhereUsed link on the left navbar. Also, you may be interested to know that FitNesse is no longer hosted at sourceforge. The CVS repository at sourceforge is far too slow for our continuous integration. Object Mentor is hosting the sourcecode and releases can be downloaded at. Public CVS access is currently not available. release 20050405 Replace /dotnet/fitnesse.key with the version in this zip
http://docs.fitnesse.org/.FrontPage.FitNesseDevelopment.FitNesseRelease20050405
2021-04-10T15:13:22
CC-MAIN-2021-17
1618038057142.4
[]
docs.fitnesse.org
Private Sales & Events Private sales and other catalog events are a great way to leverage your existing customer base to generate buzz and new leads, or to offload surplus inventory. You can create limited-time sales, limit sales to specific members, or create a standalone private sale page. You can also define invitations and event details. Increase brandA unique identity that defines a particular product or group of products. loyalty and generate a buzz by giving your best customers the VIP treatment. Offer exclusive access to Member Only sales or private sales to increase brand loyalty. You can also use these sales to liquidate excess merchandise. Customer Groups are extremely useful in setting up these types of Members Only and VIP sales. A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more.
https://docs.magento.com/m2/2.2/b2b/user_guide/marketing/events-private-sales.html
2019-01-16T06:40:36
CC-MAIN-2019-04
1547583656897.10
[]
docs.magento.com
Introduced in Maltego 4, collections aim to clean up the graph by grouping 'similar' entities, making it easier to view portions of the graph and find the key relationships you are looking for. The underlying collection rules all adhere to the following criteria: - Only entities of the same type may be collected together in a single collection - Entities that are pinned (pinned to the graph) may not be collected - A minimum entity limit exists which must be satisfied for a collection node to form, i.e. a collection node may not contain less than the minimum limit of entities. The image below shows the controls on the Collections tab of the ribbon as configured for a fresh install of Maltego. Collections are enabled by default and may be toggled off/on by pressing the Disable/Enable Collections button. On the Simplify Graph section a slider and spinner work in tandem to control the level of graph simplification. The numbers on the slider and that of the spinner correspond, designating the minimum number of entities that any collection node may contain. Dragging the slider to the left decreases this global minimum entity limit for collections, thereby increasing the amount of graph simplification. The Show Collections Tutorial button shows this tutorial in the Maltego client. The Select Collections button selects all the collection nodes on the current graph. Levels of Simplification A typical use case for using collection nodes is analysing Twitter followers. The image below shows the Detail View for three different Twitter accounts for which their followers where found, sorted alphabetically according to the entity name. Since transforms were run on these entities as input, none of them have incoming links. "Paterva" has the highest number of Twitter followers (outgoing links) among the 3 entities, with 3432, which according to the transform rules resulted in a weight of 100. With collections disabled (and for pre-Maltego4 versions), the graph output looks like the image below when in organic layout (zoomed to 2%). The graph consists of 4164 entities (4489 links in total), making it difficult to visualise the interesting relationships and common followers without having to continuously zoom in and out of the graph. With collections enabled and the slider in its default position of 25 entities, the graph output looks as follows in circular layout (zoomed to 15%). Notice the circular entities (uncollected) and square collection nodes. Dragging the slider to the far left for the greatest amount of graph simplification, renders the graph as follows (zoomed to 100%). The graph is now simpler and much easier to work with. Navigating a Collection With the collection node containing 269 entities selected (designated by "269" in the collection node heading on the graph), the selected entities can be viewed in list form in the Detail View, and sorted according to various columns (multi-column sorting is also supported using the Shift key in conjunction with mouse clicks on the column headings). Hovering over or clicking on the entities in this list shows the relevant entity properties in the Property View. Clicking on the icon in the Inspect column in the image above (shown by the orange plus (+) sign), shows in-depth details of that single entity (image below). Double-clicking on the Twitter user icon in the image below, will open the Details dialog. 
Clicking on the Back To List button (or right-clicking inside the Detail View component) in the image below, returns to the Detail View list of the entities in the collection node as in the image above. By double-clicking on the entity name in the Detail View list (or clicking on the icon in the Collected column which shows the number of entities in the collection node), the graph will automatically pan and zoom to the selected entity, briefly flashing the entity inside the collection node in white as in the image below. Pin/Unpin Entities Collections are simply visual elements -- if an entity is of specific interest and it must not be grouped within the collection node, one can press on the pin icon of that entity, either on the graph's collection component (as in the image below) or in the Detail View list. Having multiple entities selected and then clicking on the pin icon will pin all selected entities to the graph (uncollect from collection). Alternatively, all entities in a collection can be pinned to the graph by clicking the larger pin icon in the collection component heading (seen as a very faint overlay in the top-right corner of the image below). Clicking on the pin icon for the "Black Hat" entity , isolates the entity from the collection node. Pinning the entity to the graph (see image below). Other rules for exclusion from a collection node are if the entity has attachments or notes. When dragging entities onto the graph, they are pinned by default. If the orange pin icon of a pinned entity, such as the "Black Hat" entity below, is clicked to unpin the entity from the graph, the entity becomes available to be collected, and will only be collected should it satisfy the criteria outlined in the overview (top of page), and share relationships with (i.e. are 'similar' to) other entities of the same type. Typically, this will boil down to whether it is linked to (shares) common parent and child entities, although the rules can understandably become quite complex for heavily meshed graphs. Exploring the Detail View List With collection nodes, there is the same functionality that has always been in Maltego. For instance, one can find entities on the graph containing certain word(s), whether they form part of a collection node or not, by using the Quick Find functionality on the Investigate tab of the ribbon. Alternatively, when using the Detail View list with the "269" collection node selected, the "Black Hat" entity can be pinned to the graph from this listed view, which would uncollect it but keep it among the selected entities displayed in the list. The list entities can then further be filtered according to entities containing the word "black" in them as in the image below. As can be seen by the text inside the icon in the "Collected" column, the collection node now only contains 268 entities, and the pinned "Black Hat" entity is displayed as a normal (circle) entity. While on the graph all 269 entities of the original collection node are still selected, the Detail View list only shows the 2 filtered entities. By clearing the filter textfield, all 269 entities will again be displayed within the list. Alternatively, by selecting the 2 list entities in the image above, and clicking on the Sync Selection to Graph button to the left of the filter textfield, the graph selection changes to only these 2 entities and will be displayed as in the image below. 
Solid orange borders signify full selection (all entities within the visual element selected), while a dashed orange border (as for the "268" collection node above), signifies partial selection. The collection node heading in this case indicates that only 1 of the 268 entities within the collection node is selected. Since pinned entities (and other entities not in collection nodes) only represent a single entity, these entities can therefore never be in a state of partial selection. Transforms can also be run within the Detail View list using the context menu (on either single or multiple entities). Simply select the entities in the Detail View list, right-click to invoke the context menu (see image below), and run transforms as usual.
https://docs.maltego.com/support/solutions/articles/15000010775-collections-tab
2019-01-16T05:55:13
CC-MAIN-2019-04
1547583656897.10
[array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004508629/original/DtJpNup1XJ5qMSkwGGVcZVN_1LsfQSDHDA.png?1528981677', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004508885/original/b_IBF9lSxOzz3wCeA4wO2m-xTNrMOTGiPA.png?1528982006', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004508893/original/MOvyS4Fi1z4Fs99oiFt6Ox-4CI-kHpCS2A.png?1528982034', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004508902/original/4zeOexDlk_zFII1svzTh1dnjCUet2mM_fw.png?1528982065', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004508923/original/GV-lK6kA8fhxEXrJ8o8adUP3bLBYkfnaVA.png?1528982094', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004508978/original/BTW05vVLmo35HmMM8PnID42IxR6FEt4ZsQ.png?1528982168', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004509133/original/bGuXVVGmNE-I57RohUiCm5767qWNZGUKfQ.png?1528982219', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004509182/original/1kKM93927y7SY5n0jM6f_yJ-VTL8YFgbpw.png?1528982254', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004509248/original/8WEKcJX-SNXDv00LN9DOL6-Fu6ZMoTS1Zg.png?1528982341', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004509307/original/lIcuQvBqmPR7WSGdzFQgvozT2EUw-cX1IQ.png?1528982479', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004509481/original/Xbu55CGdbFjNDr91wWadb91hBzCPB1u8ow.png?1528982676', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004509529/original/E5DU0KdygLHlZI_18raLAvYB6c-XafMUPg.png?1528982762', None], dtype=object) array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15004509649/original/i5WkmJnD9H1i8aheJcjq5zqsi_DIHocdIg.png?1528982917', None], dtype=object) ]
docs.maltego.com
9.8. From EDI Files to Inverting Natural Source EM Data¶ This A to Z example tackles practical aspects of preparing and inverting natural source EM data (MT and/or ZTEM) using GIFtools. Here the user begins with a set of EDI formatted MT survey files, loads them into the GIFtools framework, interprets the data and inverts the data with two different OcTree codes (E3DMT versions 1 and 2). The goal is to use synthetic high frequency MT data to resolve the D0-27 and D0-18 anomalies. We then add ZTEM data and show the capability of GIFtools to jointly invert multiple datasets. Once finished, the user will be familiar with: - The coordinate systems and Fourier convention generally used for MT data - Basic interpreting of MT data through apparent resistivities - Practical strategies for inverting natural source data with the E3DMT codes - Differences between E3DMT versions 1 and 2 - How to jointly invert MT and ZTEM data using GIFtools The full A to Z example is split into 3 parts:
https://giftoolscookbook.readthedocs.io/en/latest/content/AtoZ/NS/index.html
2019-01-16T06:47:31
CC-MAIN-2019-04
1547583656897.10
[]
giftoolscookbook.readthedocs.io
pyquil.api._qam.QAM¶ - class pyquil.api._qam. QAM[source]¶ The platonic ideal of this class is as a generic interface describing how a classical computer interacts with a live quantum computer. Eventually, it will turn into a thin layer over the QPU and QVM’s “QPI” interfaces. The reality is that neither the QPU nor the QVM currently support a full-on QPI interface, and so the undignified job of this class is to collect enough state that it can convincingly pretend to be a QPI-compliant quantum computer. Methods
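In practice you rarely construct a QAM directly; a short sketch of how one is usually reached (assuming a locally running QVM and compiler so that get_qc succeeds) is:

from pyquil import get_qc

# get_qc returns a QuantumComputer; its .qam attribute is the concrete
# QVM- or QPU-backed object implementing this interface.
qc = get_qc("9q-square-qvm")
print(type(qc.qam))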
https://pyquil.readthedocs.io/en/stable/apidocs/autogen/pyquil.api._qam.QAM.html
2019-01-16T05:40:49
CC-MAIN-2019-04
1547583656897.10
[]
pyquil.readthedocs.io
class OEDisplayBondIdx : public OEDisplayBondPropBase This class represents OEDisplayBondIdx. See also The following methods are publicly inherited from OEDisplayBondPropBase: std::string operator()(const OEChem::OEBondBase &bond) const Returns the string representation of the index of the bond (i.e. the number returned by the OEBondBase.GetIdx method) base_type *CreateCopy() const Deep copy constructor that returns a copy of the object. The memory for the returned OEDisplayBondIdx object is dynamically allocated and owned by the caller.
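A minimal C++ sketch of labelling bonds with this class, based only on the operator() signature documented above; the headers, namespaces, and molecule construction are assumptions and should be checked against the toolkit documentation:

#include <iostream>
#include "openeye.h"
#include "oechem.h"
#include "oedepict.h"

int main()
{
    // Build a small molecule so there are bonds to label (assumed OEChem API).
    OEChem::OEGraphMol mol;
    OEChem::OESmilesToMol(mol, "c1ccccc1O");

    // operator() returns the string form of each bond's index.
    OEDepict::OEDisplayBondIdx bondLabel;
    for (OEChem::OEIter<OEChem::OEBondBase> bond = mol.GetBonds(); bond; ++bond)
        std::cout << "bond label: " << bondLabel(*bond) << std::endl;

    return 0;
}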
https://docs.eyesopen.com/toolkits/java/depicttk/OEDepictClasses/OEDisplayBondIdx.html
2019-01-16T07:03:28
CC-MAIN-2019-04
1547583656897.10
[]
docs.eyesopen.com
Supported clients from previous deployments in Lync Server 2013. Supported Server and Client Combinations. Note For details about the ability of Lync Server 2013 clients to coexist and interact with clients from earlier versions of Lync Server and Office Communications Server, see Client interoperability in Lync 2013 in the Planning documentation.
https://docs.microsoft.com/en-us/lyncserver/lync-server-2013-supported-clients-from-previous-deployments
2019-01-16T06:37:17
CC-MAIN-2019-04
1547583656897.10
[]
docs.microsoft.com
App.jsto the following: import React from 'react'; import { Text, View, } from 'react-native'; export default class App extends React.Component { render() { return ( <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}> <Text style={{ fontSize: 56 }}> Hello, world! </Text> </View> ); } } OpenSans-Bold.ttfinto the assets directory in your project. The location we recommend is your-project/assets/fonts. npm install --save expoin your project directory. Add the following importin your application code: import { Font } from 'expo'; expolibrary provides an API to access native functionality of the device from your JavaScript code. Fontis the module that deals with font-related tasks. First, we must load the font from our assets directory using Expo.Font.loadAsync(). We can do this in the componentDidMount() lifecycle method of the Appcomponent. Add the following method in App: Now that we have the font files saved to disk and the Font SDK imported, let's add this code: export default class App extends React.Component { componentDidMount() { Font.loadAsync({ 'open-sans-bold': require('./assets/fonts/OpenSans-Bold.ttf'), }); } // ... } 'open-sans-bold'in Expo's font map. Now we just have to refer to this font in our Textcomponent. Note: Fonts loaded through Expo don't currently support the fontWeightor fontStyleproperties -- you will need to load those variations of the font and specify them by name, as we have done here with bold. Textcomponents using the fontFamilystyle property. The fontFamilyis the key that we used with Font.loadAsync. <Text style={{ fontFamily: 'open-sans-bold', fontSize: 56 }}> Hello, world! </Text> Expo.Font.loadAsync()is an asynchronous call and takes some time to complete. Before it completes, the Textcomponent is already rendered with the default font since it can't find the 'open-sans-bold'font (which hasn't been loaded yet). Textcomponent when the font has finished loading. We can do this by keeping a boolean value fontLoadedin the Appcomponent's state that keeps track of whether the font has been loaded. We render the Textcomponent only if fontLoadedis true. fontLoadedto false in the Appclass constructor: class App extends React.Component { state = { fontLoaded: false, }; // ... } fontLoadedto truewhen the font is done loading. Expo.Font.loadAsync()returns a Promisethat is fulfilled when the font is successfully loaded and ready to use. So we can use async/await with componentDidMount()to wait until the font is loaded, then update our state. class App extends React.Component { async componentDidMount() { await Font.loadAsync({ 'open-sans-bold': require('./assets/fonts/OpenSans-Bold.ttf'), }); this.setState({ fontLoaded: true }); } // ... } Textcomponent if fontLoadedis true. We can do this by replacing the Textelement with the following: <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}> { this.state.fontLoaded ? ( <Text style={{ fontFamily: 'open-sans-bold', fontSize: 56 }}> Hello, world! </Text> ) : null } </View> nullchild element is simply ignored by React Native, so this skips rendering the Textcomponent when fontLoadedis false. Now on refreshing the app you will see that open-sans-boldis used. Note: Typically you will want to load your apps primary fonts before the app is displayed to avoid text flashing in after the font loads. The recommended approach is to move the Font.loadAsynccall to your top-level component.
https://docs.expo.io/versions/v28.0.0/guides/using-custom-fonts
2019-01-16T05:30:10
CC-MAIN-2019-04
1547583656897.10
[]
docs.expo.io
LiveTestWidgetsFlutterBindingFramePolicy enum Available policies for how a LiveTestWidgetsFlutterBinding should paint frames. These values are set on the binding's LiveTestWidgetsFlutterBinding.framePolicy property. The default is fadePointers. Constants - benchmark → const LiveTestWidgetsFlutterBindingFramePolicy Ignore any request to schedule a frame. This is intended to be used by benchmarks (hence the name) that drive the pipeline directly. It tells the binding to entirely ignore requests for a frame to be scheduled, while still allowing frames that are pumped directly (invoking Window.onBeginFrame and Window.onDrawFrame) to run. The SchedulerBinding.hasScheduledFrame property will never be true in this mode. This can cause unexpected effects. For instance, WidgetTester.pumpAndSettle does not function in this mode, as it relies on the SchedulerBinding.hasScheduledFrame property to determine when the application has "settled". const LiveTestWidgetsFlutterBindingFramePolicy(3) - fadePointers → const LiveTestWidgetsFlutterBindingFramePolicy Show pumped frames, and additionally schedule and run frames to fade out the pointer crosshairs and other debugging information shown by the binding. This can result in additional frames being pumped beyond those that the test itself requests, which can cause differences in behavior. const LiveTestWidgetsFlutterBindingFramePolicy(1) - fullyLive → const LiveTestWidgetsFlutterBindingFramePolicy Show every frame that the framework requests, even if the frames are not explicitly pumped. This can help with orienting the developer when looking at heavily-animated situations, and will almost certainly result in additional frames being pumped beyond those that the test itself requests, which can cause differences in behavior. const LiveTestWidgetsFlutterBindingFramePolicy(2) - onlyPumps → const LiveTestWidgetsFlutterBindingFramePolicy Strictly show only frames that are explicitly pumped. This most closely matches the behavior of tests when run under flutter test. const LiveTestWidgetsFlutterBindingFramePolicy(0) - values → const List< LiveTestWidgetsFlutterBindingFramePolicy> A constant List of the values in this enum, in order of their declaration. const List< LiveTestWidgetsFlutterBindingFramePolicy> Properties Methods - toString( ) → String - Returns a string representation of this object.override - noSuchMethod( Invocation invocation) → dynamic - Invoked when a non-existent method or property is accessed. [...]inherited Operators - operator ==( dynamic other) → bool - The equality operator. [...]inherited
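As a sketch of how a policy is selected in practice — assuming the live binding is initialized via LiveTestWidgetsFlutterBinding.ensureInitialized(), which returns a WidgetsBinding and therefore needs a cast:

import 'package:flutter/widgets.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  // Switch the live binding to fullyLive before any tests run.
  final LiveTestWidgetsFlutterBinding binding =
      LiveTestWidgetsFlutterBinding.ensureInitialized() as LiveTestWidgetsFlutterBinding;
  binding.framePolicy = LiveTestWidgetsFlutterBindingFramePolicy.fullyLive;

  testWidgets('frames are pumped as requested', (WidgetTester tester) async {
    await tester.pumpWidget(const Placeholder());
    expect(find.byType(Placeholder), findsOneWidget);
  });
}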
https://docs.flutter.io/flutter/flutter_test/LiveTestWidgetsFlutterBindingFramePolicy-class.html
2019-01-16T06:01:56
CC-MAIN-2019-04
1547583656897.10
[]
docs.flutter.io
TestPointer class A class for generating coherent artificial pointer events. You can use this to manually simulate individual events, but the simplest way to generate coherent gestures is to use TestGesture. Constructors - TestPointer([int pointer = 1 ]) - Creates a TestPointer. By default, the pointer identifier used is 1, however this can be overridden by providing an argument to the constructor. [...] Properties - isDown → bool - Whether the pointer simulated by this object is currently down. [...]read-only - location → Offset - The position of the last event sent by this object. [...]read-only - pointer → int - The pointer identifier used for events generated by this object. [...]final - hashCode → int - The hash code for this object. [...]read-only, inherited - runtimeType → Type - A representation of the runtime type of the object.read-only, inherited Methods - cancel( {Duration timeStamp: Duration.zero }) → PointerCancelEvent - Create a PointerCancelEvent. [...] - down( Offset newLocation, { Duration timeStamp: Duration.zero }) → PointerDownEvent - Create a PointerDownEvent at the given location. [...] - move( Offset newLocation, { Duration timeStamp: Duration.zero }) → PointerMoveEvent - Create a PointerMoveEvent to the given location. [...] - up( {Duration timeStamp: Duration.zero }) → PointerUpEvent - Create a PointerUpEvent. [...] - noSuchMethod( Invocation invocation) → dynamic - Invoked when a non-existent method or property is accessed. [...]inherited - toString( ) → String - Returns a string representation of this object.inherited Operators - operator ==( dynamic other) → bool - The equality operator. [...]inherited
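A short sketch of the event sequence this class can generate, using only the constructor and methods listed above (normally you would drive these events through a WidgetTester or TestGesture rather than by hand):

import 'dart:ui' show Offset;
import 'package:flutter/gestures.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('TestPointer generates a coherent down/move/up sequence', () {
    final TestPointer pointer = TestPointer(2); // the default identifier would be 1

    final PointerDownEvent down = pointer.down(const Offset(10.0, 10.0));
    expect(down.position, const Offset(10.0, 10.0));
    expect(pointer.isDown, isTrue);

    pointer.move(const Offset(20.0, 25.0));
    expect(pointer.location, const Offset(20.0, 25.0));

    pointer.up();
    expect(pointer.isDown, isFalse);
  });
}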
https://docs.flutter.io/flutter/flutter_test/TestPointer-class.html
2019-01-16T06:49:45
CC-MAIN-2019-04
1547583656897.10
[]
docs.flutter.io
Applications / Security Insights application / Navigating the Security Insights applicationDownload as PDF Navigating the Security Insights application. The following articles describe the widgets that you can see in each of the tabs. For additional information and actions you can perform on each widget, see the in-app documentation of each widget selecting the corresponding information icon.
https://docs.devo.com/confluence/ndt/applications/security-insights-application/navigating-the-security-insights-application
2019-01-16T06:55:24
CC-MAIN-2019-04
1547583656897.10
[]
docs.devo.com
Configuring and sharing dashboards Once created, you can fine-tune the dashboard and share it using the options available in the dashboard menus. Preferences Select Preferences to access several configuration options regarding dashboards and widgets. ShareShare Select Share to access different sharing options for your dashboard. Refresh You can choose between different options regarding the data refresh frequency of the dashboard. This will affect all the widgets in the dashboard.
https://docs.devo.com/confluence/ndt/dashboards/configuring-and-sharing-dashboards
2019-01-16T05:45:46
CC-MAIN-2019-04
1547583656897.10
[]
docs.devo.com
If anything goes wrong, you can revert back to the "Automatic Proxy settings" in System Network Preferences using Automatic Proxy Configuration your-corporate-proxy-uri:port-number/proxy.pac
- Open System Preferences for your Mac (Apple Menu > System Preferences).
- Make sure Location is set to your proxy network, and not "Automatic".
- Click Advanced... on the bottom right side of the window.
- Under Proxy > External Proxy Settings, check Use external proxy servers, then for Web Proxy (HTTP) enter your-corporate-proxy-uri:port-number
- Check Proxy server requires a password
- In Bypass external proxies for the following hosts: enter localhost *.local
Technical note: This whole process is required because the iOS Simulator is served a bum proxy certificate instead of the actual certificate and doesn't allow it for the hosts that are required to run Expo.
Also note: Configure applications that need internet access, such as Spotify, to use as your proxy. Some apps, such as Chrome and Firefox, you can configure in the settings to use your "System Network Preferences", which will use Charles : 8888, or no proxy, depending on how you have your "Location" set in the Apple menu / network preferences. If you are set to "Automatic" no proxy is used; if it is set to "your proxy network" the proxy is used and Charles will need to be running.
- Edit ~/.npmrc and set:
http_proxy=
https_proxy=
- Edit ~/.gitconfig and set
[http]
proxy =
[https]
proxy =
- Edit ~/.bashrc, ~/.bash_profile, or ~/.zshrc or wherever you set your shell variables, and set:
export HTTP_PROXY=""
export http_proxy=""
export ALL_PROXY=""
export all_proxy=""
export HTTPS_PROXY=""
export https_proxy=""
Note: if you switch your network location back to "Automatic" in order to use npm or git, you will need to comment these lines out using a # before the line you wish to disable. You could alternatively use a command-line proxy manager if you prefer.
https://docs.expo.io/versions/v28.0.0/introduction/troubleshooting-proxies
2019-01-16T06:09:49
CC-MAIN-2019-04
1547583656897.10
[]
docs.expo.io
Genesys Voice Platform Also known as GVP. An open-source self-service platform that delivers VoiceXML applications across a variety of networks, by using local media processing in conjunction with industry-leading speech resources. Through GVP, callers are provided with highly personalized self-service offerings. GVP provides greater functionality than traditional Interactive Voice Responses (IVRs) through its extension of existing web personalization and the industry-standard programming language, VoiceXML. GVP also blends self-service with agent-assisted service. Glossary Genesys Administrator The web-based User Interaction Layer component that provides an interface to the Configuration Layer, Management Layer, and other Genesys solutions. Universal Routing Server Also known as URS. The server that is used by Universal Routing that automatically executes routing-strategy instructions and distributes incoming customer interactions among contact-center agents. Previously known as Interaction Router. Glossary. Glossary SIP Server SIP Server has the same position in the Genesys Media Layer as all Genesys T-Servers. It is a combination of a T-Server and a call-switching component, in which the call switching element functions as a SIP Back-to-Back User Agent (B2BUA). Because SIP Server supports the Internet Engineering Task Force (IETF) SIP RFC 3261 suite, it is compatible with the most popular SIP-compatible, off-the-shelf hardware or software. SIP Server can operate with or without a third-party softswitch. Genesys SIP Server gives the entire Genesys line of products access to SIP networks, offering a standards-based, platform-independent means of taking full advantage of the benefits of voice/data convergence. Glossary T-Server The Genesys software component that provides an interface between your telephony hardware and the rest of the Genesys software components in your enterprise. It translates and keeps track of events and requests that come from, and are sent to, the Computer-Telephony Integration (CTI) link in the telephony device. T-Server is a TCP/IP-based server that can also act as a messaging interface between T-Server clients. It is the critical point in allowing your Genesys solution to facilitate and track the contacts that flow through your enterprise. Solution Control Interface Also known as SCI. A Genesys Framework component that is used to administer Genesys solutions—for example, to start or stop the solution, view logs, configure event-triggered alarms, and provide real-time status information for all Genesys applications. Glossary Solution Control Server Also known as SCS. The Genesys Framework component that serves as the control point for which SCI is the interface. Together, SCS and SCI provide the services that are described in the definition of SCI.. Configuration Manager The Genesys Framework component that provides a user-friendly interface for manipulating the contact-center configuration data that Genesys solutions use and for setting user permissions for solution functions and data. DB Server The Genesys Framework component that provides a single database interface for Genesys servers to use while they connect to a variety of proprietary database engines, such as Oracle, Microsoft SQL Server, DB2, Informix, and Sybase. 
Glossary Contents - 1 General Deployment - 1.1 Prerequisites - 1.2 Deployment Tasks - 1.3 Creating the ORS Application Object - 1.4 Configuring an ORS Cluster - 1.5 Manually Loading an SCXML Application on a DN - 1.6 Manually Loading an SCXML Application on an Enhanced Routing Script - 1.7 Configuring the ApplicationParms Section of an Enhanced Routing Script Object General Deployment This topic contains general information for the deployment of your Orchestration Server (ORS). In addition, you may have to complete additional configuration and installation steps specific to your Orchestration Server and devices. Note: You must read the Framework 8.1 Deployment Guide before proceeding with this Orchestration Server guide. That document contains information about the Genesys software you must deploy before deploying Orchestration Server. Prerequisites Orchestration Server has a number of prerequisites for deployment. Read through this section before deploying your Orchestration Server. Framework Components You can only configure ORS after you have deployed the Configuration Layer of Management Framework as described in the Management Layer User's Guide. This layer contains DB Server, Configuration Server, Configuration Manager, and, at your option, Deployment Wizards. If you intend to monitor or control ORS through the Management Layer, you must also install and configure components of this Framework layer, such as Local Control Agent (LCA), Message Server, Solution Control Server (SCS), and Solution Control Interface (SCI), before deploying ORS. Refer to the Framework 8.1 Deployment Guide for information about, and deployment instructions for, these Framework components. When deploying ORS 8.1.3 or later, Local Control Agent and Solution Control Server version 8.1.2 or later are required. Orchestration Server and Local Control Agent To monitor the status of Orchestration Server through the Management Layer, you must load an instance of Local Control Agent (LCA) on every host running Orchestration Server components. Without LCA, Management Layer cannot monitor the status of any of these components. Persistent Storage Determine whether you will use persistent storage (Apache Cassandra). If you chose to do so, then installing Cassandra should be done as the first before you deploy ORS. See the Cassandra Installation/Configuration Guide. Supported Platforms For the list of operating systems and database systems supported in Genesys releases 8.x. refer to the Genesys System-Level Guides, such as Supported Operating Environment Reference Guide and Interoperability Guide on the Genesys documentation website at docs.genesys.com/System. Task Summary: Prerequisites for ORS Deployment About Configuration Options Configuring Orchestration Server is not a one-time operation. It is something you do at the time of installation and then in an ongoing way to ensure the continued optimal performance of your software. You must enter values for Orchestration Server configuration options on the Options tab of your Orchestration Server Application object in Configuration Manager. The instructions for configuring and installing Orchestration Server that you see here are only the most rudimentary parts of the process. You must refer extensively to the configuration options section of this wiki. Familiarize yourself with the options. You will want to adjust them to accommodate your production environment and the business rules that you want implemented. 
Deployment Tasks You can configure ORS entirely in Configuration Manager or in Genesys Administrator. This chapter describes ORS configuration using Configuration Manager. The table below presents a high-level summary of ORS deployment tasks. Note: The above ORS Deployment Tasks assumes you have installed/configured any other Genesys components which interact with Orchestration Server, for example, T-Server/SIP Server, Stat Server, Universal Routing Server, Interaction Server (if needed), Composer (if needed), Genesys Administrator (if needed), Genesys Voice Platform (if needed). Creating the ORS Application Object - In Configuration Manager, select Environment > Applications. - Right-click either the Applications folder or the subfolder in which you want to create your Application object. - From the shortcut menu that opens, select New > Application. - In the Open dialog box, locate the template that you just imported, and double-click it to open the ORS Application object. For Configuration Server versions before 8.0.3, the Type field will display Genesys Generic Server. - Select the General tab and change the Application name (if desired). - Make sure that the State Enabled check box is selected. - In a multi-tenant environment, select the Tenants tab and set up the list of Tenants that use ORS. Note: Order matters. The first Tenant added will become the default Tenant for Orchestration Server. Please ensure that the list of Tenants is created in the same order for both Orchestration Server and Universal Routing Server. - Click the Server Info tab and select the following: Host, select the name of the Host on which ORS resides; Ports, select the Listening Port. Note that a default port is created for you automatically in the Ports section after you select a Host. Select the port and click Edit Port to open the Port Info dialog box. - Enter an unused port number for the Communication Port. For information on this dialog box, see the Port Info Tab topic in the Framework 8.1 Configuration Manager Help. - For Web Service access to this ORS Application, configure the HTTP port: - In the New Port Info dialog box, in Port ID, enter http. - For Communication Port, enter an unused port number. - For Connection Protocol, select http from the drop-down menu list. - Click OK. - Select the Start Info tab and specify the following: - Working Directory, enter the Application location (example: C:/GCTI/or_server) - Command Lineenter the name of executable file (example: orchestration.exe). - Command Line Arguments, enter the list of arguments to start the Application (example: -host <name of Configuration Server host> -port <name of Configuration Server port>-app <name of ORS Application> - Startup time, enter the time interval the server should wait until restarting if the server fails. - Shutdown time, enter the time interval the server takes to shut down. - Auto-Restart setting, selecting this option causes the server to restart automatically if the server fails. - Primary setting, selecting this option specifies the server as the primary routing server (unavailable). - Select the Connections tab and specify all the servers to which ORS must connect: - T-Server - Interaction Server - Universal Routing Server Configuring an ORS Cluster ORS provides a new configuration Transaction of the type List, called ORS in the Environment tenant , to determine the ORS cluster configuration. Each section in the List represents a single Orchestration cluster. 
Each of the key/value pairs in that section links a specific Orchestration application to a Data Center (legacy method). Starting with release 8.1.400.64, you can configure an ORS cluster using a dedicated Transaction object of type List.In addition, options for all ORS applications in a cluster may be configured within that object. This eliminates the need to individually configure the options in every ORS application. For more information, see Clustering, Enhanced Cluster Configuration section. Notes: - All ORS nodes with the Data Center set to an empty string will belong to one "nameless” Data Center. - ORS 8.1.3 and later requires creating an ORS Transaction List even if the deployment has only one ORS node. Adding an ORS Application to Cluster and Data Center Configure each section to represent a single Orchestration cluster, and each of the key/value pairs to link a specific Orchestration Application to a Data Center. - In Configuration Manager, select the Tenant Environment and navigate to the Transactions folder. - Right-click inside the Transactions window and select New > Transaction from the shortcut menu. - On the General tab, enter the following information: - Name: ORS - Alias: ORS - Type: List (pulldown menu) - Recording Period: 0 - State Enabled should be checked. - Click the Annex tab to enter the cluster information.Right-click inside the Section window and select New from the shortcut menu. Enter the name of your cluster. - Double-click on the cluster name. - Right-click inside the Section window and select New from the shortcut menu. - In the Option Name field, enter the name of an Orchestration application configured as Primary. - In the Option Value field, enter the name of the Data Center associated with the Orchestration Node. - Click OK to save. - Repeat Steps 7 - 10 for all Orchestration Nodes that belong to this cluster. - Click on Up One Level. - Repeat Steps 5 - 12 for all clusters. - Click OK to save and exit. An example is shown below. In the above example, Cluster1 consists of six nodes presented by Primary instances of Orchestration Servers: - node001 and node002, which are linked to Data Center London. - node003 and node004, which are linked to Data Center Paris. - node005 and node006, which are linked to a "nameless” Data Center. When a Data Center value is left empty, the nodes default to a "nameless" Data Center. Notes: - In ORS 8.1.3 and later, work allocation happens automatically, based on the configuration of the cluster described above. - ORS 8.1.3 and later requires creating an ORS Transaction List even the deployment has only one ORS node. Manually Loading an SCXML Application on a DN This section describes manually loading an SCXML application on a DN. The following types of DNs can be configured: Extension, ACD Position, Routing Point. See DN-Level Options. - In Configuration Manager, select the appropriate Tenant folder, Switch name, and DN folder. - Open the appropriate DN object. - Select the Annex tab. - Select or add the Orchestration section. - Right-click inside the Options window and select New from the shortcut menu. - In the resulting Edit Option dialog box, in the Option Name field, type application. - In the Option Value field, type the URL of the SCXML document to load. - Refer to the application option description for a full description of this configuration option and its valid values. - Click OK to save Manually Loading an SCXML Application on an Enhanced Routing Script See Enhanced Routing Script Options. 
- In Configuration Manager, select the appropriate Tenant and navigate to the Scripts folder. - Open the appropriate Script object of type Enhanced Routing Script (CfgEnhancedRouting). - Select the Annex tab. - Select or add the Application section. - Right-click inside the options window and select New from the shortcut menu. - In the Option Value field, create the url option. - Refer to the url option description in the Application section for a full description of this configuration option and its valid values. - Click OK to save In addition, an option can be used to specify a string that represents a parameter value that is to be passed to the Application. The ApplicationParms section contains the values for data elements that can be referred to within the SCXML application. The Enhanced Routing Script object is named as such to identify SCXML applications and Routing applications. Existing IRD-based IRL applications are provisioned as Script objects. Configuring the ApplicationParms Section of an Enhanced Routing Script Object - In Configuration Manager, select the appropriate Tenant and navigate to the Scripts folder. - Open the appropriate Script object of type Enhanced Routing Script (CfgEnhancedRouting). - Select the Annex tab. - Select or add the ApplicationParms section. - Right-click inside the options window and select New from the shortcut menu. - In the resulting Edit Option dialog box, in the Option Name field, type a name for the parameter option. - In the Option Value field, type the value for the option. Note: Refer to the option description for {Parameter Name} for a full description of this configuration option its valid values. The table Parameter Elements for ApplicationParms provides useful information about parameters that can be added. The figure below shows an example of the use of the ApplicationParms section. - Click OK to save. - Repeat from Step 5 to add another option in this section. Feedback Comment on this article:
https://docs.genesys.com/Documentation/OS/latest/Deployment/General
2019-01-16T06:02:04
CC-MAIN-2019-04
1547583656897.10
[]
docs.genesys.com
Introduction The KNIME Database Extension provides a set of KNIME nodes that allow connecting to almost all JDBC-compliant databases. These nodes reside in the Database category in the Node Repository, where you can find a number of database access, manipulation and writing nodes. The database nodes are part of every KNIME Analytics Platform installation. It is not necessary to install any additional KNIME Extensions. Connecting to a database The Database → Connector subcategory in the Node Repository contains a set of database-specific connector nodes for commonly used databases such as MySQL, as well as the generic Database Connector node. Vendor-specific JDBC drivers For some databases KNIME Analytics Platform does not contain a ready-to-use JDBC driver. In these cases, it is necessary to first register a vendor-specific JDBC 4.1 driver in KNIME Analytics Platform. Please consult your database vendor to obtain the JDBC driver. Working with databases Additionally, there are nodes to read, write or delete data from a database, or to run custom SQL statements, such as the Database SQL Executor node.
https://docs.knime.com/latest/database_extension_guide/index.html
2019-01-16T05:49:18
CC-MAIN-2019-04
1547583656897.10
[]
docs.knime.com
7. Maintenance¶ Hardware requirements, the deployment process in detail, security considerations, configuration files — all of these topics are explained in this separate section, which is helpful for DevOps engineers or those digging deeper into the system's capabilities. Table of contents - 7.1. Permissions - 7.2. List of Permissions - 7.2.1. Command-related permissions - 7.2.2. Query-related permissions - 7.2.3. Supplementary Sources - 7.3. Ansible
https://iroha.readthedocs.io/en/latest/maintenance/index.html
2019-01-16T06:03:54
CC-MAIN-2019-04
1547583656897.10
[]
iroha.readthedocs.io
Email delivery methods Create email type delivery methods for each member of your organization, or email group, that should be able to receive alert notifications by email.
https://docs.devo.com/confluence/ndt/alerts-and-notifications/configuring-alerts/create-a-delivery-method/email-delivery-methods
2019-01-16T05:41:20
CC-MAIN-2019-04
1547583656897.10
[]
docs.devo.com
Package rdsutils BuildAuthToken ¶ func BuildAuthToken(endpoint, region, dbUser string, creds *credentials.Credentials) (string, error) BuildAuthToken will return an authentication token for connecting to the database, based on the RDS database endpoint, AWS region, IAM user or role, and AWS credentials. Endpoint consists of the hostname and port, i.e. hostname:port, of the RDS database. Region is the AWS region the RDS database is in and that the authentication token will be generated for. DbUser is the IAM user or role the request will be authenticated for. The creds argument is the AWS credentials the authentication token is signed with. An error is returned if the authentication token is unable to be signed with the credentials, or the endpoint is not a valid URL. The following example shows how to use BuildAuthToken to create an authentication token for connecting to a MySQL database in RDS (the variable names are illustrative): authToken, err := rdsutils.BuildAuthToken(dbEndpoint, awsRegion, dbUser, awsCreds) See the AWS documentation for more information on using IAM database authentication with RDS.
http://docs.activestate.com/activego/1.8/pkg/github.com/aws/aws-sdk-go/service/rds/rdsutils/
2019-01-16T06:49:24
CC-MAIN-2019-04
1547583656897.10
[]
docs.activestate.com
This page provides a tutorial for setting up progressive path tracing for rendering. Overview In this tutorial we will discuss an alternative method for computing the final image with V-Ray called progressive path tracing. The method described in this tutorial is somewhat deprecated; newer versions of V-Ray include the Progressive image sampler which can be used for a similar purpose.. Tutorial Assets To download the scene used in this tutorial, click on the button below. Initial Rendering Step 1: Initial setup. Setting up progressive path tracing is fairly easy: 1.1. Open the starting scene. 1.2. Set V-Ray as the current rendering engine. 1.3. Check the Override mtl optio n 3: Adjusting the noise level (You are correct in noticing there was no Step 2.). Sample size = 0.04 Sample size = 0.02 Sample size = 0.0 (unbiased solution) Rendering with Materials Step 1: Rendering with materials 1.1. Turn off the Override mtl option from the Global switches (i. Increasing the Image Size): Notes - The image sampler type (Fixed, Adaptive DMC, Adaptive subdivision) is ignored in this mode, since the path tracing algorithm does pixel supersampling automatically. After the image is complete, V-Ray will print the minimum and maximum paths that were traced for the pixels in the image. - The antialiasing filter however, is taken into consideration. Note that sharpening filters (Mitchell-Netravali, Catmull-Rom) may introduce noise and will require more samples to produce a smooth image. Larger filters like Blend may also take more time to converge. Turning the antialiasing filter off produces the least noise. - Subdivs parameters in materials, textures, lights, camera settings, etc. are ignored in this mode. Noise and quality is controlled entirely through the light cache Subdivs parameter. - The only parameters of the Global DMC Settings that are taken into consideration are Adaptive amount and Time-independent. Never set the Adaptive amount parameter to 0.0 when using path tracing, since this will bring the rendering to a halt. - At present, only the RGBA channel is generated by the path tracing algorithm. Any additional G-Buffer channels are ignored. - The light cache has no limitation on the number of diffuse light bounces in the scene. The number of specular bounces (through reflections/refractions) is controlled either per material, or globally from the Global Switches rollout. - At present, the path tracing mode does not work properly when rendering to fields. - At present, the path tracing mode does not work with matte objects/materials.
https://docs.chaosgroup.com/display/VRAY4MAX/Progressive+Path+Tracing+With+V-Ray
2019-01-16T06:27:28
CC-MAIN-2019-04
1547583656897.10
[]
docs.chaosgroup.com
RenderRepaintBoundary class Creates a separate display list for its child. This render object creates a separate display list for its child, which can improve performance if the subtree repaints at different times than the surrounding parts of the tree. Specifically, when the child does not repaint but its parent does, we can re-use the display list we recorded previously. Similarly, when the child repaints but the surround tree does not, we can re-record its display list without re-recording the display list for the surround tree. In some cases, it is necessary to place two (or more) repaint boundaries to get a useful effect. Consider, for example, an e-mail application that shows an unread count and a list of e-mails. Whenever a new e-mail comes in, the list would update, but so would the unread count. If only one of these two parts of the application was behind a repaint boundary, the entire application would repaint each time. On the other hand, if both were behind a repaint boundary, a new e-mail would only change those two parts of the application and the rest of the application would not repaint. To tell if a particular RenderRepaintBoundary is useful, run your application in checked mode, interacting with it in typical ways, and then call debugDumpRenderTree. Each RenderRepaintBoundary will include the ratio of cases where the repaint boundary was useful vs the cases where it was not. These counts can also be inspected programmatically using debugAsymmetricPaintCount and debugSymmetricPaintCount respectively. - Inheritance - Object - AbstractNode - RenderObject - RenderBox - RenderProxyBox - RenderRepaintBoundary Constructors - RenderRepaintBoundary({RenderBox child }) - Creates a repaint boundary around child. Properties - debugAsymmetricPaintCount → int - The number of times that either this render object repainted without the parent being painted, or the parent repainted without this object being painted. When a repaint boundary is used at a seam in the render tree where the parent tends to repaint at entirely different times than the child, it can improve performance by reducing the number of paint operations that have to be recorded each frame. [...]read-only - debugSymmetricPaintCount → int - The number of times that this render object repainted at the same time as its parent. Repaint boundaries are only useful when the parent and child paint at different times. When both paint at the same time, the repaint boundary is redundant, and may be actually making performance worse. [...]read-only - isRepaintBoundary → bool - Whether this render object repaints separately from its parent. [... Methods - debugFillProperties( DiagnosticPropertiesBuilder properties) → void - Add additional properties associated with the node. [...]override - debugRegisterRepaintBoundaryPaint( {bool includedParent: true, bool includedChild: false }) → void - Called, in checked mode, if isRepaintBoundary is true, when either the this render object or its parent attempt to paint. [...]override - debugResetMetrics( ) → void - Resets the debugSymmetricPaintCount and debugAsymmetricPaintCount counts to zero. [...] - toImage( {double pixelRatio: 1.0 }) → Future< Image> - Capture an image of the current state of this render object and its children. [...] -( HitTestResult result, { Offset position }) → bool - Determines the set of render objects located at the given position. [...]inherited - hitTestChildren( H
https://docs.flutter.io/flutter/rendering/RenderRepaintBoundary-class.html
2019-01-16T05:37:38
CC-MAIN-2019-04
1547583656897.10
[]
docs.flutter.io
In Windows, when you activate Maltego, it will create a license file for your client located here: C:\ProgramData\Paterva\Maltego\MaltegoLicense.Lic. If Maltego does not have permission to write to this location, your license file will not be saved. If this is the case, you will get an error message saying that your license file could not be saved. You can resolve this issue by running Maltego as Administrator. In the case of multiple users sharing Maltego, use this method: save your license file in the user directory.
https://docs.maltego.com/support/solutions/articles/15000011967-what-can-cause-the-could-not-save-license-file-exception-
2019-01-16T05:41:52
CC-MAIN-2019-04
1547583656897.10
[array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15008125694/original/iWMOCq25mJJ9Q5F5qJHyn3S6tEYmwg__PA.png?1540554576', None], dtype=object) ]
docs.maltego.com
Break Applies To: Windows Server 2008, Windows Server 2012, Windows 8 Disassociates a shadow copy volume from VSS and makes it accessible as a regular volume. The volume can then be accessed using a drive letter (if assigned) or volume name. If used without parameters, break displays help at the command prompt. Note This command is relevant only for hardware shadow copies after import. For examples of how to use this command, see Examples. Syntax break [writable] <SetID> Parameters Remarks Exposed volumes, like the shadow copies they originate from, are read-only by default. The alias of the shadow copy ID, which is stored as an environment variable by the load metadata command, can be used in the SetID parameter. Examples To make a shadow copy with the alias name Alias1 accessible as a writable volume in the operating system, type: break writable %Alias1% Note Access to the volume is made directly to the hardware provider without record of the volume having been a shadow copy.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc733096(v=ws.11)
2019-01-16T06:11:25
CC-MAIN-2019-04
1547583656897.10
[]
docs.microsoft.com
Contributing to Phalcon Phal.
https://docs.phalconphp.com/id/3.4/contributions
2019-01-16T06:06:09
CC-MAIN-2019-04
1547583656897.10
[]
docs.phalconphp.com
Translators often translate documents using so-called CAT (computer aided translation) tools. Some of the more common CAT tools include Wordfast, OmegaT, Trados, Déjà Vu, Metatexis, Isometry, and others. These tools are typically used to translate ordinary documents, but some of them can also be used to translate l10n formats like PO, CSV, XLIFF and other bilingual formats. There are two types of CAT tools, namely those that work directly on the source text, and those that work indirectly on the source text by extracting the text from it. OmegaT is written in Java and is GPL. It can translate OpenDocument files, well-formed XHTML, Java .properties files, key=value files and plaintext files, as well as a number of other formats. OmegaT extracts the text from the source documents, and the translator translates the text within the OmegaT environment. Text formatting is dealt with through special OmegaT tags. Déjà Vu is proprietary software (EUR 990 per licence). It can translate many other proprietary formats and open formats, including Gettext PO. It works, similar to OmegaT, by extracting text from source files so that the translator works within the program's own environment. The disadvantage of programs like Déjà Vu and OmegaT is that they can only handle existing, well-known text formats. The advantage is that they usually handle these formats very well or comprehensively. Wordfast is a Visual Basic macro that runs inside Microsoft Word, and it is proprietary (EUR 250 per licence). It translates any file that can be opened in Microsoft Word, Excel and PowerPoint. It does not extract text, but instead it selectively allows the user to translate the portions of the file that need to be translated. The way Trados handles files is the same as the way Wordfast does it. In fact, Trados and Wordfast can read each other's files – that is, the bilingual RTF files. This is the file type that is most useful for new or rare file formats. Trados is proprietary. A newer method used by Trados is called TagEditor (it produces TTX files). TTX files are bilingual XML files that are opened and translated in TagEditor, which is basically a user-friendly XML-viewer (but it can only view TTX XML files). In this sense TTX is similar to XLIFF or PO, because it is a bilingual file which is generated from the original file. CAT tools generally have additional features prized by translators, such as the ability to capture and use a translation memory, automatic on-the-fly fuzzy matching, automatic glossary recognition, various keyboard shortcuts to improve speed, and powerful reference search facilities.
http://docs.translatehouse.org/projects/localization-guide/en/latest/guide/common_cat_tools.html
2017-08-16T21:49:58
CC-MAIN-2017-34
1502886102663.36
[]
docs.translatehouse.org
NOTE: Use of certain options in the options section of a combined query requires the rest-admin or equivalent privileges. For details, see Using Dynamically Defined Query Options in the REST Application Developer's Guide.
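A minimal sketch of exercising this endpoint from Python is shown below. It assumes a MarkLogic REST instance on localhost:8000 with digest authentication, that the partial text is passed in the partial-q request parameter, and that dynamically defined options are sent in a combined query body (which, per the note above, requires rest-admin or equivalent privileges). All of these names and values are illustrative assumptions, so check the REST API reference for your version.
import requests
from requests.auth import HTTPDigestAuth

# Hypothetical host, port, credentials, and option names -- adjust to your deployment.
url = "http://localhost:8000/v1/suggest"
combined_query = {
    "search": {
        "options": {
            "default-suggestion-source": {"range": {"json-property": "title"}}
        }
    }
}

response = requests.post(
    url,
    params={"partial-q": "doc", "format": "json"},
    json=combined_query,
    auth=HTTPDigestAuth("rest-admin-user", "password"),
)
response.raise_for_status()
print(response.json())  # expected shape: {"suggestions": [...]}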
http://docs.marklogic.com/REST/POST/v1/suggest
2017-08-16T21:36:14
CC-MAIN-2017-34
1502886102663.36
[array(['/images/i_speechbubble.png', None], dtype=object)]
docs.marklogic.com
We are getting close to the release of Joomla 1.6. Too young for GSoC? Then watch this space for details of GHOP 2010. We're beginning to ramp up preparations for this year's contest. Further details will be placed on the GHOP page when we have them. You don't need to join the Documentation Working Group to help us improve the documentation. Just register on this wiki and get started. Feel free to fix any errors you find; take a look in the Cookie jar; or consider helping out in one of our mini-projects... No documentation events are currently scheduled. These past events involved the Joomla! documentation effort in some form or another.
http://docs.joomla.org/index.php?title=Main_Page&diff=25793&oldid=22307
2014-11-21T03:11:31
CC-MAIN-2014-49
1416400372542.20
[]
docs.joomla.org
The following methods control the mouse cursor: These functions can be used to display the system wait cursor during some prolonged computation and then replace it with the normal cursor. You might want to do this instead of putting up a 3ds Max progress bar, which may be too heavy-weight in some situations. Note that in some cases, 3ds Max may itself restore the arrow cursor underneath this one (for example, with loadMAXFile() ), and you may need to redisplay it after such calls. Sets the current system cursor to one of the standard 3ds Max cursors. Valid <name> values are: The following 3ds Max system global variables are associated with the mouse: A read only variable to get the mouse mode as an <integer> value. A read only variable to get the state of the mouse buttons as a 3 element <bitArray>. The order of the bits is: #{Left, Middle, Right} A read only variable to get the mouse position in the currently active viewport as a <point2> value. The coordinates are in pixel values. If the currently active viewport is a 2D view (Track View, Schematic View, Listener, etc.), the coordinates are in the first non-2D viewport. A read only variable to get the mouse position on the screen as a <point2> value. The coordinates are in pixel values relative to the top-left corner of the screen. Contains true if any mouse proc is currently in the process of aborting; otherwise contains false. Read-only. Available in 3ds Max 2008 and higher. Previously available in Avguard Extensions.
http://docs.autodesk.com/3DSMAX/15/ENU/MAXScript-Help/files/GUID-EE2C3ADF-9BAB-4424-AAAB-A5ABF027DE73.htm
2014-11-21T02:15:47
CC-MAIN-2014-49
1416400372542.20
[]
docs.autodesk.com
Working with files stored in the cloud Tutorial: Setting up a Box Account on a BlackBerry 10 Device Tutorial: Setting up a Dropbox Account on a BlackBerry 10 Device Retrieve files saved to the cloud If you are logged in to a cloud application on your BlackBerry device, you can use File Manager to access files that you have stored in the cloud. To retrieve a file you have saved to the cloud: - Tap . - Tap a cloud application. Save a file to your device: - To pin a file, tap . - To unpin a file, tap . Synchronize a file with the cloud If you have edited a pinned file while offline, you can use the Sync Now option when you reconnect to a wireless network to ensure that the file is synchronized with the cloud before other files are synchronized. To force the synchronization of a file: - Touch and hold a file. - Tap . Change cloud application settings You can change your settings for items like the back up options and network usage for your cloud applications. When you enable back up options, changes to files in the selected applications are automatically updated and stored in the cloud. To change your settings: - Tap . - Tap . Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/47561/cfl1337711689556.jsp
2014-11-21T02:54:44
CC-MAIN-2014-49
1416400372542.20
[array(['cfl1344367162587_lowres_en-us.png', 'Sync Now'], dtype=object) array(['mba1334327736541_lowres_en-us.png', 'Settings'], dtype=object)]
docs.blackberry.com
. POM location can be either a full path to the POM or Ivy file, or a path to the directory containing pom.xml or ivy.xml. If POM location is not given then pom.xml or ivy.xml from current working directory is used. When both pom.xml and ivy.xml are present, pom.xml is processed. You can specify more file locations.. AUTHOR Written by Mikolaj Izdebski. REPORTING BUGS Bugs should be reported through Java Packages Tools issue tracker at Github:. SEE ALSO pom_add_dep(7), pom_add_parent(7), pom_add_plugin(7), pom_disable_module(7), pom_remove)
https://docs.fedoraproject.org/pt_PT/java-packaging-howto/manpage_pom_remove_dep/
2021-09-16T19:55:56
CC-MAIN-2021-39
1631780053717.37
[]
docs.fedoraproject.org
Experiment¶ The experiment module offers a schema and utilities for succinctly expressing commonly used applications and algorithms in near-term quantum programming. An Experiment object is intended to be consumed by the QuantumComputer.experiment method. NOTE: When working with the experiment method, the following declared memory labels are reserved: - “preparation_alpha”, “preparation_beta”, and “preparation_gamma” - “measurement_alpha”, “measurement_beta”, and “measurement_gamma” - “symmetrization” - “ro” Schema¶ - class pyquil.experiment. Experiment(settings, program, qubits=None, *, symmetrization=<SymmetrizationLevel.EXHAUSTIVE: -1>, calibration=<CalibrationMethod.PLUS_EIGENSTATE: 1>). Methods - class pyquil.experiment. ExperimentSetting(in_state, out_operator, additional_expectations. - class pyquil.experiment. ExperimentResult(setting, expectation, total_counts, stddev=None, std_err=None, raw_expectation=None, raw_stddev=None, raw_std_err=None, calibration_expectation=None, calibration_stddev=None, calibration_std_err=None, calibration_counts=None, additional_results. Utilities¶ pyquil.experiment. bitstrings_to_expectations(bitstrings, joint_expectations=None)[source]¶ Given an array of bitstrings (each of which is represented as an array of bits), map them to expectation values and return the desired joint expectation values. If no joint expectations are desired, then just the 1 -> -1, 0 -> 1 mapping is performed. pyquil.experiment. correct_experiment_result(result, calibration)[source]¶ Given a raw, unmitigated result and its associated readout calibration, produce the result absent readout error. pyquil.experiment. merge_memory_map_lists(mml1, mml2)}] pyquil.experiment. ratio_variance(a, var_a, b, var_b) - - pyquil.experiment. read_json(fn)[source]¶ Convenience method to read pyquil.experiment objects from a JSON file. pyquil.experiment. to_json(fn, obj)[source]¶ Convenience method to save pyquil.experiment objects as a JSON file. See read_json().
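As a small, self-contained illustration of the bit-to-expectation mapping described for bitstrings_to_expectations above, the following sketch (assuming NumPy and a pyquil 2.x install) converts a few measured bitstrings using the documented 0 -> +1, 1 -> -1 convention; the expected output is shown as a comment.
import numpy as np
from pyquil.experiment import bitstrings_to_expectations

# Three shots over two qubits; each row is one measured bitstring.
bitstrings = np.array([[0, 0],
                       [0, 1],
                       [1, 1]])

# With no joint_expectations requested, each bit is mapped 0 -> +1, 1 -> -1.
expectations = bitstrings_to_expectations(bitstrings)
print(expectations)
# expected:
# [[ 1  1]
#  [ 1 -1]
#  [-1 -1]]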
https://pyquil-docs.rigetti.com/en/v2.19.0/apidocs/experiment.html
2021-09-16T19:47:41
CC-MAIN-2021-39
1631780053717.37
[]
pyquil-docs.rigetti.com
Licensing¶ wasp-os is licensed to you under the GNU Lesser General Public License, as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. wasp-os <>. Notwithstanding the above some essential components of wasp-os, such as the MicroPython distribution, are licensed under under different open source licenses. The licensing for these components is clearly indicated and reinforced by the directory and sub-module structure. Additionally binary releases of wasp-os include a binary copy of the Nordic Softdevice which is licensed under the 5-clause Nordic license. - GNU Lesser General Public License - GNU General Public License - Preamble - 0. Definitions - 1. Source Code - 2. Basic Permissions - 3. Protecting Users’ Legal Rights From Anti-Circumvention Law - 4. Conveying Verbatim Copies - 5. Conveying Modified Source Versions - 6. Conveying Non-Source Forms - 7. Additional Terms - 8. Termination - 9. Acceptance Not Required for Having Copies - 10. Automatic Licensing of Downstream Recipients - 11. Patents - 12. No Surrender of Others’ Freedom - 13. Use with the GNU Affero General Public License - 14. Revised Versions of this License - 15. Disclaimer of Warranty - 16. Limitation of Liability - 17. Interpretation of Sections 15 and 16 - How to Apply These Terms to Your New Programs - The MIT License (MIT) - 5-Clause Nordic License
https://wasp-os.readthedocs.io/en/latest/license.html
2021-09-16T18:28:16
CC-MAIN-2021-39
1631780053717.37
[]
wasp-os.readthedocs.io
To allow users to use the Agora RTC SDK in environments with restricted network access, Agora provides a cloud proxy. Users only need to add specific IP addresses and ports to the firewall whitelist, and call the API to configure the Agora cloud proxy service. Download the Agora RTC SDK. Integrate the SDK and prepare the development environment. For details, see QuickStart Guide. Contact Agora technical support and provide the following information: Refer to the corresponding table according to the SDK version you are using, add all the IP addresses and ports in the table to your firewall whitelist. When using the UDP cloud proxy, whitelist the following addresses and ports: For SDKs v3.3.0 or later, you can call setCloudProxy to enable cloud proxy. After adding the whitelist, call setCloudProxy, and set proxyType as UDP_PROXY(1). Test the audio and video call functionality. To disable the cloud proxy service, call setCloudProxy and set proxyType as NONE_PROXY(0). For SDKs v3.2.1 or earlier, you can call setParameters to enable cloud proxy. After adding the whitelist, refer to the following sample code to enable cloud proxy: // Enables the cloud proxy server and configures the service by default. setParameters("{\"rtc.enable_proxy\":true}"); Test the audio and video call functionality. To disable the cloud proxy service, call setParameters("{\"rtc.enable_proxy\":false}");. setCloudProxybefore using the cloud proxy function. For example, if you need to enable cloud proxy to join a channel, call setCloudProxybefore joinChannel. If you need to enable cloud proxy to conduct a lastmile test, call setCloudProxybefore startLastmileProbeTest. The settings of setCloudProxytakes effect within the lifecycle of RtcEngine.
https://docs.agora.io/en/Voice/cloudproxy_native?platform=iOS
2021-09-16T18:58:42
CC-MAIN-2021-39
1631780053717.37
[]
docs.agora.io
Handset Designer enables you to easily draw formwork for sloped floors. Setting the Wall Elevations The first step in creating a sloped floor is to set the wall elevations. You will notice that each wall has four elevation values. These values correspond to the top and bottom of the wall at the beginning and end of the wall, when viewed from the side. To set the elevation values: 1. Click the Wall shape, to select it. 2. Click the Shape Data button in the Handset Designer toolbar . The Shape Data popup appears. 3. As necessary: a. Click in the Top elevation at start of wall field and enter a new value. b. Click in the Bottom elevation at start of wall field and enter a new value. c. Click in the Top Elevation at end of wall field and enter a new value. d. Click in the Bottom elevation at end of wall field and enter a new value. Note: For measurements use ft. for feet, in. for inches or m. for meters. (Be sure to designate ft., in, or m. or you will receive an error when drawing the formwork.) 4. Click the Handset Designer tab and then click Draw Formwork. Handset Designer adds formwork to the wall. 5. Click the wall again to select it. 6. As needed, click Right Elevation or Left Elevation to create the desired elevation view. 7. Right-click the elevation shape and select Page-# from the menu that appears. Handset Designer displays the elevation with the sloped floor. Panel Offset Side by side panels and fillers are vertically offset. Since the dado slots on the panels are at 1 ft. on center (greatest value), the offsets must be at 1 ft. intervals as well. Go to Previous Article: Editing Formwork Go to Next Article: Formwork Stackup
https://docs.avontus.com/pages/viewpage.action?pageId=103842200
2021-09-16T18:01:50
CC-MAIN-2021-39
1631780053717.37
[]
docs.avontus.com
meta bundle agent example { meta: "bundle_version" string => "1.2.3"; "works_with_cfengine" slist => { "3.4.0", "3.5.0" }; reports: "Not a local variable: $(bundle_version)"; "Meta data (variable): $(example_meta.bundle_version)"; } The value of meta data can be of the types string or slist or data.
https://docs.cfengine.com/docs/3.12/reference-promise-types-meta.html
2021-09-16T18:04:40
CC-MAIN-2021-39
1631780053717.37
[]
docs.cfengine.com
You can link your Fastly and Signal Sciences accounts, allowing you to sign in using your Fastly account login credentials and freely switch between the Signal Sciences and Fastly consoles. After linking your accounts, you will only be able to log into the Signal Sciences console using your Fastly account credentials. Linking your Fastly and Signal Sciences accounts only affects authentication when logging into the Signal Sciences console. Other settings such as user roles and API access tokens are not affected. Before you begin Before you begin linking your Fastly and Signal Sciences accounts, understand the following: - You can not unlink your Fastly and Signal Sciences accounts once they have been linked. - Linked accounts do not currently support SAML authentication. Linked accounts authenticate using your Fastly email address and password, rather than through your identity provider. - 2FA is supported, but must be enabled on both your Fastly and Signal Sciences accounts before you will be able to link them. - Signal Sciences accounts set to bypass SSO can not be linked. How to link your Fastly and Signal Sciences accounts Log into the Signal Sciences console. From the My Profile menu, select Account Settings. The account settings management page appears. Under the Link Fastly account header, click Link account. The link account page appears. Click Start Verification. The Fastly account login page appears. Enter your Fastly account login credentials. Click SIGN IN. The account link confirmation page appears. Click Link Fastly account. A confirmation appears stating the account has been successfully linked. Click Account settings to return to the account settings management page, or click View dashboard to return to the Corp Overview page.
https://docs.fastly.com/signalsciences/using-signal-sciences/features/link-fastly-account/
2021-09-16T18:28:44
CC-MAIN-2021-39
1631780053717.37
[]
docs.fastly.com
UiPath.MicrosoftOffice365.Activities.Office365ApplicationScope Uses the Microsoft identity platform to establish an authenticated connection between UiPath and your Microsoft Office 365 application. This authenticated connection enables a Robot to call the Microsoft Graph API to read and write resources on your behalf. To establish your authenticated connection, you first register your Microsoft Office 365 application in your Azure Active Directory (using your personal, work, and/or school Microsoft Office 365 account). When registering your application, you assign the Microsoft Graph API permissions that specify the resources a. How to register your app and assign permissions To learn more about registering your application and assigning permission, see the Setup guide. This guide provides step-by-step instructions to configure your Microsoft Office 365 application for automation. Properties Application Id and Certificate (Unattended) - CertificateAsBase64 - The base64 representation of the certificate. Note: This property is required if AuthenticationType is set to ApplicationIdAndCertificate. - CertificatePassword - An optional password that may be required to use the certificate, as a Secure String. Application Id and Secret (Unattended) - Application Secret - The secret string that the application uses to provide its identity. - Secure Application Secret - The Application (client) secret, as a SecureString. Note: One of these properties is required if AuthenticationType is set to ApplicationIdAndSecret. Authentication - Application Id - The unique application (client) ID assigned by the Azure Active Directory when you registered your app during Setup. The application (client) ID represents an instance of a Microsoft Office 365 application. A single organization can have multiple application (client) IDs for their Microsoft Office 365 account. Each application (client) ID contains its own permissions and authentication requirements. For example, you and your colleague can both register a Microsoft Office 365 application in your company's Azure Active Directory with different permissions. Your app could be configured to authorize permissions to interact with files only, while your colleague's app is configured to authorize permissions to interact with files, mail, and calendar. If you enter your application (client) ID into this property and run attended automation, the consent dialogue box would be limited to file permissions (and subsequently, only the Files activities can be used). - Authentication Type - The type of authentication required for your registered application. Select one of the five options: InteractiveToken, IntegratedWindowsAuthentication, UsernameAndPassword, ApplicationIdAndSecret or ApplicationIdAndCertificate. The default value is InteractiveToken. For more information about these options and which one to select, see the Unattended vs. Attended Automation section below. - Environment - The environment, either Azure Global or national clouds that are unique and separate environments from Azure Global. Select one of the five options: Default, Global, China, Germany or USGovernment. The default value is Global. - Services - The service(s) that you granted API permissions to when you registered your app during Setup. This field supports only MicrosoftServicevariables. Select one or more of the following services: - Files - Select this service to use the Files and/or Excel activities. - Mail - Select this service to use the Outlook activities. 
- Calendar - Select this service to use the Calendar activities. - Groups - Select this service to use the Groups activities. - Shared - Select this service to use the Planner activities. The default value is Unselected. If the necessary API permissions are not granted during app registration, the applicable activities will fail to run even if the service is selected in this property. For more information, see Add API permissions in the Setup guide. - Tenant - The unique directory (tenant) ID assigned by the Azure Active Directory when you registered your app during Setup. Required for multi-tenant applications and IntegratedWindowsAuthentication. The directory (tenant) ID can be found in the overview page of your registered application (under the application (client) ID). Common - ContinueOnError - If set, continue executing the remaining activities even if the current activity has failed. - DisplayName - The display name of the activity. - TimeoutMS - Specifies the amount of time to wait (in milliseconds) for the interactive authentication (consent dialogue box) to complete before an error is thrown. This field supports only integer and Int32variables. The default value is 30000ms (30 seconds) (not shown). Interactive Token - OAuthApplication - Indicates the application (client) to be used. If UiPathis selected, ApplicationID and Tenant are ignored. This field supports only OAuthApplicationvariables. Select one of the two options: - UiPath - Default. When you want to use the application created by UiPath. In this case, Application ID and Tenant parameter values are ignored. - Custom - When you want to create your own application with correct permissions. In this case, a value must be set for Application ID parameter. Misc - Private - If selected, the values of variables and arguments are no longer logged at Verbose level. Username and Password (Unattended) These properties apply when you run unattended automation only. When specifying values for these properties, be sure the AuthenticationType property is set to UsernameAndPassword. For more information, see the Username and Password section above. - Password - The password of your Microsoft Office 365 account. - SecurePassword - The password of your Microsoft Office 365 account, as a SecureString. - Username - The username of your Microsoft Office 365 account. Note: Required if AuthenticationType is UsernameAndPassword. How it works The following steps and message sequence diagram is an example how the activity works from design time (i.e., the activity dependencies and input/output properties) to run time. - Complete the Setup steps. - Add the Microsoft Office 365 Scope activity to your project. - Enter values for the Authentication, Input, and Unattended (if applicable) properties. Unattended vs. Attended Automation The Microsoft Office 365 Scope activity has four different authentication flows (AuthenticationTypes) that you can choose from when adding the activity to your project. Your selection is dependent on the type of automation mode you plan to run (unattended or attended) and your application authentication requirements (consult with your administrator if you're unsure which authentication requirements apply to your application). Important IntegratedWindowsAuthentication or UsernameAndPassword authentication types do not work when Multi-Factor Authentication (MFA) is enabled. 
If your application requires MFA, you can run attended automation using the InteractiveToken authentication type or unattended automation using ApplicationIdAndSecret and ApplicationIdAndCertificate. ApplicationIdAndSecret and ApplicationIdAndCertificate authentication types are appropriate for unattended automation and work regardless of whether the MFA is enabled or disabled. Interactive Token The InteractiveToken authentication type can be used for attended automation and when multi-factor authentication (MFA) is required. This is the default option and what we use in our examples. If you're interested in "playing around" with the activity package, this option is easy to configure and works well for personal accounts (using the default redirect URI noted in step 7 of the Register your application section of the Setup guide). When the Microsoft Office 365 activity is run for the first time using this authentication type, you are prompted to authorize access to the resources (you granted permissions to when registering your app) via a consent dialogue box. If you select this option, the Username, Password, and Tenant properties should be left empty. This authentication type follows the OAuth 2.0 authorization code flow. Integrated Windows Authentication The IntegratedWindowsAuthentication authentication type can be used for unattended automation. This option can apply to Windows hosted applications running on computers joined to a Windows domain or Azure Active Directory. You should only select this option if your registered application is configured to support Integrated Windows Authentication (additional information can be found on
https://docs.uipath.com/activities/docs/microsoft-office-365-scope
2021-09-16T17:57:15
CC-MAIN-2021-39
1631780053717.37
[array(['https://files.readme.io/4fa80ea-MicrosoftOffice365Scope_MSC.png', 'MicrosoftOffice365Scope_MSC.png'], dtype=object) array(['https://files.readme.io/4fa80ea-MicrosoftOffice365Scope_MSC.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
sysconfig — Provide access to Python's configuration information¶ New in version 3.2. Source code: Lib/sysconfig.py The sysconfig module provides access to Python's configuration information like the list of installation paths and the configuration variables relevant for the current platform. Configuration variables¶ sysconfig. get_config_vars(*args)¶ With no arguments, return a dictionary of all configuration variables relevant for the current platform. With arguments, return a list of values that result from looking up each argument in the configuration variable dictionary. For each argument, if the value is not found, return None. sysconfig. get_config_var(name)¶ Return the value of a single variable name. Equivalent to get_config_vars().get(name); if the name is not found, return None. sysconfig. get_paths([scheme[, vars[, expand]]])¶ Return a dictionary containing all installation paths corresponding to an installation scheme. Other functions¶ sysconfig. get_python_version()¶ Return the MAJOR.MINOR Python version number as a string. Similar to '%d.%d' % sys.version_info[:2]. sysconfig. get_platform()¶ Return a string that identifies the current platform; e.g., on Linux, the kernel version isn't particularly important. Examples of returned values: linux-i586 linux-alpha (?) solaris-2.6-sun4u Windows will return one of: win-amd64 (64bit Windows on AMD64, aka x86_64, Intel64, and EM64T) win32 (all others - specifically, sys.platform is returned) Mac OS X may return:. Using the sysconfig module as a script¶ You can run the module directly with python -m sysconfig. This call will print in the standard output the information returned by get_platform(), get_python_version(), get_path() and get_config_vars().
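For instance, a short session along the following lines shows the functions above in use; the printed values will differ by platform and Python build.
import sysconfig

print(sysconfig.get_python_version())        # e.g. '3.9'
print(sysconfig.get_platform())              # e.g. 'linux-x86_64' or 'win-amd64'
print(sysconfig.get_config_var('Py_DEBUG'))  # one configuration variable, or None if unknown
paths = sysconfig.get_paths()                # installation paths for the default scheme
print(paths['purelib'])                      # e.g. '/usr/lib/python3.9/site-packages'
print(sysconfig.get_config_vars('CC', 'EXT_SUFFIX'))  # list of looked-up values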
https://docs.python.org/pt-br/3/library/sysconfig.html
2021-09-16T18:09:48
CC-MAIN-2021-39
1631780053717.37
[]
docs.python.org
Colouring the Console¶ The rocon_console.console module provides definitions and methods that enable simple colouring for your output on the console (without having to remember all the specific keycodes that shells use). It will also automatically try and detect if your console has colour support and if there is none, the colour definitions will cause the rocon_console.console methods to gracefully fall back to a non-coloured syntax for printing. Colour Definitions¶ There are definitions for many of the basic console colour codes (send a pull request if you want support for anything more). The current list includes: - Regular: black, red, green, yellow, blue, magenta, cyan, white, - Bold: bold, bold_black, bold_red, bold_green, bold_yellow, bold_blue, bold_magenta, bold_cyan, bold_white Usage¶ Importing import rocon_console.console as console Freeform Style Simply intersperse colour definitions throughout your printing statements, e.g. import rocon_console.console as console print(console.cyan + " Name" + console.reset + ": " + console.yellow + "Dude" + console.reset) Logging Style For standard style logging modes, there are a few functions that attach a descriptive prefix and colourise the message according to the logging mode. Prefixes: logdebug: [debug], green loginfo: [info], white logwarn: [warn], yellow logerror: [error], red logfatal: [fatal], bold_red import rocon_console.console as console console.logdebug("the ingredients of beer are interesting, but not important to the consumer") console.loginfo("the name of a beer is useful information") console.logwarn("this is a lite beer") console.logerror("this is a budweiser") console.logfatal("this is merely a cider")
https://docs.ros.org/en/kinetic/api/rocon_console/html/colouring.html
2021-09-16T20:07:00
CC-MAIN-2021-39
1631780053717.37
[]
docs.ros.org
Bin This topic describes how to use the function in the . Description Puts continuous numerical values into discrete sets, or bins, by adjusting the value of <field> so that all of the items in a particular set have the same value. Function Input/Output schema - Function Input collection<record<R>> - This function takes in collections of records with schema R. - Function Output collection<record<S>> - This function outputs the same collection of records but with a different schema S. Syntax - bin - <span-options> - <field> [AS <result>] How the bin function works Use the bin function to group records by the numerical values in a field. Suppose your incoming data looks like the following: You decide to add a bin function to your pipeline that bins the streaming data using a 5 minute time span on the timestamp field. ...| bin span=5m timestamp; The bin function groups the timestamps in the timestamp field into 5 minutes intervals. The groups are: Required arguments - field - Syntax: string - Description: The name of the field to bin. The value of the field must be a numerical data type. - Example: timestamp - span - Syntax: <time-specifier> - Description: Sets the size of each bin, using a span length based on time or log-based span. - Example: 5m Span options - log-span - Syntax: <num>log<num> - Description: Sets to logarithm-based span. The first number is a coefficient. The second number is the base. The coefficient must be a real number >= 1.0 and < the base number. - Example: span=2log10 - span-length - Syntax: <int><timescale> - Description: A span of each bin. If discretizing based on the timestampfield or used with a timescale, this is treated as a time range. If not, this is an absolute bin length. - timescale - Syntax: <subseconds> | <sec> | <min> | <hr> | <day> | <month> | <year> - Description: Time scale units. If discretizing based on the timestampfield. - Default: sec Optional arguments - aligntime - Syntax: aligntime=<time-specifier> - Description: Align the bin times to something other than base UTC time (epoch 0). The aligntime option is valid only when doing a time-based discretization. Ignored if span is in days, months, or years. Aligntime of earliest and latest are not supported. - Example: 4h - result - Syntax: AS <string> - Description: A new name for the field. - Example: time SPL2 example Align the bins to 3 hours and set the span to 1 hour intervals from that time ... | bin aligntime=3h span=1h timestamp | ...; This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.0, 1.2.1 Feedback submitted, thanks!
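The SPL2 example above is the actual usage; purely as a conceptual illustration of what a 5-minute span does to numeric timestamps, the following Python sketch floors epoch seconds to the start of their bin (this is not DSP code, just the underlying arithmetic, and the sample values are made up):
# Conceptual illustration only: group epoch timestamps into 5-minute bins
# by flooring each value to the nearest multiple of the span.
SPAN_SECONDS = 5 * 60

def bin_timestamp(ts_seconds: int, span: int = SPAN_SECONDS) -> int:
    """Return the start of the bin that contains ts_seconds."""
    return (ts_seconds // span) * span

events = [1597091402, 1597091471, 1597091723, 1597092001]
for ts in events:
    print(ts, "->", bin_timestamp(ts))
# Timestamps that fall inside the same 5-minute window map to the same bin value.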
https://docs.splunk.com/Documentation/DSP/1.2.0/FunctionReference/Bin
2021-09-16T18:34:21
CC-MAIN-2021-39
1631780053717.37
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
This endpoint creates multiple new Devices.

Request

To create multiple new Devices, please make a POST request to the following URL:

Query Parameters

Body Parameters

The body is an Array containing Device JSON objects. Each Device object can contain optional body parameters such as label, properties, tags, and description, as shown in the example request below.

Header

Don't forget

Please note that the X-Bulk-Operation header attribute is necessary.

Example request:

curl -X POST '' \
  -H 'Content-Type: application/json' \
  -H 'X-Auth-Token: oaXBo6ODhIjPsusNRPUGIK4d72bc73' \
  -H 'X-Bulk-Operation: True' \
  -d '[
    {
      "label": "device_1",
      "properties": {
        "_location_type": "manual",
        "_location_fixed": {
          "lat": 6.2486,
          "lng": 75.5742
        }
      }
    },
    {
      "label": "device_2",
      "tags": ["Colombia", "Medellin", "IoTIsGreat"]
    },
    {
      "label": "device_3",
      "properties": {},
      "description": "This is the description for device_3"
    }
  ]'

Example responses:

{
  "task": {
    "id": "5f208f564763e74744b2ba87"
  }
}

{
  "code": 400001,
  "message": "Validation Error.",
  "detail": { .... }
}

{
  "code": 401001,
  "message": "Authentication credentials were not provided.",
  "detail": "Authentication credentials were not provided."
}

{
  "detail": "Header `X-BULK-OPERATION` should be provided for bulk operation."
}

Response

Returns a Task Id of the asynchronous process.
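For reference, the same bulk request can be issued from Python with the requests library. This is a sketch rather than official Ubidots client code: the endpoint URL is elided in the page above, so the URL constant below is a placeholder you must replace, and the token shown is a dummy value.

import requests

# Placeholder endpoint -- the real URL is not shown in the extracted page above.
URL = "https://REPLACE-WITH-UBIDOTS-BULK-DEVICES-ENDPOINT"
HEADERS = {
    "Content-Type": "application/json",
    "X-Auth-Token": "YOUR-UBIDOTS-TOKEN",   # dummy value, use your own token
    "X-Bulk-Operation": "True",             # required header for bulk operations
}

# Same three Device objects as in the curl example.
devices = [
    {"label": "device_1",
     "properties": {"_location_type": "manual",
                    "_location_fixed": {"lat": 6.2486, "lng": 75.5742}}},
    {"label": "device_2", "tags": ["Colombia", "Medellin", "IoTIsGreat"]},
    {"label": "device_3", "properties": {},
     "description": "This is the description for device_3"},
]

response = requests.post(URL, headers=HEADERS, json=devices)
# On success, the API returns a Task Id for the asynchronous process,
# e.g. {"task": {"id": "..."}}.
print(response.status_code, response.json())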
https://docs.ubidots.com/reference/bulk-create-devices
2021-09-16T19:04:18
CC-MAIN-2021-39
1631780053717.37
[]
docs.ubidots.com
You can configure how a virtual machine starts up and shuts down on the recovery site during a recovery. You can configure whether to shut down the guest operating system of a virtual machine before it powers off on the protected site, whether to power on the virtual machine on the recovery site, and whether to add delays after powering on the virtual machine so that VMware Tools or other applications can start on the recovered virtual machine before the recovery plan continues.

Prerequisites

Procedure

1. In the vSphere Client or the vSphere Web Client, click .
2. On the Site Recovery home tab, select a site pair, and click View Details.
3. Click the Recovery Plans tab, click a recovery plan, and click Virtual Machines.
4. Right-click a virtual machine and click Configure Recovery.
5. Expand Shutdown Action and select the shutdown method for this virtual machine.
6. Expand Startup Action and select whether to power on the virtual machine after a recovery.
7. (Optional) Select or deselect the Wait for VMware tools check box. If you select Wait for VMware tools, Site Recovery Manager waits until VMware Tools starts after powering on the virtual machine before the recovery plan continues to the next step. You can set a timeout period for VMware Tools to start.
8. (Optional) Select or deselect the Additional Delay before running Post Power On steps and starting dependent VMs check box and specify the time for the additional delay. For example, you might specify an additional delay after powering on a virtual machine to allow applications that another virtual machine depends on to start up.
https://docs.vmware.com/en/Site-Recovery-Manager/8.2/com.vmware.srm.admin.doc/GUID-27EBD95C-8B61-49F6-A397-56BDC536B60E.html
2021-09-16T18:55:50
CC-MAIN-2021-39
1631780053717.37
[]
docs.vmware.com
Date: Sun, 10 Sep 1995 12:49:09 -0700 (PDT)
From: Julian Elischer <[email protected]>
To: [email protected]
Cc: [email protected]
Subject: Re: VM86 code
Message-ID: <[email protected]>
In-Reply-To: <[email protected]> from "Marc Ramirez" at Sep 9, 95 08:37:46 pm

I believe gary clark ([email protected]) has done quite a bit on this
and has made good progress... please check with him to see how you can
fit in with what he's doing..

julian

>
> Has anyone incorporated the VM86 code into -current yet? I took an
> involuntary hiatus from the net for about a month.
>
> If no one has integrated the stuff yet, I'm going to give it a whirl
> tomorrow. (no promises; this is my first kernel hack)
>
> Marc.
>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=26726+0+archive/1995/freebsd-questions/19950910.freebsd-questions
2021-09-16T19:59:50
CC-MAIN-2021-39
1631780053717.37
[]
docs.freebsd.org