Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars)
You can create a customized package and apply the package to the existing Functional roles in the SD-WAN Orchestrator. Procedure - In the Operator portal, click Role Customization. - Click New Package. - In the Role Customization Package Editor window, enter the following: - Enter a Name and a Description for the new custom package. - In the Roles pane, select a Functional role and click Remove Privileges to customize the privileges for the selected role. Note: You can only add or remove Deny Privileges, that is, take away privileges from the system default. You cannot grant additional privileges to a role using this option. In the Assign Privileges window, select the features from the Available Deny Privileges and move them to the Selected Deny Privileges pane. Note: You can assign only Deny privileges to the Functional roles. Click OK. - Repeat assigning privileges to the other Functional roles in the Role Customization Package Editor window. - Select the Show Modified checkbox to filter and view the customized privileges. The changes to the privileges are highlighted in a different color. - Click Create. You can click CSV to download the privileges of the selected Functional role in CSV format. - The new package details are displayed in the Role Customization Packages window. - To edit the privileges, click the link to the package or select the package. In the Role Customization Package Editor window that opens, add or remove Deny Privileges for the Functional roles in the package and click OK. What to do next: Select the customized package and apply it, so that the customization available in the selected package is applied to the existing Functional roles across the SD-WAN Orchestrator. You can edit the Deny privileges in an applied package whenever required. After modifying the privileges in the Role Customization Package Editor window, click OK to save and apply the changes to the Functional roles. Note: You can download the customized Functional role privileges as a JSON file and upload the customized package to another Orchestrator. For more information, see Upload Customized Package.
https://docs.vmware.com/en/VMware-SD-WAN/5.0/vmware-sd-wan-operator-guide/GUID-6B41B847-E2DC-4C6E-BB57-D5B91393E34D.html
2022-06-25T09:02:23
CC-MAIN-2022-27
1656103034877.9
[]
docs.vmware.com
3.6. Deleting posts automatically¶ The plugin can automatically delete posts that are older than a certain amount of time. To use automatic deletion, you have to enable the deleting event both in the general settings and the site settings. To enable it, do the following: - Enable the General Settings > Scheduling > Deleting > Deleting is active setting (See: Deleting is active?) - Configure the other settings under the Deleting Section as you want - Save the general settings - Open the site settings for which you want to enable deleting. On this page, enable the Main > Active for deleting setting (See: Active for post deleting?) and save the settings. After these steps are done, the plugin will automatically delete the posts by using the settings configured under the Deleting Section. Note If deleting is not enabled in the general settings, no posts will be deleted. Tip You can go to the Dashboard Page of the plugin and check the Active Sites Section. In this section, you should be able to see the site for which you have just enabled deleting. 3.6.1. Disabling deleting for all sites¶ To disable deleting for all sites, uncheck General Settings > Scheduling > Deleting > Deleting is active (See: Deleting is active?) and save the general settings.
https://docs.wpcontentcrawler.com/1.11/guides/deleting-posts-automatically.html
2022-06-25T07:26:53
CC-MAIN-2022-27
1656103034877.9
[]
docs.wpcontentcrawler.com
3.9. Adding request headers¶ The plugin can add headers to the requests made to the target web pages. To define the request headers that should be sent to the target site, do the following: - Go to Site Settings > Main > Request > Headers (See: Headers) and add the request headers that should be sent to the target site. You can also import all the request headers at once, as explained in Importing request headers from browser. - Save the site settings (See: Saving The Settings) After these steps are done, the plugin will add the request headers you defined to every request sent to the target site.
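As a rough illustration only (these header names and values are hypothetical and not taken from the plugin's documentation), the headers you copy from your browser's developer tools and enter into Site Settings > Main > Request > Headers typically look like the following name/value pairs:

```python
# Hypothetical example of request headers one might copy from the browser and
# define in the plugin's Headers setting, shown as a simple name/value mapping.
example_headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.9",
    "Referer": "https://example.com/",
}

for name, value in example_headers.items():
    print(f"{name}: {value}")  # e.g. "User-Agent: Mozilla/5.0 ..."
```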
https://docs.wpcontentcrawler.com/guides/adding-request-headers.html
2022-06-25T07:53:37
CC-MAIN-2022-27
1656103034877.9
[]
docs.wpcontentcrawler.com
- hair - hair2 - hairinfo - information - infotex - material - ornatrixmod - override - pass - strands - vraysamplerinfotex - There are no pages at the moment.
https://docs.chaosgroup.com/label/VRAY4MAX/hair+hair2+hairinfo+information+infotex+material+ornatrixmod+override+pass+strands+vraysamplerinfotex
2020-03-28T23:15:16
CC-MAIN-2020-16
1585370493121.36
[]
docs.chaosgroup.com
Table of Contents Installing The Dovecot MDA This page is supplemental to the main article: Creating a Virtual Mail Server with Postfix, Dovecot and MySQL Dovecot is a popular and secure mail delivery agent, or MDA, which can be configured to work alongside the postfix MTA. As with postfix, we will build and install our dovecot package using the current build script from SBo. This example uses the version current at time of writing, but you should always build the latest version available for your Slackware version. We will assume that you are familiar with SlackBuilds and will provide only the essential steps for building dovecot here. For more detailed information please visit the SBo How-To page. Our dovecot build requires no special parameters. The essential steps for building dovecot are (as root): cd /tmp wget tar -xvzf dovecot.tar.gz cd dovecot cat dovecot.info ... DOWNLOAD="" MD5SUM="a3eb1c0b1822c4f2b0fe9247776baa71" ... # Fetch archive from URL in DOWNLOAD line # wget # Verify integrity of archive - compare to MD5SUM line # md5sum dovecot-2.2.13.tar.gz a3eb1c0b1822c4f2b0fe9247776baa71 # Build package # chmod +x dovecot.SlackBuild ./dovecot.SlackBuild The resulting package will be found in /tmp/dovecot-2.2.13-x86_64-1_SBo.tgz (or similar for the 32-bit version). Copy the package file to the target platform if necessary and install: installpkg {path-to/}dovecot-2.2.13-x86_64-1_SBo.tgz Configuring The Dovecot MDA You should become familiar with the dovecot documentation in order to properly configure your installation. You will also find a local copy of the complete documentation installed with the package in /usr/doc/dovecot-2.2.13/wiki/ (adjust for your version number if necessary). The dovecot package will create a mostly empty configuration directory at /etc/dovecot. cat /etc/dovecot/README Configuration files go to this directory. See example configuration files in /usr/doc/dovecot-2.2.13/example-config/ So we will create the necessary directory structure and copy only the necessary example config files to the working location as our point of reference. mkdir /etc/dovecot/conf.d cp /usr/doc/dovecot-2.2.13/example-config/dovecot.conf /etc/dovecot/. cp /usr/doc/dovecot-2.2.13/example-config/dovecot-sql.conf.ext /etc/dovecot/. cp /usr/doc/dovecot-2.2.13/example-config/conf.d/10-auth.conf /etc/dovecot/conf.d/. cp /usr/doc/dovecot-2.2.13/example-config/conf.d/10-mail.conf /etc/dovecot/conf.d/. cp /usr/doc/dovecot-2.2.13/example-config/conf.d/10-master.conf /etc/dovecot/conf.d/. cp /usr/doc/dovecot-2.2.13/example-config/conf.d/10-ssl.conf /etc/dovecot/conf.d/. cp /usr/doc/dovecot-2.2.13/example-config/conf.d/auth-sql.conf.ext /etc/dovecot/conf.d/. We will work from top to bottom of the copied file list to perform configuration.
Open the file /etc/dovecot/dovecot.conf and make the following changes: vi /etc/dovecot/dovecot.conf # Uncomment the following line to set supported protocols # protocols = imap pop3 lmtp # Set postmaster_address to your admin address # postmaster_address = [email protected] # Add following line commented, uncomment to troubleshoot SSL errors # #verbose_ssl = yes Next, configure the database access parameters and password query for dovecot: vi /etc/dovecot/dovecot-sql.conf.ext # Uncomment and set the following lines as shown # driver = mysql connect = "host=localhost dbname=mailserver user=mailuser password={your mailuser password}" default_pass_scheme = SHA512-CRYPT password_query = SELECT email as user, password FROM virtual_users WHERE email='%u'; Next, we configure the authentication methods to be used by dovecot. We will restrict it to use only secure authentication by the settings here and in the included auth-sql.conf.ext file, excluding other methods. vi /etc/dovecot/conf.d/10-auth.conf # Uncomment this line - no plain text authentication! # disable_plaintext_auth = yes # Plain is inside SSL, add "login" for MUA user/pass authentication # auth_mechanisms = plain login # Comment out this line, no file based auth # #!include auth-system.conf.ext # Uncomment this line to allow SQL based auth # !include auth-sql.conf.ext Set the filesystem path for virtual mail. The virtual user's mail boxes will be at /var/vmail/vhosts/DOMAIN/USER. Dovecot will perform the substitutions for %d and %n at runtime. vi /etc/dovecot/conf.d/10-mail.conf # Uncomment and set the mail_location path # mail_location = maildir:/var/vmail/vhosts/%d/%n Set the configuration for the dovecot master process: vi /etc/dovecot/conf.d/10-master.conf # Find the "service imap-login" section and set port to 0 to disable insecure imap login # service imap-login { inet_listener imap { port = 0 } ... } # Find the "service pop3-login" section and set port to 0 to disable insecure pop3 login # service pop3-login { inet_listener pop3 { port = 0 } ... } # Find the "service lmtp" section and make the following changes # service lmtp { unix_listener /var/spool/postfix/private/dovecot-lmtp { mode = 0600 user = postfix group = postfix } ... } # Find the "service auth" section, set postfix handler for SASL, db auth user/perms # service auth { unix_listener /var/spool/postfix/private/auth { mode = 0666 user = postfix group = postfix } unix_listener auth-userdb { mode = 0600 user = vmail } user = dovecot } # Find the "service auth-worker" section, run auth processes as unpriv user # service auth-worker { user = vmail } Next we set up the SSL configuration so it is mandatory and uses the certificates created earlier: vi /etc/dovecot/conf.d/10-ssl.conf # Uncomment as necessary and make the following changes # ssl = required ssl_cert = </etc/ssl/localcerts/dove.pem ssl_key = </etc/ssl/private/dove.key Finally, configure authentication and user data paths for dovecot access: vi /etc/dovecot/conf.d/auth-sql.conf.ext # Find the "passdb" section and configure as follows # passdb { driver = sql args = /etc/dovecot/dovecot-sql.conf.ext } # Find the "userdb" section and configure as follows # userdb { driver = static args = uid=vmail gid=vmail home=/var/vmail/vhosts/%d/%n } Now we want to further secure the installation by making all dovecot configuration files owned by the non-privileged vmail user, and accessible by the dovecot group, with no access by others.
chown -R vmail:dovecot /etc/dovecot chmod -R o-rwx /etc/dovecot Return to main article page Sources - Based primarily on Dovecot documentation -
http://docs.slackware.com/howtos:network_services:postfix_dovecot_mysql:dovecot
2020-03-29T01:07:36
CC-MAIN-2020-16
1585370493121.36
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
First minimal payment This tutorial will help you integrate Axerve Ecommerce Solutions and process your first payment. Before you begin, make sure you have properly configured your merchant’s profile. You can find more information on how to configure the profile here. Payment type Axerve Ecommerce Solutions can be integrated on your website in two different ways: - Using the Banca Sella popup to finish the payment; - Customizing the payment page according to your style. The first part of this tutorial is the same for both purposes. Have a REST Before you can actually redirect the user to Axerve Ecommerce Solutions, you must do a POST to payment/create to initiate the payment process. Click on the link to see the API. Remember: payment/create and payment/submit MUST be used in every transaction to create and process the payment. The only case where you don’t need to call payment/submit is when the payment is performed from an alternative payment method. There are two endpoints available, test and production: POST POST Let’s go back to your application. Everything starts with a POST call to payment/create, asking Axerve Ecommerce Solutions to process a payment. Set the authorization headers: Authorization: apikey R0VTUEFZNjU5ODcjI0VzZXJjZW50ZSBUZX.... Content-Type: application/json Every POST call must have the Content-Type: application/json header. and the body: { "shopLogin":"GESPAY65987", "amount":"27.30", "currency":"EUR", "shopTransactionID":"your-custom-id" } Axerve Ecommerce Solutions will answer with: { "error": { "code": "0", // everything ok! "description": "request correctly processed" }, "payload": { "paymentToken": "1c3f27af-1997-4761-8673-b94fbe508f31", "paymentID": "1081814508", "userRedirect": { "href": null } } } With the paymentToken (1c3f27af-1997-4761-8673-b94fbe508f31) and the paymentID (1081814508) you can now redirect the user to the payment page. Now you have to choose which payment page you want to use. If you want to use the Lightbox solution keep reading, otherwise jump to the following section: using your customized payment page. Using the Lightbox solution You can find more information in the lightbox documentation page. Using your customized payment page You can also customize every part of the payment process and design the form the way you want. This means you can handle everything in the same style as your website. Assuming you have already made a POST to payment/create, the next step is to design an HTML form that will contain the credit card data: number, expiration date, cvv. Then, once you have designed it, you must do a POST on payment/submit with the credit card data and the paymentToken generated by payment/create: fetch('', { method: 'POST', headers: { 'paymentToken': `${PAYMENT_TOKEN}`, 'Content-Type': 'application/json' }, body: JSON.stringify({ "buyer":{ "email":"[email protected]", "name":"Test Payment" }, "paymentTypeDetails": { "creditcard": { "number":"4012001037141112", "expMonth":"05", "expYear":"27", "CVV":"444", "DCC":null } }, "shopLogin":"GESPAY65987" }) }) POST payment/submit allows the user to pay with many other payment systems other than the credit card. For a full list, check the API.
At this point two things can happen: - the credit card is not 3D-Secure - the credit card is 3D-Secure and you must redirect to the bank page to validate the payment The credit card is not 3D-Secure In this case, the card does not require any special authorization and the response to POST payment/submit will be something like: { "error":{ "code":"0", "description":"request correctly processed" }, "payload":{ "transactionType":"submit", "transactionResult":"OK", ... "paymentID":"1700444660", "userRedirect":{ "href":"" } } } Now you can redirect the user to userRedirect.href, which will be populated based on your configuration in the Merchant Back-Office or your custom ones passed to POST payment/submit. The credit card is 3D-Secure If the credit card is 3D-Secure, we need to perform another step to complete the payment. Axerve Ecommerce Solutions will answer with the transactionErrorCode 8006, which requires you to authenticate the card against the bank issuer: { "error":{ "code":"0", "description":"request correctly processed" }, "payload":{ "transactionType":"submit", "transactionResult":"", "transactionErrorCode":"8006", "transactionErrorDescription":"Verify By Visa", "paymentID":"1546124641", ... "userRedirect":{ "href":"" } } } To complete the payment, redirect the user to userRedirect.href. Once the authentication is finished, you’ll be redirected to the success URL page, as configured in the Merchant Back-Office. Conclusions In this tutorial, we have seen how easy it is to integrate Axerve Ecommerce Solutions in your payment flow. There’s much more! Axerve Ecommerce Solutions allows you to pay with alternative payment methods and with different security checks, and offers an API to automate many operations of your Ecommerce. Keep reading the docs!
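For readers integrating from a backend, the sketch below shows the payment/create call described above using Python's requests library. This is an illustration only: the endpoint URL and API key are placeholders (the real test and production endpoints are listed in the tutorial and in your merchant profile).

```python
# Minimal sketch of the payment/create call, assuming placeholder endpoint and
# API key -- substitute the test or production endpoint and your own apikey.
import requests

API_ENDPOINT = "https://<axerve-endpoint>/payment/create"   # placeholder
API_KEY = "R0VTUEFZNjU5ODcjI0VzZXJjZW50ZSBUZX..."           # placeholder

response = requests.post(
    API_ENDPOINT,
    headers={
        "Authorization": f"apikey {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "shopLogin": "GESPAY65987",
        "amount": "27.30",
        "currency": "EUR",
        "shopTransactionID": "your-custom-id",
    },
)

payload = response.json().get("payload", {})
# paymentToken is then passed to payment/submit; paymentID identifies the payment.
print(payload.get("paymentToken"), payload.get("paymentID"))
```

The same pattern applies to payment/submit: pass the paymentToken in a header and the buyer and card details in the JSON body, as in the fetch() example above.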
https://docs.gestpay.it/rest/getting-started/first-simple-payment/
2020-03-29T00:24:04
CC-MAIN-2020-16
1585370493121.36
[]
docs.gestpay.it
Guest Post - Windows Phone Development – FAQs v1.0 Here are 10 of them as ‘FAQs’. Let’s call it FAQ v1.0. 1. I’ve bought a laptop year ago, can I use it to build WP apps? Any laptop with a processor, motherboard and BIOS supporting virtualization and Second Level Address Translation (SLAT) can be used to build and test WP apps, provided it is running Windows 8. Intel calls SLAT Extended Page Tables (EPT). If you know the processor model, you can check details about your processor on the processor manufacturer’s site. Otherwise, you can use the coreinfo.exe utility (read here: ) which can give you details about your machine. However, this feature is required only for the emulator. If you want to build and test on a real device directly, any PC with a decent configuration can work. 2. I want to keep my development environment separate, can I use a VM for WP development? This idea is not supported. Even though there are some sites which use third-party virtualization tools to create VMs for WP app development, I personally would not recommend it. 3. What version should I build for: v7.0, 7.1, 7.5, 7.8, or 8.0? 7.0 was the first release of Windows Phone. After that, a major release was 7.8 with many features. However, 8.0 is a game changer. Apps built for 7.x can run on 8.0; however, the reverse is not supported. If you want to target all WP devices, 7.x is the way to go. However, 8.0 provides many new features and a rich API set. You’ll miss out on these features if you build for 7.x. You can have more information about WP versions here: 4. Do I need to pay an annual fee for developing Windows Phone apps? To build and test an app on your own device, you don’t need to pay any fees. You can developer unlock one device and test 2 apps (maximum at any time) on it. However, to change this number or sell the apps, you’ll need to get a developer account by paying annual fees. You can read more about it at: 5. While building my app, how can I test my app with location data? The Windows Phone SDK comes with fantastic tools and the Windows Phone emulator is one of them. It has functionality to emulate sensors (Accelerometer, Camera) as well as location. You can test your location specific functionality using this emulator. 6. Can I associate my app with files, share data between two apps or launch another app? Yes. You can associate file types with your app, so when a file is downloaded or opened, your app can be an option for the user to open that file. Read more about it here: If you want to share data between apps or launch another app, you need to know the URI schemes associated with it. To launch system apps:, example of other apps: 7. My app is ready, can I submit to the store now? Once your app is ready, rather than submitting it to the store directly, test it on a real device. Run the Store Test Kit to test it for performance and other bugs. Also make sure you’ve taken proper screenshots of the app (again, use the emulator’s screenshot feature rather than print-screen/screen clipping). Make sure it is not leaking memory, not showing any debug information and, most importantly, build it in ‘release’ mode. More information about the test kit can be found here: 8. Is there any way to provide more information to the test team? Sometimes, you might have implemented login/authentication functionality in your app OR used some special features which may cause certification failure OR you may want to convey some message to the testing team.
In this case, to provide more information to the team and to make their life easier and help expedite your app certification process, always provide details about your app along with ‘technical notes to testers’ whilst submitting the app. 9. Can I share my app without going through the store? The only way to distribute your app to consumers is to go through the store. However, if you’ve built an enterprise application and want to distribute it internally, you can do enterprise distribution. However, you’ll require a ‘Company Account’, not an ‘Individual Account’. Another way to distribute your app to a select few people for testing purposes is to go through the store under the ‘beta testing’ option. It will allow you to distribute the app to selected people and will make your app private, without showing it on the store. 10. My app failed the certification. What should I do? If your app has failed the certification process, you’ll get a mail about it with a detailed error report. Look for the reason for the failure. Usually, the report also contains steps to reproduce the error. Try to reproduce it and fix it. After fixing it, test it again and resubmit. I hope with these FAQs answered you can kick-start your Windows Phone development and build some stunning apps. By the way, have I told you there are gifts for Windows Phone Developers who complete challenges! Visit About Guest Blogging South Asia MVP Award Program introduces Guest Posts by the MVPs from the region. These posts would help readers to be in touch with the recent trends in technology and be up-to-date with knowledge on Microsoft products. Author
https://docs.microsoft.com/en-us/archive/blogs/indiamvp/guest-post-windows-phone-development-faqs-v1-0
2020-03-29T00:41:48
CC-MAIN-2020-16
1585370493121.36
[]
docs.microsoft.com
Steps for Signing a Device Driver Package Applies To: Windows 7, Windows Server 2008 R2. Important The certificate created and used in this section can be used only with 32-bit drivers on 32-bit versions of Windows. For more information about using device drivers for 64-bit versions of Windows, see the “Important Note” at the beginning of the section Requirements for Device Driver Signing and Staging, earlier in this guide. Steps outline: Signing a device driver package The following steps illustrate the basic process for signing a device driver package. Step 1: Create a digital certificate for signing Step 2: Add the certificate to the Trusted Root Certification Authorities store Step 3: Add the certificate to the per machine Trusted Publishers store Step 4: Sign the device driver package with the certificate Step 1: Create a digital certificate for signing In this step you create a certificate that can be used to sign the sample Toaster driver package. First, open the Certificates MMC snap-in to see the current certificates. Important Do not run certmgr.msc to open the snap-in. By default, that opens the Current User version of the certificate stores. This procedure requires the certificates to be placed in the stores for the Computer Account instead. To open the Certificates MMC snap-in. Note You cannot use the previous x86 Free Build Environment command prompt window, because it was not running with the administrator permissions required by the MakeCert tool. If you attempt to run MakeCert without administrator permissions, it will fail with error code 0x5 (Access Denied). To create a digital certificate by using the MakeCert tool. Step 2: Add the certificate to the Trusted Root Certification Authorities store. Note Certificates that are placed in the per user Trusted Root Certification Authorities store will not validate signatures of device driver packages. To add the test certificate to the Trusted Root CA certificate. Step 3: Add the certificate to the per machine Trusted Publishers store To use your new certificate to confirm the valid signing of device drivers, it must also be installed in the per computer Trusted Publishers store. Note Certificates that are placed in the per user Trusted Publishers store cannot validate signatures of device driver packages. To add the test certificate to the Trusted Publishers certificate store In the Certificates snap-in, right-click your certificate, and then click Copy. Right-click Trusted Publishers, and then click Paste. Open Trusted Publishers and Certificates, and then confirm that a copy of your certificate is in the folder. Click OK to close the certificate. Step 4: Sign the device driver package with the certificate If you are using the sample Toaster device and driver -- or if your organization wants to implement a policy where all device drivers must be signed by your organization's own certificate -- then follow these steps to replace the existing signature with your own. Note Prepare the driver package .inf file The .inf file controls the installation of the driver package. The digital signature for a device driver package resides in a catalog file, with a .cat file name extension. The .inf file used to install the driver package must include. Note If your driver package has already been signed by the vendor, then the .inf file already has a reference to a valid catalog file, and you can skip this procedure. To prepare the driver package . Create a catalog file for the driver package. 
Note In previous versions of the WDK, you used a tool called Signability. This tool has been deprecated, and replaced with Inf2Cat.. Note The Inf2Cat tool must be run at a command prompt with administrator permissions. To create a catalog file for the driver package. Signability test complete ...................... Errors: None Warnings: None Catalog generation complete. C:\toaster\device\toaster.cat Review the completed .cat file. At the command prompt, type: start toaster.cat. Sign the catalog file by using SignTool Now that you have a catalog file, you can sign it by using the SignTool program. Use this procedure whether you are using the sample Toaster device driver or not. Important When signing a driver package, you must include the option to timestamp the signature. This timestamp specifies when the signature was created. If a certificate expires or is revoked for security reasons, then only signatures created before the expiration or revocation are valid. If a timestamp is not included in the signature, then Windows cannot determine if the package was signed before or after the expiration or revocation, and will reject the signature. To sign a catalog file using SignTool The meaning of each parameter is as follows: /s MyCompanyCertStore Specifies the name of the certificate store in which SignTool searches for the certificate specified by the parameter /n. /n “MyCompany – for test use only”: Successfully signed and timestamped: C:\toaster\device\toaster.cat To view and verify your signed catalog file, at the command prompt, type: start toaster.cat.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd919238(v=ws.10)?redirectedfrom=MSDN
2020-03-29T01:04:08
CC-MAIN-2020-16
1585370493121.36
[]
docs.microsoft.com
Check out the new evaluation function, a!localVariables(). It does everything with() does but with additional refresh options that may drastically improve the performance of your design. For example, in with(local!myvar: 2, ...), myvar is of type Integer. The type returned by the with()
https://docs.appian.com/suite/help/19.4/fnc_evaluation_with.html
2020-03-29T00:59:05
CC-MAIN-2020-16
1585370493121.36
[]
docs.appian.com
Wi-Fi To connect Reach to a new Wi-Fi network, find it in the list of Available Networks, click on the name and fill in the password. If this network is not visible, you can add it by tapping on the Connect to a hidden network button. Enable Hotspot to switch Reach to the hotspot mode. Tap the Edit button to change the password of the network Reach creates. If a SIM card is inserted into Reach, you can enable sharing mobile data from Reach in hotspot mode.
https://docs.emlid.com/reachrs2/reachview/wifi/
2020-03-28T22:58:27
CC-MAIN-2020-16
1585370493121.36
[array(['../img/reachview/wifi/wifi.png', None], dtype=object)]
docs.emlid.com
This walkthrough will show you how to use Kickbox and SendGrid then select SendGrid. Click the User Icon on the top left and select Account Details. In the menu that appears on the bottom left, click API Keys. In the top right corner, click Create API Key. Name the API Key anything you want; the name is only for your own organizational purposes. Select the Restricted Access tile and a table of "Access Details" will appear. Scroll down to "User Account," and click the middle circle setting indicating "Read Access." Leave all other options at their default settings. Then, click Create & View. Your API Key will be generated and appear in the new window. Click the API Key to copy it. Switch back over to Kickbox and paste the API key into Kickbox’s Add Integration modal. Then, click Add Integration. This enables Kickbox to communicate with your SendGrid account and will allow you to import and export data between services. Your SendGrid credentials are never shared with Kickbox. Click Import List next to the list you want to import and verify. Your list will be imported and analyzed to determine the number of email addresses in the list, and then placed in your "Active List" queue. Click the Act-On account and 2) Download--which will download the results directly to your computer. Click Export, and a new window will appear. At the top of this modal you will see the "Export Action" drop down menu. "Create a new list" is selected by default and creates a new list containing only the addresses that were deliverable. If you prefer to keep additional results, you will need to click the blue Filter button to make your unique selections. Follow the text prompt at the bottom to see which information will be kept or omitted based on your selections. Click the green Export button and allow some time for Kickbox to send this information to SendGrid. When the process has completed, you will receive an email letting you know. If you have questions about this process, reach out to [email protected].
https://docs.kickbox.com/docs/integration-with-sendgrid
2020-03-29T00:14:46
CC-MAIN-2020-16
1585370493121.36
[]
docs.kickbox.com
LMP90100 Sensor AFE Evaluation Board¶ Overview¶ The Texas Instruments LMP90100 Sensor AFE Evaluation Board (EVB) is a development kit for the TI LMP90xxx series of analog sensor frontends. Requirements¶ This shield can only be used with a development board that provides a configuration for Arduino connectors and defines a node alias for the SPI interface (see Shields for more details). The SPIO connector pins on the LMP90100 EVB can be connected to the Arduino headers of the development board using jumper wires. For more information about interfacing the LMP90xxx series and the LMP90100 EVB in particular, see these TI documents: Samples¶ Zephyr RTOS includes one sample targeting the LMP90100 EVB:
https://docs.zephyrproject.org/latest/boards/shields/lmp90100_evb/doc/index.html
2020-03-29T00:07:55
CC-MAIN-2020-16
1585370493121.36
[]
docs.zephyrproject.org
Fade (Stamp Fleck) Multiplies your flecks and fades them out toward the edges, giving you a bigger, layered stamp. With stamp fleck equipped, press and hold the secondary motion controller's square button. Twist the secondary motion controller away from the primary controller to increase the effect, and toward it to decrease it. Next: Opacity (Brush, Draw & Rule Flecks) Note: The Dreams User Guide is a work-in-progress. Keep an eye out for updates as we add more learning resources and articles over time.
https://docs.indreams.me/en/guide/dreams-workshop/motion-controller-gestures/modes/fade
2020-03-29T00:27:04
CC-MAIN-2020-16
1585370493121.36
[]
docs.indreams.me
All content with label archetype+as5+buddy_replication+cachestore+client+development+docs+expiration+gridfs+import+infinispan+loader+lock_striping+migration+mvcc+notification+store+xsd. Related Labels: publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, deadlock, jbossas, nexus, guide, schema, listener, cache, amazon, s3, grid, memcached, test, api, ehcache, maven, documentation, userguide, write_behind, ec2, hibernate, aws, interface, custom_interceptor, clustering, setup, mongodb, eviction, concurrency, out_of_memory, jboss_cache, index, events, configuration, hash_function, batch, write_through, cloud, remoting, tutorial, xml, read_committed, jbosscache3x, distribution, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, permission, transaction, async, interactive, xaresource, build, searchable, installation, scala, command-line, non-blocking, filesystem, jpa, tx, user_guide, gui_demo, eventing, shell, client_server, testng, infinispan_user_guide, standalone, webdav, hotrod, repeatable_read, batching, consistent_hash, jta, faq, 2lcache, jsr-107, docbook, lucene, jgroups, locking, rest, hot_rod more » ( - archetype, - as5, - buddy_replication, - cachestore, - client, - development, - docs, - expiration, - gridfs, - import, - infinispan, - loader, - lock_striping, - migration, - mvcc, - notification, - store, - xsd ) - There are no pages at the moment.
https://docs.jboss.org/author/label/archetype+as5+buddy_replication+cachestore+client+development+docs+expiration+gridfs+import+infinispan+loader+lock_striping+migration+mvcc+notification+store+xsd
2020-03-29T01:24:40
CC-MAIN-2020-16
1585370493121.36
[]
docs.jboss.org
Confessions of a Mac Switcher at Microsoft Hi. My name is Blair and I'm a switcher. I guess it all started around OS X. I'd been working in a Linux shop for a little while. On DOS and Windows before that. I was already hooked on the command line, but couldn't find a windowing environment that kept me satisfied. That's when a friend of mine first turned me on to OS X. All the power of the command line that I was used to from Linux but with the tastiest UI I'd ever seen! I was hooked after trying OS X just once. I've been using Macs and OS X ever since. I knew I'd need to get my own Mac if I was going to use OS X the way I wanted to. A friend sold me his used 500 MHz G4 TiBook for $500. The thing was all beat up: Cracked hinge, broken latch (replaced with Velcro), missing command key, combo drive that made a lot of scratching noise (not to mention scratched up discs), and a downgraded 2Gb hard drive. I was sold. I upgraded the hard drive and bought an external CD-RW drive. I had to install OS X from another Mac with my PowerBook in Firewire mode, but that's all the work I had to do and that first PowerBook is running great even today. My roommate and I used to joke about what a perfect Switcher ad our house would make. We counted the retired PCs in our attic: 14 in various configurations, but every last one of them powered down. We were doing all our work on one PowerBook each. We didn't need the recognition or the fame of an Ellen Feiss. Running OS X was all we really needed, and one PowerBook each to keep us happy. I ended up being recruited to Microsoft for the pizza. True story. I'd gone back to college and wouldn't normally have attended the recruiting event, but I was hungry and, hey - free pizza. I told everybody at Microsoft that I use Mac OS X. It was cool. Nobody judged. Everybody seemed to think it was all right. I even met a few other Microsoft people that use OS X the way I do. I went ahead with a 3-month internship and by the time I was done with that internship I was hooked on Microsoft as well. It's just an amazing company to work for. Whenever I'd talk about OS X, or Apple, or my PowerBook, people would just be interested in hearing another opinion. Microsoft's been super cool. I worked on Visual Studio for a few years. My old roommate kept sending me Microsoft job postings for MacBU. "If you're going to work at Microsoft, at least work for these guys," he'd say. "No way, man. I'm happy where I'm at," or, "It's cool here," I'd say. But my situation changed after a few years and I decided to give the MacBU a shot. Now I'm a Program Manager working on Mac PowerPoint. It's been great! MacBooks, Mac Pros and PowerPCs everywhere you look, and all the OS X I can handle. I'm hooked.
https://docs.microsoft.com/en-us/archive/blogs/macmojo/confessions-of-a-mac-switcher-at-microsoft
2020-03-29T01:33:06
CC-MAIN-2020-16
1585370493121.36
[]
docs.microsoft.com
. monitoring!).. This prevents the Sensu Go agent API and socket from conflicting with the Sensu and output to /sensu_config_translated # Option: translate your config in sections according to resource type sensu-translator -d /etc/sensu/conf.d -o /sensu_config_translated If translation is successful, you should see a few callouts followed by, Core filters is replaced with JavaScript expressions in Sensu Go, opening up powerful possibilities to combine filters with filter { "metadata": { "name": "hourly", "namespace": "default" }, "action": "allow", "expressions": [ "event.check.occurrences == 1 || event.check.occurrences % (3600 / event.check.interval) == 0" ], "runtime_assets": null } 4. Translate handlers In Sensu Go, all check results are considered events and are processed by event handlers. Use the built-in incidents.. Step 3: Translate plugins and register assets Sensu plugins Within the Sensu Plugins org, see individual plugin READMEs for compatibility status with Sensu Go. For handler and mutators plugins, see the Sensu plugins README to map event data to the Sensu Go assets Assets are shareable, reusable packages that make it easy to deploy Sensu plugins. Although
https://docs.sensu.io/sensu-core/1.5/migration/
2020-03-28T23:29:03
CC-MAIN-2020-16
1585370493121.36
[array(['../../../images/install-sensu.png', 'Sensu architecture diagram'], dtype=object) ]
docs.sensu.io
Connecting Reach to the Internet¶ Connect Reach to the Internet to update ReachView to the latest version or to get the corrections from your NTRIP service. Connecting to ReachView¶ Connecting to Wi-Fi¶ Go to the Wi-Fi tab Choose a Wi-Fi network Choose the available one if it’s visible If you can’t see your mobile hotspot, press Connect to a hidden network Fill in the connection form Steps for Wi-Fi network For a Wi-Fi network, fill in the password Steps for mobile hotspot For a mobile hotspot, fill in the Network name. Choose the Security type and add a password if you have one. Double check the password! You can unmask the password by clicking on the eye symbol at the end of the password field. Press the Connect button. The blue LED should start blinking on Reach Connecting process¶ Steps for Wi-Fi network If you are connecting to Wi-Fi, wait until the blue LED starts blinking slowly. Steps for mobile hotspot If you are connecting to a mobile hotspot, do the following steps: - Enable the Wi-Fi hotspot on your mobile device - Check that it has the same name and password as you filled in the previous step - Now reboot the device with the Power button - Reach should connect to your hotspot during the next boot If the connection is successful, you will see the blue LED blinking slowly If the connection fails, you will see the blue LED stay solid If the connection fails: - Connect to the Reach hotspot again - Check the entered password - Check that your network is configured correctly - Try another Wi-Fi network Go back to ReachView¶ Connecting to Reach with an iOS/Android device - Connect your mobile device to the same Wi-Fi network as Reach - Scan for available Reach devices - Choose Reach from the list in the app Connecting via a web browser from any device - Connect your device to the same Wi-Fi network as Reach - Use the Network Scan utility or the ReachView app to determine the Reach IP address - Go to the IP address in a web browser Once Reach is connected to Wi-Fi, you can:
https://docs.emlid.com/reachrs/common/quickstart/connecting-to-the-internet/
2020-03-29T00:21:53
CC-MAIN-2020-16
1585370493121.36
[array(['../img/quickstart/connecting-to-the-internet/hidden-network.gif', None], dtype=object) array(['../img/quickstart/connecting-to-the-internet/new-connection.gif', None], dtype=object) ]
docs.emlid.com
Getting Started If you're new to Sprout Invoices, start here! - Getting Started with Sprout Invoices - Payment Settings - Install and Activate Plugins/Add-ons - Translating Sprout Invoices via Localization (l10n) - Recurring Invoice, and Recurring Payments, and Sprout Billings Clarification - Edit Notification Templates - Can I Upgrade My License? - Two Plugins - Troubleshooting: Too Many Redirects After Installation - Finding your PayPal API Credentials - Troubleshooting Importing - Project Panorama Integration - Custom States and Countries - Translating Strings in Sprout Invoices - Change Taxes to GST - Upgrading to New PDF Service - Troubleshooting: Line Items Will Not Save - Custom States and Countries for Sprout Invoices - Change the default Currency Formatting - Adding a Phone Number to a Client Record
https://docs.sproutinvoices.com/category/13-getting-started
2020-03-29T00:12:17
CC-MAIN-2020-16
1585370493121.36
[]
docs.sproutinvoices.com
Adding Claim Dialects¶ In WSO2 Identity Server, there are two ways of adding a claim dialect. They are: Using the management console¶ Info The Dialect URI is a unique URI identifying the dialect (for example,). Click on the Add button. The claim dialect you added will appear on the list as follows. Using the configuration file¶ Follow the instructions below to add a new claim dialect through the configuration file. Note that you can only do this before the first start up of the WSO2 Identity Server instance. - Open the claim-config.xml file found in the <IS_HOME>/repository/conf/ folder. To add a new claim dialect, add the following configuration to the file along with the new claims you want to add under the dialect. For this example, the new claim dialect is named SampleAppClaims. <Dialect dialectURI=""> <Claim> <ClaimURI></ClaimURI> <DisplayName>First Name</DisplayName> <MappedLocalClaim></MappedLocalClaim> </Claim> <Claim> <ClaimURI></ClaimURI> <DisplayName>Nick Name</DisplayName> <MappedLocalClaim></MappedLocalClaim> </Claim> </Dialect> Once you have edited the claim-config.xml file, start WSO2 Identity Server. The configurations will be applied and you can view the new claim dialect via the management console. Note The claim dialects configured in the <IS_HOME>/repository/conf/claim-config.xml file. Related Links For information on how to add an external claim to this claim dialect, or add a local claim to the wso2 local claim dialect, see Adding Claim Mapping.
https://is.docs.wso2.com/en/next/learn/adding-claim-dialects/
2020-03-28T23:46:57
CC-MAIN-2020-16
1585370493121.36
[]
is.docs.wso2.com
Once you have completed your design, you will no doubt want to print it. KXStitch allows you to customize the printing, from what is printed to how big the final print is. This version of KXStitch allows a fully customized printing layout. The new printer configuration allows pages to be added, inserted or removed. The paper size and orientation can be selected and each page can contain text elements, pattern elements or key elements. For further details see the printer dialog
https://docs.kde.org/stable5/en/extragear-graphics/kxstitch/printingpatterns.html
2020-03-29T00:42:28
CC-MAIN-2020-16
1585370493121.36
[array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)]
docs.kde.org
TOPICS× Adobe Primetime DRM Server for Protected Streaming For streaming use cases where content is protected with Primetime DRM, such as for Adobe HTTP Dynamic Streaming, the software also includes Primetime DRM Server for Protected Streaming. This solution can be easily deployed on a servlet container such as Tomcat and can achieve a high level of scalability and performance to meet the largest content distribution needs.
https://docs.adobe.com/content/help/en/primetime/drm/drm-sdk-5-3-1/adobe-access-components/protected-streaming.html
2020-03-29T01:18:20
CC-MAIN-2020-16
1585370493121.36
[]
docs.adobe.com
TOPICS× Adobe Experience Cloud release notes - November. November 2018 Latest update: December 4, 2018 Adobe Cloud Platform Release notes for the Experience Cloud interface and platform core services. Includes Mobile Services, Launch, by Adobe, Dynamic Tag Management, GDPR API, and Experience Cloud ID Service. Dynamic Tag Management Adobe plans to sunset Dynamic Tag Management by the end of 2020. - 2019-07-01: Starting in July of next year, DTM will no longer allow the creation of new properties. This will have no impact on existing properties. - 2020-07-01: DTM properties will enter a read-only mode. You will not be able to create or edit tools, rules, or data elements. You will no longer be able to publish to DTM environments. This will have no impact on previously published libraries. - 2020-12-31: End of support. Servers will be decommissioned, documentation will go offline, and communities will be removed. This will have no impact on previously published libraries. For more information, see DTM Plans for a Sunset . Analytics Cloud Analytics New features and fixes in Adobe Analytics: For product documentation, see Analytics Help Home . Analysis Workspace Media Analytics SDK for iOS & Android (formerly VHL SDK) Analytics Fixes and Updates Analysis Workspace Fixed an issue where in Create metric from selection and Compare attribution models , the Percent Change calculated metric was incorrect. (AN-170471) Other Analytics Fixes - Calculated Metrics: Fixed an issue related to copying calculated metric parameters. (AN-169648) - Calculated Metrics: Fixed a localization issue in the calculated metric preview. (AN-165086) Data Workbench See Data Workbench release notes for the latest information. Important Notices for Analytics administrators Audience Manager New features and fixes in Adobe Audience Manager. Fixes, enhancements, and deprecations We updated the name of the Outbound History Report to Outbound File History Report. The previous name caused some customers to think the report would show outbound data for HTTP destinations. In fact, the report covers files delivered to S3 or FTP locations. Known issues - The latest version of Safari includes Intelligent Tracking Prevention (ITP) 2.0 tools. This affects Addressable Audience metrics for your Safari users and data collection using the h_referer signal. Read about Safari traffic as a Cause of Low Match Rates for Addressable Audiences and data collection using the h_ prefix . - The release of Trait Exclusions in Algorithmic Modeling introduced an issue for customers using Role-Based Access Controls . When creating a new model, if you only select data sources that you have access to, you can see their corresponding traits in the Exclusions window. However, if you select any other data sources than the ones you have access to, in addition to the ones that you have access to, you will see a blank list. Unselect the data sources that you don't have access to in order to see the traits. (AAM-42380) Documentation Updates - We added definitions and examples for all the metrics in the General Reports. Read our General Reports documentation . - We updated the Addressable Audience documentation to clarify the difference between customer-level and segment-level metrics. Read our Addressable Audience documentation . Marketing Cloud Experience Manager New features, fixes, and updates in Adobe Experience Manager. Adobe recommends customers with on-premise deployments to deploy the latest patches to ensure higher stability, security, and performance. 
Product releases - AEM Dispatcher 4.3.1 Adobe strongly recommends using the latest version of AEM Dispatcher to avail the latest functionality, the most recent bug fixes, and the best possible performance. See the AEM Dispatcher Release Notes . Self help - Configuring AEM Assets integration with Experience Cloud and Creative Cloud The documentation for configuring the AEM Assets integration with Experience Cloud and Creative Cloud has been updated. If you use this integration, Adobe recommends that you update the configuration to point to experiencecloud.adobe.com instead. Community - Experience League: Fast-track your Adobe Experience Cloud Expertise Learning any new software can be challenging. Even after the initial training, there may be a lot to learn and you may not know where to start or who to ask for help. When it comes to learning Adobe Experience Cloud , you can expect a different experience. Unlike any other guided learning program out there, our Experience League enablement program is uniquely tailored to your individual needs—and it's free for everyone. Experience League just launched Business Essentials for most of the solutions in Experience Cloud (two for AEM). It is also coming up with Implementation essentials very soon. For more information see the following: . Target Refer to the Adobe Target Release Notes for the latest release information about the following products: - Target Standard - Target Premium - Recommendations Classic Campaign Adobe Campaign provides an intuitive, automated way to deliver one-to-one messages across online and offline marketing channels. You can now anticipate what your clients want using experiences determined by their habits and preferences. Adobe Campaign Classic 18.10 Availability: November 5, 2018 For product documentation, see:
https://docs.adobe.com/content/help/en/release-notes/experience-cloud/previous/2018/11012018.html
2020-03-28T23:42:32
CC-MAIN-2020-16
1585370493121.36
[]
docs.adobe.com
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here. Class: Aws::Chime::Types::CreateAttendeeRequestItem - Defined in: - (unknown) Overview Note: When passing CreateAttendeeRequestItem as input to an Aws::Client method, you can use a vanilla Hash: { external_user_id: "ExternalUserIdType", # required } The Amazon Chime SDK attendee fields to create, used with the BatchCreateAttendee action. Instance Attribute Summary collapse - #external_user_id ⇒ String The Amazon Chime SDK external user ID. Instance Attribute Details #external_user_id ⇒ String The Amazon Chime SDK external user ID. Links the attendee to an identity managed by a builder application.
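This page documents the Ruby SDK v2 type. Purely for comparison, the sketch below shows the same attendee item used with BatchCreateAttendee via the Python SDK (boto3); the meeting ID and external user ID are placeholder values.

```python
# Comparison sketch using the Python SDK (boto3), not the Ruby SDK documented
# here. The attendee item carries the same ExternalUserId field described above.
import boto3

chime = boto3.client("chime")

response = chime.batch_create_attendee(
    MeetingId="placeholder-meeting-id",
    Attendees=[
        # Links the attendee to an identity managed by your application.
        {"ExternalUserId": "builder-app-user-123"},
    ],
)
print(response["Attendees"])
```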
https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/Chime/Types/CreateAttendeeRequestItem.html
2020-03-29T00:59:16
CC-MAIN-2020-16
1585370493121.36
[]
docs.aws.amazon.com
Configuring and Using the HBase REST API You can use the HBase REST API to interact with HBase services, tables, and regions using HTTP endpoints. Installing the REST Server Installing the REST Server Using Cloudera Manager Minimum Required Role: Full Administrator - Click the Clusters tab. - Click the Instances tab. - Click Add Role Instance. - Under HBase REST Server, click Select Hosts. - Select one or more hosts to serve the HBase Rest Server role. Click Continue. - Select the HBase Rest Server roles. Click. Using the REST API The HBase REST server exposes endpoints that provide CRUD (create, read, update, delete) operations for each HBase process, as well as tables, regions, and namespaces. For a given endpoint, the HTTP verb controls the type of operation (create, read, update, or delete). These examples use port 20050, which is the default port for the HBase REST server when you use Cloudera Manager. If you use CDH without Cloudera Manager, the default port for the REST server is 8080.
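As a quick sanity check that the REST server is responding (an illustration, not part of the Cloudera documentation), you can query a couple of read-only endpoints with Python's requests library; the hostname below is a placeholder, and port 20050 is the Cloudera Manager default mentioned above (use 8080 without Cloudera Manager).

```python
# Minimal sketch: read-only calls against the HBase REST server.
import requests

BASE = "http://rest-server.example.com:20050"  # placeholder host, default CM port

# Cluster version (GET /version/cluster)
print(requests.get(f"{BASE}/version/cluster", headers={"Accept": "text/plain"}).text)

# List tables as JSON (GET /)
print(requests.get(f"{BASE}/", headers={"Accept": "application/json"}).json())
```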
https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/admin_hbase_rest_api.html
2020-03-28T22:56:42
CC-MAIN-2020-16
1585370493121.36
[]
docs.cloudera.com
OKD Installation and Configuration topics cover the basics of installing and configuring OKD in your environment. Configuration, management, and logging are also covered. Use these topics for the one-time tasks required to quickly set up your OKD environment and configure it based on your organizational needs. For day-to-day cluster administrator tasks, see Cluster Administration.
https://docs.okd.io/1.2/install_config/index.html
2020-03-29T01:05:17
CC-MAIN-2020-16
1585370493121.36
[]
docs.okd.io
Table of Contents Product Index A Gothic inspired set of original pattern Iray shaders for Daz Studio in a colour palette of purples, orange, red and black and white. Each pattern was created from scratch in Illustrator and Photoshop, designed to coordinate together to give you lots of mix and match options when using them. Included are lace, texture (bumps), colour and sheen presets for added versatility. It includes a set of fabric weaves (both bump and normal maps), sheens, colour, lace and tiling preset options. I have also created a set of coordinating simple linen and knit presets and a Photoshop Pattern file of the patterns. Apply them to clothing, furniture, walls, soft furnishings and props. Includes a PDF with instructions and tips and tricks. These Iray Shaders can be used as a Merchant Resource to create your own add-on texture sets for sale. Please be sure to read the full Terms.
http://docs.daz3d.com/doku.php/public/read_me/index/23215/start
2020-03-29T00:47:16
CC-MAIN-2020-16
1585370493121.36
[]
docs.daz3d.com
State Management with PHP Dapr offers a great modular approach to using state in your application. The best way to learn the basics is to visit the howto. Metadata Many state components allow you to pass metadata to the component to control specific aspects of the component’s behavior. The PHP SDK allows you to pass that metadata through: <?php $app->run( fn(\Dapr\State\StateManager $stateManager) => $stateManager->save_state('statestore', new \Dapr\State\StateItem('key', 'value', metadata: ['port' => '112']))); This is an example of how you might pass the port metadata to Cassandra. Every state operation allows passing metadata. Consistency/concurrency In the PHP SDK, there are four classes that represent the four different types of consistency and concurrency in Dapr: <?php [ \Dapr\consistency\StrongLastWrite::class, \Dapr\consistency\StrongFirstWrite::class, \Dapr\consistency\EventualLastWrite::class, \Dapr\consistency\EventualFirstWrite::class, ] Passing one of them to a StateManager method or using the StateStore() attribute allows you to define how the state store should handle conflicts. Parallelism When doing a bulk read or beginning a transaction, you can specify the amount of parallelism. Dapr will read “at most” that many keys at a time from the underlying store if it has to read one key at a time. This can be helpful to control the load on the state store at the expense of performance. The default is 10. Prefix Hardcoded key names are useful, but why not make state objects more reusable? When committing a transaction or saving an object to state, you can pass a prefix that is applied to every key in the object. <?php class TransactionObject extends \Dapr\State\TransactionalState { public string $key; } $app->run(function (TransactionObject $object ) { $object->begin(prefix: 'my-prefix-'); $object->key = 'value'; // commit to key `my-prefix-key` $object->commit(); }); <?php class StateObject { public string $key; } $app->run(function(\Dapr\State\StateManager $stateManager) { $stateManager->load_object($obj = new StateObject(), prefix: 'my-prefix-'); // original value is from `my-prefix-key` $obj->key = 'value'; // save to `my-prefix-key` $stateManager->save_object($obj, prefix: 'my-prefix-'); });
https://docs.dapr.io/developing-applications/sdks/php/php-state/
2021-10-15T23:05:32
CC-MAIN-2021-43
1634323583087.95
[]
docs.dapr.io
JDocumentRenderer is an abstract class which provides a number of methods and properties to assist in rendering a particular document type. Not all document types implement renderers in this way. Some of the methods listed will be overridden by the child class so you should check the child class documentation for further information. Defined in libraries/joomla/document/renderer.php Methods Importing jimport( 'joomla.document.renderer' ); Examples Code Examples.
https://docs.joomla.org/API16:JDocumentRenderer
2021-10-16T00:00:55
CC-MAIN-2021-43
1634323583087.95
[]
docs.joomla.org
You are here: Working with FME Desktop > Working with FME Server > Publishing to FME Server > Upload Connections Upload Connections If your workspace contains any database or web connections, use the Upload Connections dialog to specify the connections you want to upload. Unless previously uploaded, you must upload any connections you want to reference when the workspace runs from FME Server. Check the box for each connection you want to upload. Click Next to proceed to Register Services.
https://docs.safe.com/fme/2017.1/html/FME_Desktop_Documentation/FME_Workbench/Workbench/Upload-Connections.htm
2021-10-16T00:02:20
CC-MAIN-2021-43
1634323583087.95
[array(['../Resources/Images_WB/Upload_to_Server_Verify_Connections_rev.png', None], dtype=object) ]
docs.safe.com
Heavily fragmented indexes degrade the performance of your database and the applications running on it. Resolve index fragmentation by reorganizing or rebuilding an index. Fragmentation Manager automatically collects table and index information, analyzes the data, takes the appropriate reorganization or rebuild operations, and then performs post defragmentation analysis. Fragmentation Manager has a dedicated tab in SQL Sentry, Indexes. The Indexes tab displays index related statistics and charts, from the target level down to the individual index level, giving you a complete view of the fragmentation levels on your server. Having this information allows you to make intelligent decisions about index management in your environment such as when and how to perform defragmentation operations, when to adjust fill factors, or when an index definition should be changed. Set different schedules for instances and databases down to the individual table or index level, giving you complete granular control over any defragmentation actions. Additionally, set specific schedules for rebuilds or reorganizations explicitly. Several additional settings are available to help you calibrate the actions SQL Sentry takes, including: - The ability to set the scan level or mode that's used to obtain fragmentation statistics. - The ability to set minimum and maximum index size thresholds for the collection of fragmentation data. - The ability to set reorganization and rebuild fragmentation threshold percentages. - All index defragmentation settings work within the normal SQL Sentry hierarchy meaning that settings can be configured at one level and are automatically inherited by objects below it, allowing for easy automation within your environment. For a complete explanation of all the available settings, see the Fragmentation Manager Settings section. Enabling Fragmentation Manager Enable Fragmentation Manager through the right-click context menu of any instance or by opening the Indexes tab of SQL Sentry and selecting Enable Now. The first time you enable it, the Fragmentation Manager Wizard displays. Note: In versions 2020.8.31 and later, this feature is enabled by default for all targets, for new users. It may be disabled manually. Fragmentation Manager Wizard Options Selecting a Schedule To select a schedule to be used for analysis and/or defragmentation, choose a pre-existing schedule, or select the New command to create a new schedule. Select Next to confirm your settings, and then select Finish to complete the Wizard. Fragmentation Manager Related Settings The following are two groups of settings relevant to Fragmentation Manager: - Database Source settings—Used to configure the general collection of table and index information, including size collection thresholds, buffer collection thresholds, and index partition options. - Index Defragmentation settings—Used to configure the defragmentation and analysis operations, including scheduling, setting index reorganization, and rebuild thresholds. Database Source Settings Index Defragmentation Settings After you enable Fragmentation Manager the Index Defragmentation settings are accessed through the Settings pane. Index Defragmentation settings are configured at the following levels: global (All Targets), sites, target group, target, instance, database, table, or at the individual index. 
To configure the Index Defragmentation settings for a specific instance, complete the following steps: - Select the instance in the Navigator (View > Navigator), and then open the Settings pane (View > Settings). - Select Index Defragmentation from the bottom drop-down menu to configure your settings. Index Defragmentation Settings Note: Indexes that can't be rebuilt online, such as those with LOB columns, are never defragmented by a Rebuild schedule if Use Online Rebuild is set to True except within a specified Offline Rebuild Window. Otherwise, they are only analyzed. Note: When there are multiple partitions, Fragmentation Manager works on one partition at a time in a serial fashion. Parallelize the Index analysis and defragmentation in Fragmentation Manager by using one of the following methods: - Build multiple schedules that can be scheduled to run in an overlapping manner. - Increase the Maximum Concurrent Operations. Manual Fragmentation Operations Fragmentation Operations can also be initiated manually. Within the Navigator use the right-click context menu of any database, table, or index to initiate fragmentation operations, including analysis, reorganizations, or rebuilds. Manual Fragmentation Operations can also be initiated within the Indexes tab. From the Grid/Tree view found in the center of the screen, use the context menu of any database, table, or index to access fragmentation operations. Fragmentation Alert Conditions The following fragmentation related conditions are available to configure actions for: - Defragmentation Completed - Defragmentation Started - Defragmentation Failure To configure a fragmentation related condition, complete the following steps: - Select the node appropriate to the level you'd like to configure the action for in the Navigator pane (View > Navigator), and then open the General Conditions section in the Conditions pane (View > Conditions). - Select Add in the Conditions pane to open the Actions Selector window. - Expand the Index actions and then the appropriate condition. Use the check box(es) to select which actions should be taken in response to this condition being met. Select OK to save your setting. For more information about actions that can be taken when a condition is met, see the Actions topic. Indexes Tab For more information about the charts and statistics displayed in the Indexes tab, see the Indexes topic. Database Space Usage Band By enabling the Fragmentation Manager, the functionality of the Disk Space tab is also enhanced by providing additional information regarding the space usage of indexes. For more information, see the Disk Space topic.
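For readers who want to see what Fragmentation Manager is automating, here is a hedged T-SQL sketch of the equivalent manual work; the 5% and 30% thresholds and the dbo.MyTable name are illustrative placeholders, not SQL Sentry defaults.

```sql
-- Check fragmentation for indexes in the current database (LIMITED scan mode).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5;  -- illustrative threshold

-- Reorganize moderately fragmented indexes; rebuild heavily fragmented ones.
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;                  -- e.g. 5-30% fragmentation
ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (ONLINE = ON);  -- e.g. over 30% fragmentation
```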
https://docs.sentryone.com/help/fragmentation-manager
2021-10-16T00:49:28
CC-MAIN-2021-43
1634323583087.95
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/614393617011e8cc737b2484/n/sql-sentry-fragmentation-manager-wizard-welcome-prompt-202112.png', 'SQL Sentry Indexes tab select Enable Now to open the Fragmentation Manager Wizard Version 2021.12 Indexes tab select Enable Now to open the SQL Sentry Fragmentation Manger Wizard'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/6143948e7011e837757b23d7/n/sql-sentry-fragmentation-manager-wizard-options-202112.png', 'SQL Sentry Fragmentation Manager Wizard Options Version 2021.12 Fragmentation Manager Wizard Options'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/6143975a4cadb1bb007b24a3/n/sql-sentry-fragmentation-manager-wizard-select-schedule-202112.png', 'SQL Sentry Fragmentation Manager Wizard New Schedule Version 2021.12 Fragmentation Manager Wizard New Schedule'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/614398884cadb12c027b23d4/n/sql-sentry-fragmentation-manager-confirm-settings-202112.png', 'SQL Sentry Fragmentation Manager Wizard Confirm Settings Version 2021.12 Fragmentation Manger Wizard Confirm Settings'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e5d7e95ec161c4840ae48da/n/sentryone-indexes-tab-200.png', 'SQL Sentry Indexes tab Version 2021.12 SQL Sentry Indexes tab'], dtype=object) ]
docs.sentryone.com
Managing Overtime Contents Enable supervisors to plan and track overtime hours for a single agent or multiple agents by specifying time intervals and activities. In WFM Web for Supervisors, you can plan, define, and track how overtime requirements are met in the Master Schedule > Overtime view. To allow user access to this view, you must first set the user's security rights in the Configuration module. Select either Roles > Role Privileges > Access Overtime Requirement or Users > Role Privileges > Access Overtime Requirement. Overtime data is displayed in a grid, which has an editable Overtime Requirement column and a read-only Overtime Scheduled column. The Overtime Scheduled column is calculated based) Setting overtime for agents Purpose: To add an overtime work set for individual agents by using the Insert Multiple Wizard in WFM Web. Start of Procedure - In the Intra-Day, Agent Extended, or Weekly view, select Insert Multiple from one of the following: - Actions toolbar - Actions menu - On the agent's schedule, Right-click and select the Shortcut menu (not in Weekly view) - If you have unsaved changes, WFM Web prompts you to save them before proceeding. - In the Insert Multiple Wizard, select Insert Work set. - Create a new overtime work set by selecting Marked Time for the designated work type. End of Procedure Tracking scheduled overtime results You can use the following views to track scheduled overtime results: - Schedule > Schedule Scenarios > Intra-Day view in the Performance Data pane - Schedule > Master Schedule > Intra-Day view in the Performance Data pane - Schedule > Master Schedule > Overtime view The Performance Data pane separates the scheduled overtime part of the coverage within the calculated staffing graph and distinguishes it from the overtime requirement. See the figure below. Figure: Calculated Staffing Graph—Full Day View Overtime bidding Supervisors with the appropriate role privileges can create Overtime Offers for a specific activity or multi-site activity in Web for Supervisors' Overtime Bidding view in the Schedules module. Offers are associated with the Master Schedule only and WFM uses the offer properties and other data in the automatic overtime scheduling process—the other data being the overtime slots chosen by agents when they are bidding on open offers. Supervisors must have the Access Overtime Requirements and Overtime Bidding role privileges for the Schedule module. Then, they can track the overtime requirement that is currently scheduled, by checking the Marked Time that is denoted as "mark as overtime" in the agents' schedule for the activity or multi-site activity associated with the Overtime Offer. Agents use the Bidding module in Web for Agents to see open schedules bidding scenarios or overtime offers, on which to bid. When they select an overtime offer, they can then choose the overtime slots they want to be added to their schedules. Agents see overtime offers only after the supervisor marks the offer as "open" and only if the supervisor has associated the agent and his/her site with the offer. Continuous processing of overtime offers You can enable WFM to continuously process Overtime offers, which have FIFO selected for agent bid ranking criteria. To enable this functionality, contact your Genesys representative. For more information about Overtime bidding, see the Workforce Management Web for Supervisors Help and Workforce Management Agent Help. 
For information about WFM Role Privileges, see the Assigning Roles to Users topic and the PEC-WFM/Current/Supervisor/SchdlRlP page in the Workforce Management Web for Supervisors Help.
https://all.docs.genesys.com/PEC-WFM/Current/Administrator/OTFnct
2021-10-16T00:37:23
CC-MAIN-2021-43
1634323583087.95
[array(['/images-supersite/6/6e/WM_812_overtime_rqmts_granularity.png', 'WM 812 overtime rqmts granularity.png'], dtype=object) array(['/images-supersite/d/dc/WM_812_shift_assignment_state.png', 'WM 812 shift assignment state.png'], dtype=object) array(['/images-supersite/2/28/WM_812_config_agents.png', 'WM 812 config agents.png'], dtype=object) array(['/images-supersite/e/ed/WM_812_overtime_advanced_fd_view.png', 'WM 812 overtime advanced fd view.png'], dtype=object) ]
all.docs.genesys.com
Overview As the saying goes, "Data is eating the software that is eating the world," and data professionals need a rich, coherent and cohesive ecosystem to build data-driven applications. Welcome to Composable DataOps Platform, the industry's leading Intelligent DataOps solution. Data professionals - software developers, data engineers, data scientists, analysts, IT and business users - are responsible for delivering and leveraging data products. And these data professionals are increasingly adopting DevOps and DataOps best practices through end-to-end platforms like Composable, to quickly and easily operationalize their data intelligence solutions. Composable is a full-stack data operations platform that is a proven technology for operationalizing AI and ML applications. Why DataOps Matters “28% of applications do not make it into production” - Gartner Research In this section, we will review the main structure, navigation and product areas of the Composable DataOps Platform.
https://docs.composable.ai/en/latest/Composable-Platform/01.Overview/
2021-10-16T01:04:48
CC-MAIN-2021-43
1634323583087.95
[]
docs.composable.ai
SaveTemp Builder to save temporary files - class disseminate.builders.save_temp.SaveTempFile(env, context, save_ext=None, **kwargs) A Builder to save a string to a temporary file - Parameters - env : :obj:`.builders.Environment` The build environment - parameters, args : Tuple[pathlib.Path, str, tuple, list] The input parameters (dependencies), including the string to save to the temporary outfilepath, for the build - outfilepath : Optional[paths.TargetPath] If specified, the path for the output file. - save_ext : str The extension for the saved temporary file.
https://docs.dissemia.org/projects/disseminate/en/latest/api/builders/save_temp.html
2021-10-15T22:55:24
CC-MAIN-2021-43
1634323583087.95
[]
docs.dissemia.org
Views The Scheduler supports different views to display its events. Default Views The Scheduler provides the following built-in views: day—Displays the events in a single day. week—Displays the events in a whole week. workWeek—Displays the events in a work week. month—Displays the events in a single month. year—Displays the events in a twelve months period. agenda—Displays the events from the current date until the next week (seven days). timeline—Displays the events for the day in line. timelineWeek—Displays the events in a whole week in line. timelineWorkWeek—Displays the events in a work week in line. timelineMonth—Displays the events for a month in line. By default, the Day and Week views are enabled. To enable other views or configure them, use the views option. The built-in Scheduler views are designed to render a time-frame that ends on the day it starts. To render views which start on one day and end on another, build a custom view. The following example demonstrates how to enable all Scheduler views. <div id="scheduler"></div> <script> $("#scheduler").kendoScheduler({ date: new Date("2013/6/6"), views: [ "day", // A view configuration can be a string (the view type) or an object (the view configuration). { type: "week", selected: true }, // The "week" view will appear as initially selected. "month", "year", "agenda" ], dataSource: [ { id: 1, start: new Date("2013/6/6 08:00 AM"), end: new Date("2013/6/6 09:00 AM"), title: "Breakfast" }, { id: 2, start: new Date("2013/6/6 10:15 AM"), end: new Date("2013/6/6 12:30 PM"), title: "Job Interview" } ] }); </script> Custom Views The Scheduler enables you to create custom views which meet the specific project requirements by extending the default View classes of the Scheduler. To implement a custom view, extend (inherit from) one of the existing views. The following source-code files contain the views implementation: kendo.scheduler.view.js—Contains the basic logic of the Scheduler views. Each of the other predefined views extends the kendo.ui.SchedulerViewclass. kendo.scheduler.dayview.js—Contains the logic which implements the MultiDayView. The MultiDayViewclass is further extended to create the DayView, the WeekView, and the WorkWeekView. kendo.scheduler.monthview.js—Contains the implementation of the MonthViewwhich extends the SchedulerView. kendo.scheduler.yearview.js—Contains the implementation of the YearViewwhich extends the SchedulerView. kendo.scheduler.timelineview.js—Implements the TimelineView, the TimelineWeekView, the TimelineWorkWeekView, and the TimelineMonthView. The TimelineWeekView, the TimelineWorkWeekView, and the TimelineMonthViewextend the TimelineViewclass. kendo.scheduler.agendaview.js—Implements the AgendaViewwhich extends the SchedulerView. You can override each method and property that are defined in the list by extending the respective class. In this way, the functionality and the appearance of the view will be altered by creating the new, custom view. For more information, refer to the kendo.scheduler.dayview.js and kendo.scheduler.timelineview.js files which contain definitions of views which extend the already defined MultiDayView and TimelineView views.
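As a rough sketch of the extension pattern described above (simplified, and not taken verbatim from the Kendo sources), a three-day custom view might look like this; the view name and the members overridden here are illustrative assumptions.

```js
// Illustrative sketch: a custom view that extends MultiDayView.
var ThreeDayView = kendo.ui.MultiDayView.extend({
    name: "threeDay",
    options: { name: "threeDay" },
    calculateDateRange: function () {
        // Show the selected date plus the following two days.
        var start = this.options.date, dates = [];
        for (var i = 0; i < 3; i++) {
            dates.push(start);
            start = kendo.date.nextDay(start);
        }
        this._render(dates);
    }
});

$("#scheduler").kendoScheduler({
    date: new Date("2013/6/6"),
    // Passing the view constructor directly is assumed here.
    views: ["day", { type: ThreeDayView, title: "3 Days", selected: true }]
});
```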
https://docs.telerik.com/kendo-ui/controls/scheduling/scheduler/views
2021-10-15T23:49:10
CC-MAIN-2021-43
1634323583087.95
[]
docs.telerik.com
Folders Plus provides additional capabilities not available in open source Folders, including the ability to: restrict the use of Controlled agents Folders Plus controlled agents can restrict the jobs that may execute on an agent. By default, jobs may be assigned to any labeled agent. This can be a problem when specific agents contain secrets or are intentionally task-specific. Consider the case of a Jenkins instance shared by an operations group and multiple developer groups. The operations group has specific credentials and tools that deploy the company website into production. The operations group has configured a dedicated agent with those credentials and tools. The agent is configured to only accept jobs assigned to that specific agent. However, without Folders Plus, a developer could configure a job to run on the operations agent and copy the credentials or the tools. Folders Plus controlled agents allow the operations group to limit their task-specific agent to jobs from specific folders. Configuring controlled agents The first step is to configure the agent to only accept jobs from within approved folders. Agents configured to only accept jobs from within approved folders will refuse to build jobs outside those folders. Open the configure screen of the agent and select the Only accept builds from approved folders option. At this point the person with permissions to configure the folder needs to create a request for a controlled agent. They need to open the folder in Jenkins and select the Controlled Agents action. This will display the list of controlled agents assigned to the folder. We want to add a new controlled agent so select the Create request action. Confirm the request creation and the Request Key should be generated and displayed. The request key needs to be given to the person with permissions to configure the controlled agent. At this point the person with permissions to configure the agent needs to approve the request. They need to select the Approved Folders action from the agent and select the Controlled Agents action. If there are no existing tokens for approving controlled agent requests you will need to create a new token by selecting the Create token action and confirming the creation otherwise the authorize button on any existing token can be used to authorize a request using that token. Either route should display the Authorize Request screen. Enter the Request Key and press the Authorize button. The Request Secret should be generated and displayed. The request key needs to be given to the person with permissions to configure the folder. At this point the person with permissions to configure the folder needs to return to the request screen and enter the Request Secret provided by the person responsible for configuring the agent and click the Authorize button. Once the controlled agent request has been completed, the controlled agent should appear in the list of agents and the folder should appear on the agent. Troubleshooting Issues with third-party plugins In order to enforce that only jobs belonging to approved folders can be built on a controlled agent it is necessary to identify the job that owns any build submitted to the build queue of Jenkins. Once the job has been identified, the folder hierarchy can be traversed to see if any parent folder is on the list of approved folders. Once of the extension points that Jenkins provides is the ability to define custom build types that can be submitted to the build queue. 
When a plugin author is developing such a plugin, they should ensure that the owning build type hierarchy eventually resolves to the build job to which the task belongs whenever a custom build type is associated with a build job. In the event that a third-party plugin author has not correctly ensured that their custom build type is correctly associated with its originating job, the controlled agents functionality will be unable to identify the originating job, and consequently will refuse to let the job build on the controlled agent. If it is critical to enable the third-party plugin to build on the agent, and the administrator accepts the risk of allowing other custom build types to run on the controlled agent, there is an advanced option that can be enabled on a per-agent basis. Issues with builds being blocked forever There are multiple reasons why Jenkins can block a build from executing. The controlled agent functionality adds an additional reason, however, it should be noted that when Jenkins evaluates a job for building, once it has identified a reason why the job cannot be started, it does not evaluate any of the other reasons. Therefore when Jenkins reports the reason as to why the build is blocked, it will only report the first reason it encountered. In such cases, the user may not realize that the reason for the build being blocked is because it is not within an approved folder. Jenkins does not re-evaluate whether builds are still blocked until an event causes a probability of jobs being executed. Such events include, but are not limited to A job being added to the build queue A job being removed from the build queue A job completing execution A node coming on-line A node going off-line If a job is submitted to the build queue and is blocked because it is not in an approved folder, it will not be re-considered for execution until one of the re-evaluation triggers occurs. Therefore, even if the folder containing the job is added to the list of approved folders, existing blocked builds will not become unblocked until a queue re-evaluation trigger is fired. Other features There are other smaller features available in Folders Plus. - Health reports for entire folders Icons for status reports and other Environment variables for all jobs in the folder - - Moving You can use the Move action to move a job, or entire subfolder, from one location to another. The configuration, build records, and so on will be physically relocated on disk. Beware that many kinds of configuration in Jenkins refer to jobs by their relative or full path. If you move a job to a different folder, you may also need to update configuration that was referring to that job. Health reports The base Folders plugin has a simple way of displaying a health report next to a folder in the dashboard: it simply indicates the worst status (such as failing) of any job in the folder. Folders Plus offers some alternate metrics: the average health of jobs within the folder whether some jobs are disabled counts of jobs with different statuses, such as stable or unstable Icons Normally folders all have a fixed "folder" icon. Folders Plus allows you to customize this with other kinds of icons: a blue, yellow, or red ball corresponding to the most broken job in the folder a variety of icons packaged with Jenkins, such as that for a "package" any icon you like, if you provide a URL Environment variables You may configure an Environment variables property on a folder giving a list of variable names and values. 
These will be accessible as environment variables during builds. This is a good way to define common configuration to be used by a lot of jobs, such as SCM repository locations. Newer versions of Jenkins also allow these variables to be used during SCM polling or other contexts outside of builds. Note: Folder environment variables will not override global environment variables (set at Manage Jenkins → Configure System → Environment variables), but Folder environment variables will override any parent Folder environment variables. List view column The Pull information from nested job list view column may be selected when configuring a view on a folder or at top level in Jenkins. This column allows you to display some aspect of a job (available as another kind of list view column) for every subfolder being displayed. For example, you might have a dozen folders corresponding to different products, each containing several jobs with predictable names: build, release, and so on. You can create a view at the Jenkins root which contains all these folders as entries. Then add this kind of list view column, specifying the job name build and the standard Status list view column. Now your custom view will display all of the product folders, and for each will show a blue, yellow, or red ball corresponding to the current status of the build job in that folder.
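To show how jobs consume the folder-level environment variables described above, here is a hedged Declarative Pipeline sketch; SCM_BASE_URL is a hypothetical variable name that would be defined in the folder's Environment variables property.

```groovy
// Sketch: a Pipeline job inside the folder reads the folder-level variable
// exactly like any other environment variable.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // SCM_BASE_URL is assumed to be set on a parent folder.
                sh 'git clone "${SCM_BASE_URL}/my-component.git"'
            }
        }
    }
}
```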
https://docs.cloudbees.com/docs/cloudbees-ci/latest/cloud-secure-guide/folders-plus
2021-10-15T23:36:38
CC-MAIN-2021-43
1634323583087.95
[]
docs.cloudbees.com
Using the Transactions Page The Transactions page in MemberPress shows you all of your member's transactions. To customize what you see on this page, use the Screen Options found at the top of the page: Hint: Use the Pagination option (as shown above) to enter how many transactions you'd like to show per page. This will allow you to see more transactions per page and avoid scrolling through pages of transactions. Filter Options You can use the 'Filter by' option to filter the Transactions based on like items. For example, click Gateway to sort by gateway, or Status to sort by the status of your Transactions. Transactions default to display the most recent by Created On date at the very top of the list down to the oldest. Search Options Use the search box to search for anything on the Transactions page. Use the 'by Field' option to select what you are searching for. The search options available are: Transaction (meaning the Transaction ID given by the gateway), Subscription (meaning the Subscription ID given by the gateway), Username, Email, ID, or Any. For accuracy, be sure to carefully enter what you are searching for in the search box without any spaces before or after the entry. Transactions Table Column Options Id - Internal ID of the Transaction. Transaction - The unique ID of the Transaction as given by the gateway. Hovering over this will reveal the following clickable options: Edit - Click to be taken to the edit page of the Transaction. Send Receipt - Click to send the Payment Receipt email found in your MemberPress Options > Emails tab. Refund - Click to process a refund for the transaction. Important note: this option will only work if you have correctly configured your gateway. Delete - Click to delete the transaction. Doing so will NOT refund the transaction, but it will likely cause the user's subscription to become inactive if this was the latest transaction. Subscription - The unique ID of the Subscription as given by the gateway that the Transaction is tied to. Click this to be taken to the user's Subscription in the MemberPress > Subscriptions page. Status - Shows whether or not the Transaction is completed. There are four possible items you can see here: Pending, Failed, Refunded, or Complete. However, changing the status here will not change anything on the level of the gateway. For example, changing the status to refunded like this, will not refund the transaction at all. Membership - Shows which Membership the Transaction is associated with. Click the Membership's name to be taken to the edit page of that membership. Net - Shows the net value of the transaction - the Total minus any Tax charged. Tax - Shows any Tax collected for the Transaction. Total - Shows the total amount that the user paid, including tax. Name - The user's name, if entered. User - The username of the user who purchased the Transaction. Click the username to be taken to the user's profile in WordPress. Gateway - This is the payment method the member used when purchasing the Membership the Transaction is associated with. Created On - This is the date that the Transaction was created. Expires On - This is the date that the Transaction will expire. Export Options At the very bottom of the Transactions page you'll see two options for exporting: Export all as CSV - Click this link to export only the records shown on the page you are currently viewing as a CSV file. Export table as CSV - Click this link to export all of the transactions on your site as a CSV file.
https://docs.memberpress.com/article/150-using-the-transactions-page
2021-10-15T22:50:04
CC-MAIN-2021-43
1634323583087.95
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f925c9fc9e77c001621aac0/file-uNRVDVfEkl.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f925cde52faff0016af2c23/file-el9zX8aPJ6.png', None], dtype=object) ]
docs.memberpress.com
Compatibility level for tabular models Applies to: SQL Server Analysis Services Azure Analysis Services Power BI Premium The compatibility level refers to release-specific behaviors in the Analysis Services engine. For example, DirectQuery and tabular object metadata have different implementations depending on the compatibility level. In general, you should choose the latest compatibility level supported by your servers. The latest supported compatibility level is 1500. Major features in the 1500 compatibility level include: - Calculation groups - Many-to-many relationships - Supported in Power BI Premium Supported compatibility levels by version * 1100 and 1103 compatibility levels are deprecated in SQL Server 2017. Set compatibility level When creating a new tabular model project in Visual Studio, you can specify the compatibility level on the Tabular model designer dialog. If you select the Do not show this message again option, all subsequent projects will use the compatibility level you specified as the default. You can change the default compatibility level in SSDT in Tools > Options. To upgrade a tabular model project in SSDT, set the Compatibility Level property in the model Properties window. Keep in mind that upgrading the compatibility level is irreversible. Check compatibility level for a tabular database in SSMS In SSMS, right-click the database name > Properties > Compatibility Level. Check supported compatibility level for a server in SSMS In SSMS, right-click the server name > Properties > Supported Compatibility Level. This property specifies the highest compatibility level of a database that will run on the server. The supported compatibility level is read-only and cannot be changed. Note In SSMS, when connected to a SQL Server Analysis Services server, Azure Analysis Services server, or Power BI Premium workspace, the Supported Compatibility Level property will show 1200. This is a known issue and will be resolved in an upcoming SSMS update. When resolved, this property will show the highest supported compatibility level. See also Compatibility Level of a multidimensional database Create a new tabular model project
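As a hedged illustration of where the compatibility level appears in a deployed model's definition, the TMSL fragment below (which could be run as an XMLA script from SSMS) sets the compatibilityLevel property on a database object; the database name and the nearly empty model body are placeholders, not a complete deployment script.

```json
{
  "createOrReplace": {
    "object": { "database": "SalesTabular" },
    "database": {
      "name": "SalesTabular",
      "compatibilityLevel": 1500,
      "model": {
        "culture": "en-US",
        "tables": []
      }
    }
  }
}
```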
https://docs.microsoft.com/en-us/analysis-services/tabular-models/compatibility-level-for-tabular-models-in-analysis-services?view=asallproducts-allversions&viewFallbackFrom=sql-server-ver15
2021-10-16T01:25:44
CC-MAIN-2021-43
1634323583087.95
[array(['media/ssas-tabularproject-compat1200.png?view=asallproducts-allversions', 'ssas_tabularproject_compat1200'], dtype=object) ]
docs.microsoft.com
xPDOCacheManager.copyFile xPDOCacheManager::copyFile API Docs: boolean|array copyFile (string $source, string $target, [array $options = array()]) Example Copy a file: $xpdo->cacheManager->copyFile('/my/path/to/file.txt','/my/new/path/dir/');
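Since the signature above shows a boolean|array return value, a caller can test the result; the paths and log message in this short hedged addition are placeholders.

```php
<?php
// Sketch: copy a file and log a failure (paths are placeholders).
$result = $xpdo->cacheManager->copyFile(
    '/my/path/to/file.txt',
    '/my/new/path/dir/'
);
if ($result === false) {
    $xpdo->log(xPDO::LOG_LEVEL_ERROR, 'Could not copy file.txt to the target directory.');
}
```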
https://docs.modx.com/3.x/en/extending-modx/xpdo/class-reference/xpdocachemanager/xpdocachemanager.copyfile
2021-10-16T00:36:10
CC-MAIN-2021-43
1634323583087.95
[]
docs.modx.com
Join Predicate For join predicates except distance methods ST_Distance and ST_3DDistance (where GeoColExpression contains a column from Table2): SELECT a, b FROM { Table1, Table2 WHERE Table1.GeoCol.SupportedGeoMethod ( { Table2.GeoCol | GeoColExpression | Table1.GeoCol } ) = 1 | Table2 JOIN Table1 ON Table1.GeoCol.SupportedGeoMethod(Table2.GeoCol)=1 } The expressions must be set to evaluate to 1 (true) for these to qualify as join predicates. Join Distance Predicate For distance methods ST_Distance and ST_3DDistance: SELECT a, b FROM { Table1, Table2 WHERE Table1.GeoCol.SupportedGeoDistanceMethod ( { Table2.GeoCol | GeoColExpression | Table1.GeoCol ) { < | <= } DistanceLiteral | Table2 JOIN Table1 ON Table1.GeoCol.SupportedGeoDistanceMethod (Table2.GeoCol) { < | <= } DistanceLiteral } Syntax Elements - Table1 - Table2 - Table containing geospatial data columns. - GeoCol - An ST_Geometry column.At least one side of the join, the owner expression or the argument, must be an ST_Geometry column that has a geospatial index. - GeoColExpression - An expression that evaluates to an ST_Geometry value and contains one reference to a column in one of the tables being joined. - SupportedGeoMethod - Any of the geospatial methods, excluding ST_Distance and ST_3DDistance, listed in Geospatial Predicates and the Optimizer. - SupportedGeoDistanceMethod - Either ST_Distance or ST_3DDistance. - DistanceLiteral - A floating point value representing a distance.
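To make the syntax concrete, here are two hedged examples; the tables and columns are invented for illustration, and ST_Contains is assumed to be among the supported geospatial methods referred to above.

```sql
-- Join predicate: stores whose service area contains the customer's location.
SELECT c.customer_id, s.store_id
FROM customers c
JOIN stores s
  ON s.service_area.ST_Contains(c.location) = 1;

-- Join distance predicate: customer/store pairs within 5000 meters of each other.
SELECT c.customer_id, s.store_id
FROM customers c, stores s
WHERE s.location.ST_Distance(c.location) <= 5000.0;
```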
https://docs.teradata.com/r/1drvVJp2FpjyrT5V3xs4dA/TP4XQ4G89NOyT7FwiXKHIQ
2021-10-16T01:11:28
CC-MAIN-2021-43
1634323583087.95
[]
docs.teradata.com
Once you have created a report(s), you can schedule the automatic generation of the report in PDF, HTML, CSV, XLS, or RTF format and have it emailed to specific users you wish to notify. Adding a Scheduled Report Schedule the automatic generation of a report and have it sent to the users on the mailing list at a specified time. Note: The following three reports cannot be scheduled: - Configuration Change Detail - Current DHCP Usage - Past Deployment Note: Large-sized reports might not be deliverable if the recipient email server has limited the size of individual emails that it will accept. Consider generating reports on more narrow or specific objects to ensure the reports can be emailed successfully. To add or edit a scheduled report: - Select the Administration tab. Tabs remember the page you last worked on. Select the Administration tab again to ensure you are working with the Administration page. - Under Tracking, click Reporting. - Under Report Schedules, click New. - Under General, enter a descriptive name for the report that you are scheduling in the Description field. - Select one or more reports that you have already created from the Reports to Schedule drop-down list and click Add to add them to the list. - Under Email, select one or more users to whom you wish to send the reports from the Recipients drop-down list and click Add to add them to the email list. - Click Add to add the scheduled report and return to the Reporting page.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Scheduling-Reports/9.1.0
2021-10-16T00:01:33
CC-MAIN-2021-43
1634323583087.95
[]
docs.bluecatnetworks.com
Migrating CI Jobs from Templates to Multi-stage Tests OpenShift CI offers two mechanisms for setting up end-to-end tests: older template-based tests and newer multi-stage test workflows. Template-based tests cause problems for both DPTP and OpenShift CI users: - they need intensive support - backing implementation is buggy and notoriously difficult to maintain - jobs using templates are hard to extend, modify, and troubleshoot These concerns were addressed in designing the multi-stage workflows, which supersede template-based tests. DPTP wants to migrate all existing template-based jobs to multi-stage workflows in the medium term. We expect this to cause no service disruption. Migrating Handcrafted Jobs Some template-based jobs are not generated from ci-operator configuration but were written manually, usually to overcome some limitation of generated jobs or because some one-off customization was necessary. Multi-stage tests features remove most reasons to handcraft a job, so migrating jobs from templates to multi-stage involves also migrating them from handcrafted to generated. The high-level procedure for migrating a job looks like this: - Determine what the current job is doing: what tests does it run, what image does it use, are there any additional parameters, etc. - Determine if there is a multi-stage workflow doing the same thing, or at least if there are existing building blocks (individual steps) to use. - Add an item to the list of tests in the corresponding ci-operator configuration file, and regenerate jobs. - If done right, the generator will “adopt” the previously handcrafted job and will just update its definition, keeping the existing customizations as if it were a generated job from the start. - Test the newly generated job using rehearsals to see if it gives a correct signal. For an example of a handcrafted template job migrated to a generated multi-stage one, see this PR. Determine what the current job is doing Find the definition of the job under ci-operator/jobs: $ git grep periodic-ci-kubernetes-conformance-k8s ci-operator/jobs/ ci-operator/jobs/openshift/kubernetes/openshift-kubernetes-master-periodics.yaml: name: periodic-ci-kubernetes-conformance-k8s The job always uses one of the templates by mounting the respective ConfigMap. The volume is usually called job-definition and the ConfigMap always has a prow-job-cluster-launch-installer- prefix: The name of the config map usually contains a cloud platform and also hints what the template does: e2e templates run common OpenShift tests, src templates run a custom command in the context of the src image and custom-test-image run a custom command in the context of another, specified image. Additionally, some jobs specify a CLUSTER_VARIANT environmental variable that further affects how the job will install a testing cluster: This configuration together should give you an idea about what kind of test the job implements. Usually, it will be some combination of test (shared e2e or custom command), cloud platform (GCP, AWS, etc.) and an optional variant (FIPS, compact etc.). Determine what workflow and/or steps to use as a replacement Inspect the step registry and figure out if there is an existing content that implements the testing procedure you discovered in the previous step. Knowing how multi-stage tests work in general helps a lot in this step. 
For shared end-to-end tests, most of the existing platform/variant combinations are already covered by an existing openshift-e2e-$PLATFORM-$VARIANT workflow, such as openshift-e2e-aws-ovn. Jobs running tests in a custom test image or src image are expressed as workflow-using tests with overriden test section, so search for an appropriate install/teardown workflow, such as ipi-aws. Add a test to ci-operator config The following example shows how to express a job that was using prow-job-cluster-launch-installer-src template to run a custom command in the src image. The ...-custom-test-image template-based job would differ only in its from: stanza: The e2e workflows usually need even less configuration: Note that the behavior of the openshift-tests binary is controlled by parameters of the openshift-e2e-test step, not by overriding the actual executed command like it was in the template job. Make the generated job “adopt” the previous handcrafted job Handcrafted jobs often have custom modifications that e.g. control when they are triggered. In order for the generated job to keep this behavior, rename the original job to the name it will be given by the generator ( {periodic-ci-,pull-ci-}$ORG-$REPO-$BRANCH-$TEST) before running the generator. This will make the generator “adopt” the job and overwrite just the necessary parts of it, keeping the customization. Alternatively, you can just generate the new job and delete the old one, too. Doing this will not keep the customizations on the old job, though. Test the generated job All generated jobs are rehearsable by default, so a PR with the change will receive a rehearsal run of the new job which you can inspect to see if the new job gives the expected signal. Migrating Jobs Generated from ci-operator Configurations Most template-based jobs are generated from ci-operator configuration stanzas. Migrating these jobs is easier and can be done almost mechanically. Soon, DPTP will migrate all existing ci-operator configurations to multi-stage workflows automatically. openshift_installer The tests using the openshift_installer stanza install OpenShift using a generic IPI installation workflow and then execute the shared OpenShift E2E tests provided by the openshift-tests binary in tests image. Before (template-based) After (multi-stage) openshift_installer: upgrades The openshift_installer stanzas had a special upgrade: true member, which, when set, made the CI job install a baseline version of OpenShift and then triggered an observed upgrade to the candidate version (the meaning of “baseline” and “candidate” can differ depending on context, but on PR jobs “baseline” usually means “HEADs of branches” and “candidate” means “baseline+component candidate”). After (multi-stage) There are multiple different workflows that implement the upgrade test: openshift-upgrade-$PLATFORM: upgrade test workflow with logs collected via Loki (recommended) openshift_installer_src The tests using the openshift_installer_src stanza install OpenShift using a generic IPI installation workflow, execute a provided command in the context of the src image (which contains a git clone of the tested repository), and tear down the cluster. The template provides tests a writable $HOME and injects the oc binary from the cli image. These jobs can be migrated to multi-stage using an ipi-$PLATFORM workflow (like ipi-aws) and replacing its test stage with matching inline steps. The resulting configuration is more verbose. 
This is a consequence of multi-stage tests being more flexible, allowing configuration of the elements that in templates were hardcoded. Before (template-based) After (multi-stage) Gotchas These are the possible problems you may encounter when porting an openshift_installer_src test to a multi-stage workflow. Hardcoded kubeadmin password location Tests that use the file containing the kubeadmin password and hardcode its location ( /tmp/artifacts/installer/auth/kubeadmin-password) provided by the template will not find the file at that location anymore. Resolution: Port the test to use $KUBEADMIN_PASSWORD_FILE environmental variable instead. openshift_installer_custom_test_image The tests using the openshift_installer_custom_test_image stanza install OpenShift using a generic IPI installation workflow, execute a provided command in the context of the specified image, and tear down the cluster. The template provides tests a writable $HOME and injects the oc binary from the cli image. These jobs can be migrated to multi-stage in an almost identical way to the openshift_installer_src ones by using an ipi-$PLATFORM workflow (like ipi-aws) and replacing its test stage with a matching inline step. The only difference is that the specified image will be different. After (multi-stage) Gotchas These are the possible problems you may encounter when porting an openshift_installer_custom_test_image test to a multi-stage workflow. Tests rely on injected openshift-tests binary The openshift_installer_custom_test_image template silently injected openshift-tests binary to the specified image. This was a hack and will not be implicitly supported in multi-stage workflows. Resolution: Jobs that want to execute common OpenShift tests as well as some custom ones can do so in two separate steps. The first step would use the ocp/4.x:tests image to run openshift-tests. The second step would use the custom image to execute a custom test. This method should be sufficient for most users. Alternatively, it is possible to build a custom image using the images stanza by explicitly injecting openshift-tests into the desired image and use the resulting image to run the tests. User-facing Differences Between Template and Multi-Stage Tests Template-based tests used mostly hardcoded sequences of containers (usually called setup, test and teardown). Multi-stage tests usually consist of a higher number (usually, around ten) of Pods that each execute a separated test step, so the output change accordingly. The higher number of executed Pods may also slightly increase total job runtime; executing each Pod has an overhead which accumulates over the whole workflow. You can examine what exact steps an individual job will resolve to on the Job Search page. Collected Artifacts Template-based tests put all artifacts into a single directory together, no matter if the artifacts came from setup, test or teardown phase. Multi-stage workflows' artifacts are separated in a separate directory for each step executed. Most of the artifacts captured from the cluster after tests were executed are collected by the gather-must-gather step (surprisingly, runs must-gather) and gather-extra (gathers even more artifacts that must-gather does not) steps. Content of artifacts/$TEST-NAME/ (template-based) All captured artifacts are together: container-logs/ installer/ junit/ metrics/ network/ nodes/ pods/ apiservices.json audit-logs.tar ... 
<more JSON dumps> Content of artifacts/$TEST-NAME/ (multi-stage) Separate directories per step: gather-audit-logs/ gather-extra/ gather-must-gather/ ipi-conf-gcp/ ipi-conf/ ipi-deprovision-deprovision/ ipi-install-install/ ipi-install-rbac/ test/
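The "Before/After" snippets referenced above were rendered as images on the original page; as a hedged stand-in, ci-operator test stanzas of the kind described usually look like the sketch below (the test names, the cluster profile, and the make target are illustrative).

```yaml
tests:
# Shared e2e suite via an existing workflow.
- as: e2e-aws
  steps:
    cluster_profile: aws
    workflow: openshift-e2e-aws
# Custom command in the src image, overriding the test stage of ipi-aws.
- as: e2e-aws-custom
  steps:
    cluster_profile: aws
    workflow: ipi-aws
    test:
    - as: test
      from: src
      commands: make test-e2e
      resources:
        requests:
          cpu: 100m
```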
https://docs.ci.openshift.org/docs/how-tos/migrating-template-jobs-to-multistage/
2021-10-15T22:47:30
CC-MAIN-2021-43
1634323583087.95
[]
docs.ci.openshift.org
Resources Operate After you have completed setup, move on to configuring CloudBees CI. Use plugins One of the key features of Jenkins and CloudBees CI is extensibility through plugins, enabling it to meet the specific needs of nearly any project. With a huge variety of plugins in the Jenkins and CloudBees universe to choose from, CloudBees CI offers the CloudBees Assurance Program for users looking to simplify plugin management. CloudBees Assurance Program specifies the set of plugins, plugin versions, and plugin dependencies that are verified, compatible, or community-supported, depending on how much they have been tested. This provides greater stability and security for CloudBees CI environments. Which plugins you install beyond the default list can be determined by many factors including your exact usage of CloudBees CI, what integrations you would like to use, and how you would like to manage the build workload. To help you make your list of plugins to install, search the CloudBees CI plugin directory. Refer to CloudBees plugin support policies for details on how CloudBees classifies plugins into tiers according to how much risk a given plugin may pose to a given installation’s stability. Resources Deliver continuously with Pipelines Jenkins Pipeline is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code". Resources Secure CloudBees CI’s security options follow the standard Jenkins security model, offering two axes to security, as well as options for adjusting how strict enforcement of these security settings should be. Administrators can force connected masters to delegate all of their security settings, allow teams complete control over their own security settings, or delegate only some security settings to teams. Resources Maintain The following options are available to help you to maintain CloudBees CI on modern cloud platforms:
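Because the section above presents Pipeline as the way to model delivery "as code", a minimal hedged Declarative Pipeline sketch follows; the stage layout and commands are placeholders rather than CloudBees-recommended defaults.

```groovy
// Minimal Jenkinsfile sketch for a two-stage delivery pipeline.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'  // placeholder build command
            }
        }
        stage('Deliver') {
            steps {
                sh './deliver.sh'     // placeholder delivery script
            }
        }
    }
}
```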
https://docs.cloudbees.com/docs/cloudbees-ci/2.249.3.2/traditional-onboarding
2021-10-15T23:41:59
CC-MAIN-2021-43
1634323583087.95
[]
docs.cloudbees.com
app-workflow_builder Change Logs 2021.1.3 Maintenance Release [2021-10-05] Overview - 3 Bug Fixes - 3 Total Tickets Bug Fixes - app-workflow_builder:5.40.5-2021.1.28 [09-24-2021] - Fixed Templates list in side navbar when reviewing a Gen 1 workflow. - app-workflow_builder:5.40.5-2021.1.27 [09-16-2021] - Implemented a fix to prevent UI errors after deleting child job reference variables. - app-workflow_builder:5.40.5-2021.1.26 [09-13-2021] - Transformations can now be searched by ID in Gen 1 workflows. 2021.1.2 Maintenance Release [2021-09-07] Overview - 17 Bug Fixes - 17 Total Tickets Bug Fixes - app-workflow_builder:5.40.5-2021.1.25 [09-03-2021] - Extract output has been re-enabled on transformation tasks for Gen 1 canvas. - app-workflow_builder:5.40.5-2021.1.24 [08-19-2021] - The job variables _id and initiator no longer show an error when other job variables are also required. - app-workflow_builder:5.40.5-2021.1.23 [08-13-2021] - Fixed a bug where starting a job from the UI would freeze the page when the workflow had validation errors. - app-workflow_builder:5.40.5-2021.1.22 [07-28-2021] - Fixed a bug that prevented the navigation drawer (burger menu) from closing. Burger menu is now hidden when hovering mouse on edited automation. - app-workflow_builder:5.40.5-2021.1.21 [07-28-2021] - Fixed bug that caused the disappearance of a search string when routing from Automation Studio to related applications. - app-workflow_builder:5.40.5-2021.1.20 [07-26-2021] - The tooltp description now updates on hover for the Query task. - app-workflow_builder:5.40.5-2021.1.19 [07-20-2021] - Changed the tooltip description for revert error and revert failure transitions. Users can now differentiate between the two states when editing a workflow. - app-workflow_builder:5.40.5-2021.1.18 [07-20-2021] - Importing a Gen 1 workflow containing a transformation task will no longer crash Workflow Builder. - app-workflow_builder:5.40.5-2021.1.17 [07-20-2021] - Fixed issue in the Job Variables Value dropdown list. Users can now see and select a value for the JST Task job variables. - app-workflow_builder:5.40.5-2021.1.16 [07-20-2021] - Fixed a bug that caused an incorrect value in the search field when editing automation settings. - app-workflow_builder:5.40.5-2021.1.15 [07-19-2021] - Implemented an exception handler to reject workflow renaming request when payload is invalid. - app-workflow_builder:5.40.5-2021.1.14 [07-16-2021] - JST list will now retrieve from the same server. - app-workflow_builder:5.40.5-2021.1.13 [07-16-2021] - Incoming variable values are now maintained when refreshing a JST in the transformation task. - app-workflow_builder:5.40.5-2021.1.12 [07-07-2021] - Renaming an API will return an updated name for the workflow document. - app-workflow_builder:5.40.5-2021.1.11 [07-07-2021] - ID values in a transformation are now included in the transformation array used to populate the navigation bar. - app-workflow_builder:5.40.5-2021.1.10 [07-07-2021] - JST-on-transition tasks now reference the transformation on export and de-reference on import. - app-workflow_builder:5.40.5-2021.1.9 [06-30-2021] - Added refresh transformation button to transformation task. 2021.1.1 Maintenance Release [2021-07-06] Overview - 8 Bug Fixes - 1 Chores - 9 Total Tickets Bug Fixes - app-workflow_builder:5.40.5-2021.1.8 [06-21-2021] - The Create button is disabled when a name has not been given to the workflow. 
- app-workflow_builder:5.40.5-2021.1.7 [06-16-2021] - The transformation task's extractedOutput value is now assigned a default value. If a previous value existed it is not overridden. - app-workflow_builder:5.40.5-2021.1.6 [06-14-2021] - JST creation with an invalid (existing) name fails with error notification. Users may create a new JST, search for it, and assign it to the child job. - app-workflow_builder:5.40.5-2021.1.5 [06-07-2021] - Added workflow groups to the Gen 1 canvas. - app-workflow_builder:5.40.5-2021.1.4 [06-04-2021] - Transformation task options in the UI now pull from the options for that task. - app-workflow_builder:5.40.5-2021.1.3 [06-02-2021] - Workflow Builder no longer freezes when adding a new connector. Logic added to handle infinite loops when traversing through tasks in a workflow. - app-workflow_builder:5.40.5-2021.1.1 [06-01-2021] - The same transformation can now be imported into the JST transformation task multiple times. - app-workflow_builder:5.40.5-2021.1.0 [05-28-2021] - Fixed an issue with processing BPA rules that could cause errors in complex workflows. Chores - app-workflow_builder:5.40.5-2021.1.2 [06-02-2021] - Updated dependencies for 2021.1 release. 2021.1 Feature Release [2021-05-28] Overview - 5 New Features - 13 Improvements - 39 Bug Fixes - 1 Security Fixes - 5 Chores - 1 Deprecations - 64 Total Tickets New Features - app-workflow_builder:5.38.0 [04-15-2021] - Added masking to JST task. - app-workflow_builder:5.35.0 [04-06-2021] - Implemented the ability to mask and unmask input job variables. - app-workflow_builder:5.33.0 [03-29-2021] - Added the ability to mask incoming and outgoing variables in the Gen1 canvas. - app-workflow_builder:5.32.2 [03-25-2021] - Added decorators to JSON Schema to support importing workflows which use masking. - app-workflow_builder:5.30.0 [02-11-2021] - Role authorization applied to the getTaskList API when viewing tasks on a canvas. Improvements - app-workflow_builder:5.40.0 [05-17-2021] - When typing job variables a new dialog to review IAP naming conventions is now available. - app-workflow_builder:5.39.18 [05-16-2021] - A job started from the Gen1 canvas no longer triggers a failed response before showing the job variables. - app-workflow_builder:5.39.1 [04-16-2021] - Completed the Automated Test construction for importing of automations with decorators. - app-workflow_builder:5.39.0 [04-15-2021] - Enum list defined for a task now allows one of the enum options to be assigned as a default value. - app-workflow_builder:5.37.0 [04-09-2021] - Renamed Automations to Workflows. - app-workflow_builder:5.36.0 [04-13-2021] - Added Pre-Automation Time and SLA fields to the workflow metadata dialog. This will allow users the ability to set the preAutomation and the SLA times in the Gen1 canvas. - app-workflow_builder:5.32.1 [03-22-2021] - Added BPA rule for job variable expansion. Job variable expansion syntax is not best practice and should be replaced with alternatives. - app-workflow_builder:5.31.9 [03-12-2021] - Added the ability to edit options in the transformation task. - app-workflow_builder:5.31.1 [02-17-2021] - Added description and summary fields to JST tasks such that the Summary will show up on the canvas and the Description is available for users to write info about their JST. - app-workflow_builder:5.31.0 [02-12-2021] - Transformation task details now display a dropdown if the incoming schema is an enum. 
- app-workflow_builder:5.29.26 [01-28-2021] - Added intermediate status to start job modal to indicate automation is running. - app-workflow_builder:5.29.24 [01-26-2021] - Added a toast message warning for the wrong datatype in job variables. - app-workflow_builder:5.29.19 [01-05-2021] - Added transformation to dropdown menu in create modal. Bug Fixes - app-automation_studio:3.34.16 [04-30-2021] - Fixed an issue where some adapter tasks would not properly register their adapter variables. - app-workflow_builder:5.40.13 [06-09-2021] - Workflow Builder no longer freezes when adding a new connector. Logic added to handle infinite loops when traversing through tasks in a workflow. - app-workflow_builder:5.40.10 [06-01-2021] - The same transformation can now be imported into the JST transformation task multiple times. - app-workflow_builder:5.40.7 [05-28-2021] - Updated sidebar in the Workflow Builder Gen 1 UI to use the current Automation Studio Templates API in place of the deprecate Template Builder API. - app-workflow_builder:5.40.6 [05-28-2021] - Fixed an issue with processing BPA rules that could cause errors in complex workflows. - app-workflow_builder:5.40.5 [05-26-2021] - The Pre-workflow time popup will now render above the Workflow Group field. - app-workflow_builder:5.40.3 [05-24-2021] - JST options are no longer overwritten as "true" in transformation task. - app-workflow_builder:5.40.1 [05-20-2021] - Fixed a bug where task groups were imported as strings instead of as ObjectIds. - app-workflow_builder:5.39.17 [05-13-2021] - Applications the user does not have permission for will no longer appear in the navigation bar. - app-workflow_builder:5.39.16 [05-12-2021] - Fixed bug where an outgoing variable could be marked as a job variable by default. - app-workflow_builder:5.39.15 [05-11-2021] - Modified childJob in the Gen1 canvas workflows to only pass in variables to the child job if the child job does not loop. - app-workflow_builder:5.39.14 [05-11-2021] - Updated links on button bar and app banner for app-workflow_builder in app-studio. - app-workflow_builder:5.39.13 [05-10-2021] - Special characters in document names are now correctly encoded when used in URLs. - app-workflow_builder:5.39.11 [05-05-2021] - Fixed an issue in the Gen1 canvas workflows where some previous reference tasks were not showing when editing a task. - app-workflow_builder:5.39.10 [05-04-2021] - Save buttons in task modals are now always visible. - app-workflow_builder:5.39.9 [05-03-2021] - Encoded inputs in renderJsonSchema task. - app-workflow_builder:5.39.8 [04-29-2021] - Fixed XSS vulnerability in Gen1 Workflow Builder UI. - app-workflow_builder:5.39.7 [04-26-2021] - Fixed Gen1 canvas rendering for Internet Explorer 11. - app-workflow_builder:5.39.5 [04-22-2021] - Fixed a bug that caused a login error when closing the workflow settings dialog. - app-workflow_builder:5.39.4 [04-21-2021] - Input variables for merge and deepmerge tasks will no longer reset when reference tasks are removed from the canvas. - app-workflow_builder:5.39.3 [04-21-2021] - Improved validation check for referenced tasks when editing a workflow. - app-workflow_builder:5.39.2 [04-19-2021] - Added text wrapping to task names and descriptions to fix visibility. This will allow long task names in WFB to be easily read by the user. - app-workflow_builder:5.33.1 [03-30-2021] - Fixed job variable inference of the variable value. 
- app-workflow_builder:5.31.8 [03-10-2021] - Clicking the Reference Variable dropdown now checks for vars in real-time if the reference task is a transformation. - app-workflow_builder:5.31.7 [03-08-2021] - Clicking a transformation in the navbar now opens it in a new tab. - app-workflow_builder:5.31.5 [02-24-2021] - Fixed a bug that trimmed user input when editing task input variables. - app-workflow_builder:5.31.4 [02-23-2021] - Automations are now allowed to run with invalid inputs. - app-workflow_builder:5.31.3 [02-23-2021] - Fixed an issue where providing an empty authorization could prevent tasks from showing up in the task sidebar. - app-workflow_builder:5.31.2 [02-19-2021] - Fixed a bug that would cause imported workflows to not encode their input and output schemas if they were already present. - app-workflow_builder:5.29.28 [02-09-2021] - Security updated for illegal characters in automation names. - app-workflow_builder:5.29.27 [03-02-2021] - Fixed a bug in automation export where Gen 2 fields were excluded from the resulting document. - app-workflow_builder:5.29.25 [01-27-2021] - Removed method role not listed in top level roles. - app-workflow_builder:5.29.23 [01-19-2021] - Renamed automation options to Gen 1 and Gen 2. - app-workflow_builder:5.29.22 [01-14-2021] - Added schema validation for child job Loop Array. - app-workflow_builder:5.29.21 [01-08-2021] - Fixed an issue where workflow builder canvas search on job variables was not working on JST tasks. - app-workflow_builder:5.29.18 [12-21-2020] - Workflow pre-builts containing the JST task now render correctly and can be saved. - app-workflow_builder:5.29.18 [12-17-2020] - Added dialog to display missing and inactive groups for automations and manual tasks. - app-workflow_builder:5.29.16 [12-17-2020] - Added default row and column values when creating a JST from a childjob task. - app-workflow_engine:9.6.3 [05-11-2021] - Fixed a bug in the Form Builder app. Manual task is now released after cancellation so user can claim the task and work it. Security Fixes - app-workflow_builder:5.39.12 [05-11-2021] - Axios security updated for internal packages. Chores - app-workflow_builder:5.40.9 [06-02-2021] - Updated dependencies for next release cycle. - app-workflow_builder:5.39.6 [04-23-2021] - Filtering added to hide transition transformations from navbar. - app-workflow_builder:5.33.2 [03-30-2021] - Moved project to master pipeline. - app-workflow_builder:5.29.20 [01-05-2021] - Updated dependencies. - app-workflow_builder:5.29.16 [12-17-2020] - Reverted unnecessary package lock changes to resolve a deployment issue. Deprecations - app-workflow_builder:5.40.4 [05-25-2021] - Removed the extractOutput option from transformation tasks in the Gen 1 canvas. - 8 Improvements - 15 Bug Fixes - 26.
https://docs.itential.com/2021.1/changelog/app-workflow_builder/
2021-10-15T23:35:28
CC-MAIN-2021-43
1634323583087.95
[]
docs.itential.com
Choose how to authorize access to blob data with Azure CLI Azure Storage provides extensions for Azure CLI that enable you to specify how you want to authorize operations on blob data. You can authorize data operations in the following ways: - With an Azure Active Directory (Azure AD) security principal. Microsoft recommends using Azure AD credentials for superior security and ease of use. - With the account access key or a shared access signature (SAS) token. Specify how data operations are authorized Azure CLI commands for reading and writing blob data include the optional --auth-mode parameter. Specify this parameter to indicate how a data operation is to be authorized: - Set the --auth-mode parameter to login to sign in using an Azure AD security principal (recommended). - Set the --auth-mode parameter to the legacy key value to attempt to retrieve the account access key to use for authorization. If you omit the --auth-mode parameter, then the Azure CLI also attempts to retrieve the access key. To use the --auth-mode parameter, make sure that you have installed Azure CLI version 2.0.46 or later. Run az --version to check your installed version. Note: Users who do not already possess the account keys must use Azure AD credentials to access blob data. Important: If you omit the --auth-mode parameter or set it to key, then the Azure CLI attempts to use the account access key for authorization. In this case, Microsoft recommends that you provide the access key either on the command line or in the AZURE_STORAGE_KEY environment variable. For more information about environment variables, see the section titled Set environment variables for authorization parameters. If you do not provide the access key, then the Azure CLI attempts to call the Azure Storage resource provider to retrieve it for each operation. Performing many data operations that require a call to the resource provider may result in throttling. For more information about resource provider limits, see Scalability and performance targets for the Azure Storage resource provider. Authorize with Azure AD credentials When you sign in to Azure CLI with Azure AD credentials, an OAuth 2.0 access token is returned. That token is automatically used by Azure CLI to authorize subsequent data operations against Blob or Queue storage. For supported operations, you no longer need to pass an account key or SAS token with the command. You can assign permissions to blob data to an Azure AD security principal via Azure role-based access control (Azure RBAC). For more information about Azure roles in Azure Storage, see Assign an Azure role for access to blob data. Permissions for calling data operations The Azure Storage extensions are supported for operations on blob data. Which operations you may call depends on the permissions granted to the Azure AD security principal with which you sign in to Azure CLI. Permissions to Azure Storage containers are assigned via Azure RBAC. For example, if you are assigned the Storage Blob Data Reader role, then you can run scripting commands that read data from a container. If you are assigned the Storage Blob Data Contributor role, then you can run scripting commands that read, write, or delete a container or the data it contains. For details about the permissions required for each Azure Storage operation on a container, see Call storage operations with OAuth tokens.
Example: Authorize an operation to create a container with Azure AD credentials The following example shows how to create a container from Azure CLI using your Azure AD credentials. To create the container, you'll need to sign in to the Azure CLI, and you'll need a resource group and a storage account. To learn how to create these resources, see Quickstart: Create, download, and list blobs with Azure CLI. Before you create the container, assign the Storage Blob Data Contributor role to yourself. Even though you are the account owner, you need explicit permissions to perform data operations against the storage account. For more information about assigning Azure roles, see Assign an Azure role for access to blob data. Important: Azure role assignments may take a few minutes to propagate. Call the az storage container create command with the --auth-mode parameter set to login to create the container using your Azure AD credentials. Remember to replace placeholder values in angle brackets with your own values: az storage container create \ --account-name <storage-account> \ --name sample-container \ --auth-mode login Authorize with the account access key If you possess the account key, you can call any Azure Storage data operation. In general, using the account key is less secure. If the account key is compromised, all data in your account may be compromised. The following example shows how to create a container using the account access key. Specify the account key, and provide the --auth-mode parameter with the key value: az storage container create \ --account-name <storage-account> \ --name sample-container \ --account-key <key> --auth-mode key Important: Clients that are not authorized with the account access key or a SAS token must access data with Azure AD credentials. Authorize with a SAS token If you possess a SAS token, you can call data operations that are permitted by the SAS. The following example shows how to create a container using a SAS token: az storage container create \ --account-name <storage-account> \ --name sample-container \ --sas-token <token> Set environment variables for authorization parameters You can specify authorization parameters in environment variables to avoid including them on every call to an Azure Storage data operation. The following table describes the available environment variables.
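For script authors who prefer an SDK over shelling out to the CLI, the same Azure AD sign-in can be reused from Python. The sketch below is an addition, not part of the original walkthrough: it assumes the azure-identity and azure-storage-blob packages are installed, that the identity from `az login` already holds the Storage Blob Data Contributor role, and the account name is a placeholder.

```python
# Minimal sketch (see assumptions above): create a container using the
# Azure AD token from the current `az login` session instead of an account key.
from azure.identity import AzureCliCredential
from azure.storage.blob import BlobServiceClient

credential = AzureCliCredential()  # reuses the credentials from `az login`
service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",  # placeholder
    credential=credential,
)

# Equivalent in spirit to: az storage container create --auth-mode login
container_client = service.create_container("sample-container")
print("created container:", container_client.container_name)
```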
https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli
2021-10-16T00:53:42
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
Niryo_robot_gazebo¶ Usage¶ This package contains models, materials & Gazebo worlds. When launching the Gazebo version of the ROS Stack, the file niryo_robot_gazebo_world.launch.xml will be called to generate the Gazebo world. Create your own world¶ Create your world's file and put it in the worlds folder. Once that is done, change the world_name parameter in the file niryo_robot_gazebo_world.launch.xml. You can take a look at the Gazebo world by launching it without the robot, specifying the world name in the world_name arg: roslaunch niryo_robot_gazebo niryo_gazebo_world.launch world_name:=niryo_cube_world
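If you prefer to start the simulation from a Python script rather than a terminal, the roslaunch command shown above can be wrapped with the standard library. This is only a convenience sketch; it assumes a sourced ROS environment in which the niryo_robot_gazebo package is built and available.

```python
# Sketch only: wraps the roslaunch command from this page in a Python call.
# Assumes the ROS environment is sourced and niryo_robot_gazebo is installed.
import subprocess

subprocess.run(
    [
        "roslaunch",
        "niryo_robot_gazebo",
        "niryo_gazebo_world.launch",
        "world_name:=niryo_cube_world",  # replace with your own world name
    ],
    check=True,
)
```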
https://docs.niryo.com/dev/ros/v3.1.2/en/source/ros/niryo_robot_gazebo.html
2021-10-16T00:33:07
CC-MAIN-2021-43
1634323583087.95
[]
docs.niryo.com
Restoring from backup (using a restore job) Controller with CloudBees Backup Plugin 3.38 or later installed have support for the Restore job type. The operations center does not have a restore option, so to restore an operations center when using CloudBees CI on traditional platforms you can follow the steps for Restoring manually. If you are using CloudBees CI on modern cloud platforms, follow to steps to Backup and restore on Kubernetes - Using a rescue-pod. To restore a backup: Select New Item from the left pane. Select Backup and Restore and enter an item name. Select OK. After you select OK, you are directed to the restore job configuration. Configuration of a restore job is very similar to that of a Freestyle job. Under the Build heading, select Add build step and Restore from backup. Then, just like a backup job, you must configure where the backup is retrieved from. The latest backup available in the given source will be used to restore. When running the restore job, the backup file is downloaded from the configured backend to the Jenkins filesystem, checked for integrity, then unpacked. After these steps are completed, you will be prompted to restart the Jenkins instance. Upon restart, $JENKINS_HOMEis replaced with the content of the backup file, and the restore process is complete. Excluding files from a restore job When a restore job is performed, everything in the $JENKINS_HOME directory is moved to a file named archive_<timestamp>. If you have files that are introduced by the file system in your $JENKINS_HOME directory, for example .snapshot files, you must exclude them from the restore process or you will receive an error when you try to restore. To exclude system files from the restore process on a CloudBees CI on traditional platforms client controller: In the backup job, add a comma-separated list of Ant-style patterns in the system configuration excludes list. In the restore job, select the Advanced button. Select the Preserve Jenkins home contents checkbox under Restore options. Restart the client controller, using the following system property: -Dcb.backup.restore.keepFilesPattern='<Ant-style patterns> Example with Ant-style patterns to exclude .snapshotsfiles: -Dcb.backup.restore.keepFilesPattern='.snapshots,**/.snapshots' To exclude system files from the restore process on a CloudBees CI on modern cloud platforms managed controller: In the backup job, add a comma-separated list of Ant-style patterns in the system configuration excludes list. In the restore job, select the Advanced button. Select the Preserve Jenkins home contents checkbox under Restore options. From operations center select Manage Jenkins > Configure Controller Provisioning. Under the Additional options heading, add the following system property to Global System Properties: cb.backup.restore.keepFilesPattern=<Ant-style patterns> Example with Ant-style patterns to exclude .snapshotsfiles: cb.backup.restore.keepFilesPattern=.snapshots,**/.snapshots Select Save. If you are restoring a backup from the controller, a message will appear in the restore log about creating a [.confxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] hidden file in the $JENKINS_HOME folder and then restarting Jenkins. Example: In order to initiate the restore after shutting this Jenkins instancedown, you will need to touch the file [.confxxxxxxxxxxxxxxxxxxxxxxxxxxxx] in the Jenkins Home directory. Select the gear icon for the controller. Select Restart under Manage in the left navigation pane. 
After you have reprovisioned the controller, you should see the new system property listed in the controller’s Manage Jenkins > System Information.
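The keepFilesPattern value described above is a comma-separated list of Ant-style patterns. As a rough illustration of how such a list is matched against paths under $JENKINS_HOME, the sketch below translates each pattern into a regular expression; it is a simplification of Ant's matching rules, is not CloudBees code, and the example paths are invented.

```python
# Rough illustration only: approximate Ant-style pattern matching in Python.
# ** matches across directory separators; * and ? stay within one segment.
import re

def ant_to_regex(pattern: str) -> re.Pattern:
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

keep_patterns = ".snapshots,**/.snapshots".split(",")
keep = [ant_to_regex(p) for p in keep_patterns]

for path in [".snapshots", "jobs/my-job/.snapshots", "jobs/my-job/config.xml"]:
    preserved = any(r.match(path) for r in keep)
    print(f"{path}: {'preserved' if preserved else 'moved to archive_<timestamp>'}")
```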
https://docs.cloudbees.com/docs/admin-resources/latest/backup-restore/restoring-from-backup-plugin
2021-10-16T00:14:58
CC-MAIN-2021-43
1634323583087.95
[]
docs.cloudbees.com
Date: Thu, 12 Jul 2007 01:14:56 +1000 From: Norberto Meijome <[email protected]> To: Eric F Crist <[email protected]> Cc: User Questions <[email protected]> Subject: Re: Some hosting weirdness... Message-ID: <20070712011456.016882a2@localhost> In-Reply-To: <[email protected]> References: <[email protected]> On Wed, 11 Jul 2007 07:19:09 -0500 Eric F Crist <[email protected]>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1013679+0+/usr/local/www/mailindex/archive/2007/freebsd-questions/20070715.freebsd-questions
2021-10-16T00:55:35
CC-MAIN-2021-43
1634323583087.95
[]
docs.freebsd.org
Quip Last updated Apr 6th, 2019 The Quip Snippet¶ This snippet displays all the comments for a given thread. Usage¶ Simply place the snippet wherever you would like to display a comment thread, and specify what the name of that thread will be. [[!Quip? &thread=`myThread`]] Available Properties¶ Quip Chunks¶ There are 4 chunks that are processed in Quip. Their corresponding parameters are: Examples¶ A sample code line for a blog post that's on a Resource with no threading: [[!Quip? &thread=`blog-post-[[*id]]` &threaded=`0`]] A threaded comment thread, but only allowed to go 3 levels deep, and auto-close after 21 days: [[!Quip? &thread=`blog-post-[[*id]]` &maxDepth=`3` &closeAfter=`21`]] A comment thread, with threading, with Gravatars disabled, and only allowing logged-in comments: [[!Quip? &thread=`blog-post-[[*id]]` &useGravatar=`0` &requireAuth=`1`]] A comment thread, pagination enabled, having only 5 root comments per page, and a class on each pagination link li tag called 'pageLink': [[!Quip? &thread=`blog-post-[[*id]]` &limit=`5` &pageCls=`pageLink`]]
https://docs.modx.com/3.x/en/extras/quip/quip
2021-10-15T23:16:05
CC-MAIN-2021-43
1634323583087.95
[]
docs.modx.com
Releasing Instructions for releasing CHT tools Process for updating dependencies Tips for fixing e2e tests Notes for how to bulk load users Notes for developing on Windows Download and run the publicly available Docker image for CHT applications Notes for getting logs using ADB
https://docs.communityhealthtoolkit.org/core/guides/
2021-10-16T00:19:06
CC-MAIN-2021-43
1634323583087.95
[]
docs.communityhealthtoolkit.org
Logging in using cqlsh How to create a cqlshrc file to avoid having to enter credentials every time you launch cqlsh. Typically, after configuring authentication, you log into cqlsh using the -u and -p options to the cqlsh command. To avoid having to enter credentials every time you launch cqlsh, you can create a cqlshrc file in the .cassandra directory, which is in your home directory. When present, this file passes default login information to cqlsh. Note: Sample cqlshrc files are available in: - Package installations: /etc/cassandra - Tarball installations: install_location/conf Procedure - Open a text editor and create a file that specifies a user name and password. [authentication] username = fred password = !!bang!!$ Note: Additional settings in the cqlshrc file are described in Creating and using the cqlshrc file. - Save the file as cqlshrc in the .cassandra directory under your home directory. - Set permissions on the file. To protect database login information, ensure that the file is secure from unauthorized access.
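If you generate the cqlshrc file from a provisioning script instead of a text editor, the same three steps can be expressed with the Python standard library. This is just one possible sketch, reusing the sample credentials from the procedure above.

```python
# Sketch: write ~/.cassandra/cqlshrc with an [authentication] section and
# restrict its permissions so only the owner can read the credentials.
import configparser
import os
import stat

config = configparser.ConfigParser()
config["authentication"] = {"username": "fred", "password": "!!bang!!$"}

path = os.path.expanduser("~/.cassandra/cqlshrc")
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, "w") as handle:
    config.write(handle)

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # equivalent to chmod 600
```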
https://docs.datastax.com/en/cassandra-oss/2.1/cassandra/security/secure_login_cqlsh_t.html
2021-10-16T00:26:16
CC-MAIN-2021-43
1634323583087.95
[]
docs.datastax.com
Join the Free Intel & Microsoft IoT Hands-On Lab on September 19! We are teaming up with Intel to deliver an end-to-end, free-to-attend Internet of Things (IoT) hands-on lab on September 19. Topics covered in this full day workshop will range from learning Intel’s IoT integrated ecosystem to working with IoT gateways, coding scripts and connecting data collected from sensors up to the Microsoft Azure cloud. In this hands-on -lab, you will develop client-side logic with Node-Red and configure the Azure IoT platform to create an end-to-end IoT solution. Sign up to participate or share with your colleagues! The event is free and will be held in English. Event Date: Tuesday, September 19, 2017 Time: 9am to 5pm Registration Deadline: Tuesday, September 12, 2017 Workshop Location: Citizen Space CH, Bahnhofstrasse 3, Zurich, Switzerland Link to Registration and detailed Agenda What to Bring: The Commercial IoT Developer kit will be provided. All you need is your personal laptop with at least 2 USB ports. You will also need to register for a Microsoft Azure account at least 2 days prior to the event. Technical Skills required: Knowledge of code editors, debuggers, IDEs, open source alternatives and working knowledge of scripting programming languages.
https://docs.microsoft.com/en-us/archive/blogs/microsoft_developer_switzerland_news/join-the-free-intel-microsoft-iot-hands-on-lab-on-september-19
2021-10-16T00:42:41
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
Summary A security update for the .NET agent corrects an issue where full SQL queries may be sent to the agent log. Release date: January 16, 2020 Vulnerability identifier: NR20-01 Priority: Medium Affected software The following New Relic agent versions are affected: Vulnerability information In order to generate explain plans, a copy of the SQL query is created and the query is reissued with a request for the execution plan. If the explain plan fails, the agent may log the full SQL statement which could include the parameter values. Mitigating factors The agent will only log this information when set to the DEBUG or FINEST logging levels. Workarounds - Ensure that logging level is not set to DEBUGor FINEST. - Disable capturing of explain plans. - Ensure that file location of log files is secured. - Update to the latest New Relic .NET agent..
https://docs.newrelic.com/docs/security/new-relic-security/security-bulletins/security-bulletin-nr20-01/
2021-10-15T23:42:44
CC-MAIN-2021-43
1634323583087.95
[]
docs.newrelic.com
Family ID¶ This Condition located on the Family category tab in Search Builder requires that you know the Family ID # for the family. It is the number at the end of the URL when you are viewing the Family Page for a family. Example: Enter the ID # and the results will be everyone in that family. You can select Greater Than and enter a number to find families whose family ID # is greater than the number indicated if you are looking for families whose records were created after another specific family. Tip The easiest way to find everyone in the same family is to click the Family Members link on the people record of anyone in the family. If you want to include the extended family, click the Related Families link. Each of these links will take you to Search Builder with the search built for you. The arrow will expand the selection, and the link will build the search.
http://docs.touchpointsoftware.com/SearchBuilder/QB-FamilyId.html
2019-03-18T19:38:29
CC-MAIN-2019-13
1552912201672.12
[]
docs.touchpointsoftware.com
. 150 (1987) Up Up 76 Op. Att'y Gen. 150, 150 (1987) Fees; Licenses And Permits; Mineral Rights; Natural Resources, Department Of; All staff work necessary to determine whether an applicant meets the requirements of the Metallic Mining Reclamation Act must be included in the cost of evaluating the permit, including any evaluation of compliance with other environmental requirements. The withdrawal of a mining permit application by the applicant prior to a final decision on the application does not relieve the applicant from the obligation to pay the cost of evaluation. OAG 35-87 June 29, 1987 76 Op. Att'y Gen. 150, 150 (1987) Carroll D. Besadny , Secretary Department of Natural Resources 76 Op. Att'y Gen. 150, 150 (1987) You have requested my opinion on two questions related to the assessment of fees to cover the actual cost to the Department of Natural Resources, hereafter DNR, of evaluating a mining permit application. You ask what staff work should be covered by the application fee under section 144.85(2)(a), Stats., and Wis. Admin. Code NR 132.06(3)(a). You also ask whether the withdrawal of a mining permit application prior to a final decision on the application affects the department's obligation to assess the mining permit fee. These questions arise from the mining permit application of Exxon Coal and Minerals Company that has recently been withdrawn by the company. 76 Op. Att'y Gen. 150, 150 (1987) In answer to your first question, all staff time required to determine whether the applicant meets the requirements of the Metallic Mining Reclamation Act, hereafter MMRA, must be included in the mining permit application fee. The fact that such staff work may include a determination of compliance with nonmining permit, license or approval requirements related to the mining site does not exempt such work from the assessment. 76 Op. Att'y Gen. 150, 150 (1987) My answer to your second question is that withdrawal of a mining permit application by the applicant prior to a final decision on the application by the DNR does not relieve the applicant of the obligation to pay the entire cost of evaluation. 76 Op. Att'y Gen. 150, 150 (1987) Section 144.85(1)(a) provides that no person may engage in mining for metallic minerals without a mining permit covering the site. Under this section, a person desiring to mine must apply to the DNR for a mining permit. Section 144.85(2)(a) requires that: 76 Op. Att'y Gen. 150, 151 (1987) The application shall be accompanied by a fee established by the department, by rule, which shall cover the estimated cost of evaluating the mining permit application. After completing its evaluation, the department shall revise the fee to reflect the actual cost of evaluation. The fee may be revised for persons to reflect the payment of fees for the same services to meet other requirements. 76 Op. Att'y Gen. 150, 151 (1987) Pursuant to this section, the DNR has promulgated a rule relating to the payment of fees. Wisconsin Administrative Code NR 132.06(3)(a) provides: 76 Op. Att'y Gen. 150, 151 (1987) The application shall be accompanied by the following: 76 Op. Att'y Gen. 150, 151 (1987) . 76 Op. Att'y Gen. 150, 151 (1987) You have asked for clarification as to what staff work should be included in the cost of evaluating the operator's mining permit application. Specifically, you point out that, in order to construct and operate a mine, an applicant will also be required to obtain several additional departmental permits. 
In any particular mining operation, these permits would regulate activities affecting the surface water, groundwater and activities resulting in air pollution as well as the generation, treatment, storage and disposal of wastes. You ask if section 144.85(2)(a) requires that the cost of evaluating the nonmining permits be included in the mining permit application fee. 76 Op. Att'y Gen. 150, 151-152 (1987) This raises a question of statutory interpretation, the purpose of which is to ascertain and give effect to the intent of the Legislature. DeMars v. LaPour , 123 Wis. 2d 366, 370, 366 N.W.2d 891 (1985). When interpreting a statute, the primary source of interpretation is the language of the statute itself. State v. Consolidated Freightways Corp. , 72 Wis. 2d 727, 737, 242 N.W.2d 192 (1976). The threshold question to be addressed when construing a statute is whether the language used is ambiguous. State v. Engler , 80 Wis. 2d 402, 406, 259 N.W.2d 97 (1977). As stated in Stoll v. Adriansen , 122 Wis. 2d 503, 510, 362 N.W.2d 182 (Ct. App. 1984), "if the statutory language is plain and clearly understood, that meaning must be given to the statute." A statute is ambiguous if reasonable persons could disagree as to its meaning. The test for ambiguity is whether the statute is capable of being understood by reasonably well-informed persons in two or more different senses. State v. Derenne , 102 Wis. 2d 38, 45, 306 N.W.2d 12 (1981). When two interpretations of a statute are possible, the intent of the Legislature is disclosed by the language of the statute in relation to the scope, history, context, subject matter and the object intended to be remedied or accomplished. State v. Automatic Merchandisers , 64 Wis. 2d 659, 663, 221 N.W.2d 683 (1974). 76 Op. Att'y Gen. 150, 152 (1987) The phrase "cost of evaluation" of the mining permit application as used in section 144.85(2)(a) is ambiguous because it is unclear whether the DNR costs associated with evaluating nonmining permits, licenses and approvals related to the mining operation are to be included in the application fee. One could take the position that the mining permit fee application applies only to DNR staff work which is solely attributable to the evaluation of the mining permit and that work done on other permits related to the mine and to the ultimate approval or disapproval of the permit to mine are not subject to the fee. An alternative view is that the mine permit is an "umbrella" permit under which all other permitting falls and therefore the costs of evaluating all DNR requirements related to the mining activities are subject to the permit fee. 76 Op. Att'y Gen. 150, 152 (1987) For the reasons stated below, it is my opinion that all staff work necessary to determine whether a mining permit should be granted, including the evaluation of other environmental requirements, must be included in the mining permit application fee. 76 Op. Att'y Gen. 150, 152 (1987) The intent of a statute must be derived from the act as a whole. State ex rel. B'nai B'rith F. v. Walworth County , 59 Wis. 2d 296, 308, 208 N.W.2d 113 (1973). A review of the MMRA, sections 140.82 to 144.94, reveals that the process of obtaining a mining permit was intended to provide for the comprehensive regulation of a metallic mineral mining operation. Section 144.83 gives the DNR broad authority to regulate exploration, prospecting, mining and reclamation. 76 Op. Att'y Gen. 
150, 152-153 (1987) The Legislature was aware that a mining operation would necessarily require the obtaining of nonmining permits, licenses and approvals from the DNR. It provided that the hearing on a mining permit application is to be a master hearing and, when possible, is to include all environmentally related activities regulated by the DNR. Section 144.836 provides: 76 Op. Att'y Gen. 150, 153 (1987) (1) SCOPE. (a) The hearing on the prospecting or mining permit shall cover the application and any statements prepared under s. 1.11 [environmental impact statements] and, to the fullest extent possible, all other applications for approvals, licenses and permits issued by the department. 76 Op. Att'y Gen. 150, 153 (1987) The DNR has promulgated rules designed to use the mining permit as the means by which a comprehensive evaluation of the proposed mine operation takes place. Wis. Admin. Code NR 132.01. The mining plan required to be submitted by the applicant must include plans for complying with all environmental requirements. Wis. Admin. Code NR 132.07. 76 Op. Att'y Gen. 150, 153 (1987) Each approval or denial of an application must be accompanied by detailed findings of fact, conclusions of law and an order. Sec. 144.85(5)(a)f.2., Stats. In order to issue a mining permit, the department must find, among other things, that: 76 Op. Att'y Gen. 150, 153 (1987) The proposed operation will comply with all applicable air, groundwater, surface water and solid and hazardous waste management laws and rules of the department. 76 Op. Att'y Gen. 150, 153 (1987) .... 76 Op. Att'y Gen. 150, 153 (1987) The proposed mine will not endanger public health, safety or welfare. 76 Op. Att'y Gen. 150, 153 (1987) Sec. 144.85(5)(a)1.b. and d., Stats. 76 Op. Att'y Gen. 150, 153 (1987) Plainly, the statutes require that DNR include in its evaluation of the mining permit application an evaluation of the applicant's compliance with all related environmental laws. This evaluation necessarily includes all information submitted in the application, all information received at the master hearing and any other information required by the department for its consideration when deciding whether to issue the permit. 76 Op. Att'y Gen. 150, 153-154 (1987) The Legislature recognized that the mining application fee would necessarily overlap with costs normally assessed by the DNR when carrying out other regulatory responsibilities. It specifically provided for an offset so there would be no duplicative charges. Section 144.85(2)(a) provides: "[t]he fee may be revised for persons to reflect the payment of fees for the same services to meet other requirements." This provision manifests the intent of the Legislature to include all of the actual costs of the evaluation even when the evaluation included "services" necessary to insure that other requirements are being met. The only limitation on the fee assessment is that it should be reduced in those circumstances where the services were already paid for under nonmining regulatory fees. 76 Op. Att'y Gen. 150, 154 (1987) These other regulations provide for a variety of fee structures. For example, when an environmental impact statement is required, the full cost of preparation by the DNR is charged to the applicant. Sec. 23.40(3), Stats. Thus, any charges for work performed in preparing the environmental impact statement cannot be charged as a cost of the mining permit application fee. 76 Op. Att'y Gen. 
150, 154 (1987) Other fees assessed by the DNR are fixed amounts and are not based on actual costs. For example, a major new source of air pollution would be required to pay a basic fee of $2,550. Wis. Admin. Code NR 410.03(1)(a)3.¯ 1 In this instance any evaluation costs associated with assuring compliance with air pollution control regulations that are in excess of the air permit fees and which have not been charged as part of the cost of preparation of an environmental impact statement are chargeable to the applicant as part of the mining permit application fee. 76 Op. Att'y Gen. 150, 154 (1987) There are also DNR approvals that may be required in mining operations for which there is no fee. For example, section 144.025(2)(e) and Wis. Admin. Code ch. NR 112 require approval for high capacity wells. There is no fee charged for processing the approval. Thus, when the approval is done as part of the mining permit evaluation, the full cost of work done must be charged as part of the application fee. 76 Op. Att'y Gen. 150, 154-155 (1987) The assessment of all of the actual costs associated with approving a metallic mineral mining operation reflects a legislative purpose to have the operator of the mine, rather than the public, bear the costs of approval. Like the preparation of an environmental impact statement, the effort required to decide whether to issue a mining permit may be substantial. In most instances, the evaluation of nonmining permits that are related to the mining operation is a far more complex and demanding process than is normally the case in nonmining circumstances. The usual permit fees fall far short of covering these costs. In order to cover these expenses, the Legislature provided that the applicant pay all of the actual costs of all work necessary to decide on the application.¯ 2 76 Op. Att'y Gen. 150, 155 (1987) Your second question asks whether withdrawal of the mining permit application prior to a final decision by the DNR affects your obligation to assess the mining permit fee. You point out that Wis. Admin. Code 132.06(3)(a) provides that the initial mining fee is $10,000 and that upon completion of the evaluation the fee is to be adjusted to reflect the actual cost of evaluation. The rule states that the completion occurs upon the issuance of an order to grant or deny a mining permit. You note that Exxon Minerals and Coal Company withdrew its application shortly before the master hearing and that therefore the permit application will not be acted upon. In my opinion, the withdrawal of the application necessarily terminates the evaluation process and therefore the fee adjustment must reflect costs up to the time of withdrawal. 76 Op. Att'y Gen. 150, 155 (1987) While it is correct that Wis. Admin. Code NR 132.06(3)(a) does not appear to provide for this possibility, the rule should be construed to avoid an unreasonable or absurd result.¯ 3 Wis. Environmental Decade v. Public Service Comm. , 84 Wis. 2d 504, 528, 267 N.W.2d 609 (1978). Rules are subject to the same principles of construction as applied to statutes. Law Enforce. Stds. Bd. v. Lyndon Station , 101 Wis. 2d 472, 488, 305 N.W.2d 89 (1981). 76 Op. Att'y Gen. 150, 155 (1987) The statute upon which the rule is based does not define when completion of the evaluation occurs. Sec. 144.85(2)(a), Stats. The legislative purpose to require reimbursement to the DNR for incurred costs would not be fulfilled if an applicant could make an eleventh hour withdrawal and thereby pass the costs to the public. 76 Op. 
Att'y Gen. 150, 155 (1987) DJH:RAS 76 Op. Att'y Gen. 150, 150 (1987) Footnote 1: Section 144.399(3) provides for an offset of air pollution permit fees for costs assessed by the DNR when doing air quality analysis on the same pollution source in conjunction with the preparation of an environmental impact statement. Thus, as in section 144.85, the Legislature sought to avoid duplicative costs while at the same time not allowing an applicant to avoid costs incurred by the DNR merely because the costs were related to one or more regulatory programs. 76 Op. Att'y Gen. 150, 150 (1987) Footnote 2: The Legislature also took other measures to insure that an approved mining operation would not result in costs to the public. Section 144.86 requires a mine operator to post a bond to cover any state reclamation costs in the event the operator fails to comply with the MMRA or permit. This provision is further evidence of legislative intent that the costs of mining related activities should not be paid by the public. 76 Op. Att'y Gen. 150, 150 (1987) Footnote 3: If the lack of a denial of the application were to prevent DNR from recovering costs, the department could simply go through the otherwise meaningless exercise of issuing a denial, on the basis of the withdrawal.
http://docs-preview.legis.wisconsin.gov/misc/oag/archival/_278
2019-03-18T20:03:53
CC-MAIN-2019-13
1552912201672.12
[]
docs-preview.legis.wisconsin.gov
Configuring backup schedule Overview This section contains instructions for configuring backup schedule for laptops and mobile devices. About configuring backup schedule You can automate backups by defining backup interval and schedule for laptops and mobile devices. You can also allow users to modify the backup interval for laptops and pause on-going backups. If the user’s laptop is off, sleeping, or hibernating then inSync does not take a backup until the laptop is in network. The backup starts at its next scheduled interval. In this section In this section, you can find:
https://docs.druva.com/004_inSync_Professional/5.3.1/040_Data_Backup_and_Restore/050_Configuring_backup_schedule
2019-03-18T20:01:22
CC-MAIN-2019-13
1552912201672.12
[]
docs.druva.com
Configuration Reference Overview This section lists the engine variables that you can set to configure your MemSQL cluster. Prior to setting these variables, you should understand how sync and non-sync variables work. This section also explains the configuration files used by the MemSQL management tools and by the lower-level management tool, memsqlctl. Finally, this section lists the cluster, database and table limits for your MemSQL configuration.
https://docs.memsql.com/configuration-reference/v6.7/overview-reference/
2019-03-18T20:15:13
CC-MAIN-2019-13
1552912201672.12
[]
docs.memsql.com
Admin order address manager. Collects order addressing information, updates it on user submit, etc. Admin Menu: Orders -> Display Orders -> Address. Definition at line 8 of file order_address.php. Iterates through data array, checks if specified fields are filled in, cleanups not needed data Definition at line 48 of file order_address.php. Executes parent method parent.render(), creates oxorder object and passes it's data to Smarty engine. Returns name of template file "order_address.tpl". Reimplemented from oxAdminDetails. Definition at line 17 of file order_address.php. Saves ordering address information. Reimplemented from oxAdminView. Definition at line 89 of file order_address.php.
https://docs.oxid-esales.com/sourcecodedocumentation/4.8.0.333c29d26a16face3ac1a14f38ad4c8efc80fefc/class_order___address.html
2019-03-18T20:22:54
CC-MAIN-2019-13
1552912201672.12
[]
docs.oxid-esales.com
TOC & Recently Viewed Recently Viewed Topics Install a Nessus Scanner This section details instructions for installing a Nessus Scanner on Mac OS X, Unix, and Windows operating systems. During the browser portion of the Nessus Scanner install, you will enter settings to link the Nessus Scanner to Tenable.io. To get started, view the hardware requirements and software requirements, and then complete the installation steps.
https://docs.tenable.com/cloud/Content/AdditionalResources/InstallANessusScanner.htm
2019-03-18T19:54:02
CC-MAIN-2019-13
1552912201672.12
[]
docs.tenable.com
. 156 (1987) Up Up 76 Op. Att'y Gen. 156, 156 (1987) Compatibility; Commander in the Brown County sheriff's department, a supervisory position, is not incompatible with the office of president of a village within Brown County. Abstention from participation in discussions or voting or resignation from one of the offices should be considered if a conflict arises. OAG 36-87 June 29, 1987 76 Op. Att'y Gen. 156, 156 (1987) Kenneth J. Bukowski , Corporation Counsel Brown County 76 Op. Att'y Gen. 156, 156 (1987) You have asked whether the offices of village president and commander in the Brown County sheriff's department are compatible. 76 Op. Att'y Gen. 156, 156 (1987) You explain that the four commanders in the sheriff's department are in management positions working directly under the supervision of the sheriff. The commander is a deputized, non-bargaining unit, supervisory position in the department. 76 Op. Att'y Gen. 156, 156 (1987) Currently, one of the commanders in the Brown County sheriff's department is president of a village that has contracted the department to provide law enforcement services for the village. The agreement requires the county to furnish full-time protection services to the village and requires the village to pay for the services at the same cost as the contract rates for the sheriff's department non-supervisory employes, to pay certain fringe benefits for the five deputies assigned to the village and to pay a support charge equal to forty percent of the contract costs. 76 Op. Att'y Gen. 156, 156 (1987) The contract further provides that the assignment of officers to the village is within the discretion of the sheriff's department and shall be made on the same basis as assignments to other sections of the county. The contract, however, includes an exception that states that no officer shall be assigned to the village without the approval of the village. The approval cannot be withheld unreasonably or without just cause. 76 Op. Att'y Gen. 156, 156 (1987) Your letter does not state whether the village president in his office of commander has any duties that affect the assignment of officers to the village or that directly affect the village in any other way. 76 Op. Att'y Gen. 156, 156-157 (1987) Offices can be incompatible because a statute declares them to be so or they can be incompatible under the common law rule. I am not aware of any statute that declares the offices of village president and supervisory deputy sheriff incompatible. The discussion of the common law of incompatibility set forth in 63 Am. Jur. 2d Public Officers and Employees 78 (1984) was quoted in 74 Op. Att'y Gen. 50, 52 (1985): 76 Op. Att'y Gen. 156, 157 (1987) Incompatibility is to be found in the character of the offices and their relation to each other, in the subordination of the one to the other, and in the nature of the duties and functions which attach to them. They are generally considered incompatible where such duties and functions. Two offices or positions are incompatible if there are many potential conflicts of interest between the two, such as salary negotiations, supervision and control of duties, and obligations to the public to exercise independent judgment. If the duties of the two offices are such that when placed in one person they might disserve the public interests, or if the respective offices might or will conflict even on rare occasions, it is sufficient to declare them legally incompatible. 
Incompatibility has been said to exist when there is a built-in right of the holder of one position to interfere with that of the other, as when the one is subordinate to, or subject to audit or review by, the second; obviously, in such circumstances, where both posts are held by the same person, the design that one act as a check on the other would be frustrated. Incompatibility exists only when the two offices or positions are held by one individual, and it does not exist where the two offices or positions are held by two separate individuals, even though such individuals are husband and wife. 76 Op. Att'y Gen. 156, 157 (1987) An incompatability exists whenever the statutory functions and duties of the offices conflict or require the officer to choose one obligation over another. 76 Op. Att'y Gen. 156, 157-158 (1987) Under these standards, the offices of supervisory deputy sheriff and village president are not incompatible. The offices are parts of separate governmental bodies and neither is subordinate to the other. The duties and functions of each office do not, by their nature, clash. Being roles in separate governmental bodies, the two offices would not normally face conflicts over salary negotiations, supervision and control of duties or obligations to the public to exercise independent judgment. 76 Op. Att'y Gen. 156, 158 (1987) The duties of the two offices are consistent to the extent that each is required to maintain peace and good order and see that the laws of its jurisdiction are obeyed. See secs. 59.24(1) and 61.24, Stats. 76 Op. Att'y Gen. 156, 158 (1987) The contract between the village and the county creates a possibility of conflict between the offices of sheriff's commander and village president, however. Pursuant to paragraph C-4 of the contract, the village can refuse to approve an officer assigned to the village. In his role as commander, the village president might assign an officer to the village who would not meet with the approval of other villagers. The president then might be faced with a conflict in his role as commander and his role as president representing the villagers. 76 Op. Att'y Gen. 156, 158 (1987) Paragraph C-5 of the contract provides that the person designated as liaison between the village and the county for the provision of police services "shall be the person holding the position of Patrol Commander within the organizational structure of the Brown County Sheriff's Department and the Allouez Village Administrator or their respective designee." A conflict may arise if the village president is the patrol commander. In that event, he would be representing the county as commander and in his role as village president he could strongly influence the village administrator who may be subordinate to him. 76 Op. Att'y Gen. 156, 158 (1987) If either of these situations create possible conflicts, the advice previously given by this office is applicable: "In the event a conflict does arise, the individual should abstain from participation in discussions and from voting on such issue and might consider resigning from one of the offices." 74 Op. Att'y Gen. at 54. 76 Op. Att'y Gen. 156, 158 (1987) As in this case, where the Legislature has not specified that the offices are incompatible, and where conflicts are not readily visible by comparison of the normal duties of the offices, "the question of whether one person should hold more than one office is best left to the electors." 74 Op. Att'y Gen. at 54. 
In this case, that choice would be left with the voters of the village and the sheriff of Brown County. 76 Op. Att'y Gen. 156, 158 (1987) DJH:SWK
http://docs-preview.legis.wisconsin.gov/misc/oag/archival/_279
2019-03-18T19:23:33
CC-MAIN-2019-13
1552912201672.12
[]
docs-preview.legis.wisconsin.gov
Using SOAtest's DB tool, you can validate that the data was added to the database correctly. Such validation is performed by adding one of SOAtest's XML validation tools to the output from the DB tool; this is done using the same strategy demonstrated in the tutorial's Creating Regression Tests Using the Diff Tool exercise. In addition, the DB tool can also be used to perform database setup or cleanup actions as needed to support your testing efforts.
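The same idea, running the system under test and then asserting on what actually landed in the database, can be sketched outside of SOAtest as well. The snippet below is a generic, hypothetical example (the SQLite file, table, and column names are invented) and is not SOAtest's DB tool.

```python
# Hypothetical database-layer check: after the tested operation runs, query
# the database directly and assert on the stored values.
import sqlite3

conn = sqlite3.connect("orders.db")  # invented example database
row = conn.execute(
    "SELECT status, total FROM orders WHERE order_id = ?", ("A-1001",)
).fetchone()

assert row is not None, "expected order A-1001 to be written by the test scenario"
status, total = row
assert status == "CONFIRMED", f"unexpected status: {status}"
assert abs(total - 99.95) < 0.01, f"unexpected total: {total}"
conn.close()
print("database layer checks passed")
```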
https://docs.parasoft.com/display/SOAVIRT9104/Validating+the+Database+Layer
2019-03-18T19:29:42
CC-MAIN-2019-13
1552912201672.12
[]
docs.parasoft.com
Task implementation¶ Each task you want to run in Selinon has to be of type SelinonTask. The only thing you need to define is the run() method, which accepts a node_args parameter based on which the task computes its results. The return value of your task is then checked against a JSON schema (if configured so) and stored in a database or storage if a storage was assigned to the task in the YAML configuration. from selinon import SelinonTask class MyTask(SelinonTask): def run(self, node_args): # compute a + b return {'c': node_args['a'] + node_args['b']} Now you need to point to the task implementation from the YAML configuration files ( nodes.yaml): tasks: # Transcripts to: # from myapp.tasks import MyTask - name: 'MyTask' import: 'myapp.tasks' See the YAML configuration section for all possible configuration options. In order to retrieve data from parent tasks or flows you can use the prepared SelinonTask methods. You can also access the configured storage and so on. Task failures¶ First, make sure you are familiar with the retry options that can be passed in the YAML configuration. If your task should not be rescheduled due to a fatal error, raise FatalTaskError. This will cause a fatal task error and the task will not be rescheduled. Keep in mind that max_retry from the YAML configuration file will be ignored! If you want to retry, just raise any appropriate exception that you want to track in trace logs. In case you want to reschedule your task without affecting max_retry, just call self.retry(). The optional countdown argument specifies the countdown in seconds for rescheduling. Note that this method is not fully compatible with Celery's retry mechanism. Check the SelinonTask code documentation. Some implementation details¶ Here are some implementation details that are not necessarily helpful for you: SelinonTask is not a Celery task - the constructor of the task is transparently called by SelinonTaskEnvelope, which handles flow details propagation and also Selinon tracepoints. SelinonTaskEnvelope is of type Celery task.
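As a sketch of the failure-handling options described above, a task might combine FatalTaskError, a tracked exception, and self.retry() as shown below. The helper function and exception classes are invented placeholders standing in for your own application code.

```python
# Sketch only: fetch_payload, InvalidInputError and TransientServiceError are
# hypothetical placeholders for application-specific code.
from selinon import SelinonTask, FatalTaskError


class InvalidInputError(Exception):
    """Placeholder for an unrecoverable application error."""


class TransientServiceError(Exception):
    """Placeholder for a temporary error worth retrying."""


def fetch_payload(url):
    """Placeholder for the real work done by the task."""
    return {'url': url, 'status': 'ok'}


class FetchTask(SelinonTask):
    def run(self, node_args):
        try:
            payload = fetch_payload(node_args['url'])
        except InvalidInputError as exc:
            # Unrecoverable failure: do not reschedule; max_retry is ignored.
            raise FatalTaskError(str(exc))
        except TransientServiceError:
            # Reschedule in 10 seconds without affecting max_retry.
            self.retry(countdown=10)
        return {'payload': payload}
```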
https://selinon.readthedocs.io/en/latest/tasks.html
2019-03-18T19:44:15
CC-MAIN-2019-13
1552912201672.12
[]
selinon.readthedocs.io
Do Not Publish Phones¶ This condition located on the Contact Info category tab in Search Builder allows you to find every individual with the Do Not Publish Phones box checked on their people record. This setting relates to whether or not any phone numbers will be published in the online Member or Family Directory for an organization. A My Data user can check the box for this when updating their own people record. Or a staff member with Edit can set this for someone. Use Case If you have a number of classes that plan to use the online Member or Family Directory for their class, you can run this search in order to find out who will not have their phone numbers listed in the directory.
http://docs.touchpointsoftware.com/SearchBuilder/QB-DoNotPublishPhones.html
2019-03-18T19:27:29
CC-MAIN-2019-13
1552912201672.12
[]
docs.touchpointsoftware.com
Code Metrics enables you to evaluate code metrics right in the code editor while writing code. By default, a metric appears as a number to the left of a member declaration. The following metrics are available. Use the Show Metrics options page to specify the shown metric and the way it appears. You can specify the shown metric via the drop-down menu as well. To call the menu, click a metric. You can enable or disable Code Metrics from either the Show Metrics options page or from the CodeRush Classic toolbar. This product is designed for outdated versions of Visual Studio. Although Visual Studio 2015 is supported, consider using the CodeRush extension with Visual Studio 2015 or higher.
https://docs.devexpress.com/CodeRush/6532/concepts/visualization-tools/code-metrics
2019-03-18T19:32:15
CC-MAIN-2019-13
1552912201672.12
[]
docs.devexpress.com
The calico-node container (started from the calico/node v2.5.1 image with "start_runit") should show as Up in the docker ps output. Next Steps Now that you have a basic two node Ubuntu cluster setup, see Security using Calico Profiles
https://docs.projectcalico.org/v2.5/getting-started/docker/installation/vagrant-ubuntu/
2019-03-18T19:23:14
CC-MAIN-2019-13
1552912201672.12
[]
docs.projectcalico.org
After reading this guide, you’ll know: Meteor integrates with Cordova, a well-known Apache open source project, to build mobile apps from the same codebase you use to create regular web apps. With the Cordova integration in Meteor, you can take your existing app and run it on an iOS or Android device with a few simple commands.... (If you want to know what features are supported on what browsers and versions, caniuse.com is a great resource.). Important: The Crosswalk project is not maintained anymore. The last Crosswalk release was Crosswalk 23. Read more in this announcement. In order to build and run mobile apps, you will need to install some prerequisites on your local machine. In order to build and run iOS apps, you will need a Mac with Apple Xcode developer tools installed. We recommend installing the latest version, but you should also check the Meteor history for any specific version dependencies.. (You will still be expected to have read and understood the Xcode and Apple SDKs Agreement). As of Cordova iOS 4.3.0 you may also need to sudo gem install cocoapodsto resolve a dependency with PhoneGap Push Plugin. In order to build and run Android apps, you will need to: ANDROID_HOMEand add the tools directories to your PATH On Linux, you may want to use your distribution’s package manager to install a JDK; on Ubuntu, you can even use Ubuntu Make to install Android Studio and all dependencies at the same time. the correct version of the Android Studio SDK Tools: To install an older version of SDK tools: tools/folder in ~/Library/Android/sdk/ Note: If you’re using older version of Meteor, you may also need to install an older version of Android SDK, for example with the Android SDK Manager that comes with Android Studio. Then, you can install Ubuntu Make itself: sudo apt-get install ubuntu-make And finally you use Ubuntu Make to install Android Studio and all dependencies: umake android ANDROID_HOMEand adding the tools directories to your PATH Cordova will detect an Android SDK installed in various standard locations automatically, but in order to use tools like android or adb from the terminal, you will have to make some changes to your environment. ANDROID_HOMEenvironment variable to the location of the Android SDK. If you’ve used the Android Studio setup wizard, it should be installed in ~/Library/Android/sdkby default. $ANDROID_HOME/tools, and $ANDROID_HOME/platform-toolsto your PATH You can do this by adding these lines to your ~/.bash_profile file (or the equivalent file for your shell environment, like ~/.zshrc): # Android export ANDROID_HOME="$HOME an AVD with an API level that is supported by the version of Cordova Android you are using. This will run your app on a default simulated iOS device. You can open Xcode to install and select another simulated device.... You will also need to join the Apple Developer Program to deploy your app on the Apple iOS App Store. meteor run ios-deviceto open your project in Xcode. Allow USB debugging?prompt on the device. meteor run android-deviceto build the app, install it on the device, and launch it. To check if your device has been connected and set up correctly, you can run adb devicesto get a list of devices. A full-stack mobile app consists of many moving parts, and this can make it difficult to diagnose issues.: consolelogging calls from server-side code. consolelogging calls from client-side code.’:. See this article for instructions on how to remote debug your Android app with the Chrome DevTools. 
location.reload()in the DevTools console to reload a running app, this time with the remote debugger connected. hash.: METEOR_CORDOVA_COMPAT_VERSION_IOS=3ed5b9318b2916b595f7721759ead4d708dfbd46 meteor run ios-device # or METEOR_CORDOVA_COMPAT_VERSION_IOS=3ed5b9318b2916b595f7721759ead4d708dfbd46 meteor build ../build --server=127.0.0.1:3000: meteor add cordova-plugin-camera meteor add cordova-plugin-gyroscope METEOR_CORDOVA_COMPAT_VERSION_EXCLUDE='cordova-plugin-camera,cordova-plugin-gyroscope' meteor run ios-device your compatibility version would not change. The METEOR_CORDOVA_COMPAT_VERSION_* env vars must be present while building your app through run, build or deploy... Hot code pushing new JavaScript code to a device could accidentally push code containing errors, which might leave users with a broken app (a “white. // The timeout is specified in milliseconds! App.setPreference('WebAppStartupTimeout', 30000);. Be warned however, that. Note:. Some Cordova plugins require certain parameters to be set as part of the build process. For example, com-phonegap-plugins-facebookconnect requires you to specify an APP_ID and APP_NAME. You can set these using App.configurePlugin in your mobile-config.js.': '' }); You can remove a previously added plugin using meteor remove: meteor remove cordova:cordova-plugin-camera meteor remove cordova:com.phonegap.plugins.facebookconnect meteor remove cordova:cordova-plugin-underdevelopment); }); Just as you can use Meteor.isServer and Meteor.isClient to separate your client-side and server-side code, you can use Meteor.isCordova to separate your Cordova-specific code from the rest of your code. if (Meteor.isServer) { console.log("Printed on the server"); } if (Meteor.isClient) { console.log("Printed in browsers and mobile apps"); } if (Meteor.isCordova) { console.log("Printed only in mobile Cordova apps"); }.. cordova.plugins.diagnosticplugin. meteor add cordova:[email protected] if (Meteor.isCordova) { cordova.plugins.diagnostic.isCameraAuthorized( authorized => { if (!authorized) { cordova.plugins.diagnostic.requestCameraAuthorization( granted => { console.log( "Authorization request for camera use was " + (granted ? "granted" : "denied")); }, error => { console.error(error); } ); } }, error => { console.error(error); } ); } mobile-config.jsfile in the root of your app directory during build, and uses the settings specified there to generate Cordova’s config.xml. App.info({ id: 'com.meteor.examples.todos', name: 'Todos', version: "0.0.1" }); App.setPreference('BackgroundColor', '0xff0000ff'); App.setPreference('Orientation', 'default'); App.setPreference('Orientation', 'all', 'ios'); Refer to the preferences section of the Cordova documentation for more information about supported options.. If you have installed the Crosswalk plugin you will need to manually copy the APK file cp ~/build-output-directory/android/project/build/outputs/apk/android-armv7-release-unsigned.apk ~. Note: Ensure that you have secure backups of your keystore ( ~/.keystore. © 2011–2017 Meteor Development Group, Inc. Licensed under the MIT License.
https://docs.w3cub.com/meteor~1.5/mobile/
2019-03-18T19:36:18
CC-MAIN-2019-13
1552912201672.12
[]
docs.w3cub.com
14.7.4.21 PolylineSetPatch A patch containing a set of polylines. For performance reasons, the geometry of each patch is described in only one 1D array of 3D points, which aggregates the nodes of all the polylines together. To be able to separate the polyline descriptions, additional information is added about the type of each polyline (closed or not) and the number of 3D points (node count) of each polyline. This additional information is contained in two arrays, which are associated with each polyline set patch. The dimension of these arrays is the number of polylines gathered in one polyline set patch. - The first array contains a Boolean for each polyline (closed or not closed). - The second array contains the count of nodes for each polyline. Derived Classes: (none)
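To make the packing scheme concrete, the sketch below shows how the flat node array can be split back into individual polylines using the two per-polyline arrays. This C++ snippet is purely illustrative and not part of the RESQML schema; the type and field names are hypothetical.

#include <cstddef>
#include <vector>

struct Point3d { double x, y, z; };

// Hypothetical in-memory view of a PolylineSetPatch.
struct PolylineSetPatch
{
    std::vector<Point3d> nodes;        // nodes of all polylines, concatenated
    std::vector<bool>    closedFlags;  // one entry per polyline: closed or not
    std::vector<int>     nodeCounts;   // one entry per polyline: number of nodes
};

// Split the aggregated 1D node array back into one vector of nodes per polyline.
std::vector<std::vector<Point3d>> splitPolylines(PolylineSetPatch const & patch)
{
    std::vector<std::vector<Point3d>> polylines;
    std::size_t offset = 0;
    for (std::size_t i = 0; i < patch.nodeCounts.size(); ++i)
    {
        std::size_t const count = static_cast<std::size_t>(patch.nodeCounts[i]);
        polylines.emplace_back(patch.nodes.begin() + offset,
                               patch.nodes.begin() + offset + count);
        offset += count;  // patch.closedFlags[i] records whether polyline i is closed
    }
    return polylines;
}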
http://docs.energistics.org/RESQML/RESQML_TOPICS/RESQML-500-265-0-R-sv2010.html
2019-03-18T19:24:21
CC-MAIN-2019-13
1552912201672.12
[]
docs.energistics.org
Slide player¶ The slide player is a config player in the MPF media controller that is used to play slide content, including showing slides, hiding slides, and removing slides. (This player is part of the MPF media controller and only available if you’re using MPF-MC for your media controller.) List of settings and options¶ Refer to the slide_player section of the config file reference for a full explanation of how to use the slide player in both config and show files.
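As a rough illustration only (the slide_player section of the config file reference linked above is the authoritative source for the option names), a machine config that shows a slide when an event is posted might look something like the following; the slide name, widget text, and event name are made up for this example.

# Hypothetical machine config snippet
slides:
  example_slide:
    - type: text
      text: HELLO PLAYER

slide_player:
  ball_started: example_slide   # show example_slide when ball_started is posted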
http://docs.missionpinball.org/en/latest/config_players/slide_player.html
2019-03-18T20:18:10
CC-MAIN-2019-13
1552912201672.12
[]
docs.missionpinball.org
Org Search Member¶ This Condition is located on the Enrollments category tab in Search Builder and was created for internal use. When you Convert to Search from the Organization Search page, this is the Condition that is used. However, if you want, you can use this Condition yourself and select (or enter data in) the fields that you would normally filter by on the Organization Search page. The results will be all current members in orgs matching your filters.
http://docs.touchpointsoftware.com/SearchBuilder/QB-OrgSearchMember.html
2019-03-18T19:43:05
CC-MAIN-2019-13
1552912201672.12
[]
docs.touchpointsoftware.com
API Reference Number of Palette Colors colors Determines the size of the default palette returned by the palette parameter. Default value is 6. Valid values are in the range 0 – 16. The value only specifies the number of colors in the default palette, given that the dominance palette may or may not be present. See the palette docs for more information about the palettes.
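For example, using the sample image referenced on this page, a request for a JSON palette limited to four colors might look like the following (the source domain will of course be your own imgix source):

https://static.imgix.net/treefrog.jpg?palette=json&colors=4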
https://docs.imgix.com/apis/url/color-palette/colors
2018-09-18T18:27:59
CC-MAIN-2018-39
1537267155634.45
[array(['https://static.imgix.net/treefrog.jpg?w=640&h=320&fit=crop&auto=format', 'A very colorful little frog'], dtype=object) ]
docs.imgix.com
This shortcode allows you to embed links and banners, please refer to: Affiliate Links This feature is available in Affiliates Pro and Affiliates Enterprise. You can easily add any number of affiliate banners to any page on your site. Affiliate (text) links are used by your affiliates to link to your site without using a specific banner image. You can use the [affiliates_affiliate_link] shortcode on any page to show affiliates how their link looks like and what code they have to embed on their site to link to yours. Embed an affiliate link using this shortcode on a page: [affiliates_affiliate_link /] Whenever an affiliate is logged in and visits the page where this is embedded, they will see their actual affiliate link to your site. Affiliates need to know how to reproduce this link on their site, emails, etc. To provide the code they need, the render shortcode attribute is used: [affiliates_affiliate_link render="code" /] So, the first shortcode shows an affiliate how the link will appear when embedded on a page using the code. The second shortcode provides affiliates the code they have to embed on their page to make the link appear. Example: creating a page showing a basic affiliate link Create a new page on your blog, then copy and paste the following: “Your affiliate link: [affiliates_affiliate_link /]" Embed this code in your emails or tweets: [affiliates_affiliate_link render="code" /] Save the changes and visit the page logged in as an affiliate. You will see the affiliate link and the code used to embed the affiliate link. Example: linking to a specific page Place the following on a post or page to create an affiliate link to the page at – of course the URL must point to a page on your site. Your affiliate link: [affiliates_affiliate_link url="" /] Embed this code in your emails or tweets: [affiliates_affiliate_link url="" render="code" /] Shortcode attributes. Basic attributes url : (optional) Use this attribute to link to a specific page on your site. By default this attribute is empty, thus the affiliate link will point to your site’s root. render : (optional) Either code or html (default). If set to code, the HTML that must be used to produce the affiliate link is shown. If set to html, the affiliate link is rendered. content : (optional) For text links, this will determine the link text that will appear linked to your site. This option will override any content enclosed by the shortcode. attachment_id : (optional) Use this to render a banner based on an attachment uploaded to your site’s Media. type : (optional) One of append, auto (default) or parameter. Use append when other URL parameters are present and the affiliate ID should be appended as a URL parameter. Use parameter to append the affiliate ID as a URL parameter. Link attributes Link attributes allow to specify determined attributes of the HTML <a> tag that is used to construct the affiliate link. Supported attributes are listed below, prefixed by a_ any can be passed to the shortcode. Please refer to HTML/Elements/a or The A element for an explanation of supported attributes. a_class a_id a_style a_title a_name a_rel a_rev a_target a_type Image attributes Image attributes allow to specify determined attributes of the HTML <img> tag that is used to construct the affiliate banner. Supported attributes are listed below, prefixed by img_ these can be passed to the shortcode. Please refer to Including an image: the IMG element for an explanation of supported attributes. 
Please note that these attributes are only applied when an image is specified through the img_src attribute. Some may not be applied when the attachment_id is provided. img_alt img_class img_height img_id img_name img_src img_title img_width Examples Note that in each case you will usually embed the same shortcode twice, once without the render attribute to show how the link or banner looks like [affiliates_affiliate_link … /] and once again to show the code that affiliates have to embed on their site: [affiliates_affiliate_link … render="code" /] Make sure you don’t actually use the ellipsis … in the shortcode, as it just represents possible attributes. Simple: [affiliates_affiliate_link]Affiliate Link[/affiliates_affiliate_link] Given text as content : [affiliates_affiliate_link content="Really, click me, I'm a great link!"]Awesome link[/affiliates_affiliate_link] Given text as content with link attributes : [affiliates_affiliate_link content="Click me" a_class="clickme" a_id="clickme1" a_style="font-size:2em;" a_target="_blank" a_title="Click here"]Awesome link[/affiliates_affiliate_link] Link content inside shortcode with link attributes: [affiliates_affiliate_link a_class="clickit" a_id="link1" a_style="font-style:italics;" a_target="_blank" a_title="Click this"]Click here[/affiliates_affiliate_link] In most cases it will be sufficient to use the most basic version of the shortcodes for affiliate links and banners based on the attachment_id of an image you have uploaded to your site’s Media. If you are not familiar with basic HTML element attributes, do not just copy the above examples to your page. In that case, just use the affiliate area generation option and add simple versions of affiliate links and banners.
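For a banner based on an image uploaded to your site's Media library, a minimal pair of shortcodes could look like the example below; the attachment ID 123 and the destination URL are placeholders to replace with your own values.

Your affiliate banner: [affiliates_affiliate_link attachment_id="123" url="https://www.example.com/shop/" /]
Embed this code on your site: [affiliates_affiliate_link attachment_id="123" url="https://www.example.com/shop/" render="code" /]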
http://docs.itthinx.com/document/affiliates-enterprise/shortcodes/advanced-shortcodes/affiliates_affiliate_link/
2018-09-18T18:02:25
CC-MAIN-2018-39
1537267155634.45
[]
docs.itthinx.com
CAmkES 2.2.0 Release Notes New Features - realtime extensions: CAmkES systems can now run on realtime seL4. It is possible to configure the realtime properties of systems built to run on realtime seL4. - For more details, see the Realtime Extensions section of the CAmkES Manual. - support for seL4 3.1.0 Documentation Additions - “Keywords” section in manual
https://docs.sel4.systems/releases/camkes/camkes-2.2.0.html
2021-05-06T07:19:17
CC-MAIN-2021-21
1620243988741.20
[]
docs.sel4.systems
Online Authorization To be able to complete online authorization, you need to have an internet connection. In order to authorize your software online, here are the steps you need to follow. 1. Install the software from the distribution media. 2. Once you have installed your plug-in, open the main application (3ds Max or Maya). If you have an Internet connection, please connect and click the "Authorize" button. Note: If you own multiple licenses of the same product and you want to authorize all of them on the same computer, you can enter multiple serial numbers separated by a comma, as shown on the image above.
http://docs.afterworks.com/FumeFX5maya/Online%20Authorization.htm
2021-05-06T05:55:09
CC-MAIN-2021-21
1620243988741.20
[array(['Resources/Images/Doc/Online Authorization_280x319.png', None], dtype=object) array(['Resources/Images/Doc/Authorization_2_396x572.png', None], dtype=object) ]
docs.afterworks.com
HEROW Click & Collect Call the following method to enable the HEROW SDK to continue tracking end users' location events (such as geofence detections and position statements) in a Click & Collect context while the app is in the isInUse state. Android: In Java HerowDetectionManager.getInstance().launchClickAndCollect(); In Kotlin HerowDetectionManager.getInstance().launchClickAndCollect() How to stop the Click & Collect Call the following method to disable the background service: In Java HerowDetectionManager.getInstance().stopClickAndCollect(); In Kotlin HerowDetectionManager.getInstance().stopClickAndCollect()
https://docs.herow.io/sdk/6.3/android/click-and-collect.html
2021-05-06T06:40:36
CC-MAIN-2021-21
1620243988741.20
[]
docs.herow.io
Get started with TeamPulse faster. This section will guide you through the tool's interface and basic functionalities. Learn how to deploy and configure TeamPulse in your environment. Learn how to activate TeamPulse, add users, set permissions, and more. Learn how to manage projects in TeamPulse. Learn how to integrate TeamPulse with Microsoft TFS, Git, SVN and Test Studio. See common scenarios other users have asked about. Learn how to install, set up, use and moderate the TeamPulse feedback add-on. Learn how to use the REST API to integrate TeamPulse with other systems.
https://docs.telerik.com/teampulse/
2021-05-06T06:53:25
CC-MAIN-2021-21
1620243988741.20
[]
docs.telerik.com
This mode is used during an in-place upgrade, where the installer creates a backup of the files that you modified or newly introduced in the old installation in a .migrate.bkp.<old_version> backup directory and merges them into the new installation. This mode has been designed for the installer, but can be invoked by you too. Note: This mode must not be invoked by you if there are multiple .migrate.bkp.<version> directories under the <BASEDIR>/smarts directory.
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.2/sa-common-install-guide-1012/GUID-356F998F-A633-4EFB-B67A-9477B4C5E6EF.html
2021-05-06T08:02:26
CC-MAIN-2021-21
1620243988741.20
[]
docs.vmware.com
Incident resolution with first call resolution Francie Stafford is a service desk analyst who works on the Calbro Services service desk. She receives a call from Joe Unser, a Calbro Services benefits agent who cannot access one of his key applications because he is locked out of his user account. Francie creates an incident request, resolves the incident for Joe, and then closes the incident request. The following table describes the typical steps involved in this user scenario:
https://docs.bmc.com/docs/servicedesk1908/incident-resolution-with-first-call-resolution-878331417.html
2021-05-06T07:38:23
CC-MAIN-2021-21
1620243988741.20
[]
docs.bmc.com
Notes A new version of the agent has been released. Follow standard procedures to update your Infrastructure agent. Changed - Agent service binary is now newrelic-infra-service instead of newrelic-infra. The latter is now a child process spawned by the former. See further details. Added - Command channel. Enables the New Relic platform to trigger commands to agents via the command-api HTTPS endpoint. See further details. - Added a log entry for when initializing the Docker client fails: unable to initialize docker client. Bug fixes - Disabled keep-alive on cloud metadata requests to avoid leaking open connections.
https://docs.newrelic.com/jp/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-1559/
2021-05-06T07:50:26
CC-MAIN-2021-21
1620243988741.20
[]
docs.newrelic.com
dryad_metadata() to download Dryad file metadata dryad_package_dois() to get file DOIs for a Dryad package DOI (a package can have many files) (#22) dryad_files (formerly download_url()) now scrapes the Dryad page to get URLs to Dryad files instead of using their API, which was not dependable (#26) dryad_fetch gains a parameter try_file_names (a boolean) which, if TRUE, tries to extract file names out of URLs (#26) rdryad functions to hard code use of the xml return format, and followlocation to follow any redirects (#27) download_url() is now defunct, see dryad_files() solrium package instead of solr package for interaction with Dryad's Solr backend (#21) (#24) crul instead of httr for HTTP requests (#23) handle2doi and doi2handle to convert between handles and DOIs, and DOIs and handles, respectively (#25) download_url function name has been changed to dryad_files, but you can still use download_url until the next version. In addition, the download_url/dryad_files parameter id is changed to doi. dryad_fetch is improved, and uses curl::curl_download instead of download.file. It now accepts >1 input URL, but destfile length must equal the number of urls. d_*() functions to interact with the Dryad Solr search engine (#10) oai package. (#14)
https://docs.ropensci.org/rdryad/news/index.html
2021-05-06T07:30:37
CC-MAIN-2021-21
1620243988741.20
[]
docs.ropensci.org
This HowTo gives an introduction into compact data structures. It is designed independently of SeqAn but it covers design patterns and concepts that are the foundations of efficient C++ used throughout the library. A compact data structure maintains data, and the desired extra data structures over it, in a form that not only uses less space than usual, but is able to access and query the data in compact form, that is, without decompressing them. Thus, a compact data structure allows us to fit and efficiently query, navigate, and manipulate much larger datasets in main memory than what would be possible if we used the data represented in plain form and classical data structures on top. A good source of reading (which is online available for free): Gonzalo Navarro, Compact Data Structures Bitvectors are fundamental in the implementation of most compact data structures because of their compactness. For example, they can be used to mark positions of an array or to encode and compress data. Therefore, an efficient implementation is of utmost importance. A bitvector is a bit array of length of values (bits). In C++ there is no type that defines a single bit, such that you could use std::vector<bit> for this purpose. If you would use, let's say, std::vector<bool> instead, you would use 1 byte = 8 bit for each entry (note that many compilers actually optimize this special case to indeed a single bit representation but let's assume we want a global solution), or even worse std::vector<int> which uses 4 byte = 32 bit. Let us design a compact data structure that stores single bits as values while still supporting the basic operations: As noted above we need to represent bits using larger entities available in the C++ language. It usually pays off if you choose an integer with the size of a machine word , which is 64 on modern architectures, because most compilers offer a special set of functions for integers of this size. Therefore, we will use the C++ type uint64_t for our representation. In the previous sections we talked about arrays of bits, as in a consecutive storage of bits. In C++ we will the use type std::vector for storing values. Implement a new data structure (a struct) called Bitvector that has a member variable called data of type std::vector over uint64_t. The constructor should have one parameter that is the number of bits you would like to store in your bitvector. If you are inexperienced with C++, take a look at this code snippet and fill in the TODO: The bitvector needs to support access to the single bits via a member function read(i), which returns either 0 or 1 depending on the bit at position i. We now face the problem that we do not store single bits but groups of 64 bits in one uint64_t integer. For example, let a data vector contain the number 17. The 64 bit binary representation of 17 is It thus represents a group of 64 bits where the bits at position 59 and 63 (starting from the left and 0) are set to 1. So how do we access single bits within the integer? This can be achieved using bit manipulation. Read up on basic bit manipulation mechanisms in this tutorial if you are new to this. Copy and paste the following implementation of a function into your Bitvector struct and complete the code: Your function should be able to access the bitvector in constant time (don't use loops)! 
Given the following main function Your program should output If you are inexperienced with C++, use the provided code snippet of the read function and fill in the TODOs: We now want to support writing to our bitvector by implementing the member function write(i, x). This is a bit more tricky so we recommend this as a bonus question for experienced C++ users. Otherwise you may just copy and paste the solution. Complete the following implementation of a function that can access the compact bitvector: Two very useful operations that a bitvector should support are the following: We will implement the rank operation for our compact bitvector representation. In order to support rank and select queries we need two helper data structures (given machine world length ): The block and superblock values obviously need to be stored using an arithmetic data type (e.g. uint8_t, uint16_t, uint32_t, uint64_t). Given an arbitrary bitvector of length n, word length bits and a superblock length of bits, which type would you choose for superblock entries, and which type for block entries and why? If a superblock spans 1600 bits, then the last block value within a superblock can at most count 1599 1s. Thus, a uint16_t is the smallest type that is still able represent this number and should be preferred to larger types to reduce the space of the block data structure. Since we do not know how large our bitvector might be, one should choose a large data type, like uint64_t, to store to prefix sums for the superblock values. Now that we know which data types to work with, let's implement the data structures. Given a bitvector B from the previous assignments: std::vector<uint16_t> blocksand std::vector<uint64_t> superblocks. uint16_t block_sizeand uint16_t superblock_size. void construct(size_t const new_block_size = 64, size_t const new_superblock_size = 512)that overwrites the member variables block_sizeand superblock_sizeand then fills the member variables blocksand superblocks. With the following main function: your program should print the following: If you are inexperienced with C++, you can use the following code snippet and fill in the TODOs: Note that most compilers provide special bit operations on integers. One example is popcount that counts the number of 1s in an integer. This would be very helpful in our application because instead of iterating over every position in our bitvector B we could directly use popcount on every entry of B to get the value for the block (of course this only works since we chose the block size wisely). The code could then look like this (This is compiler specific (GCC)): If you have some time to spare, increase the size of B, and do some runtime tests for the construction. The construction using popcount should be considerably faster. Now that we have the helper data structures of block and superblocks, we will implement the actual support for rank queries. Implement a member function that returns the number of occurrences of bit in , for any ; in particular . If omitted, we assume . Use the same example of the bitvector and block/superblock sizes as in the previous assignment. Given the following main function: Your program should output If you are inexperienced with C++, you can use the following code snippet: Here is the rank function: Here is the full solution:
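Since the tutorial's own snippets and full solution are not reproduced in this text, the following is an independent sketch that pulls the assignments together. The member names, the left-to-right bit numbering inside each uint64_t (matching the example of the value 17 above), and the test values in main are choices made here for illustration only.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct Bitvector
{
    std::vector<uint64_t> data;         // groups of 64 bits
    std::vector<uint16_t> blocks;       // 1s before each block, within its superblock
    std::vector<uint64_t> superblocks;  // 1s before each superblock (prefix sums)
    uint16_t block_size{64};
    uint16_t superblock_size{512};

    Bitvector(std::size_t const count) : data((count + 63) / 64, 0u) {}

    bool read(std::size_t const i) const
    {
        // bit 0 is the leftmost bit of the first uint64_t
        return (data[i / 64] >> (63 - (i % 64))) & 1u;
    }

    void write(std::size_t const i, bool const value)
    {
        uint64_t const mask = uint64_t{1u} << (63 - (i % 64));
        if (value)
            data[i / 64] |= mask;
        else
            data[i / 64] &= ~mask;
    }

    void construct(std::size_t const new_block_size = 64, std::size_t const new_superblock_size = 512)
    {
        block_size = static_cast<uint16_t>(new_block_size);
        superblock_size = static_cast<uint16_t>(new_superblock_size);

        std::size_t const number_of_bits = data.size() * 64;
        blocks.assign((number_of_bits + block_size - 1) / block_size, uint16_t{0});
        superblocks.assign((number_of_bits + superblock_size - 1) / superblock_size, uint64_t{0});

        uint16_t block_count = 0;       // 1s seen since the current superblock started
        uint64_t superblock_count = 0;  // 1s seen before the current superblock

        for (std::size_t i = 0; i < number_of_bits; ++i)
        {
            if (i % block_size == 0)
            {
                if (i % superblock_size == 0)
                {
                    superblock_count += block_count;  // close the previous superblock
                    superblocks[i / superblock_size] = superblock_count;
                    block_count = 0;                  // block counts restart per superblock
                }
                blocks[i / block_size] = block_count;
            }
            block_count += read(i);
        }
    }

    // Number of 1s in the prefix B[0..i), i.e. rank_1(i).
    uint64_t rank(std::size_t const i) const
    {
        uint64_t result = superblocks[i / superblock_size] + blocks[i / block_size];
        for (std::size_t j = (i / block_size) * block_size; j < i; ++j)
            result += read(j);  // count the remaining bits inside the block
        return result;
    }
};

int main()
{
    Bitvector B(6400);
    for (std::size_t i = 0; i < 6400; i += 100)  // set every 100th bit
        B.write(i, 1);
    B.construct(64, 512);

    std::cout << B.rank(1) << '\n';    // prints 1 (bit 0 is set)
    std::cout << B.rank(100) << '\n';  // prints 1 (bit 100 itself is not counted)
    std::cout << B.rank(101) << '\n';  // prints 2
}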
https://docs.seqan.de/seqan/learning-resources/bitvectors.html
2021-05-06T06:06:42
CC-MAIN-2021-21
1620243988741.20
[]
docs.seqan.de
Understanding the Mechanism of Sanity Functions¶ This section describes the mechanism behind the sanity functions that are used for the sanity and performance checking. Generally, writing a new sanity function is as straightforward as decorating a simple Python function with the reframe.utility.sanity.sanity_function() decorator. However, it is important to understand how and when a deferrable function is evaluated, especially if your function takes as arguments the results of other deferrable functions. What Is a Deferrable Function?¶ A deferrable function is a function whose a evaluation is deferred to a later point in time. You can define any function as deferrable by wrapping it with the reframe.utility.sanity.sanity_function() decorator before its definition. The example below demonstrates a simple scenario: import reframe.utility.sanity as sn @sn.sanity_function def foo(): print('hello') If you try to call foo(), its code will not execute: >>> foo() <reframe.core.deferrable._DeferredExpression object at 0x2b70fff23550> Instead, a special object is returned that represents the function whose execution is deferred. Notice the more general deferred expression name of this object. We shall see later on why this name is used. In order to explicitly trigger the execution of foo(), you have to call evaluate on it: >>> from reframe.utility.sanity import evaluate >>> evaluate(foo()) hello If the argument passed to evaluate is not a deferred expression, it will be simply returned as is. Deferrable functions may also be combined as we do with normal functions. Let’s extend our example with foo() accepting an argument and printing it: import reframe.utility.sanity as sn @sn.sanity_function def foo(arg): print(arg) @sn.sanity_function def greetings(): return 'hello' If we now do foo(greetings()), again nothing will be evaluated: >>> foo(greetings()) <reframe.core.deferrable._DeferredExpression object at 0x2b7100e9e978> If we trigger the evaluation of foo() as before, we will get expected result: >>> evaluate(foo(greetings())) hello Notice how the evaluation mechanism goes down the function call graph and returns the expected result. An alternative way to evaluate this expression would be the following: >>> x = foo(greetings()) >>> x.evaluate() hello As you may have noticed, you can assign a deferred function to a variable and evaluate it later. You may also do evaluate(x), which is equivalent to x.evaluate(). To demonstrate more clearly how the deferred evaluation of a function works, let’s consider the following size3() deferrable function that simply checks whether an iterable passed as argument has three elements inside it: @sn.sanity_function def size3(iterable): return len(iterable) == 3 Now let’s assume the following example: >>> l = [1, 2] >>> x = size3(l) >>> evaluate(x) False >>> l += [3] >>> evaluate(x) True We first call size3() and store its result in x. As expected when we evaluate x, False is returned, since at the time of the evaluation our list has two elements. We later append an element to our list and reevaluate x and we get True, since at this point the list has three elements. Note Deferred functions and expressions may be stored and (re)evaluated at any later point in the program. An important thing to point out here is that deferrable functions capture their arguments at the point they are called. 
If you change the binding of a variable name (either explicitly or implicitly by applying an operator to an immutable object), this change will not be reflected when you evaluate the deferred function. The function instead will operate on its captured arguments. We will demonstrate this by replacing the list in the above example with a tuple: >>> l = (1, 2) >>> x = size3(l) >>> l += (3,) >>> l (1, 2, 3) >>> evaluate(x) False Why this is happening? This is because tuples are immutable so when we are doing l += (3,) to append to our tuple, Python constructs a new tuple and rebinds l to the newly created tuple that has three elements. However, when we called our deferrable function, l was pointing to a different tuple object, and that was the actual tuple argument that our deferrable function has captured. The following augmented example demonstrates this: >>> l = (1, 2) >>> x = size3(l) >>> l += (3,) >>> l (1, 2, 3) >>> evaluate(x) False >>> l = (1, 2) >>> id(l) 47764346657160 >>> x = size3(l) >>> l += (3,) >>> id(l) 47764330582232 >>> l (1, 2, 3) >>> evaluate(x) False Notice the different IDs of l before and after the += operation. This a key trait of deferrable functions and expressions that you should be aware of. Deferred expressions¶ You might be still wondering why the internal name of a deferred function refers to the more general term deferred expression. Here is why: >>> @sn.sanity_function ... def size(iterable): ... return len(iterable) ... >>> l = [1, 2] >>> x = 2*(size(l) + 3) >>> x <reframe.core.deferrable._DeferredExpression object at 0x2b1288f4e940> >>> evaluate(x) 10 As you can see, you can use the result of a deferred function inside arithmetic operations. The result will be another deferred expression that you can evaluate later. You can practically use any Python builtin operator or builtin function with a deferred expression and the result will be another deferred expression. This is quite a powerful mechanism, since with the standard syntax you can create arbitrary expressions that may be evaluated later in your program. There are some exceptions to this rule, though. The logical and, or and not operators as well as the in operator cannot be deferred automatically. These operators try to take the truthy value of their arguments by calling bool on them. As we shall see later, applying the bool function on a deferred expression causes its immediate evaluation and returns the result. If you want to defer the execution of such operators, you should use the corresponding and_, or_, not_ and contains functions in reframe.utility.sanity, which basically wrap the expression in a deferrable function. In summary deferrable functions have the following characteristics: You can make any function deferrable by wrapping it with the reframe.utility.sanity.sanity_function()decorator. When you call a deferrable function, its body is not executed but its arguments are captured and an object representing the deferred function is returned. You can execute the body of a deferrable function at any later point by calling evaluateon the deferred expression object that it has been returned by the call to the deferred function. Deferred functions can accept other deferred expressions as arguments and may also return a deferred expression. When you evaluate a deferrable function, any other deferrable function down the call tree will also be evaluated. You can include a call to a deferrable function in any Python expression and the result will be another deferred expression. 
How a Deferred Expression Is Evaluated?¶ As discussed before, you can create a new deferred expression by calling a function whose definition is decorated by the @sanity_function or @deferrable decorator or by including an already deferred expression in any sort of arithmetic operation. When you call evaluate on a deferred expression, you trigger the evaluation of the whole subexpression tree. Here is how the evaluation process evolves: A deferred expression object is merely a placeholder of the target function and its arguments at the moment you call it. Deferred expressions leverage also the Python’s data model so as to capture all the binary and unary operators supported by the language. When you call evaluate() on a deferred expression object, the stored function will be called passing it the captured arguments. If any of the arguments is a deferred expression, it will be evaluated too. If the return value of the deferred expression is also a deferred expression, it will be evaluated as well. This last property lets you call other deferrable functions from inside a deferrable function. Here is an example where we define two deferrable variations of the builtins sum and len and another deferrable function avg() that computes the average value of the elements of an iterable by calling our deferred builtin alternatives. @sn.sanity_function def dsum(iterable): return sum(iterable) @sn.sanity_function def dlen(iterable): return len(iterable) @sn.sanity_function def avg(iterable): return dsum(iterable) / dlen(iterable) If you try to evaluate avg() with a list, you will get the expected result: >>> avg([1, 2, 3, 4]) <reframe.core.deferrable._DeferredExpression object at 0x2b1288f54b70> >>> evaluate(avg([1, 2, 3, 4])) 2.5 The return value of evaluate(avg()) would normally be a deferred expression representing the division of the results of the other two deferrable functions. However, the evaluation mechanism detects that the return value is a deferred expression and it automatically triggers its evaluation, yielding the expected result. The following figure shows how the evaluation evolves for this particular example: Implicit evaluation of a deferred expression¶ Although you can trigger the evaluation of a deferred expression at any time by calling evaluate, there are some cases where the evaluation is triggered implicitly: When you try to get the truthy value of a deferred expression by calling boolon it. This happens for example when you include a deferred expression in an ifstatement or as an argument to the and, or, notand in( __contains__) operators. The following example demonstrates this behavior: >>> if avg([1, 2, 3, 4]) > 2: ... print('hello') ... hello The expression avg([1, 2, 3, 4]) > 2is a deferred expression, but its evaluation is triggered from the Python interpreter by calling the bool()method on it, in order to evaluate the ifstatement. A similar example is the following that demonstrates the behaviour of the inoperator: >>> from reframe.utility.sanity import defer >>> l = defer([1, 2, 3]) >>> l <reframe.core.deferrable._DeferredExpression object at 0x2b1288f54cf8> >>> evaluate(l) [1, 2, 3] >>> 4 in l False >>> 3 in l True The deferis simply a deferrable version of the identity function (a function that simply returns its argument). As expected, lis a deferred expression that evaluates to the [1, 2, 3]list. When we apply the inoperator, the deferred expression is immediately evaluated. Note Python expands this expression into bool(l.__contains__(3)). 
Although __contains__is also defined as a deferrable function in _DeferredExpression, its evaluation is triggered by the boolbuiltin. When you try to iterate over a deferred expression by calling the iterfunction on it. This call happens implicitly by the Python interpreter when you try to iterate over a container. Here is an example: >>> @sn.sanity_function ... def getlist(iterable): ... ret = list(iterable) ... ret += [1, 2, 3] ... return ret >>> getlist([1, 2, 3]) <reframe.core.deferrable._DeferredExpression object at 0x2b1288f54dd8> >>> for x in getlist([1, 2, 3]): ... print(x) ... 1 2 3 1 2 3 Simply calling getlist()will not execute anything and a deferred expression object will be returned. However, when you try to iterate over the result of this call, then the deferred expression will be evaluated immediately. When you try to call stron a deferred expression. This will be called by the Python interpreter every time you try to print this expression. Here is an example with the getlistdeferrable function: >>> print(getlist([1, 2, 3])) [1, 2, 3, 1, 2, 3] How to Write a Deferrable Function?¶ The answer is simple: like you would with any other normal function! We’ve done that already in all the examples we’ve shown in this documentation. A question that somehow naturally comes up here is whether you can call a deferrable function from within a deferrable function, since this doesn’t make a lot of sense: after all, your function will be deferred anyway. The answer is, yes. You can call other deferrable functions from within a deferrable function. Thanks to the implicit evaluation rules as well as the fact that the return value of a deferrable function is also evaluated if it is a deferred expression, you can write a deferrable function without caring much about whether the functions you call are themselves deferrable or not. However, you should be aware of passing mutable objects to deferrable functions. If these objects happen to change between the actual call and the implicit evaluation of the deferrable function, you might run into surprises. In any case, if you want the immediate evaluation of a deferrable function or expression, you can always do that by calling evaluate on it. The following example demonstrates two different ways writing a deferrable function that checks the average of the elements of an iterable: import reframe.utility.sanity as sn @sn.sanity_function def check_avg_with_deferrables(iterable): avg = sn.sum(iterable) / sn.len(iterable) return -1 if avg > 2 else 1 @sn.sanity_function def check_avg_without_deferrables(iterable): avg = sum(iterable) / len(iterable) return -1 if avg > 2 else 1 >>> evaluate(check_avg_with_deferrables([1, 2, 3, 4])) -1 >>> evaluate(check_avg_without_deferrables([1, 2, 3, 4])) -1 The first version uses the sum and len functions from reframe.utility.sanity, which are deferrable versions of the corresponding builtins. The second version uses directly the builtin sum and len functions. As you can see, both of them behave in exactly the same way. In the version with the deferrables, avg is a deferred expression but it is evaluated by the if statement before returning. Generally, inside a sanity function, it is a preferable to use the non-deferrable version of a function, if that exists, since you avoid the extra overhead and bookkeeping of the deferring mechanism. Deferrable Sanity Functions¶ Normally, you will not have to implement your own sanity functions, since ReFrame provides already a variety of them. 
You can find the complete list of provided sanity functions here. Similarities and Differences with Generators¶ Python allows you to create functions that will be evaluated lazily. These are called generator functions. Their key characteristic is that instead of using the return keyword to return values, they use the yield keyword. I’m not going to go into the details of the generators, since there is plenty of documentation out there, so I will focus on the similarities and differences with our deferrable functions. Similarities¶ Both generators and our deferrables return an object representing the deferred expression when you call them. Both generators and deferrables may be evaluated explicitly or implicitly when they appear in certain expressions. When you try to iterate over a generator or a deferrable, you trigger its evaluation. Differences¶ You can include deferrables in any arithmetic expression and the result will be another deferrable expression. This is not true with generator functions, which will raise a TypeErrorin such cases or they will always evaluate to Falseif you include them in boolean expressions Here is an example demonstrating this: >>> @sn.sanity_function ... def dsize(iterable): ... print(len(iterable)) ... return len(iterable) ... >>> def gsize(iterable): ... print(len(iterable)) ... yield len(iterable) ... >>> l = [1, 2] >>> dsize(l) <reframe.core.deferrable._DeferredExpression object at 0x2abc630abb38> >>> gsize(l) <generator object gsize at 0x2abc62a4bf10> >>> expr = gsize(l) == 2 >>> expr False >>> expr = gsize(l) + 2 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsupported operand type(s) for +: 'generator' and 'int' >>> expr = dsize(l) == 2 >>> expr <reframe.core.deferrable._DeferredExpression object at 0x2abc630abba8> >>> expr = dsize(l) + 2 >>> expr <reframe.core.deferrable._DeferredExpression object at 0x2abc630abc18> Notice that you cannot include generators in expressions, whereas you can generate arbitrary expressions with deferrables. Generators are iterator objects, while deferred expressions are not. As a result, you can trigger the evaluation of a generator expression using the nextbuiltin function. For a deferred expression you should use evaluateinstead. A generator object is iterable, whereas a deferrable object will be iterable if and only if the result of its evaluation is iterable. Note Technically, a deferrable object is iterable, too, since it provides the __iter__method. That’s why you can include it in iteration expressions. However, it delegates this call to the result of its evaluation. Here is an example demonstrating this difference: >>> for i in gsize(l): print(i) ... 2 2 >>> for i in dsize(l): print(i) ... 2 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/users/karakasv/Devel/reframe/reframe/core/deferrable.py", line 73, in __iter__ return iter(self.evaluate()) TypeError: 'int' object is not iterable Notice how the iteration works fine with the generator object, whereas with the deferrable function, the iteration call is delegated to the result of the evaluation, which is not an iterable, therefore yielding TypeError. Notice also, the printout of 2in the iteration over the deferrable expression, which shows that it has been evaluated.
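As a closing note on how these deferred expressions are typically consumed: in a regression test you normally assign a deferred expression to the test's sanity check and let ReFrame evaluate it after the run. The sketch below assumes the classic sanity_patterns attribute (the exact attribute may differ between ReFrame versions) and a made-up test; the source file and expected output string are placeholders.

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HelloTest(rfm.RegressionTest):
    def __init__(self):
        self.descr = 'Minimal example of a deferred sanity expression'
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sourcepath = 'hello.c'   # hypothetical source file
        # Nothing is evaluated here: assert_found() returns a deferred
        # expression, which ReFrame evaluates only after the test has run.
        self.sanity_patterns = sn.assert_found(r'Hello, World\!', self.stdout)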
https://reframe-hpc.readthedocs.io/en/stable/deferrables.html
2021-05-06T05:46:49
CC-MAIN-2021-21
1620243988741.20
[]
reframe-hpc.readthedocs.io
This guide reflects the Classic Console (V1) for Amazon SES. For information about the New Console (V2) for Amazon SES, see the new Amazon Simple Email Service Developer Guide. Translations are generated by machine translation. In the event of a conflict between the translation and the original English version, the English version will prevail. Send an email using SMTP with C# The following procedure shows how to use Microsoft Visual Studio to send an email through Amazon SES using SMTP. Before you perform the following procedure, complete the setup tasks described in Before you begin with Amazon SES and Send an email through Amazon SES using SMTP... Note: The email addresses are case-sensitive. Make sure that the addresses are exactly the same as the ones you verified. [email protected]—Replace with your "From" email address. You must verify this address before you run this program. For more information, see Verifying email addresses. Your SMTP credentials are different from your AWS credentials. For more information about credentials, see Types of Amazon SES credentials. Using Amazon SES configuration sets.
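A minimal console application corresponding to this procedure is sketched below. Every value marked as a placeholder (the sender and recipient addresses, the regional SMTP endpoint, and the SMTP credentials) must be replaced with your own; the endpoint shown is only an example of the form such an endpoint takes.

using System;
using System.Net;
using System.Net.Mail;

namespace AmazonSESSample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Placeholders — replace with your verified addresses and your own
            // SMTP endpoint and credentials for the AWS Region you use.
            const string senderAddress = "[email protected]";
            const string receiverAddress = "[email protected]";
            const string smtpHost = "email-smtp.us-west-2.amazonaws.com";
            const int smtpPort = 587;                 // STARTTLS port
            const string smtpUsername = "YOUR-SMTP-USERNAME";
            const string smtpPassword = "YOUR-SMTP-PASSWORD";

            using (var message = new MailMessage(senderAddress, receiverAddress))
            {
                message.Subject = "Amazon SES test (SMTP interface accessed using C#)";
                message.Body = "This email was sent through the Amazon SES SMTP interface.";

                using (var client = new SmtpClient(smtpHost, smtpPort))
                {
                    client.Credentials = new NetworkCredential(smtpUsername, smtpPassword);
                    client.EnableSsl = true;          // use STARTTLS on port 587
                    try
                    {
                        client.Send(message);
                        Console.WriteLine("The email was sent successfully.");
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("The email was not sent. Error: " + ex.Message);
                    }
                }
            }
        }
    }
}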
https://docs.aws.amazon.com/es_es/ses/latest/DeveloperGuide/send-using-smtp-net.html?tag=chimeneaelectrica1-20
2021-05-06T07:01:03
CC-MAIN-2021-21
1620243988741.20
[]
docs.aws.amazon.com
Gumlet does not store your original media. You must store original media in storage of your choice and give us read access to those storage systems. Storing media on your storage system ensures you always retain full control over media. You can prevent vendor lock-ins and have full rights to your original media. We support the following storage providers. You can setup your source with any of them. If you have source images stored in a folder in your server, you can use it as image source for Gumlet. For example, let's assume all your images are stored at example.com/source-images/ You can specify this as base URL while setting image source. Once setup is complete, your image example.com/source-images/yourimage.jpg will be available at example.gumlet.com/yourimage.jpg. You can now manipulate the image as example.gumlet.com/yourimage.jpg?width=300 A Web Proxy source allows your Gumlet source to serve any image with a publicly-addressable URL. If you use this source, we recommend you URL encode all the external URLs supplied. If the external URL of image is and you want to serve it from your web source named example, you should write URL as per below. You can also specify referrer restriction for web proxy source. You can add comma separated list of domain names and Gumlet will only allow requests from those domains. This prevents unauthorised users from using your Gumlet source on their website. By default, any referer is allowed until u add a referrer restriction. You can use images stored in Amazon S3 bucket as image source. Gumlet needs GetObject, ListBucket and GetBucketLocation permissions to access the images. Please create access token with these permissions and add them while creating the image store. You can optionally specify base path while creating Amazon S3 image source. If your image is stored at s3://yourbucket You can use images stored in DigitalOcean Spaces as image source. Please create a new access token as described in this article and create a Gumlet source with those credentials. You can optionally specify base path while creating DigitalOcean Spaces image source. If your image is stored at do://your_space Gumlet supports storing images in Wasabi storage. You can configure the storage in same way and we need same permissions as Amazon S3 for wasabi storage. You can use images stored in Google Cloud Storage bucket as image source. Gumlet needs read bucket and list object permissions to access the images. Please create a service account with those permissions, create a JSON key and use that key while creating image source. You can find a tutorial for the same at Create and manage service account keys. If your image is stored at gs://yourbucket/some/image/path/lenna.jpg, your resultant URL for Gumlet will become yourdomain.gumlet.com/some/image/path/lenna.jpg. If you have stored images in Cloudinary storage, you can setup Gumlet to fetch original images from Cloudinary and process them through our system.
https://docs.gumlet.com/gumlet-product/original-media-storage
2021-05-06T07:11:00
CC-MAIN-2021-21
1620243988741.20
[]
docs.gumlet.com
A validator that checks if a string matches a regular expression pattern. More... #include <seqan3/argument_parser/validators.hpp> A validator that checks if a string matches a regular expression pattern. On construction, the validator must receive a pattern for a regular expression. The pattern variable will be used for constructing an std::regex, and the validator will call std::regex_match on the command line argument. Note: A regex_match will only return true if the string matches the pattern completely (in contrast to regex_search, which also matches substrings). The class then acts as a functor that throws a seqan3::validation_error exception whenever a string does not match the pattern. Constructing from a vector. Tests whether cmp lies inside values. Tests whether every filename in list v matches the pattern.
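A small usage sketch, based only on the behaviour described above (construct with a pattern, then invoke the validator as a functor and catch seqan3::validation_error on a mismatch); the pattern and test strings are arbitrary examples.

#include <iostream>
#include <seqan3/argument_parser/validators.hpp>

int main()
{
    seqan3::regex_validator email_validator{"[a-zA-Z]+@[a-zA-Z]+\\.com"};

    try
    {
        email_validator("[email protected]");   // matches the pattern completely: no exception
        email_validator("not-an-email");    // does not match: throws
    }
    catch (seqan3::validation_error const & e)
    {
        std::cerr << "validation failed: " << e.what() << '\n';
    }
}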
https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1regex__validator.html
2021-05-06T07:46:52
CC-MAIN-2021-21
1620243988741.20
[]
docs.seqan.de
Margo¶ Margo is a C library that helps develop distributed services based on RPC and RDMA. Margo provides Argobots-aware wrappers to Mercury functions. It simplifies service development by expressing Mercury operations as conventional blocking functions so that the caller does not need to manage progress loops or callback functions. Internally, Margo suspends callers after issuing a Mercury operation, and automatically resumes them when the operation completes. This allows other concurrent user-level threads to make progress while Mercury operations are in flight without consuming operating system threads. The goal of this design is to combine the performance advantages of Mercury's native event-driven execution model with the programming simplicity of a multi-threaded execution model. This section will walk you through a series of tutorials on how to use Margo. We highly recommend reading all the tutorials before diving into the implementation of any Margo-based service, in order to understand how we envision designing such a service. This will also help you greatly in understanding other Margo-based services.
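To give a feel for the API surface this implies, a minimal client-side skeleton is sketched below; it only brings a Margo instance up and down again. The protocol string "tcp" is an example — any protocol supported by your Mercury build (e.g. "na+sm", "ofi+tcp") can be used — and RPC registration/forwarding is only indicated in a comment.

/* minimal Margo skeleton (client mode) */
#include <stdio.h>
#include <margo.h>

int main(void)
{
    /* last two arguments: no dedicated progress thread, no dedicated RPC handler threads */
    margo_instance_id mid = margo_init("tcp", MARGO_CLIENT_MODE, 0, 0);
    if (mid == MARGO_INSTANCE_NULL)
    {
        fprintf(stderr, "margo_init failed\n");
        return 1;
    }

    /* ... register RPCs with MARGO_REGISTER() and issue them with margo_forward();
       the calling user-level thread blocks while other threads keep making progress ... */

    margo_finalize(mid);
    return 0;
}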
https://mochi.readthedocs.io/en/latest/margo.html
2021-05-06T07:21:58
CC-MAIN-2021-21
1620243988741.20
[]
mochi.readthedocs.io
This article does not yet show and describe the graphical user interface of Checkmk version 2.0.0. We will update this article as soon as possible. 1. Introduction It is becoming increasingly common in cloud and container environments that hosts to be monitored can not only be generated but also expire automatically. Keeping up to date with the monitoring’s configuration in such an environment is no longer possible manually. Classic infrastructures such as for example, VMware clusters can also be very dynamic, and even if manual care is still possible it is in any case cumbersome. From version 1.6.0 the Checkmk Enterprise Editions supports your Checkmk in this process with a new tool: the Dynamic Configuration Daemon – or DCD for short. The dynamic configuration of hosts means that, based on information from monitoring AWS, Azure, Kubernetes, VMware and other sources, hosts can be added to, and removed from the monitoring in a fully-automated procedure. The DCD is very generic, and is not limited only to host creation. The DCD forms the basis for all future extensions of Checkmk which will dynamically adjust the configuration. This can also mean the management of users, for example. For this purpose the DCD works with so-called connectors. Each connector can get information from a very specific type of source, and has its own specific configuration. Special connectors can also make it much easier in the future to automatically take hosts into Checkmk from an existing CMDB. 2. Managing hosts with the DCD 2.1. The Piggyback-Connector In version 1.6.0 of Checkmk the DCD is at first equipped with only one connector: the one used for piggyback data. This is very universal, since the piggyback mechanism is used by Checkmk in all situations where the query from a host (usually by special agent) provides data of other hosts (usually virtual machines or cloud objects). Here are a couple of examples in which Checkmk uses piggyback in the monitoring: In all of these cases the monitoring automatically retrieves data from other hosts (for example, the VMs) which are not contacted directly via the network and on which also no Checkmk agent needs to run. With the DCD you can add and also remove such hosts automatically in WATO so as to always reflect the real situation in a timely manner. To do this the DCD analyzes the existing piggback data and compares it to the hosts which already exist in WATO, and then re-creates any missing hosts, or respectively, removes redundant ones. There are hosts which are automatically created by the DCD but which are still editable for you in WATO. 2.2. Setting-up dynamic configuration Is piggyback data present? The only requirement to be able to use the DCD is to have piggyback data. You will always have this data if you have correctly set up the monitoring of AWS, Azure and Co. You can easily verify that via the command line as well, because the piggyback data from Checkmk will have been created in the tmp/check_mk/piggyback directory: OMD[mysite]:~$ ls tmp/check_mk/piggyback myvm01 myvm02 myvm03 If this directory is not empty, piggyback data has been generated in this instance. General connector settings Now go to the host administration of WATO. There find the button Dynamic config. This will take you to the configuration of the DCD or its connectors: Create a new connection with New connection. The first part of the configuration is the General properties: Here you assign, as so often, a unique ID and a title for this connection. 
Also important is the selection of the Checkmk instance on which this connector should run. Because piggyback data is always processed locally, the connector must always be assigned to a specific instance. Properties of the connector The second part is the Connection properties: The connector Piggyback data is already preselected here (and is currently the only one possible). The Sync interval determines how often the connector should search for new hosts. If you keep the regular check interval of one minute, it makes no sense to do that much more often, since a piggyback data change can take place once a minute at most. In very dynamic environments you can use both check interval as well as the connector interval set to much lower values. However this also results in a higher CPU utilization on the Checkmk server. Now it is important to add at least one Piggyback creation option (Add new element). Here you can specify two important things: In which folder the hosts should be created (here for example AWS Cloud 02), and which host attributes should be set. Four important attributes are preset which are mostly applicable for piggy-hosts: No monitoring via SNMP No Checkmk agent on the host itself (data comes via piggyback) Piggyback data is always expected (and there is an error if it is missing) The hosts do not have an IP address Important: Only if you enable Delete vanished hosts will hosts be deleted when they disappear from your dynamic environment. If you do not want to automatically create all hosts, you can do this by restricting the Only add matching hosts option with a regular expression. Important: here we mean the hosts that are being created, and not the hosts you have set up to monitor AWS, for example. The latter can be achieved with the Only add hosts from matching source hosts option. This refers to the names of the hosts that generate piggyback data. Activate Changes Two further options deal with the automatic activation of changes – for the case that hosts really have been created or removed, since only then will they appear in an operational monitoring. If an Activate changes takes a long time in your installation, you can use Group ‘Activate changes’ to make sure that it does not start immediately with each new host, but rather once a few hosts have been ‘collected’. Furthermore, you can also completely stop the automatic activation of changes for specified times during the day – for example, for the times when your monitoring system is being actively looked-after. Because if the DCD activates changes, all other changes that you or a colleague have just made will also become active! After saving the connector appears in the list. It can however not yet run before you have performed an Activate Changes – only then does it start functioning. So therefore do not be irritated by the message Failed to get the status from DCD (The connection ‘piggy01’ does not exist) which appears right after saving. 3. Starting the connector 3.1. The first activation After saving the connectivity properties, and following an Activate Changes, the connection will automatically start its operation. This can go so quickly that right after activating the changes you will immediately see how hosts are being created in WATO: If you reload this page shortly afterwards, these changes will probably have already disappeared, because they were automatically activated by the DCD. The new hosts are already in the monitoring and will be regularly monitored. 4. Automatic deletion of hosts 4.1. 
When are hosts being deleted? As mentioned above, you can of course allow hosts which ‘no longer exist’ to be deleted automatically from WATO by the DCD. That sounds at first very logical. What exactly is meant by ‘no longer exists’ is however at second glance a bit more complex, as there are several situations to be considered. In the following overview we assume that you have enabled the delete option – since otherwise hosts will never be removed automatically. 4.2. Configuration Options In addition to the question of whether hosts should be removed automatically at all, in the connector properties there are three more options that affect the deletion – options which we skipped discussing earlier: The first setting – Prevent host deletion right after initialization – affects a complete reboot of the Checkmk server itself. In this situation piggyback data for all hosts will at first be missing until the hosts are queried for the first time. To avoid the senseless deletion and reappearance of hosts (which is also accompanied by repeated alarms for known problems), deletions will by default be generally waived during the first 10 minutes. This time limit can be customized here. The Keep hosts while piggyback source sends no piggyback data at all option handles the situation where a host, whose monitoring data created several hosts automatically, returns no piggyback data. This can be the case, e.g. when access to AWS and Co. has stopped working. Or also of course if you have removed the special agent from the configuration. The automatically-generated hosts will remain for the set time in the system before being removed from WATO. The Keep hosts while piggyback source sends piggyback data only for other hosts option is similar, but treats the case that even if piggyback data is being received, but not from some hosts. This is the normal case if, e.g. virtual machines or cloud services are no longer available. If you want the corresponding objects to disappear from Checkmk in a timely manner, then set a correspondingly short time span here. 5. Diagnoses 5.1. Execution History If you want to watch the DCD at work, for each entry in the list of connectors you will find the icon. This takes you to the execution history: In the example shown, you will see an error that occured when creating the configuration: The host with the name Guest_Introspection_(4) could not be created because the parentheses in the name do not produce a valid Checkmk Hostname. 5.2. The WATO Audit Log If you are in WATO on the page where you can activate changes, you will find the button named Audit Log. This will take you to a list of all changes made in WATO – regardless of whether they have already been activated or not. Look for entries from the automation user. The DCD works under this account and generates changes there – so here you can follow which hosts the DCD has created or removed, and when. 5.3. The DCD Log File The DCD’s log file can be found on the command line in the var/log/dcd.log file. Here is an example which fits the above description. Here you willn also find the error message that a specific host could not be created: 2019-09-25 14:45:22,916 [20] [cmk.dcd] --------------------------------------------------- 2019-09-25 14:45:22,916 [20] [cmk.dcd] Dynamic Configuration Daemon (1.6.0-2019.09.25) starting (Site: mysite, PID: 7450)... 
2019-09-25 14:45:22,917 [20] [cmk.dcd.ConnectionManager] Initializing 0 connections 2019-09-25 14:45:22,918 [20] [cmk.dcd.ConnectionManager] Initialized all connections 2019-09-25 14:45:22,943 [20] [cmk.dcd.CommandManager] Starting up 2019-09-25 15:10:58,271 [20] [cmk.dcd.Manager] Reloading configuration 2019-09-25 15:10:58,272 [20] [cmk.dcd.ConnectionManager] Initializing 1 connections 2019-09-25 15:10:58,272 [20] [cmk.dcd.ConnectionManager] Initializing connection 'piggy01' 2019-09-25 15:10:58,272 [20] [cmk.dcd.ConnectionManager] Initialized all connections 2019-09-25 15:10:58,272 [20] [cmk.dcd.ConnectionManager] Starting new connections 2019-09-25 15:10:58,272 [20] [cmk.dcd.piggy01] Starting up 2019-09-25 15:10:58,273 [20] [cmk.dcd.ConnectionManager] Started all connections 2019-09-25 15:10:58,768 [40] [cmk.dcd.piggy01] Creation of "Guest_Introspection_(4)" failed: Please enter a valid hostname or IPv4 address. Only letters, digits, dash, underscore and dot are allowed.
https://docs.checkmk.com/latest/en/dcd.html
2021-05-06T06:29:57
CC-MAIN-2021-21
1620243988741.20
[array(['../images/icon_dcd_connections.png', 'icon dcd connections'], dtype=object) array(['../images/dcd_connections_empty.png', 'dcd connections empty'], dtype=object) array(['../images/dcd_connection_general.png', 'dcd connection general'], dtype=object) array(['../images/dcd_connection_properties.png', 'dcd connection properties'], dtype=object) array(['../images/dcd_connection_properties_2.png', 'dcd connection properties 2'], dtype=object) array(['../images/dcd_pending_changes.png', 'dcd pending changes'], dtype=object) array(['../images/dcd_deletion_tuning.png', 'dcd deletion tuning'], dtype=object) array(['../images/dcd_execution_history.png', 'dcd execution history'], dtype=object) ]
docs.checkmk.com
On-call Schedule Management

Introduced in GitLab Premium 13.11.

Use on-call schedule management to create schedules for responders to rotate on-call responsibilities. Maintain the availability of your software services by putting your teams on-call. With an on-call schedule, your team is notified immediately when things go wrong so they can quickly respond to service outages and disruptions.

To use on-call schedules, users with Maintainer permissions must do the following:

- Create an on-call schedule.
- Add a rotation to the schedule.

If you have at least Maintainer permissions, you can create a schedule manually as described below.

Schedules

Set up an on-call schedule for your team to add rotations to. Follow these steps to create a schedule:

- Go to Operations > On-call Schedules and select Add a schedule.
- In the Add schedule form, enter the schedule’s name and description, and select a timezone.
- Click Add schedule.

You now have an empty schedule with no rotations. This renders as an empty state, prompting you to create rotations for your schedule.

Edit a schedule

Follow these steps to update a schedule:

- Go to Operations > On-call Schedules and select the Pencil icon on the top right of the schedule card, across from the schedule name.
- In the Edit schedule form, edit the information you wish to update.
- Click the Edit schedule button to save your changes.

If you change the schedule’s time zone, GitLab automatically updates the rotation’s restricted time interval (if one is set) to the corresponding times in the new time zone.

Delete a schedule

Follow these steps to delete a schedule:

- Go to Operations > On-call Schedules and select the Trash Can icon on the top right of the schedule card.
- In the Delete schedule window, click the Delete schedule button.

Rotations

Add rotations to an existing schedule to put your team members on-call. Follow these steps to create a rotation:

- Go to Operations > On-call Schedules and select Add a rotation on the top right of the current schedule.
- In the Add rotation form, enter the following:
  - Name: Your rotation’s name.
  - Participants: The people you want in the rotation.
  - Rotation length: The rotation’s duration.
  - Starts on: The date and time the rotation begins.
  - Enable end date: With the toggle set to on, you can select the date and time your rotation ends.
  - Restrict to time intervals: With the toggle set to on, you can restrict your rotation to the time period you select.

Edit a rotation

Follow these steps to edit a rotation:

- Go to Operations > On-call Schedules and select the Pencil icon to the right of the title of the rotation that you want to update.
- In the Edit rotation form, make the changes that you want.
- Select the Edit rotation button.

Delete a rotation

Follow these steps to delete a rotation:

- Go to Operations > On-call Schedules and select the Trash Can icon to the right of the title of the rotation that you want to delete.
- In the Delete rotation window, select the Delete rotation button.

View schedule rotations

You can view the on-call schedules of a single day or two weeks. To switch between these time periods, select the 1 day or 2 weeks buttons on the schedule. Two weeks is the default view. Hover over any rotation shift participants in the schedule to view their individual shift details.

Page an on-call responder

When an alert is created in a project, GitLab sends an email to the on-call responder(s) in the on-call schedule for that project.
If there is no schedule or no one on-call in that schedule at the time the alert is triggered, no email is sent.
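To make the rotation settings above more concrete: in the simplest case, the ordered participant list, the start date and the rotation length fully determine who is on call at any moment. The sketch below is only an illustration of that idea, not GitLab's implementation; it ignores end dates, restricted time intervals and time zone handling, and the names and dates are invented.

from datetime import datetime, timedelta

def on_call_participant(participants, starts_on, rotation_length, at):
    """Return who is on call at `at` for a simple, unrestricted rotation.

    participants:     ordered list of responders, as configured in the rotation
    starts_on:        datetime when the rotation begins ("Starts on")
    rotation_length:  timedelta each participant stays on call ("Rotation length")
    at:               datetime to look up
    """
    if at < starts_on:
        return None  # rotation has not started yet
    elapsed_shifts = int((at - starts_on) / rotation_length)
    return participants[elapsed_shifts % len(participants)]

participants = ["alice", "bob", "carol"]          # invented names
starts_on = datetime(2021, 5, 3, 9, 0)            # invented start date
shift = timedelta(weeks=1)                        # a one-week rotation length

print(on_call_participant(participants, starts_on, shift, datetime(2021, 5, 20, 14, 0)))
# -> "carol" (the third one-week shift after the start date)

GitLab evaluates the real schedule server side when an alert fires; the point of the sketch is only the modular arithmetic that maps elapsed shifts onto the participant list.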
https://docs.gitlab.com/13.11/ee/operations/incident_management/oncall_schedules.html
2021-05-06T06:40:50
CC-MAIN-2021-21
1620243988741.20
[]
docs.gitlab.com
digitalSTROM Binding

This binding integrates the digitalSTROM-System.

Thing Configuration
https://docs.openhab.org/v2.1/addons/bindings/digitalstrom/readme.html
2021-05-06T06:22:40
CC-MAIN-2021-21
1620243988741.20
[array(['doc/DS-Clamps.jpg', 'various_digitalSTROM_clamps'], dtype=object)]
docs.openhab.org
The following procedure can be followed to apply a partial payment to an invoice:

Alternatively, if you know beforehand that a client is planning on making payment for an invoice in two or more instalments (e.g. 50% immediately, 50% in 30 days), you can create two separate invoices, each containing 50% of the original quote amount, and mark each as paid when payment is received.

SnapBill does not currently have a complete feature for applying partial payments to invoices. We have a ticket open on our development board for the implementation of such a feature. New features are pushed into development according to the amount of user interest they receive on our development board. If you like this idea then please put your vote behind it or comment on it here:
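If you script this workaround, the only subtle point is rounding: the instalment invoices should add up to exactly the original amount. The sketch below is purely illustrative and does not touch the SnapBill API; the helper name, the amounts and the percentages are invented. It simply shows one way to split a total so that no cent is lost.

from decimal import Decimal, ROUND_HALF_UP

def split_into_instalments(total, percentages):
    """Split `total` into instalment amounts that add up exactly to `total`.

    total:        invoice total as a string or Decimal, e.g. "1999.99"
    percentages:  e.g. [50, 50] or [30, 30, 40]; must sum to 100
    """
    total = Decimal(total)
    cents = Decimal("0.01")
    amounts = []
    allocated = Decimal("0")
    for pct in percentages[:-1]:
        part = (total * Decimal(pct) / Decimal(100)).quantize(cents, ROUND_HALF_UP)
        amounts.append(part)
        allocated += part
    amounts.append(total - allocated)  # last instalment absorbs any rounding difference
    return amounts

print(split_into_instalments("1999.99", [50, 50]))
# -> [Decimal('1000.00'), Decimal('999.99')]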
https://docs.snapbill.com/applying_partial_payments?do=edit
2021-05-06T07:18:12
CC-MAIN-2021-21
1620243988741.20
[]
docs.snapbill.com
This prose specification is one component of a Work Product that also includes: Code lists for constraint validation: Context/value Association files for constraint validation: Document models of information bundles: Default validation test environment: XML examples: Annotated XSD schemas: Runtime XSD schemas: The ZIP containing the complete files of this release is found in the directory: This specification supersedes: [UBL-2.1] Universal Business Language Version 2.1. Edited by Jon Bosak, Tim McGrath and G. Ken Holman. 04 November 2013. OASIS Standard.. This document was last revised or approved by the OASIS Universal Business Language TC TC’s email list. Others should send comments to the TC’s public comment list, after subscribing to it by following the instructions at the “Send A Comment” button on the TC’s web page at. This Committee Specification Public Review Draft Technical Committee web page (). Note that any machine-readable content (aka. When referencing this specification the following citation format should be used: [UBL-2.2] Universal Business Language Version 2.2. Edited by G. Ken Holman. 01 November 2017. OASIS Committee Specification Public Review Draft. cl/gc/default micro, small and medium-size enterprises (MSMEs). Standardized training, resulting in many skilled workers. A universally available pool of system integrators. Standardized, inexpensive data input and output tools. A standard target for inexpensive off-the-shelf business software. UBL is designed to provide a universally understood and recognized syntax for legally binding business documents an implementation of UN/CEFACT Core Components Technical Specification 2.01, the UBL Library is based on a conceptual model of information components known as Business Information Entities (BIEs). These components are assembled into specific document models such as Order and Invoice. These document models are then transformed in accordance with UBL Naming and Design Rules’ [UBL-NDR] use of the OASIS Business Document Naming and Design Rules [BD-NDR]). The intended primary audiences for this specification are: those who analyse and document business or processes or systems, assessing the business model or its integration with technology; those involved in the identification of business requirements for solutions to support the exchange of the digital business documents; those involved in the design, operation and implementation of software and services for the exchange of digital business documents; or those involved in the design, integration and operation of business applications dealing with digital documents.D-NDR] Business Document Naming and Design Rules Version 1.0. Edited by Tim McGrath, Andy Schoka and G. Ken Holman. 14 July 2016. OASIS Committee Specification 01.. Latest version:. [BOV-FSV] ISO/IEC 15944-20 Information technology - Business operational view - Linking business operational view to functional service view [CPFR] Voluntary Interindustry Commerce Standards, Collaborative Planning, Forecasting, and Replenishment Version 2.0, Global Commerce Initiative Recommended Guidelines, June 2002 [CPFRoverview] CPFR: An Overview, 18 May 2004 .. [Governance] UBL Maintenance Governance Procedures Version 2.2. Edited by Ole Madsen, Tim McGrath and G. Ken Holman. 04 March 2015. OASIS Committee Note 01.. Latest version:. 
[ISO11179] ISO/IEC 11179-1:1999 Information technology — Specification and standardization of data elements — Part 1: Framework for the specification and standardization of data elements [ODFP] OASIS Standard, Open Document Format for Office Applications (OpenDocument) Version 1.2 - Part 3 Packages, December 2006. [RELAX NG] ISO/IEC 19757-2, Information technology — Document Schema Definition Language (DSDL) — Part 2: Regular-grammar-based validation — RELAX NG , Information technology — Document Schema Definition Language (DSDL) — Part 2: Regular-grammar-based validation — RELAX NG AMENDMENT 1: Compact Syntax [UBL-NDR] UBL Naming and Design Rules Version 3.0. Edited by G. Ken Holman. 20 July 2016. OASIS Committee Note 01.. Latest version:. .2 business documents. They are normative insofar as they provide semantics for the UBL document schemas, but they should not be construed as limiting the application of those schemas. UBL 2.2 extends the generalized supply chain processes of UBL 2.0 (including the commercial collaborations of international trade) to include support for collaborative planning, forecasting, and replenishment; vendor managed inventory; utility billing; tendering; and intermodal freight management. The following diagrams illustrate the business context use case covered by UBL 2.2. The document types included in UBL 2.2 are listed in Section 3, “UBL 2.2 Schemas”. It is important to note that, as with previous UBL releases, the UBL 2.2 library is designed to support the construction of a wide variety of document types beyond those provided in the 2.2 package. It is expected that implementers will develop their own customized document types and components and that more UBL document types will be added as the library evolves. For guidance in customizing UBL document types, see the UBL Guidelines for Customization [Customization]. For guidance in submitting recommended additions to and new UBL document types, see the UBL Maintenance Governance Procedures [Governance]. This section describes some of the requirements and general business rules that are assumed for collaborations and document exchanges using UBL 2.2. All information items in a UBL document are specified by the sender either as they are valued or as they are determined by some manner of a calculation model. For examples, an element may contain a fixed value, such as a name, or may contain a calculated value, such as one that is derived as the sum of other elements’ values. The way a value is established or perhaps based upon a calculation model may or may not be documented by the sender. This imposes obligations on the sender when creating the UBL. All fixed and calculated values must be manifest in the UBL instance. The receiver cannot presume to know that the sender has omitted an absent value as an assumption or as an indication of any kind that is pertinent to how the information is processed. Moreover, the sender cannot rely on the receiver deriving absent values from received values. The onus is on the sender to include all information, such as all pertinent indications and all relevant sums or calculations. The receiver need not make any assumptions nor perform any computations whatsoever when dealing with the sender’s information. 
An example receiver application is a print facility that can print any instance of a given UBL document type without having to perform any calculations nor need even know the underlying calculation model..2.11, Section 2.3.3.4-fields, 3, ignments takes place “behind the scenes” through the involvement of a freight forwarder, who becomes both a second consignee and a second consignor (Figure de-consolidate the consignment. Note that the word “consignment” in the context of transportation has a meaning different from that of “consignment” in sales and vendor-managed inventory (Section 2.3.3.5, . There are two methods of capturing Transport Event information: at the Consignment level and at the Shipment Stage level. A Consignment may pass through several shipment stages in its lifetime, for maritime shipments this would typically be pre-carriage, main carriage and on-carriage stages. Each of these stages has events such as pickups and deliveries. In these scenarios the Shipment Stage is the appropriate structure for containing the Transport Event information. But it is also possible for the information to be a snapshot of the status of a Consignment (for example where the consignee and consignor are not aware of these stages). This view of the Consignment is as one set of Transport Events. In these scenarios the Consignment is the appropriate structure for holding the Transport Event information. UBL 20022 guides for details). UBL is also designed to support basic trade financing practices (invoice financing, factoring, pre-shipment/order financing, Letter of Credit, etc.). The structure and semantics of UBL.1, the UBL 2.2 library and documents support an increased range of different business processes. See Section B.5, “Minor Revision: UBL 2.2” for a detailed summary of the changes to the library and documents. 
The UBL business processes now supported can be categorized as follows (those with document type additions in 2.2 are shown in italicized boldface): Section 2.3.2.1, “Collaborative Planning, Forecasting, and Replenishment” Section 2.3.2.1.1, “Collaborative Planning, Forecasting, and Replenishment Introduction” Section 2.3.2.1.2, “Collaboration Agreement and Joint Business Planning” Section 2.3.2.1.3, “Sales Forecast Generation and Exception Handling” Section 2.3.2.1.4, “Order Forecast Generation and Exception Handling” Section 2.3.3, “Source (procurement)” Section 2.3.3.1, “Tendering (pre-award)” Section 2.3.3.1.1, “Tendering Introduction” Section 2.3.3.1.2, “Contract Information Preparation” Section 2.3.3.1.3, “Contract Information Notification” Section 2.3.3.1.4, “Invitation to Tender” Section 2.3.3.1.5, “Expression of Interest” Section 2.3.3.1.6, “Unsubscribe from Procedure” Section 2.3.3.1.7, “Submission of Qualification Information” Section 2.3.3.1.8, “Qualification Application” Section 2.3.3.1.9, “Enquiry” Section 2.3.3.1.10, “Submission of Tenders” Section 2.3.3.1.11, “Tender Status” Section 2.3.3.1.12, “Tender Withdrawal” Section 2.3.3.1.13, “Awarding of Tenders” Section 2.3.3.1.14, “Tender Contract” Section 2.3.3.2, “Catalogue” Section 2.3.3.3, “Quotation” Section 2.3.3.4, “Ordering (post-award)” Section 2.3.3.5, “Vendor Managed Inventory” Section 2.3.5, “Deliver ” Section 2.3.5.1, “Logistics” Section 2.3.5.2, “Transport ” Section 2.3.5.3, “Freight Status Reporting” Section 2.3.5.4, “Certification of Origin of Goods” Section 2.3.5.5, “Cross Border Regulatory Reporting” Section 2.3.5.6, “Intermodal Freight Management” Section 2.3.5.6.1, “Intermodal Freight Management Introduction” Section 2.3.5.6.2, “Announcing Intermodal Transport Services” Section 2.3.5.6.3, “Establishing a Transport Execution Plan” Section 2.3.5.6.4, “Providing an Itinerary for a Transport Service” Section 2.3.5.6.5, “Reporting Transport Means Progress Status” Section 2.3.7.1, “Billing” Section 2.3.7.2, “Freight Billing” Section 2.3.7.3, “Utility Billing” Section 2.3.7.4, “Payment Notification” Section 2.3.7.5, “Report State of Accounts” Section 2.3.8, “Business Directory and Agreements” The VICS Collaborative Planning, Forecasting, and Replenishment (CPFR®) guidelines [CPFR] formalize the processes by which two trading partners agree upon a joint plan to forecast and monitor sales through replenishment and to recognize and respond to any exceptions. In the UBL. (UBL. 10, “CPFR Steps 3, 4, and 5” and Figure 11, “Create Sales Forecast”,. In many cases some time may 13, “CPFR Steps 6, 7, 8 and 9”,..3.2.1.3, . Tendering is the case where a contracting authority (the Originator) initiates a procurement project to buy goods, services, or works during a specified period, as shown in the following diagram. A similar but less formally defined process than tendering is quotation (see Section 2.3.3.3, “Quotation”)..3.1.3, . An economic operator expresses interest in a tendering process by submitting an Expression of Interest. The Contracting Authority replies with an Expression of Interest Conformation to confirm the economic operator will receive any modification of the terms and documents related with that tendering process. An economic operator requests to be unsubscribed from a tendering process by submitting an Unsubscribe From Procedure. 
The Contracting Authority replies with an Unsubscribe From Procedure Conformation to confirm the economic operator will be removed from the list of interested economic operators and will not receive any modification of the terms and documents related with that tendering process. contracting authority makes a description of the required qualification application request (In Europe: ESPD Request) to an Economic Operator (the tenderer). The Economic Operator (the tenderer) makes a description of the required application qualification response (In Europe: ESPD Response) to a Contracting Authority in order to become eligible to participate in the tendering process. A requester sends a question to a responder using an Enquiry document and the responder replies with a Response document. strict deadlines for tender presentation. An economic operator asks about the details and the status of a tendering procedure. In reply to this enquiry, the contracting authority sends information to the economic operator describing the status of a tendering process. An economic operator requests to withdraw a submitted tender to the contracting authority. Based on that document, the contracting authority will remove the tender from the tendering system.-out request is a form of Request For Quotation (see Section 2.3.3.3, “Quotation”), the exchange transaction is tightly coupled to the specific catalogue application and is considered outside the scope of UBL; thus, the only UBL document type involved in this process is Quotation. Less formally defined than a tender (see Section 2.3.3.1, “Tendering (pre-award)”), a quotation process is the case where the Originator asks for a Quotation via a Request Section 2.3.3.4.4, “Order Response”..3.3.4. UBL document types used here are Catalogue, Despatch Advice, and Receipt Advice. The sales and inventory movement information is transferred from the retailer to the producer using Product Activity.. A UBL. Document types used here are Instruction For Returns, Despatch Advice, and Receipt Advice.. Order, Despatch Advice, and Receipt Advice are used in this process. Code field in each changed Catalogue Line.) using a Stock Availability Report.”). Document types used in this process include Order, Order Change, Despatch Advice, and Receipt Advice. Section 2.3.3.5.4.4, by sending an updated Catalogue document. Item change is indicated by an optional Action Code field in each changed Catalogue Line. The process for changing the catalogue in Replenishment On Customer Demand is the same as in CRP (see Figure 51, “Changes to the Item Catalogue”). The make processes include, production activities, packaging, staging product, and releasing. It also includes managing the production network, equipment and facilities, and transportation. As these are traditionally internal organizational activities they are not included in this release. However we anticipate and welcome submissions from the industry for document types that may be utilized in these processes. Fulfilment is the collaboration in which the goods or services are transferred from the Despatch Party to the Delivery Party. Document types in these processes are Despatch Advice, Receipt Advice, Order Cancellation, Order Change, and Fulfilment Cancellation..3.3.4, “Ordering (post-award)”). The Seller may have a fulfilment (or customer) service dealing with anomalies. 
Cancellation of a Despatch Advice or Receipt Advice is accomplished using the Fulfilment Cancellation document (see Section 2.3.5.1.4, “Fulfilment Cancellation Business Rules”).. It also 55, “Fulfilment with Despatch Advice”). Similarly, a Receipt Advice may later be cancelled by the customer (see Figure 56, “Fulfilment with Receipt Advice”) if the customer discovers an error in ordering (failure to follow formal contractual obligations, incorrect product identification, etc.) or a problem with a delivered item (malfunction, missing part, etc.). In this case, the billing and payment process may be put on hold. Freight management for domestic trade is typically accomplished using Despatch Advice and Receipt Advice (see Section 2.3.5.1, “Logistics”). The additional processes shown in Figure 57, .3.5.3, .2.12, “Shipment vs. Consignment”. For a discussion of the difference between transport and transportation, see Section 2.2.13, “Transport vs. Transportation”... A Weight Statement is a transport document verifying the declared true gross mass of a packed container. Working with this knowledge avoids injury, container loss, damage to cargo, etc. Formally verifying the gross mass may be a condition for. The major applications for Cross Border Regulatory reporting are: Single Window Systems Co-ordinated Border Management Data Re-use Supply Chain Security Security Filing Trade & Transport Data Pipelines Trade Data Intelligence Work is currently in progress within the UBL Technical Committee to develop UBL documents that work with the cross border regulatory requirements. These will provide a link between the information contained in commercial business documents and the information required for reporting to customs and other government agencies for the clearance of goods, cargo and means of transport. These UBL documents will complement the WCO Data Model standards. 63, .3.5.3, “Freight Status Reporting”). Organizations may be required to handle the return of containers, packaging, or defective product. The return involves the management of business rules, return inventory, assets, transportation, and regulatory requirements. Currently there are no specific UBL digital business documents associated with these processes. However we anticipate and welcome submissions from the industry for document types that may be utilized in these processes.forma invoice (pre Despatch Note Note (e.g., the Consignor) and Transport Service Provider (e.g., a Freight Forwarder).. One of the increasing challenges with undertaking digital business is discovering and recording the specific operational and technical capabilities of trading organizations to reciprocate in digital trading agreements that are interoperable. As the market relies less and less on single service provider hubs and moves to a federated 4-corner model for document exchanges, this information becomes distributed across various parties. The Business Card allows a standardized way of presenting general trading capability information as well as company’s main communication channels and references to company presentations such as flyers and brochures. The Digital Capability allows a standardized way of presenting digital trading capability ratification in a form that can be published or exchanged with trading partners. The digital capabilities of business partners are the source for building a Digital Agreement. 
The data structures have been derived from the work of ebXML CPPA (Collaboration Protocol Profile and Agreement), OpenPEPPOL and other directory services initiatives., the following are roles that extend the Party structure: Customer Party, Supplier Party, Contracting Party, Endorser Party, and Qualifying Party. The UBL XSD schemas [XSD1][XSD2] are the only normative representations of the UBL document types and library components for the purposes of XML [XML] document validation and conformance. All of the UBL XSD schemas are contained in the xsd subdirectory of the UBL release package (see Appendix A, Release Notes (Non-Normative) for more information regarding the structure of the release package and Section 3.4, “Schema Dependencies” for information regarding dependencies among the schema modules). The xsd directory is further subdivided into an xsd/maindoc subdirectory containing the schemas for individual document types. Along with a link to the normative schema for each document type, each table provides links to the corresponding “runtime” schema, model spreadsheets and summary report in HTML (see Appendix C, The UBL 2.2 Data Model (Non-Normative)), and example instance, if any (see Appendix F, UBL 2 Bill of Lading and compare with Waybill. Description: A document used to provide information about a business party and its business capabilities. Description: A document used by a Contracting Party to define a procurement project to buy goods, services, or works during a specified period. Description: A document that describes items, prices, and price validity. See Catalogue. Description: A document used to update information (e.g., technical descriptions and properties) about Items in an existing Catalogue. Description: A document used to update information about prices in an existing Catalogue. Description: A document that describes the Certificate of Origin. support business parties agreeing on a set of digital processes, terms and conditions to ensure interoperability. Description: A document used to provide information about a business party and its digital capabilities. Description: A document used to provide information about document status. Description: A document used to request the status of another document. Description: A document sent by a requestor to a responder requesting information about a particular business process. Description: A document sent by a responder to a requester answering a particular enquiry. Description: A document used to specify the thresholds for forecast variance, product activity, and performance history beyond which exceptions should be triggered. Description: A document used to notify an exception in forecast variance, product activity, or performance history. Description: A document whereby an Economic Operator (the tenderer) makes an Expression Of Interest in a Call For Tenders to a Contracting Authority Description: A document whereby a Contracting Authority accepts receiving an Expression Of Interest from an Economic Operator (the tenderer) Description: A document issued to a forwarder, giving instructions regarding the action to be taken for the forwarding of goods described therein. See Forwarding Instructions. Description: A document stating the charges incurred for a logistics service. Description: A document used to cancel an entire Despatch Advice or Receipt Advice. 
Description: A document providing details relating to a transport service, such as transport movement, identification of equipment and goods, subcontracted service providers, etc. Description: A document to notify the deposit of a bid bond guarantee. whereby a Contracting Authority makes a description of the required qualification Application (In Europe: ESPD Request) to an Economic Operator (the tenderer) Description: A document whereby an Economic Operator (the tenderer) makes a description of the required qualification Application (In Europe: ESPD Response) to an Contracting Authority Description: A document used to quote for the provision of goods and services. Description: A document used to describe the receipt of goods and services. whereby a Contracting Authority sends information to the Economic Operator describing the final contract after a tendering procedure. Description: A document sent by a contracting party to an economic operator acknowledging receipt of a Tender. Description: A document whereby a Contracting Authority sends information to the Economic Operator describing the status of a tendering procedure. Description: A document whereby an Economic Operator (the tenderer) asking about the details and status of a tendering procedure Description: A document whereby an Economic Operator (the tenderer) makes a Tender Withdrawal to a Contracting Authority Description: A document declaring the qualifications of a tenderer. document whereby an Economic Operator (the tenderer) wants to Unsubscribe From Procedure and sends it to Contracting Authority Description: A document whereby a Contracting Authority accepts receiving an Unsubscribe From Procedure from an Economic Operator (the tenderer) and sends a confirmation Description: A supplement to an Invoice or Credit Note, containing information on the consumption of services provided by utility suppliers to private and public customers, including electricity, gas, water, and telephone services. Description: A transport document describing a shipment. It is. See Waybill and compare with.2 Data Model (Non-Normative). The name of each schema file together with a brief description of its contents is given below. CommonBasicComponents xsd/common/UBL-CommonBasicComponents-2.4, “Business Information Entities”. CCTS_CCT_SchemaModule xsd/common/CCTS_CCT_SchemaModule-2.2.xsd [CCTS] permits the definition of Qualified Datatypes as derivations from CCTS-specified Unqualified Datatypes. In UBL 2.2, all data type qualifications are expressed in the [CVA] file cva/UBL-DefaultDTQ-2.2.cva. The UBL-QualifiedDataTypes-2.2.xsd file in the UBL 2.2 release has declarations for each qualified type being only an unmodified restriction of the base unqualified data type, thus adding no constraints. The Common Basic Components type declarations point to the XSD qualified types where the BBIEs are qualified in the CCTS model, but all BBIEs are effectively unqualified. See Appendix D, Data Type Qualifications in UBL (Non-Normative) for information regarding UBL 2.2 data type derivation..2.xsd The CommonExtensionComponents schema defines the extension scaffolding used in all UBL document types, providing metadata regarding the use of an extension embedded in a UBL document instance (see Section 3.5, “Extension Methodology and Validation”). ExtensionContentDatatype xsd/common/UBL-ExtensionContentDataType-2.2.xsd The ExtensionContentDataType schema specifies the actual structural constraints of the extension element containing the foreign non-UBL content. 
By default, the version of this schema provided in the UBL 2.2 distribution imports the UBL Signature Extension module and namespace (see Section 3.3 and that extension’s use of XAdES. Without adding additional directives, the user’s constructs found under the extension point will not be validated. No changes are required to the complex type declaration for ExtensionContentType. The original declaration is considered the normative declaration but may be modified by users to accommodate restrictions they impose on the presence of extensions. To promote interoperability, imposing such restrictions on the type declaration is not recommended. UBL 2.2 schemas are supplied with a predefined standard extension that supports advanced digital signatures; see Section 3.4, “Schema Dependencies” and Section 5.4, “UBL Extension for Enveloped XML Digital Signatures” for further information regarding the UBL extension supporting digital signatures such as XAdES. CommonSignatureComponents xsd/common/UBL-CommonSignatureComponents-2.2.xsd The SignatureAggregateComponents schema defines those Aggregate Business Information Entities (ABIEs) that are used for signature constructs not defined in the common library. SignatureBasicComponents xsd/common/UBL-SignatureBasicComponents-2.2.xsd The SignatureBasicComponents schema defines those Basic Business Information Entities (BBIEs) that are used for signature constructs not defined in the common library. For a discussion of the terms Basic Business Information Entity and Aggregate Business Information Entity, see Section C.4, “Business Information Entities”. xmldsig-core-schema xsd/common/UBL-xmldsig1-schema-2.2.xsd This is a copy of the IETF/W3C Digital Signature core schema file, modified only to include a header and to import the renamed other digital signature schema fragments. xmldsig-core-schema xsd/common/UBL-xmldsig11-schema-2.2.xsd This is a copy of the IETF/W3C Digital Signature core schema file, modified only to include a header. xmldsig-core-schema xsd/common/UBL-xmldsig-core-schema-2.2.xsd This is a copy of the IETF/W3C Digital Signature core schema file, modified only to include a header and to remove the unnecessary PUBLIC and SYSTEM identifiers from the DOCTYPE. XAdES01903v132-201601 xsd/common/UBL-XAdES01903v132-201601-201903v141-201601 xsd/common/UBL-XAdES01903v141-201601-2.2.xsd This is a copy of the XAdES v1.4.1 schema file, modified only to change the importing URI for the XAdES v1.3.2 and the XML digital signature core schema files. The presence of this schema file does not oblige the use of XAdES. It is provided only as a convenience for those users who choose to include an XAdES extension inside of a digital signature. The following diagram details the dependencies among the schema modules making up a UBL 2.2 document schema. The UBL schemas define in ExtensionContentDataType the content of each extension to be a single element in any namespace. The schemas are delivered supporting the UBL standardized extension for digital signatures (namespaces with prefixes sig:, sac: and sbc:, though the prefix values are not mandatory) by importation. For more information regarding the signature extension, see Section 5.4, “UBL Extension for Enveloped XML Digital Signatures”. 
As shown at the bottom and right in this diagram, a set of XSD schemas supporting a different user-customized extension can be engaged by replacing the delivered ExtensionContentDataType schema fragment with one also importing the required custom schema apex fragment that defines the custom content (depicted using namespaces with example prefixes xxx:, xac: and xbc:).. The relationship of the UBL schemas to the UBL data model is illustrated in Figure C.1, “UBL Data Model Realization”. There exist many established XML vocabularies expressing useful semantics for information exchange. The W3C digital signature vocabulary is but one example of such a vocabulary that has its own governance, life-cycle and publication schedule. It is futile to attempt to mimic all of an established vocabulary’s constructs as new UBL constructs and keep up with changes made in their life cycle. Moreover, it is untenable to ask users to re-frame all of the content of an established vocabulary into any such new UBL constructs. Also, user communities may have the need to exchange information that is found neither in the UBL schemas nor in an established XML vocabulary. A colloquial XML vocabulary can be designed within which this information is expressed. Should the user community wish to promote the inclusion of their additional semantics into the UBL specification, the UBL Maintenance Governance Procedures [Governance] outlines how one would use the extension point and submit proposals for enhancements. The UBL extension scaffolding allows the inclusion of multiple extensions in any UBL instance, be they structured by established or colloquial XML vocabularies. Every UBL instance is allowed to contain extension content using the element <ext:UBLExtensions> in the extension namespace urn:oasis:names:specification:ubl:schema:xsd:CommonExtensionComponents-2 (there are no constraints on the namespace prefix, only the namespace URI). This element must be the first child element of the document element. It must contain one or more <ext:UBLExtension> elements. Each <ext:UBLExtension> element contains the metadata and content of a single extension. All extension metadata is optional, and the extension content is mandatory. The extension content element contains as its only child the apex element, in a namespace other than the UBL extension namespace, of an arbitrary XML structure. An excerpt of the example instance that includes a single extension without extension metadata is as follows: xml/MyTransportationStatus.xml <Transportation" xmlns="urn:oasis:names:specification:ubl:schema:xsd:TransportationStatus-2"> <!--this document needs additional information not defined by the UBL TC--> <ext:UBLExtensions> <ext:UBLExtension> <ext:ExtensionContent> <mec:Additional xmlns: <mac:QualificationLevel> <cbc:ID>L1</cbc:ID> <cbc:Description>Level 1</cbc:Description> <mbc:LevelPrerequisite>Level0</mbc:LevelPrerequisite> </mac:QualificationLevel> ... </mec:Additional> </ext:ExtensionContent> </ext:UBLExtension> </ext:UBLExtensions> <!--the remainder is stock UBL--> <cbc:UBLVersionID>2.1</cbc:UBLVersionID> <cbc:CustomizationID>urn:X-demo:TransportShipments</cbc:CustomizationID> ... </TransportationStatus> The UBL Digital Signature extension described in Section 5, “UBL Digital Signatures” is built into the UBL distribution and validates transparently. Users wishing to validate other extensions found in the instance simply revise the UBL-ExtensionContentDataType-2.2.xsd schema fragment. 
An <xsd:import> directive is added to incorporate the schema constraints of the apex of another extension to be validated in the single pass of XSD validation. Figure 78, “UBL Schema Dependencies” shows the replacement of the schema fragment with one in which user-defined extension modules with namespaces ext:, xxx:, xac:, and xbc: augment the digital signature extension modules with namespaces ext:, sig:, sac:, sbc: and ds:. Due to limitations of W3C Schema validation semantics (this is not the case in RELAX NG [RELAX NG], for example), the apex element of the extension in the instance being validated cannot be constrained solely to the apex element declared. W3C Schema in a colloquial XML vocabulary. xml/MyTransportationStatus.xml Whenever possible, one should use existing UBL common library aggregate and basic document constraints formally expressed by the schemas in Section 3, “UBL 2.2 Schemas”, UBL mandates several other rules governing conforming UBL instances that cannot be expressed using W3C Schema. These additional UBL document rules, addressing XML instance [XML]. Additional document constraints do not apply to the arbitrary content of extensions expressed in a UBL document as described in Section 3.5, “Extension Methodology and Validation”.. UBL recommends a two-phase approach for validation of rules related to specific data content (such as to check of code list values). See Appendix E, UBL 2-conforming instance documents MUST NOT contain an element devoid of content or containing null values. An important implication of this rule is that every container UBL). These constraints are consistent with the principle described in Section 2.2.2, “Manifest Values” that the recipient must receive all pertinent information manifest in the UBL document. Relying on the absence of a construct would require the recipient to know of the sender’s intention with that construct being absent. For reliable communication this cannot be assumed. Natural language text elements such as Note and Description appear throughout the UBL document model. They are of the same unstructured Text type as character data fields that are not intended for natural language prose, such as Address Line.. Attributes in UBL are used exclusively for supplemental components of the data types of basic business information entities. An empty attribute conveys no information but may be the source of confusion for users. [IND7] UBL-conforming instance documents MUST NOT contain an attribute devoid of content or containing null values.: it is uniquely linked to the signatory; it is capable of identifying the signatory; it is created using means that the signatory can maintain under his sole control; and it is linked to the data to which it relates in such a manner that any subsequent change of the data is detectable. eBusiness, eInvoicing eProcurement eBusiness, and eInvoicing eProcure conforming.3.2, “Enveloped XML Signatures in UBL Documents”. Managing XML signatures outside of a UBL document is described in Section 5.3.3, .4, “UBL Extension for Enveloped XML Digital Signatures”..:, identifiers, the identifier SHOULD exist and SHOULD be unique across all identifier values. An example is as follows: > See Section 5.4.5, “Digital Signature Structure” and Section 5.4 conforming schemas in this distribution are provided with a predefined standard extension for enveloped signatures 3.5.2.xsd schema module. 78, in UBL for the convenience of users of the XAdES specification. 
There is no obligation to use the XAdES extension in the IETF/W3C digital signature. The appropriate XSD fragments are imported into the overall schema structure from the extension content data type schema fragment. Changing UBL to support a future version of the XAdES schema fragments involves only changing the import statements in the extension content data type schema fragment..3.3, .5, “Digital Signature Examples”. The UBL-ExtensionContentDataType-2). The signature extension Business Information Entities for UBL are contained in a single spreadsheet, provided here in two different formats. An HTML rendition of the spreadsheet contents for the signature extension model also is provided:.3.2, “Enveloped XML Signatures in UBL Documents” for rules regarding common library UBL signature elements in the unextended portion of UBL documents that are being referenced by this element, together with an example of their use. signatures countersigning this signature, this element must use the Id= attribute with a value unique among other attributes of schema type ID in the instance. The following is a skeleton example of a single signature: distribution includes and engages XAdES schema fragments with versions 1.3.2 and 1.4.1 for the convenience of users who choose to use these versions of XAdES. Users of the UBL signature extension are not obliged to use any XAdES extensions.. One of two such transformation expressions SHOULD be used in the UBL signature extension; users should choose the appropriate one to meet the objectives of adding the signature.2 XSD schemas [XSD1][XSD2] are the only normative representations of the UBL 2.2 document types and library components for the purposes of XML document [XML] validation and conformance. An XML document is considered conforming to UBL 2.2 when all are true that: there are no violations of the XSD validation schema constraints when using one of the normative document schemas listed in Section 3.2, “UBL 2.2 Document Schemas”, there are no violations of the XSD constraints on extension scaffolding and metadata described in Section 3.5, “Extension Methodology and Validation”, and there are no content violations of the constraints listed in Section 4, “Additional Document Constraints”. Additional explanatory information regarding conformance as applied to UBL documents and schemas and their subsets, and the distinction between UBL conformance and UBL compatibility, is described in detail in the UBL 2 Guidelines for Customization [Customization]. That document has no bearing or impact on the clauses of this subsection. Claiming syntax conformance to the enveloped signature profile of UBL 2.2 requires the conforming the latest OASIS release of this package are available from: Online and downloadable versions of the latest ISO/IEC release of this package are available from: The UBL 2.2 specification is published as a zip archive in the release directory. Unzipping this archive creates a directory named csprd02-UBL-2.2 containing a master DocBook XML file (UBL-2.2.xml), a generated hypertext version of this file (UBL-2.2.html), a generated PDF version of this file (UBL-2.2.pdf), and a number of subdirectories. The files in these subdirectories, linked to from UBL-2.2.xml, UBL-2.2.html, and UBL-2.2.pdf, contain the various normative and informational pieces of the 2.2 release. A description of each subdirectory is given below. 
Note that while the UBL-2.2.xml file is the “original” of this specification, it may not be viewable in all currently available web browsers. art/ Diagrams and illustrations used in this specification cl/ Code list specification files; see Appendix E, UBL 2.2 Code Lists and Two-phase Validation (Non-Normative) cva/ Artefacts expressing data type qualifications; see [CVA] in Section 1.3, “Normative References” and Figure D.1, “Data Type Qualification in UBL” in Appendix D, Data Type Qualifications in UBL (Non-Normative) db/ DocBook stylesheets for viewing UBL-2.2.xml mod/ Spreadsheets and HTML renderings of the UBL data models; see Appendix C, The UBL 2.2 Data Model (Non-Normative) val/ Test harness for demonstrating UBL 2.2 two-phase validation; see Appendix E, UBL 2.2 Code Lists and Two-phase Validation (Non-Normative) xml/ Sample UBL 2.2 instances; see Appendix F, UBL 2.2 Example Document Instances (Non-Normative) xsd/ XSD schemas; see Section 3, “UBL 2.2 Schemas” xsdrt/ “Runtime” XSD schemas; see Section 3, “UBL 2.2 Schemas” UBL is a volunteer project of the international business community. Inquiries regarding UBL may be posted to the unmoderated public UBL-Dev list, archives for which are located at: Subscriptions to UBL-Dev can be made through the OASIS list manager at: OASIS provides an official community gathering place and information resource for UBL at: The Wikipedia article for UBL has numerous related links:.2 is that it is completely backward-compatible with UBL 2.0. In other words, any document that validates against a UBL 2.0 schema will validate against the UBL 2.2 release onto an existing installation, and the possible differences among existing installations are too large to allow a specific set of instructions to be provided for making the transition. The brief history of UBL document types in the next section puts the new capabilities into context and may help users of existing UBL implementations decide whether to upgrade to 2.2. New 2.2 users, on the other hand, can simply install 2.2 and rest assured that their software will interoperate with UBL documents generated by existing conforming UBL 2.0 installations. For more on the concept of conformance, see Section 6, “Conformance” and [Customization]. During deployment the presence of errors in the UBL normative components comes to the attention of the UBL Technical Committee. Some of these cannot be repaired without breaking backwards compatibility to previous versions of UBL. Accordingly, they are obliged to remain in UBL untouched to avoid ambiguity and to avoid problems with backwards compatibility. The list of known errors that are not being changed is as follows: the spelling of the BBIE named PartecipationPercent in the ABIE named ShareholderParty is incorrect the spelling of the BBIE named FirstShipmentAvailibilityDate in the ABIE named PromotionalEvent is incorrect the spelling of the BBIE named OccurenceLocation in the ABIE named Event is incorrect at this time there are no ASBIEs associating the common library ABIE with the DEN “Performance Data Line. Details” Since its first release as an OASIS Standard in 2004, UBL has experienced one major and now two minor version upgrades. This appendix provides a description of the evolution of UBL. 
minor revisions,, Fulfilment, and Traditional Billing processes described in the text (see Section 2.3.3.4, “Ordering (post-award)”, Section 2.3.5.1, “Logistics”, and Section 2.3.7.1.3, did fulfilment: Fulfilment Cancellation The Section 5, “UBL Digital Signatures” extension was added in UBL 2.1. This extension works as is also with UBL 2.0. Details of the changes from UBL 2.0 to UBL 2.1 are found at Because it preserves backward compatibility with UBL 2.1 and UBL 2.0, UBL 2.2 is technically a minor release, not a major one. However, it did add 16 new document types, bringing the total number of UBL business documents to 81. Added UBL 2.2 document types for eTendering: Enquiry, Enquiry Response, Expression Of Interest Request, Expression Of Interest Response, Qualification Application Request, Qualification Application Response, Tender Contract, Tender Status, Tender Status Request, Tender Withdrawal, Unsubscribe From Procedure Request, Unsubscribe From Procedure Response Added UBL 2.2 document types for transportation: Weight Statement Added UBL 2.2 document types for business directories and agreements: Business Card, Digital Agreement, Digital Capability.1 Common Library and those in the UBL 2.2 Common Library. As this is a very lengthy specification, this guidance to the reader reflects where UBL 2.2 has not changed substantially or substantively from UBL 2.1. Editorial changes that are related to grammar, spelling and turn of phrase are not enumerated. Section 1, “Introduction” is unchanged from UBL 2.1 with the exception of citing the intended primary audiences for this specification. Section 2, “UBL 2.2 Business Objects” has been augmented with an overall view diagram and information regarding a number of subject areas. No subject areas from UBL 2.1 have been removed from this section. Section 3, “UBL 2.2 Schemas” has been augmented with a number of new document types and references to example instances. Section 3.3.4, “Extension Content Schemas” is modified for clarity regarding the user’s latitude when adding extensions. Section 4, “Additional Document Constraints” is unchanged from UBL 2.1 with the exception of the addition of an explanatory comment regarding [IND5] and [IND6]. No constraints have been changed or added. Section 5, “UBL Digital Signatures” is unchanged from UBL 2.1 with the exception of the importation of updated XAdES schema fragments in the extension content schema fragment. Section 6, “Conformance” is unchanged from UBL 2.1 with the exception of calling out from an external document into this document the applicable information regarding schema and content conformance. Appendix A, Release Notes (Non-Normative) is unchanged with the exception of adding UBL 2.2 to the section on upgrading, and enumerating the known errors in the document models. Appendix B, Revision History (Non-Normative) summarizes the changes from UBL 2.0 to UBL 2.1 and details the changes from UBL 2.1 to UBL 2.2. Other sections are unchanged. Appendix C, The UBL 2.2 Data Model (Non-Normative) is largely unchanged from UBL 2.1 with the exception of file references, line numbers and adding hyperlinks to the model reports. Some information previously found in separate sub-clauses has been consolidated into the first sub-clause. References to UML diagrams have been removed. Appendix D, Data Type Qualifications in UBL (Non-Normative) is unchanged from UBL 2.1 with the exception of a revised diagram and referencing the UBL 2.1 release. 
Appendix E, UBL 2.2 Code Lists and Two-phase Validation (Non-Normative) is unchanged from UBL 2.1 with the exception of the list of code lists. Appendix F, UBL 2.2 Example Document Instances (Non-Normative) includes a revised list of example instances. Appendix G, Alternative Representations of the UBL 2.2 Schemas (Non-Normative) is revised to reference only a free RELAX-NG tool with which to convert the normative UBL schemas into an alternative syntax. Appendix H, The Open-edi reference model perspective of UBL (Non-Normative) is unchanged. Appendix I, Acknowledgements (Non-Normative) is changed to reflect the active membership of the technical committee during the development of UBL 2.2. The UBL 2.2 release does not include the non-normative RELAX-NG and UML diagram alternative representations of the UBL normative schemas that are found in earlier releases..2 csprd01 Common Library and those in the current UBL 2.2 csprd02 Common Library. The following table sums up the differences between the XML elements in the UBL 2.2 csprd01 document schemas and those in the current UBL 2.2 csprd02 document schemas. As described in the OASIS UBL Naming and Design Rules [UBL-NDR] application of the OASIS Business Document Naming and Design Rules [BD-NDR], the UBL data model design follows the principles of the UN/CEFACT.4, “Business Information Entities”. Historically, both the UBL common library of reusable components and the assembly models for the individual UBL documents have been published as separate spreadsheets using a format specifically developed for UBL business information modeling (this format is discussed further below). Beginning with UBL 2.2, all of these models are published as separate worksheets in a single spreadsheet. This spreadsheet is provided in both Open Document and Microsoft Excel formats in mod/ subdirectory: A machine-processable XML version of the spreadsheet contents for the entire UBL data model is provided in OASIS genericode [genericode] format: Similar files for the UBL standardized signature extension also are in mod/ subdirectory: An HTML rendition of the spreadsheet contents for the entire UBL data model and of the signature extension data model are provided: For links to the individual HTML reports for each of the document types, see the schema tables in Section 3.2, “UBL 2.2 Document Schemas”. These reports elide all of the library components that are not used by each document type and are far shorter than the “all documents” report. For notes on the use of the HTML reports, see Following the relevant sections of the OASIS Business Document Naming and Design Rules, the normative UBL schemas and non-normative OASIS Context/Value Association [CVA] file are generated from the machine-processable XML of the spreadsheet contents. From the CVA file and the genericode expressions of code list values, the data type qualifications XSLT stylesheet is generated. The following diagram shows the conceptual relationships between the UBL data models on the left and validation artefacts (schemas and XSLT) on the right. Compare Figure 78, “UBL Schema Dependencies”.. As noted above, UBL is based on a reusable library of Business Information Entities. In the current release, the Common Library contains more than two thousand of these individually defined data items. model to construct a trivial UBL Invoice instance. 
We will start with a wrapper copied from an example in the xml/ directory of the UBL distribution ( xml/UBL-Invoice-2.1-Example.xml) that has the required XML namespace declarations for the Invoice and for the common library components (“ cac” for the aggregate (ABIE and ASBIE) components and “ cbc” for the basic (BBIE) components): <?xml version="1.0" encoding="UTF-8"?> <Invoice xmlns="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2" xmlns: [...] </Invoice> Now we will fill out this shell of an instance, completing the part in the square brackets by traversing the data model. In addition to the aforementioned complete UBL data model spreadsheets and HTML rendering, when dealing with only a single document type there is an HTML rendition of that subset of the spreadsheet contents with only that content utilized by the one document type, such as for the Invoice: Line 2 of the Invoice model defines the document ABIE named Invoice. The Component Type column confirms that Invoice is an ABIE, as also indicated by the pink background in that row of the rendering. Everything after Invoice in the model Cardinality column, a column of the same name). To find this structure, we look for the Period library ABIE in the model report or in the Common Library worksheet of the UBL model spreadsheet. Period will be found at line 1510 and seen to contain a number of possible BBIE children, all of them optional; and the ASBIE InvoicePeriod in Invoice therefore has this structure, too. From this one could conclude that instantiations of the Period structure (there are more than 50 of them in UBL) need not contain any of the seven optional BBIE elements specified after line 1510, and indeed the corresponding declaration of the complex type PeriodType in the CAC schema ( xsd/common/UBL-CommonAggregateComponents-2.2.xsd) shows that an empty InvoicePeriod element will pass XML validation; but UBL explicitly prohibits such structures (see Section 4 conforming to UBL in addition to the requirement that the document validate against the Invoice schema. If StartDate and EndDate (for exsample) Associated Object Class column of the Invoice model, AccountingSupplierParty (line 36) derives from the SupplierParty ABIE and AccountingCustomerParty (line 37) derives from the CustomerParty ABIE. Checking in the Common Library, it is seen that both SupplierParty (line 2039 of the Common Library) and CustomerParty (line 562 of the Common Library) can contain an ASBIE named Party (as shown in lines 2043 and 566, respectively) and that each Party ASBIE is an instantiation of the Party ABIE (line 1402). Therefore both parties have the same structure (the BBIEs and ASBIEs following line 1402). Thus AccountingSupplierParty and AccountingCustomerParty share the information components common to parties in general and differ in the information specific to suppliers and customers. Parties commonly have a PartyName (line 1410) that derives (the Associated Object Class column) from the ABIE PartyName (line 1441), which is a wrapper for the BBIE Name (line 1442). 1324), 1333). This is because UBL does not define the primitive data types upon which the model is built; instead it uses standard data type definitions from [CCTS] and [XSD2]. In the case of PayableAmount, the CCTS data type (the Data Type column) according to the UBL Naming and Design Rules) and a few others, as listed in the following table. 
Some of these ( GraphicType, PictureType, SoundType, VideoType, and ValueType) are defined for completeness but not actually used in UBL 2.5, “Navigating the UBL Data Model” for an example of UBL attributes and a further discussion of this point. A reverse lookup of the implied occurrence of each attribute in the data models is provided in this summary report:.2.xsl, which is used in the recommended two-phase validation process to perform a check of code list values. See Appendix E, UBL 2.2 Code Lists and Two-phase Validation (Non-Normative) for a description of this process. The UBL revised approach to data type qualification contrasted to the UBL 2.0 approach is illustrated in the following diagram. In UBL 2.0, the schema library of common basic components (basic business information entities or BBIEs, (A) in the diagram) is based on a combination of the data types defined in the file of UBL 2.0 qualified data types (C) and the unqualified data types defined in the UN/CEFACT Unqualified Data Type schema module Ver. 1.1 Rev A 16 Feb 2005 (K). The UBL 2.0 data type qualifications XSLT stylesheet (D) was used in the two-pass validation process, offering limitations on values such as code lists hardwired in the UN/CEFACT UDT definition. In subsequent releases of UBL, the schema library of common basic components ((B) in the diagram) is based on a combination of the data types defined in the file of UBL qualified data types (E) and the data types defined in a file of UBL unqualified data types (M). The latter inherits the data type definitions in the UN/CEFACT CCTS CCT schema module Ver. 1.1 050114 (N). The UBL data type qualifications CVA file (F) controls the creation of the UBL XSLT stylesheet (G) used in the two-pass validation process, offering both limitations and extensions to values such as code lists. While this XSLT file, UBL-2.x-DefaultDTQ.xsl, can, when modified, apply to data type qualifications in general (such as field length restrictions and value range restrictions), the version of this file included in the UBL release contains only code list values linked to the metadata of the applicable code list. The two remaining boxes on the right in the diagram illustrate that users can add further data type qualifications if desired by preparing a custom CVA (H) and creating a custom XSLT file (J) to replace the default CVA and XSLT stylesheet provided in the UBL distribution. Users intending to prepare a custom CVA should note that cva/UBL-DefaultDTQ-2.2.cva contains relative URIs that expect the UBL 2.0 code lists from the UBL 2.0 Update Package in a sibling directory named os-UBL-2.0, and the UBL 2.1 code lists from the UBL 2.1 distribution in a sibling directory named os-UBL-2.1. This is irrelevant to users of the pre-compiled val/UBL-DefaultDTQ-2.2.xsl file contained in the UBL package, but users wishing to create their own CVA file must first install the code lists of prior releases of UBL 2.0. To properly install the update, first download and install the original UBL 2.0 release: Then download and install the UBL 2.0 update: Then download and install the UBL 2.1 release: Complete installation instructions can be found in the each package. As indicated above, the os-UBL-2.0/ and os-UBL-2.1/ directories thus created must be siblings directories to the directory created by installing the UBL 2.2 package. and beyond..2 specification are expressed as data type qualifications in a file named UBL-2 sub-trees within UBL document instances. 
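Concretely, the sibling-directory requirement described above amounts to a layout along the following lines. The name of the 2.2 directory is assumed here to match this review package; only users who want to regenerate the CVA and XSLT artefacts need this arrangement.

```
some-parent-directory/
    os-UBL-2.0/          <- UBL 2.0 release plus the UBL 2.0 update package
    os-UBL-2.1/          <- UBL 2.1 distribution
    csprd02-UBL-2.2/     <- this package (directory name assumed)
```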
Another way to say this is instances using the two-phase method, an “out-of-the-box” collection of open-source software that can be used to demonstrate default validation of UBLxes schema parser. If necessary, download and install the latest JRE from the following location before continuing: To demonstrate UBL schemas by executing commands of the form validate <ubl-schema> <ubl-document> where <ubl-document> is the path of a document to be validated and <ubl-schema> is the path of the UBLxes, is invoked from val/w3cschema.bat (or val/w3cschema.sh) to validate the specified UBL document ( .xml) against the specified UBL code list values specified in val/UBL-DefaultDTQ-2.2.xsl. Here the output line “No code list validation errors” from the validate script indicates that the Saxon run (invoked from val/xslt.bat or valxes val/UBL-DefaultDTQ-2.2.xsl provided in the UBL 2.2 distribution. This allows extensive code list management without the need to change the standard UBL 2.2 schemas. Schematron-based [SCH] techniques for generating a custom XSLT file to take the place of UBL-DefaultDTQ-2.2.xsl are explained in [CVA] and [Customization]. See also Appendix back-end business application to a simpler input processing area. Additional XSLT scripts can be added to extract logical sub-trees.2.xsl was created using the Schematron [SCH] implementation of CVA files for validation at The code lists included in the UBL 2.2 distribution use an OASIS Standard XML format for code lists called [genericode]. Each code list in the distribution is expressed as a genericode file. The code lists of UBL 2.0, UBL 2.1 and UBL 2.2 are incorporated into the default validation framework. Documentation on the UBL code lists is contained in a generated report file: The code list files in UBL 2.2 are divided into two subdirectories, cl/gc/default and cl/gc/special-purpose. The code lists in the cl/gc/default directory contain the default code values represented in UBL-DefaultDTQ-2, but there is no obligation to use them. The genericode files with corresponding “including deprecated” or “including deleted” files have been culled of deprecated or deleted values in order to be used in typical contexts. The files with entries no longer used are included for completeness. E, UBL 2.2 Code Lists and Two-phase Validation (Non-Normative) for a general discussion of UBL validation methodology. For convenience, those examples that relate specifically to a particular document type are linked from the description of that type in Section 3.2, “UBL 2.2 Document Schemas”. Example instances containing extensions Example instances related to signatures (see Section 5.5, “Digital Signature Examples”) Example instances with unconventional use of namespace bindings Example instances of different versions of certain document types UBL 2.2 digital signature extension (see Section 5.4, “UBL Extension for Enveloped XML Digital Signatures”), are intended to implement the same document instance constraints. Regarding creating RELAX-NG [RELAX NG] expressions of the UBL document models, the free Trang tool found at is suitable for converting the UBL W3C Schema expressions into such expressions..4, “UBL Customization”. 
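As a worked example of the two-phase invocation described earlier on this page, the following sketch shows the general shape of a run. The wrapper script name and the schema path are assumptions based on the directory layout of this package (the actual script and file names in the val/ and xsd/ directories may differ slightly); the sample instance is the one referenced in the data-model walkthrough.

```bash
cd val
# Phase 1: W3C Schema validation (w3cschema.bat / w3cschema.sh under the hood)
# Phase 2: code list value check via UBL-DefaultDTQ-2.2.xsl (xslt.bat / xslt.sh)
./validate.sh ../xsd/maindoc/UBL-Invoice-2.2.xsd ../xml/UBL-Invoice-2.1-Example.xml
```

On success, the script reports no schema errors and prints the "No code list validation errors" message quoted above.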
The OASIS UBL Technical Committee thanks Altova for its contribution of XML Spy licenses for use in UBL schema design; Sparx Systems for its contribution of Enterprise Architect licenses for use in developing UML content models and swim-lane diagrams; SyncroSoft for its contribution of oXygen licenses used in DocBook authoring of UBL documentation; RenderX for its contribution of XEP licenses used in generating PDF documents from DocBook originals; and Crane Softwrights for the generation of the summary reports. The following persons and companies participated as members of the OASIS UBL Technical Committee during the four years of its development (2013–2017). This temporary appendix will be removed in the final version of the committee specification. During the review process of UBL 2.2, the distribution includes the parallel production of the specification PDF for publishing as ISO/IEC 19845 using the page layout prescribed by “ISO/IEC Directives, Part 2”. This rendering is found at ISO-IEC-19845.pdf. This rendering is not included in the final distribution but its content is submitted directly to ITTF for publishing.
http://docs.oasis-open.org/ubl/csprd02-UBL-2.2/UBL-2.2.html
2021-05-06T06:51:00
CC-MAIN-2021-21
1620243988741.20
[]
docs.oasis-open.org
This section describes the local IP address. Use for Private Tunnels Operators can choose to have private WAN links connect to the private IP address of a Partner Gateway. If private WAN connectivity is enabled on a Gateway, the VCO will audit to ensure that the local IP address is unique for each Gateway within a customer. Advertise via BGP The Operator can choose to automatically advertise the private WAN IP of a Partner Gateway via BGP. The connectivity will be provided via the existing local IP address defined in the Partner Gateway configuration.
https://docs.vmware.com/en/VMware-SD-WAN/3.3/velocloud-operator-guide-33/GUID-22EBFA49-5440-4E1C-BE35-BC370FD41F3A.html
2021-05-06T06:46:23
CC-MAIN-2021-21
1620243988741.20
[array(['images/GUID-D32A13C4-FFB9-4849-9188-629ED779FA42-low.png', 'gateways-handoff-local-IP-address'], dtype=object) ]
docs.vmware.com
Add filter policy to SNS subscription
Filtering SNS messages requires setting a filter policy on the subscription's connection.
Add filter policy to SNS subscription
- Click the SNS-sourced connection to which you wish to add the filter policy.
- Set the Property, Type, Operator and Value of the filter policy rule.
- Click SAVE to finish editing the connection.
This creates a filter policy on top of the SNS subscription. The filter policy applies to the SNS message attributes (MessageAttribute). Check out the AWS docs for further reading.
Constraints
- Supports up to 5 filters.
- The String type supports only string values; the Number type supports only numeric values.
- Supports up to 150 combinations between the filters. For example, in a filter policy where the first key has a single value, the second has three values, and the third has two values, the total combination is calculated as follows: 1 * 3 * 2 = 6 (a representative policy of this shape is sketched below).
Acceptable formats
Some operators support a comma-separated list of values, as detailed below.
String type
- =, !=, In, Not In - support comma-separated string values.
- Starts With - supports a single prefix string value.
- Exists - doesn't require any value.
Number type
- =, Not In - support comma-separated number values.
- >, >=, <, <= - support a single number value.
- Between - supports two number values that represent the range of possible values.
- Exists - doesn't require any value.
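The policy behind the 1 * 3 * 2 combination count above has the following general shape, using the standard SNS filter-policy JSON format. The attribute names and values here are placeholders, not part of the original example.

```json
{
  "store": ["example_store"],
  "event_type": ["order_placed", "order_updated", "order_cancelled"],
  "region": ["eu-west-1", "us-east-1"]
}
```

One key with one value, a second with three, and a third with two gives 1 * 3 * 2 = 6 combinations, well under the 150-combination limit.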
https://docs.altostra.com/howto/editor/set-sns-filter-policy.html
2021-05-06T06:42:57
CC-MAIN-2021-21
1620243988741.20
[array(['/assets/set-sns-filter-policy/set-sns-filter-policy.gif', 'Add filter policy'], dtype=object) array(['/assets/set-sns-filter-policy/filter-policy-combinations.png', 'Combinations sample'], dtype=object) ]
docs.altostra.com
Introduction
Blender provides a variety of tools for editing meshes. These are tools used to add, duplicate, move and delete elements. They are available through the menus in the 3D Viewport header and the context menus in the 3D Viewport, as well as through individual shortcut keys.
https://docs.blender.org/manual/de/dev/modeling/meshes/editing/introduction.html
2021-05-06T07:12:20
CC-MAIN-2021-21
1620243988741.20
[]
docs.blender.org
Adding the Linux server and Citrix XenServer host object to BMC Server Automation The following sections provide instructions for adding a Citrix XenServer environment to BMC Server Automation: Before you begin Ensure that your environment meets the following requirements. System requirements To employ BMC Server Automation in an Citrix XenServer virtual environment, you must have the following minimum hardware and software: - A Red Hat Enterprise Linux server (version 5.4 or later) to act as a proxy server. - At least one installed and configured XenServer Host - A XenCenter application for managing the XenServer Hosts, installed and run on a Windows 2000/XP/Vista workstation Supported Citrix XenServer versions BMC Cloud Lifecycle Management version 3.1.x supports Citrix XenServer version 5.6 as a hypervisor platform for provisioning. Access and privileges For a Citrix XenServer environment, the CONNECTION_USER property requires the following permissions: - Root user on the master server - Citrix VM Power Admin role, if you are using Active Directory-based authentication mode Installing the Linux agent Adding the Linux server and Citrix XenServer host object to BMC Server Automation The following sections provide instructions for adding the Linux server and the Citrix XenServer agentless managed object to BMC Server Automation. To add the Linux server and Citrix XenServer host object to BMC Server Automation - From the BMC Server Automation Console, add the Red Hat Enterprise Linux proxy server as a managed server to a server group (right-click a server group and select Add Server ). - Right-click the server group in the Servers folder. - Select Virtualization > Add Citrix XenServer host to add a Citrix XenServer agentless device to BMC Server Automation. - On the Add a new agentless managed object - Properties page, add the name or IP address of the Citrix XenServer master server. - Optionally, add a brief description. - Under Properties, locate the AGENTLESS_MANAGED_OBJECT_PROXY_HOST*property. - Edit the property value by browsing to the name of the Linux system configured as the proxy host. - Click OK. - Click Finish. The agentless managed object for the Citrix XenServer is added to the folder you specified. Recommendation While it is possible to use the same proxy host for multiple agentless managed objects, the recommendation is to have separate proxy servers for each agentless managed object (representing a Citrix XenServer master server), for optimal performance. To set the connection properties for the Citrix XenServer object - From the BMC Server Automation Console, select Configuration > Property Dictionary View. - Browse the Built-in Property Classes > Connection class. - Click the Instances tab and set the connection details to the Citrix XenServer master host. When you add the agentless managed object, BMC Server Automation automatically created a Connection instance with the name Connection_ <agentlessManagedObjectName>(where <agentlessManagedObjectName> is the name of the Citrix XenServer object).This new instance contains the connection details to the Citrix XenServer master host. Note Your user role must have the appropriate RBAC permissions to update the Connection instance, otherwise the association between the instances will not be created and the server will not be enrolled. Also, if a Connection instance is already associated with the Virtualization instance then a new instance is not automatically created. 
Tip The Citrix XenServer platform can run in two modes: Pool mode or as a standalone host. - In Pool mode, multiple servers work together to provide capabilities such as workload balancing, high availability, and so on. In this configuration, one server is designated as the master server, while the other servers are designated as slaves. You must set the Connection instance to connect to the master server, rather than a slave server, so that BMC Server Automation can obtain configuration data or perform management operations. - In Standalone host mode, the standalone server is considered the master server. - Set the following properties for the newly added instance: - Save this instance. - Browse the Built-in Property Classes > Virtualization class. For each Citrix XenServer object you add, a corresponding instance is automatically created. - Ensure that the VIRTUAL_ENTITY_CONNECTIONproperty Virtualization instance points to the previously created Connection instance. - Save this instance. - Distribute the Citrix XenServer configuration object on the Citrix XenServer agentless managed object: - From the BMC Server Automation Console, navigate to the Jobs folder. - Right-click a job group and select New > Administration Task > Distribute Configuration Objects. - Provide a name for the Job and click Next. - Expand the Global Configuration Objects list, select the Citrix XenServer object, and add it to the Selected Configuration Objects section. Click Next. - On the Targets panel, select the enrolled agentless managed object for the Citrix XenServer master server. - Click Finish, and execute the job. For details, see Distributing configuration objects. You can now use the BMC Server Automation Console to Live Browse the Citrix XenServer agentless managed object and access the Citrix XenServer node. For details on adding a server and setting its properties, see Adding a server to the system. You are now ready to create the VGP in BMC Server Automation for Citrix XenServer, which you can then onboard in BMC Cloud Lifecycle Management as a XenServer compute resource for provisioning. Where to go next Creating the VGP in BMC Server Automation for Citrix XenServer
https://docs.bmc.com/docs/cloudlifecyclemanagement/31/administering/adding-virtual-environments-for-compute-resources/setting-up-the-citrix-xenserver-in-bmc-server-automation/adding-the-linux-server-and-citrix-xenserver-host-object-to-bmc-server-automation
2021-05-06T06:15:00
CC-MAIN-2021-21
1620243988741.20
[]
docs.bmc.com
We only need read access to your original image storage. We only fetch original images for processing them but we don't write or modify any image. You retain all your high-quality original images as it is under your control. We only need access equivalent to read. We must be able to fetch images from your original bucket or server. We won't need any write access. There is no charge for fetching original images from your image source.. We cache your original image for 3 months so we will still keep serving cached images. You need to clear cache through the dashboard or API to reflect the change in your original image. We cache your original image for a maximum of 3 months. If we need to revalidate the original image, it will start throwing a 404 error. If you flush the cache after deleting the original image, then we will start throwing a 404 error for that image. We support an input image that is a maximum of 8192x8192 pixels. You can find details about input image formats here: By default, we will return a 404 error when the image is not found and your users may see a broken image. You can configure a fallback image in your source settings. If that image is set, we will display that image instead of showing a broken image. We store processed images in our cache. The processed images are also stored in the CDN cache. This ensures that if an image is already generated, no extra time is spent delivering them. We support a maximum output image size of 8192 x 8192 pixels. You can find details about output image formats here: Yes, we remove all unnecessary EXIF metadata from processed images. Your original images however are untouched so all that metadata is always present there. If you want to keep the metadata from original images in output images as well, you can check this parameter:. We cache your original images for a maximum of 90 days and processed images for 100 days. Yes, we support modifying browser cache time. It's set at 100 days by default but you can change it in the cache management section of the source. We generally don't recommend modifying CDN cache time but if you must do it, please contact our support and we will be able to guide you.. You need to remove the CSS background-image property and you need to set data-bg attribute with image URL. Our library will ensure correct-sized background image will be loaded. You can find more information about it here: Gumlet library is able to handle lazy loading. We recommend that you remove any third-party library and just rely on our library for lazy-loading. Check more about enabling lazy loading here: Yes. You can serve images from as many different sources as you want. You just need to create multiple sources in the Gumlet panel and configure the Gumlet.js to load different images from different domains. A detailed example is given here:. We accept payments in USD. If you are paying from any other currency you will be charged as per the equivalent exchange rate. Absolutely! We use to store your payment information and process payments. None of your credit card details are received or stored on our servers. We will send you an invoice on the 1st of every month. If you have a credit card saved, it will be charged on the same day. Yes! We have a WordPress plugin that can be installed and you can get started in minutes. Our WordPress plugin will work just fine with your WooCommerce store. We currently don't support Shopify stores. 
We tried our best but it is not possible to integrate with Shopify without changing every image tag in the liquid theme. We do support optimizing images on Magento. Please follow this simple guide to start serving optimized images. Please follow our Gumlet.js (JavaScript plugin) integration. Getting started guide. We don't support optimizing images for the above platforms. We will keep these in our mind though and if there is huge demand, we will be publishing plugins when needed.. It's a simple marketing model where you can refer Gumlet and earn commission on sales from any user who subscribes to any paid Gumlet plan. There is no limit on your revenue. You can earn 30% of all the payments made in the 1st year and 15% for all subsequent years. You can simply login to your affiliate account received via email at any time and track all of your earnings. Affiliate commissions are paid monthly, on the 10th of the month. All affiliate commissions are paid via Paypal. We will process all payouts greater than 1$ in the immediate cycle.
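Referring back to the background-image and lazy-loading answers above, the change on the page side is simply swapping the CSS property for the data-bg attribute. The URLs and class name below are placeholders, and the Gumlet.js loader is assumed to be included on the page as described in the integration guide.

```html
<!-- Before: fixed-size background image fetched directly by the browser -->
<div class="hero" style="background-image: url('https://assets.example.com/banner.jpg');"></div>

<!-- After: Gumlet.js reads data-bg, picks a correctly sized image, and lazy-loads it -->
<div class="hero" data-bg="https://assets.example.com/banner.jpg"></div>
```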
https://docs.gumlet.com/quick-start-guides/faqs
2021-05-06T07:38:08
CC-MAIN-2021-21
1620243988741.20
[]
docs.gumlet.com
runai submit-mpi Description¶ Submit a Distributed Training (MPI) Run:AI Job for execution. Synopsis¶ runai submit-mpi [--always-pull-image] [--attach] [--backoffLimit int] [--command] [--cpu double] [--cpu-limit double] [--create-home-dir] [--environment stringArray | -e stringArray] [--git-sync string] [--gpu double | -g double] [--gpu-memory string] [--host-ipc] [--host-network] [--image string | -i string] [--interactive] [--job-name-prefix string] [--large-shm] [--local-image] [--memory string] [--memory-limit string] [--name string] [--node-type string] [--prevent-privilege-escalation] [--processes int] [--pvc [StorageClassName]:Size:ContainerMountPath:[ro]] [--run-as-user] [--stdin] [--template string] [--tty | -t] [--volume stringArray | -v stringArray] [--working-dir] [--loglevel string] [--project string | -p string] [--help | -h] -- [COMMAND] [ARGS...] [options] - Options with a value type of stringArray mean that you can add multiple values. You can either separate values with a comma or add the flag twice. Examples¶ start an unattended mpi training Job of name dist1, based on Project team-a using a quickstart-distributed image: runai submit-mpi --name dist1 --processes=2 -g 1 \ -i gcr.io/run-ai-demo/quickstart-distributed (see: distributed training Quickstart). Options¶ Aliases and Shortcuts¶ --name The name of the Job. --interactive Mark this Job as Interactive. Interactive Jobs are not terminated automatically by the system. --template string Provide the name of a template. A template can provide default and mandatory values. script.py --args documentation. --host-network Use the host's network stack inside the container. For further information see docker run referencedocumentation. Job Lifecycle¶ --backoffLimit int The number of times the Job will be retried before failing. The default is 6. This flag will only work with training workloads (when the --interactiveflag is not specified). --processes int Number of distributed training processes. The default is 1.. Scheduling¶ - an mpi Job. You can follow up on the Job by running runai list jobs or runai describe job <job-name>. See Also¶ - See Quickstart document Running Distributed Training.
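A second, hypothetical invocation combining several of the flags listed in the synopsis; the project, resource values, and volume path are placeholders rather than recommendations.

```shell
runai submit-mpi --name dist2 --processes 4 -g 1 \
    -i gcr.io/run-ai-demo/quickstart-distributed \
    --cpu 2 --memory 4G \
    -v /mnt/training-data:/data \
    -p team-a
```

As with the first example, progress can then be followed with runai list jobs or runai describe job dist2.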
https://docs.run.ai/Researcher/cli-reference/runai-submit-mpi/
2021-05-06T06:22:35
CC-MAIN-2021-21
1620243988741.20
[]
docs.run.ai
Since Qt 5.15, the JavaScript environment provided by QML also supports the nullish coalescing operator (??); a short illustration follows at the end of this page. The QML JavaScript host environment implements a number of host objects and functions, as detailed in the QML Global Object documentation. These host objects and functions are always available, regardless of whether any modules have been imported. For example, a manual runtime type check in this environment might look like this (note that the negation must wrap the whole instanceof test, because ! binds more tightly than instanceof):

```js
var v = something();
if (!(v instanceof Item)) {
    throw new TypeError("I need an Item type!");
}
...
```

QML also implements a number of restrictions on JavaScript code. In particular, this is undefined in QML in the majority of contexts. The this keyword is supported when binding properties from JavaScript. In QML binding expressions, QML signal handlers, and QML declared functions, this refers to the scope object. In all other situations, the value of this is undefined in QML. To refer to a specific object, provide an id. For example:

```qml
Item {
    width: 200; height: 100
    function mouseAreaClicked(area) {
        console.log("Clicked in area at: " + area.x + ", " + area.y);
    }
    // This will pass area to the function
    MouseArea {
        id: area
        y: 50; height: 50; width: 200
        onClicked: mouseAreaClicked(area)
    }
}
```

See also Scope and Naming Resolution. © The Qt Company Ltd. Licensed under the GNU Free Documentation License, Version 1.3.
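Returning to the nullish coalescing support noted at the top of this page, a minimal sketch follows; the property name is invented for illustration.

```qml
import QtQuick 2.15

Text {
    property var customTitle          // may be left undefined by the caller
    // '??' falls back only when the left side is null or undefined,
    // unlike '||', which would also discard "" and 0.
    text: customTitle ?? "Untitled"
}
```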
https://docs.w3cub.com/qt~5.15/qtqml-javascript-hostenvironment
2021-05-06T07:01:58
CC-MAIN-2021-21
1620243988741.20
[]
docs.w3cub.com
GCR integration (Deprecated)
Deprecation Note
This integration has been deprecated. A new integration called Google Cloud has been introduced which can be used instead. It aims to simplify and unify the existing GCR, GKE and GCL functionalities. If you have any existing GCR integrations, you can continue to use them.
The GCR integration is used to connect the Shippable DevOps Assembly Lines platform to Google Container Registry so that you can pull and push Docker images.
Creating an Integration
Since this integration has been deprecated, you cannot create new integrations of this type; you can only edit or delete your existing GCR integrations. You can use the new Google Cloud integration instead.
Usage in CI
Usage in Assembly Lines
The GCR integration is referenced by name from your Shippable configuration in both CI and Assembly Lines workflows; a sketch is given below.
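For an existing GCR integration, the reference from shippable.yml looked roughly like the sketch below. The integration, project, and image names are placeholders, and the exact keys should be checked against the Shippable YAML reference of the corresponding release; this is a non-authoritative illustration only.

```yaml
build:
  ci:
    - docker build -t gcr.io/my-gcp-project/my-app:$BRANCH.$BUILD_NUMBER .
    - docker push gcr.io/my-gcp-project/my-app:$BRANCH.$BUILD_NUMBER

integrations:
  hub:
    - integrationName: my-gcr-integration   # name of the GCR account integration
      type: gcr
```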
http://docs.shippable.com/platform/integration/deprecated/gcr/
2019-04-18T20:48:15
CC-MAIN-2019-18
1555578526807.5
[]
docs.shippable.com
Account Integrations Integrations let you connect your Shippable workflows to third-party services, or to store any sensitive information like passwords, tokens, etc, or simply to store key-value pairs needed for your workflows. To view a list of currently configured account integrations, click on Integrations in your left sidebar menu. The following documents will help you understand how integrations work, and how to work with them:
http://docs.shippable.com/platform/management/account/integrations/
2019-04-18T20:47:01
CC-MAIN-2019-18
1555578526807.5
[]
docs.shippable.com
There are two ways to model relationships between documents in RethinkDB: Let’s explore the advantages and disadvantages of each approach. We’ll use a simple blog database that stores information about authors and their posts to demonstrate them. We can model the relationship between authors and posts by using embedded arrays as follows. Consider this example document in the table authors: { "id": "7644aaf2-9928-4231-aa68-4e65e31bf219", "name": "William Adama", "tv_show": "Battlestar Galactica", "posts": [ {"title": "Decommissioning speech", "content": "The Cylon War is long over..."}, {"title": "We are at war", "content": "Moments ago, this ship received..."}, {"title": "The new Earth", "content": "The discoveries of the past few days..."} ] } The authors table contains a document for each author. Each document contains information about the relevant author and a field posts with an array of posts for that author. In this case the query to retrieve all authors with their posts is simple: # Retrieve all authors with their posts r.db("blog").table("authors").run() # Retrieve a single author with her posts r.db("blog").table("authors").get(AUTHOR_ID).run() Advantages of using embedded arrays: - Queries postsarray to no more than a few hundred documents. You can use a relational data modeling technique and create two tables to store your data. A typical document in the authors table would look like this: { "id": "7644aaf2-9928-4231-aa68-4e65e31bf219", "name": "William Adama", "tv_show": "Battlestar Galactica" } A typical document in the posts table would look like this: { "id": "064058b6-cea9-4117-b92d-c911027a725a", "author_id": "7644aaf2-9928-4231-aa68-4e65e31bf219", "title": "Decommissioning speech", "content": "The Cylon War is long over..." } Every post contains an author_id field that links each post to its author. We can retrieve all posts for a given author as follows: # If we have a secondary index on `author_id` in the table `posts` r.db("blog").table("posts"). get_all("7644aaf2-9928-4231-aa68-4e65e31bf219", index="author_id"). run() # If we didn't build a secondary index on `author_id` r.db("blog").table("posts"). filter({"author_id": "7644aaf2-9928-4231-aa68-4e65e31bf219"}). run() In a relational database, we’d use a JOIN here; in RethinkDB, we use the eq_join command. To get all posts along with the author information for William Adama: # In order for this query to work, we need to have a secondary index # on the `author_id` field of the table `posts`. r.db("blog").table("authors").get_all("7644aaf2-9928-4231-aa68-4e65e31bf219").eq_join( 'id', r.db("blog").table("posts"), index='author_id' ).zip().run() Note that the values for author_id correspond to the id field of the author, which allows us to link the documents. Advantages of using multiple tables: -. There’s a separate article, Table joins in RethinkDB, with much more information about the multiple-table approach, including how to do the ReQL equivalents of inner, outer and cross joins. If you aren’t sure which schema to use, ask us on Stack Overflow or join the #rethinkdb IRC channel on Freenode. © RethinkDB contributors Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
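The multiple-table queries above assume that the author_id secondary index already exists; creating it is a one-time step, shown here in the same style as the document's examples (connection setup omitted).

```py
# Build the secondary index used by get_all/eq_join
r.db("blog").table("posts").index_create("author_id").run()

# Optionally wait until the index has finished building before querying it
r.db("blog").table("posts").index_wait("author_id").run()
```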
https://docs.w3cub.com/rethinkdb~ruby/docs/data-modeling/
2019-04-18T20:31:41
CC-MAIN-2019-18
1555578526807.5
[]
docs.w3cub.com
API¶ Definitions¶ streamz. accumulate(upstream, func, start='--no-default--', returns_state=False, **kwargs)¶ Accumulate results with previous state This performs running or cumulative reductions, applying the function to the previous total and the new element. The function should take two arguments, the previous accumulated state and the next element and it should return a new accumulated state. Examples >>> source = Stream() >>> source.accumulate(lambda acc, x: acc + x).sink(print) >>> for i in range(5): ... source.emit(i) 1 3 6 10 streamz. buffer(upstream, n, **kwargs)¶ Allow results to pile up at this point in the stream This allows results to buffer in place at various points in the stream. This can help to smooth flow through the system when backpressure is applied. streamz. collect(upstream, cache=None, **kwargs)¶ Hold elements in a cache and emit them as a collection when flushed. Examples >>> source1 = Stream() >>> source2 = Stream() >>> collector = collect(source1) >>> collector.sink(print) >>> source2.sink(collector.flush) >>> source1.emit(1) >>> source1.emit(2) >>> source2.emit('anything') # flushes collector ... [1, 2] streamz. combine_latest(*upstreams, **kwargs)¶ Combine multiple streams together to a stream of tuples This will emit a new tuple of all of the most recent elements seen from any stream. streamz. filter(upstream, predicate, **kwargs)¶ Only pass through elements that satisfy the predicate Examples >>> source = Stream() >>> source.filter(lambda x: x % 2 == 0).sink(print) >>> for i in range(5): ... source.emit(i) 0 2 4 streamz. flatten(upstream=None, upstreams=None, stream_name=None, loop=None, asynchronous=None, ensure_io_loop=False)¶ Flatten streams of lists or iterables into a stream of elements Examples >>> source = Stream() >>> source.flatten().sink(print) >>> for x in [[1, 2, 3], [4, 5], [6, 7, 7]]: ... source.emit(x) 1 2 3 4 5 6 7 streamz. map(upstream, func, *args, **kwargs)¶ Apply a function to every element in the stream Examples >>> source = Stream() >>> source.map(lambda x: 2*x).sink(print) >>> for i in range(5): ... source.emit(i) 0 2 4 6 8 streamz. partition(upstream, n, **kwargs)¶ Partition stream into tuples of equal size Examples >>> source = Stream() >>> source.partition(3).sink(print) >>> for i in range(10): ... source.emit(i) (0, 1, 2) (3, 4, 5) (6, 7, 8) streamz. rate_limit(upstream, interval, **kwargs)¶ Limit the flow of data This stops two elements of streaming through in an interval shorter than the provided value. streamz. sink(upstream, func, *args, **kwargs)¶ Apply a function on every element Examples >>> source = Stream() >>> L = list() >>> source.sink(L.append) >>> source.sink(print) >>> source.sink(print) >>> source.emit(123) 123 123 >>> L [123] streamz. sliding_window(upstream, n, **kwargs)¶ Produce overlapping tuples of size n Examples >>> source = Stream() >>> source.sliding_window(3).sink(print) >>> for i in range(8): ... source.emit(i) (0, 1, 2) (1, 2, 3) (2, 3, 4) (3, 4, 5) (4, 5, 6) (5, 6, 7) streamz. Stream(upstream=None, upstreams=None, stream_name=None, loop=None, asynchronous=None, ensure_io_loop=False)¶ A Stream is an infinite sequence of data Streams subscribe to each other passing and transforming data between them. A Stream object listens for updates from upstream, reacts to these updates, and then emits more data to flow downstream to all Stream objects that subscribe to it. 
Downstream Stream objects may connect at any point of a Stream graph to get a full view of the data coming off of that point to do with as they will. Examples >>> def inc(x): ... return x + 1 >>> source = Stream() # Create a stream object >>> s = source.map(inc).map(str) # Subscribe to make new streams >>> s.sink(print) # take an action whenever an element reaches the end >>> L = list() >>> s.sink(L.append) # or take multiple actions (streams can branch) >>> for i in range(5): ... source.emit(i) # push data in at the source '1' '2' '3' '4' '5' >>> L # and the actions happen at the sinks ['1', '2', '3', '4', '5'] streamz. timed_window(upstream, interval, **kwargs)¶ Emit a tuple of collected results every interval Every intervalseconds this emits a tuple of all of the results seen so far. This can help to batch data coming off of a high-volume stream. streamz. union(*upstreams, **kwargs)¶ Combine multiple streams into one Every element from any of the upstreams streams will immediately flow into the output stream. They will not be combined with elements from other streams. See also Stream.zip, Stream.combine_latest streamz. unique(upstream, history=None, key=<function identity>, **kwargs)¶ Avoid sending through repeated elements This deduplicates a stream so that only new elements pass through. You can control how much of a history is stored with the history=parameter. For example setting history=1avoids sending through elements when one is repeated right after the other. Examples >>> source = Stream() >>> source.unique(history=1).sink(print) >>> for x in [1, 1, 2, 2, 2, 1, 3]: ... source.emit(x) 1 2 1 3 streamz. pluck(upstream, pick, **kwargs)¶ Select elements from elements in the stream. Examples >>> source = Stream() >>> source.pluck([0, 3]).sink(print) >>> for x in [[1, 2, 3, 4], [4, 5, 6, 7], [8, 9, 10, 11]]: ... source.emit(x) (1, 4) (4, 7) (8, 11) >>> source = Stream() >>> source.pluck('name').sink(print) >>> for x in [{'name': 'Alice', 'x': 123}, {'name': 'Bob', 'x': 456}]: ... source.emit(x) 'Alice' 'Bob' streamz. zip(*upstreams, **kwargs)¶ Combine streams together into a stream of tuples We emit a new tuple once all streams have produce a new tuple. See also combine_latest, zip_latest streamz. zip_latest(lossless, *upstreams, **kwargs)¶ Combine multiple streams together to a stream of tuples The stream which this is called from is lossless. All elements from the lossless stream are emitted reguardless of when they came in. This will emit a new tuple consisting of an element from the lossless stream paired with the latest elements from the other streams. Elements are only emitted when an element on the lossless stream are received, similar to combine_latestwith the emit_onflag. See also Stream.combine_latest, Stream.zip streamz. filenames(path, poll_interval=0.1, start=False, **kwargs)¶ Stream over filenames in a directory Examples >>> source = Stream.filenames('path/to/dir') # doctest: +SKIP >>> source = Stream.filenames('path/to/*.csv', poll_interval=0.500) # doctest: +SKIP streamz. from_kafka(topics, consumer_params, poll_interval=0.1, start=False, **kwargs)¶ Accepts messages from Kafka Uses the confluent-kafka library, streamz. from_textfile(f, poll_interval=0.1, delimiter='\n', start=False, from_end=False, **kwargs)¶ Stream data from a text file streamz.dask. DaskStream(*args, **kwargs)¶ A Parallel stream using Dask This object is fully compliant with the streamz.core.Streamobject but uses a Dask client for execution. 
Operations like map and accumulate submit functions to run on the Dask instance using dask.distributed.Client.submit and pass around Dask futures. Time-based operations like timed_window, buffer, and so on operate as normal. Typically one transfers between normal Stream and DaskStream objects using the Stream.scatter() and DaskStream.gather() methods.
See also: dask.distributed.Client
Examples
>>> from dask.distributed import Client
>>> client = Client()
>>> from streamz import Stream
>>> source = Stream()
>>> source.scatter().map(func).accumulate(binop).gather().sink(...)
streamz.dask.gather(upstream=None, upstreams=None, stream_name=None, loop=None, asynchronous=None, ensure_io_loop=False)
Wait on and gather results from DaskStream to local Stream
This waits on every result in the stream and then gathers that result back to the local stream. Warning: this can restrict parallelism. It is common to combine a gather() node with a buffer() to allow unfinished futures to pile up.
See also: buffer, scatter
Examples
>>> local_stream = dask_stream.buffer(20).gather()
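An end-to-end sketch of the scatter/buffer/gather pattern described above, using a local Dask client; the squaring function is just a stand-in for real remote work.

```python
from dask.distributed import Client
from streamz import Stream

client = Client()                    # start a local Dask cluster

source = Stream()
(source.scatter()                    # hand each element to Dask workers
       .map(lambda x: x ** 2)        # runs remotely; futures flow downstream
       .buffer(8)                    # allow up to 8 unfinished futures in flight
       .gather()                     # bring results back to the local stream
       .sink(print))

for i in range(5):
    source.emit(i)                   # prints the squares 0, 1, 4, 9, 16
```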
https://streamz.readthedocs.io/en/latest/api.html
2019-04-18T20:34:29
CC-MAIN-2019-18
1555578526807.5
[]
streamz.readthedocs.io