content: string (length 0 – 557k)
url: string (length 16 – 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 – 15)
segment: string (length 13 – 17)
image_urls: string (length 2 – 55.5k)
netloc: string (length 7 – 77)
Welcome to PulsePoint's home for technical documentation. To find the information you need, please use the sidebar on the left-hand side of your screen, or click the links in the Support section below. You can reach our general customer-service team at [email protected].
Publishers
For assistance with making your inventory available to PulsePoint bidders, please contact [email protected].
Buyers
For assistance with API integration and bidding on impressions, please contact [email protected].
New York
360 Madison Avenue, 14th Floor, New York, NY 10017
(212) 706-4800
San Francisco
115 Sansome Street, Suite 1002, San Francisco, CA 94104
(415) 937-8208
London
The Euston Office, One Euston Square, 40 Melton St, London, NW1 2FD, UK
+44 (0)203 574 4607
https://docs.pulsepoint.com/pages/viewpage.action?pageId=5308803
2019-09-15T16:15:54
CC-MAIN-2019-39
1568514571651.9
[]
docs.pulsepoint.com
tail
Description
Returns the last N number of results. The events are returned in reverse order, starting at the end of the result set. If no integer is specified, the last 10 events are returned.
Syntax
tail [<N>]
Required arguments
None.
Optional arguments
<N>
Syntax: <int>
Description: The number of results to return.
Default: 10
Examples
Example 1: Return the last 20 results in reverse order.
... | tail 20
See also
Answers
Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about using the tail command.
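The semantics above (last N results, reversed, 10 by default) are easy to mirror outside of Splunk when you want to sanity-check a result set. The following Python sketch is purely illustrative and is not Splunk code:

    def tail(results, n=10):
        """Return the last n results in reverse order, mirroring the
        documented behavior of the tail command (default n = 10)."""
        if n <= 0:
            return []
        return list(reversed(results[-n:]))

    # Example: the last 3 of 5 events, newest first.
    events = ["e1", "e2", "e3", "e4", "e5"]
    assert tail(events, 3) == ["e5", "e4", "e3"]
    assert tail(events) == ["e5", "e4", "e3", "e2", "e1"]  # fewer than 10 events exist, so all are returned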
https://docs.splunk.com/Documentation/Splunk/6.3.1/SearchReference/Tail
2019-09-15T16:44:33
CC-MAIN-2019-39
1568514571651.9
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Novelty - PopUp Maker Setup This theme uses the Popup Maker plugin, which can be obtained here or by searching in Dashboard > Plugins > Add New. After you’ve installed the plugin, navigate to POPUP MAKER > ADD NEW. Once there, you’ll hit the edit button on the Sample Popup and configure it as follows. A few essential notes: - Be sure that the condition is set for the HOMEPAGE. - Make sure that you're using the DEFAULT theme. - Ensure that the box is checked for the DISABLE OVERLAY feature. - Only add a title to the TOP entry line when you create a new popup. Adding a title in the "POPUP TITLE" area will force the title to appear on the actual popup. Here is the form code you’ll copy and paste to insert your form. This will work with your Mailchimp action code. To get that, follow this tutorial.
https://docs.restored316.com/article/605-novelty-popup-maker
2019-09-15T16:18:36
CC-MAIN-2019-39
1568514571651.9
[array(['http://demos.restored316designs.com/style/files/2016/11/Edit_Popup_‹_Get_in_Style_—_WordPress_-_2016-11-03_17.34.38.png', None], dtype=object) ]
docs.restored316.com
Push notification for OTP. When users receive the notification, they simply tap Allow on the notification to log in to Citrix Gateway. When the gateway receives acknowledgment from the user, it identifies the source of the request and sends a response to that browser connection. If the notification response is not received within the timeout period (30 seconds), users are redirected to the Citrix Gateway login page. The users can then enter the OTP manually or click Resend Notification to receive the notification again on the registered device. Admins can make push notification authentication the default authentication by using the login schemas created for push notification.
Important: The push notification feature is available with a Citrix ADC Premium edition license.
Advantages of push notifications
- Push notifications provide a more secure multifactor authentication mechanism. Authentication to Citrix Gateway is not successful until the user approves the login attempt.
- Push notification is easy to administer and use. Users have to download and install the Citrix SSO mobile app, which does not require any administrator assistance.
- Users do not have to copy or remember the code. They simply tap on the device to get authenticated.
- Users can register multiple devices.
How push notifications work
The push notification workflow can be classified into two categories:
- Device registration
- End user login
Prerequisites for using push notification
Complete the Citrix Cloud onboarding process.
1. Create a Citrix Cloud company account or join an existing one. For detailed processes and instructions on how to proceed, see Signing Up for Citrix Cloud.
2. Log in to Citrix Cloud and select the customer.
3. From Menu, select Identity and Access Management and then navigate to the API Access tab to create a client for the customer.
4. Copy the ID, secret, and customer ID. The ID and secret are required to configure the push service in Citrix ADC as “ClientID” and “ClientSecret” respectively.
Important:
- The same API credentials can be used on multiple data centers.
- On-premises Citrix ADC appliances must be able to resolve the server addresses mfa.cloud.com and trust.citrixworkspacesapi.net, and those servers must be accessible from the appliance over port 443; ensure that no firewalls or IP address blocks are in place for these servers on that port. (A small connectivity-check sketch appears at the end of this article.)
Download the Citrix SSO mobile app from the App Store for iOS devices and from the Play Store for Android devices. Push notification is supported on iOS from build 1.1.13 and on Android from build 2.3.5.
Ensure the following for Active Directory: the minimum attribute length requirement is met, and the client machine is synchronized to a common Network Time Server.
Push notification configuration
The following are the high-level steps that must be completed to use the push notification functionality.
The Citrix Gateway administrator must configure the interface to manage and validate users.
1. Configure a push service.
2. Configure Citrix Gateway for OTP management and end user login.
Users must register their devices with the gateway for logging in to Citrix Gateway.
1. Register your device with Citrix Gateway.
2. Log in to Citrix Gateway.
Create a push service
1. Navigate to Security > AAA-Application Traffic > Policies > Advanced Policies > Actions > Push Service and click Add.
2. In Name, enter the name of the push service.
3. In Client ID, enter the unique identity of the relying party for communicating with the Citrix Push server in the cloud.
4. In Client Secret, enter the unique secret of the relying party for communicating with the Citrix Push server in the cloud.
5. In Customer ID, enter the customer ID or name of the account in the cloud that is used to create the Client ID and Client Secret pair.
Configure Citrix Gateway for OTP management and end user login
Complete the following steps for OTP management and end user login.
- Create login schema for OTP management
- Configure authentication, authorization, and auditing virtual server
- Configure VPN or load balancing virtual servers
- Configure policy label
- Create login schema for end user login
For details on configuration, see Native OTP support.
Important: For push notification, admins must explicitly configure the following:
- Create a push service.
- While creating the login schema for OTP management, select the SingleAuthManageOTP.xml login schema or equivalent as per the need.
- While creating the login schema for end user login, select the DualAuthOrPush.xml login schema or equivalent as per the need.
Register your device with Citrix Gateway
Users must register their devices with Citrix Gateway to use the push notification functionality.
1. In your web browser, browse to your Citrix Gateway FQDN with /manageotp suffixed to the FQDN. This loads the authentication page.
2. Log in using your LDAP credentials or appropriate two-factor authentication mechanisms, as required.
3. Click Add Device.
4. Enter a name for your device, then click Go. A QR code is displayed on the Citrix Gateway browser page.
5. Scan this QR code using the Citrix SSO app from the device to be registered. Citrix SSO validates the QR code and then registers with the gateway for push notifications. If there are no errors in the registration process, the token is successfully added to the password tokens page.
6. If there are no additional devices to add or manage, log out using the list at the top-right corner of the page.
Test one-time password authentication
1. To test the OTP, click your device from the list and then click Test.
2. Enter the OTP that you have received on your device and click Go. The OTP verification successful message appears.
3. Log out using the list at the top-right corner of the page.
Note: You can use the OTP management portal anytime to test authentication, remove registered devices, or register more devices.
Log in to Citrix Gateway
After registering their devices with Citrix Gateway, users can use the push notification functionality for authentication.
1. Navigate to your Citrix Gateway authentication page. Depending on the login schema configuration, you are prompted to enter only your LDAP credentials.
2. Enter your LDAP user name and password, then select Submit. A notification is sent to your registered device. Note: If you want to enter the OTP manually, you must select Click to input OTP manually and enter the OTP in the TOTP field.
3. Open the Citrix SSO app on your registered device and tap Allow.
Note:
- The authentication server waits for the push server notification response until the configured timeout period expires. After the timeout, Citrix Gateway displays the login page. The users can then enter the OTP manually or click Resend Notification to receive the notification again on the registered device. Based on your selected option, the gateway validates the OTP that you have entered or resends the notification to your registered device.
- No notification is sent to your registered device regarding login failure.
The following images capture a sample device registration and push notification workflow.
Failure conditions
- Device registration might fail in the following cases:
- The server certificate might not be trusted by the end-user device.
- The Citrix Gateway used to register for OTP is not reachable by the client.
- Notifications might fail in the following cases:
- The user device is not connected to the internet.
- Notifications on the user device are blocked.
- The user does not approve the notification on the device.
In these cases, the authentication server waits until the configured timeout period expires. After the timeout, Citrix Gateway displays a login page with the options to manually enter the OTP or to resend the notification to your registered device. Based on the selected option, further validation occurs.
Citrix SSO app behavior on iOS – points to note
Notification shortcuts
The Citrix SSO iOS app includes support for actionable notifications to enhance the user experience. Once a notification is received on an iOS device, and if the device is locked or the Citrix SSO app is not in the foreground, users can use the shortcuts built into the notification to either approve or deny the login request. To access notification shortcuts, users need to either force touch (3D Touch) or long press the notification, depending on the device’s hardware. Selecting the Allow shortcut action sends a login request to Citrix ADC. Depending on how the authentication policy is configured on the authentication, authorization, and auditing virtual server:
- The login request might be sent in the background without any need to launch the app into the foreground or unlock the device.
- The app might prompt for Touch ID/Face ID/passcode as an extra factor, in which case the app is launched into the foreground.
Deleting password tokens from Citrix SSO
To delete a password token registered for push in the Citrix SSO app, users must perform the following steps:
1. Unregister (remove) the iOS/Android device on the gateway. A QR code for removing the registration from the device appears.
2. Open the Citrix SSO app and tap the info button of the password token to be deleted.
3. Tap Delete Token and scan the QR code.
Note:
- If the QR code is valid, the token is successfully removed from the Citrix SSO app.
- Users can tap Force Delete to delete a password token without having to scan the QR code if the device is already removed from the gateway. Force deleting might result in the device continuing to receive notifications if the device has not been removed from Citrix Gateway.
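The prerequisites earlier in this article require that the appliance can resolve and reach mfa.cloud.com and trust.citrixworkspacesapi.net over port 443. The following Python sketch is one illustrative way to verify that reachability from a machine on the same network; it is not a Citrix tool, and the 10-second timeout is an arbitrary assumption.

    import socket
    import ssl

    # Cloud endpoints named in the prerequisites section.
    HOSTS = ["mfa.cloud.com", "trust.citrixworkspacesapi.net"]
    PORT = 443

    for host in HOSTS:
        try:
            # Resolve the name and complete a TLS handshake on port 443.
            with socket.create_connection((host, PORT), timeout=10) as raw:
                ctx = ssl.create_default_context()
                with ctx.wrap_socket(raw, server_hostname=host) as tls:
                    print(f"{host}:{PORT} reachable, negotiated {tls.version()}")
        except OSError as exc:
            print(f"{host}:{PORT} NOT reachable: {exc}")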
https://docs.citrix.com/en-us/citrix-adc/13/aaa-tm/push-notification-otp.html
2019-09-15T17:06:21
CC-MAIN-2019-39
1568514571651.9
[array(['/en-us/citrix-adc/media/push-notification-workflow-diagram.png', 'Workflow'], dtype=object) array(['/en-us/citrix-adc/media/push-notification-device-registration.png', 'Login'], dtype=object) array(['/en-us/citrix-adc/media/push-notification-qr-code-scan.png', 'scan'], dtype=object) array(['/en-us/citrix-adc/media/push_notification_add_token_scan.png', 'token'], dtype=object) array(['/en-us/citrix-adc/media/push-notification-login-page-user.png', 'Token'], dtype=object) array(['/en-us/citrix-adc/media/push-notification-allow_push.jpg', 'Allow'], dtype=object) array(['/en-us/citrix-adc/media/push-notification-logon-fallback.png', 'Fallback'], dtype=object) array(['/en-us/citrix-adc/media/push-notification-workflow.gif', 'Workflow'], dtype=object) array(['/en-us/citrix-adc/media/push_shortcut-notifications.png', 'Shortcut'], dtype=object) array(['/en-us/citrix-adc/media/push-notification-delete-token.png', 'Delete-token'], dtype=object) ]
docs.citrix.com
Orchard 1.9.1
Published: 06/30/2015
This release contains bug and security fixes. It is highly recommended to update to this version. We are also releasing patches for versions of Orchard that contain the disclosed security vulnerability. The patch package contains patch files for the following Orchard versions: 1.7.3, 1.8.2.
How to Install Orchard
To install Orchard using Web PI, follow the installation instructions in the Orchard documentation.
Orchard 1.9.1 fixes bugs and introduces the following changes and features:
- Tokens are evaluated by layout elements
- Fixed performance regression for widgets
- Fixed localization string encoding
- Improved native clipboard support in layout editor
- New heading, fieldset and break elements for the layout editor
- Upgraded to ASP.NET MVC 5.2.3 and Razor 3.2.3
- Improved performance of output cache key generation
- Added the ability to set the SSL setting for cookies in the SSL module
- Added support for Visual Studio 2015
- New sticky toolbox in the layout editor
- Improved logging for recipe execution
- Jobs queue is now used to trigger indexing
- Added import/export for projections
- New ItemDisplayText extension to manage encoding options
The full list of fixed bugs for this release can be found here.
How to upgrade from a previous version
You can find migration instructions here.
http://docs.orchardproject.net/en/latest/Documentation/Orchard-1-9-1.Release-Notes/
2017-06-22T14:03:44
CC-MAIN-2017-26
1498128319575.19
[]
docs.orchardproject.net
The UX Series is going ahead… and today Daniel Burwen (Operation Ajax) and David Dufresne (Fort McMoney) explore the question: “what can i-doc producers learn from game design?”
1 – good UX design is not about the interface in itself, but about understanding the user: his needs, his goals, his context, and using all this information as a source of inspiration.
2 – interaction is more than clicking: the digital file offers options of customisation, personalisation and out-of-screen experience that have not been used enough in interactive narrative.
3 – agile development is key: without prototyping and reiterations – tweaking both the interface and the original idea – the product is conceptually static and distant from the user.
4 – successful projects have a clear proposition at their core – it is not complexity that makes them stand out, but the clarity of what they bring to the users.
Matching a clear proposition with an effective design and navigation will be the focal point of the next questions in the UX Series, but I thought the insistence on the importance of agile design was calling for a deeper look into an industry that has embraced such a dynamic logic of design: the game industry. I have therefore asked Daniel Burwen (Operation Ajax) and David Dufresne (Fort McMoney) to give us their point of view on “what can i-doc producers learn from game design”. See what two of the most forward-thinking creators of our industry have to say…
http://i-docs.org/2014/01/20/ux-series-question-2-can-learn-game-design/
2017-06-22T14:22:03
CC-MAIN-2017-26
1498128319575.19
[]
i-docs.org
This topic contains a brief overview of the accounts and groups, log files, and other security-related considerations for Microsoft BitLocker Administration and Monitoring (MBAM). For more information, follow the links in this article.
General security considerations
Understand the security risks. The most serious risk to MBAM is that its functionality could be hijacked by an unauthorized user who could then reconfigure BitLocker encryption and gain BitLocker encryption key data on MBAM Clients. However, the loss of MBAM functionality for a short period of time due to a denial-of-service attack would not generally have a catastrophic impact.
Physically secure your computers. Security is incomplete without physical security. Anyone with physical access to an MBAM Server could potentially attack the entire client base. Any potential physical attacks must be considered high risk and mitigated appropriately. MBAM servers should be stored in a physically secure server room with controlled access. Secure these computers when administrators are not physically present by having the operating system lock the computer, or by using a secured screen saver.
Apply the most recent security updates to all computers. Stay informed about new updates for operating systems, Microsoft SQL Server, and MBAM by subscribing to the Security Notification service.
Use strong passwords or pass phrases. Always use strong passwords with 15 or more characters for all MBAM and MBAM administrator accounts. Never use blank passwords. For more information about password concepts, see the “Account Passwords and Policies” white paper on TechNet.
Accounts and Groups in MBAM
A best practice for user account management is to create domain global groups and add user accounts to them. Then, add the domain global accounts to the necessary MBAM local groups on the MBAM Servers.
Active Directory Domain Services Groups
No groups are created automatically during MBAM Setup. However, you should create the following Active Directory Domain Services global groups to manage MBAM operations.
MBAM Server Local Groups
MBAM Setup creates local groups to support MBAM operations. You should add the Active Directory Domain Services global groups to the appropriate MBAM local groups to configure MBAM security and data access permissions.
SSRS Reports Access Account
The SQL Server Reporting Services (SSRS) Reports Service Account provides the security context to run the MBAM reports available through SSRS. This account is configured during MBAM Setup.
MBAM Log Files
During MBAM Setup, the following MBAM Setup log files are created in the %temp% folder of the user who installs the MBAM Server.
Server Setup log files
MSI<five random characters>.log: Logs the actions taken during MBAM Setup and MBAM Server Feature installation.
InstallComplianceDatabase.log: Logs the actions taken to create the MBAM Compliance Status database setup.
InstallKeyComplianceDatabase.log: Logs the actions taken to create the MBAM Recovery and Hardware database.
AddHelpDeskDbAuditUsers.log: Logs the actions taken to create the SQL Server logins on the MBAM Compliance Status database and authorize the helpdesk web service to the database for reports.
AddHelpDeskDbUsers.log: Logs the actions taken to authorize web services to the database for key recovery and to create logins to the MBAM Recovery and Hardware database.
AddKeyComplianceDbUsers.log: Logs the actions taken to authorize web services to the MBAM Compliance Status database for compliance reporting.
AddRecoveryAndHardwareDbUsers.log: Logs the actions taken to authorize web services to the MBAM Recovery and Hardware database for key recovery.
Note: In order to obtain additional MBAM Setup log files, you must install Microsoft BitLocker Administration and Monitoring by using the msiexec package and the /l <location> option. Log files are created in the location specified.
MBAM Client Setup log files
MSI<five random characters>.log: Logs the actions taken during MBAM Client installation.
MBAM Database TDE considerations
The Transparent Data Encryption (TDE) feature available in SQL Server 2008 is a required installation prerequisite for the database instances that will host MBAM database features. With TDE, you can perform real-time, full database-level encryption. TDE is a well-suited choice for bulk encryption to meet regulatory compliance or corporate data security standards. TDE works at the file level, which is similar to two Windows features: the Encrypting File System (EFS) and BitLocker Drive Encryption, both of which also encrypt data on the hard drive. TDE does not replace cell-level encryption, EFS, or BitLocker.
When TDE is enabled on a database, all backups are encrypted. Thus, special care must be taken to ensure that the certificate that was used to protect the Database Encryption Key (DEK) is backed up and maintained with the database backup. Without a certificate, the data will be unreadable. Back up the certificate along with the database. Each certificate backup should have two files; both of these files should be archived. It is best to archive them separately from the database backup file for security.
For an example of how to enable TDE for MBAM database instances, see Evaluating MBAM 1.0. For more information about TDE in SQL Server 2008, see Database Encryption in SQL Server 2008 Enterprise Edition.
Related topics
Security and Privacy for MBAM 1.0
https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/mbam-v1/security-considerations-for-mbam-10
2017-06-22T14:44:37
CC-MAIN-2017-26
1498128319575.19
[]
docs.microsoft.com
Adding your application to the App Store
The Monster-UI Framework has been built to allow developers to code their own apps and to allow them to reach their users via the Monster-UI. There are 2 ways to add your applications inside the database.
Automatic way with a Kazoo Command
Assuming you've installed your Monster applications to /path/to/monster-ui/apps, you can run the following SUP command on the server:
sup crossbar_maintenance init_apps '/path/to/monster-ui/apps' ''
This will load the apps (and let you know which apps it couldn't automatically load) into the master account (including icons, if present). For any apps that failed to be loaded automatically, you can follow the manual instructions below.
If you want to install a single Monster application:
sup crossbar_maintenance init_app '/path/to/monster-ui/apps/{APP}' ''
Manual way by adding a document to the database
In order to add your application manually, you have to add a document in the Master Account database. So first, go in your master account database. At the top-right of the page, there is a dropdown listing the different views of this database; click it and select the apps_store, crossbar_listing view in the dropdown. This will list all the current apps of your app store. In a fresh install, this should be empty! If not, make sure you didn't create that document already... :)
Now click on "New Document", and we'll create that document. (If you're adding an app that already exists, like accounts, voip, numbers, you can get all the metadata needed to create these files in their own metadata folder)
{
  "name": "appName",
  "api_url": "",
  "i18n": {
    "en-US": {
      "label": "Friendly App Name",
      "description": "Description that will be displayed in the App Store.",
      "extended_description": "More things to write about the app",
      "features": [
        "Great thing #1 about your app",
        "Great thing #2 about your app"
      ]
    }
  },
  "tags": [ "reseller", "carrier" ],
  "author": "The Overlord",
  "version": "1.0",
  "license": "-",
  "price": 0,
  "icon": "nameOfIcon.png",
  "screenshots": [ "nameOfImage1.png", "nameOfImage2.png" ],
  "pvt_type": "app"
}
So let's go through those different attributes:
- name: define the name of your app in there. It needs to match the name of the folder of your app,
- api_url: define the URL of the Kazoo API to request,
- source_url: this defines where the Monster-UI framework will try to get the sources of your app. For now this needs to be hosted on the same server as the Monster-UI,
- i18n: In this, you can define the descriptions and features of your app in different languages, to try to reach as many users as possible. The language displayed in the app store will depend on the active setting of the user himself,
- tags: There are currently 3 different tags in the Monster-UI: reseller, carrier, developer. You can define all of them if you want your app to be listed in all those sections,
- author: Well.. that would be your name,
- version: This is only a label but can help you keep track of the different versions of your application,
- license: You can specify the license here,
- price: This is only a text for now, we plan to be able to charge for certain applications, but it hasn't been implemented yet, so you should leave this value to 0,
- icon: This is the name of an image for the Icon of your app, we'll be able to upload this image once this document is created via Futon,
- screenshots: This is an array of image names that will be shown in the slideshow of your app page.
So if you copy and paste the block of JSON above, and create a document with it, you should then be able to see the new app in the App Store, and should get a popup like this one when you click on the App in the App Store:
Now the last thing to do would be to actually upload the different images and icon you want to use. If you go in Couch, in the document you just created, you should have the option to Upload an attachment. Now you only need to upload your images; make sure that the names match whatever you typed in the app document. Once it's done, it will magically appear in the Monster-UI app store.
Congratulations, you just added an application to the app store! Please let us know if you have any issue with the process, or if you feel like this documentation could be better in any way!
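If you prefer to script the manual route instead of clicking through Futon, the same document and its image attachments can also be created over CouchDB's HTTP API. The sketch below is only illustrative: the CouchDB/BigCouch URL, the credentials, the master account database name, and the icon file name are assumptions you would replace with your own values.

    import requests  # assumes the requests library is installed

    COUCH = "http://localhost:5984"      # assumption: your BigCouch/CouchDB URL
    AUTH = ("admin", "password")         # assumption: your database credentials
    DB = "account%2Fxx%2Fxx%2Fxxxxxxxx"  # assumption: your master account database (URL-encoded)

    app_doc = {
        "name": "appName",
        "api_url": "",
        "i18n": {"en-US": {"label": "Friendly App Name",
                           "description": "Description that will be displayed in the App Store."}},
        "tags": ["reseller", "carrier"],
        "author": "The Overlord",
        "version": "1.0",
        "license": "-",
        "price": 0,
        "icon": "nameOfIcon.png",
        "screenshots": ["nameOfImage1.png"],
        "pvt_type": "app",
    }

    # Create the app document; CouchDB assigns the document ID.
    resp = requests.post(f"{COUCH}/{DB}", json=app_doc, auth=AUTH)
    resp.raise_for_status()
    doc_id, rev = resp.json()["id"], resp.json()["rev"]

    # Attach the icon; the attachment name must match the "icon" field above.
    with open("nameOfIcon.png", "rb") as fh:
        resp = requests.put(f"{COUCH}/{DB}/{doc_id}/nameOfIcon.png",
                            params={"rev": rev},
                            headers={"Content-Type": "image/png"},
                            auth=AUTH, data=fh)
    resp.raise_for_status()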
https://docs.2600hz.com/ui/docs/appstore/
2017-06-22T14:20:05
CC-MAIN-2017-26
1498128319575.19
[array(['http://i.imgur.com/4DZxZRR.png', 'Image showing details of app'], dtype=object) array(['http://i.imgur.com/ZKGPoMu.png', 'Image showing upload attachment link'], dtype=object)]
docs.2600hz.com
Creates an environment in the Integration Services catalog.
Syntax
create_environment [ @folder_name = ] folder_name , [ @environment_name = ] environment_name [ , [ @environment_description = ] environment_description ]
Arguments
[ @folder_name = ] folder_name
The name of the folder that will contain the environment.
[ @environment_name = ] environment_name
The name of the environment.
[ @environment_description = ] environment_description
An optional description of the environment.
Return Code Value
0 (success)
Result Sets
None
Permissions
This stored procedure requires one of the following permissions:
- READ and MODIFY permissions on the folder
- Membership to the ssis_admin database role
- Membership to the sysadmin server role
Errors and Warnings
The following list describes some conditions that may raise an error or warning:
- The folder name cannot be found
- An environment that has the same name already exists in the specified folder
Remarks
The environment name must be unique within the folder.
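For completeness, here is a minimal Python sketch showing one way to call this stored procedure from client code with pyodbc. The connection string, driver name, folder name, and environment name are placeholder assumptions, and the SSISDB catalog must already contain the named folder.

    import pyodbc  # assumes the pyodbc package and a SQL Server ODBC driver are installed

    # Assumption: adjust server, credentials, and driver name for your environment.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=SSISDB;Trusted_Connection=yes;"
    )
    conn.autocommit = True

    cursor = conn.cursor()
    # Call catalog.create_environment with the folder name, environment name,
    # and the optional description (see the Syntax section above).
    cursor.execute(
        "EXEC catalog.create_environment "
        "@folder_name = ?, @environment_name = ?, @environment_description = ?",
        ("MyFolder", "MyEnvironment", "Created from a script"),
    )
    conn.close()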
https://docs.microsoft.com/en-us/sql/integration-services/system-stored-procedures/catalog-create-environment-ssisdb-database
2017-06-22T15:04:31
CC-MAIN-2017-26
1498128319575.19
[array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object)]
docs.microsoft.com
Buildbot Development¶ This chapter is the official repository for the collected wisdom of the Buildbot hackers. It is intended both for developers writing patches that will be included in Buildbot itself, and for advanced users who wish to customize Buildbot. - Master Organization - Definitions - Buildbot Coding Style - Buildbot's Test Suite - Configuration - Error Handling - Reconfiguration - Utilities - Database - Build Result Codes - File Formats - Web Status - Master-Slave API - String Encodings - Metrics - Classes
http://docs.buildbot.net/0.8.7p1/developer/index.html
2017-06-22T14:05:31
CC-MAIN-2017-26
1498128319575.19
[]
docs.buildbot.net
Troubleshooting
You may encounter some errors during the installation and configuration of ActiveShareFS. If you have purchased ActiveShareFS, you can also contact [email protected] for additional support. You may need to enable debug mode by modifying the <asfs> configuration section in the web.config of the ASFS-enabled webapp. This is found at /configuration/asfs/logger[@level]. Change the level to level="ALL".
User authenticated successfully at the Identity Provider but either no conditions were matched or the username resolved empty.
This means the username mapping resolved to an empty value. Check to ensure that the identity provider is releasing an attribute that can be used as the SharePoint account name. Check that the Shibboleth SP attribute-map.xml is configured for this attribute. Check the asfs.xml <serverVariable> to see if it has correctly been configured to map that attribute to a local name. This could also happen if none of the account rules in asfs.xml are being executed. You should always have a “catch-all” type of account rule that runs if no other rules matched.
An error has occurred. Please contact the administrator.
See the asfs.log file located under <ASFS_HOME>/logs for detailed information.
I am noticing a redirect loop while using a mobile device.
ActiveShareFS does not currently support mobile views for SharePoint. You will need to disable it for the site by adding the following to the web.config’s configuration/system.web section:
<browserCaps>
  <result type="System.Web.Mobile.MobileCapabilities, System.Web.Mobile, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
  <filter>isMobileDevice=false</filter>
</browserCaps>
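If you need to toggle the debug level described above across several servers, the change can be scripted. The following Python sketch edits the logger level at the documented location (/configuration/asfs/logger[@level]); the web.config path is a placeholder assumption, and you should back up the file before modifying it.

    import xml.etree.ElementTree as ET

    WEB_CONFIG = r"C:\path\to\your\webapp\web.config"  # assumption: path to the ASFS-enabled webapp's web.config

    tree = ET.parse(WEB_CONFIG)
    # web.config's root element is <configuration>, so the documented location
    # /configuration/asfs/logger[@level] is ./asfs/logger relative to the root.
    logger = tree.getroot().find("./asfs/logger")
    if logger is None:
        raise SystemExit("No <logger> element found under <asfs>; check the configuration section.")

    logger.set("level", "ALL")  # enable debug logging, as described above
    tree.write(WEB_CONFIG, xml_declaration=True, encoding="utf-8")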
https://docs.9starinc.com/asfs2010v5-4/troubleshooting/
2017-06-22T13:59:51
CC-MAIN-2017-26
1498128319575.19
[]
docs.9starinc.com
RT 4.4.1 Documentation
Initialdata
- Summary of initialdata files
- Examples
- What can be in an initialdata file?
- What's missing?
- Running an initialdata file
- Implementation details
Examples
RT ships with many initialdata files, only one of which is used to configure a fresh install; the rest are used for upgrades, but function the same despite being named differently. @Queues and @Groups arrays are expected by RT and should contain hashref definitions. There are many other arrays RT will look for and act on, described below. None are required, all may be used. Keep in mind that since they're just normal Perl arrays, you can push onto them from a loop or grep out definitions based on conditionals or generate their content with map, etc. The complete list of possible arrays which can be used, along with descriptions of the values to place in them, is below.
@Users
push @Users, { Name => 'john.doe', Password => 'changethis', Language => 'fr', Timezone => 'America/Vancouver', Privileged => 1, Disabled => 0, };
Each hashref in @Users is treated as a new user to create and passed straight into RT::User->Create. All of the normal user fields are available, as well as Privileged and Disabled (both booleans) which will do the appropriate internal group/flag handling. Also accepts an Attributes key, which is equivalent to pushing its arrayref of values onto @Attributes, below, with Object set to the new user. For a full list of fields, read the documentation for "Create" in RT::User.
@Groups
push @Groups, { Name => 'Example Employees', Description => 'All of the employees of my company', Members => { Users => [ qw/ alexmv trs falcone / ], Groups => [ qw/ extras / ] }, };
Creates a new RT::Group for each hashref. In almost all cases you'll want to follow the example above to create a group just as if you had done it from the admin interface. In addition to the Members option shown above, which can take both users and groups, the MemberOf field may be a single value or an array ref. Each value should be a user-defined group name or hashref to pass into "LoadByCols" in RT::Group. Each group found will have the new group added as a member. It also accepts an Attributes key, which is equivalent to pushing its arrayref of values onto @Attributes, below, with Object set to the new group.
@Queues
push @Queues, { Name => 'Helpdesk', CorrespondAddress => '[email protected]', CommentAddress => '[email protected]', };
Creates a new RT::Queue for each hashref. Refer to the documentation of "Create" in RT::Queue for the fields you can use. It also accepts an Attributes key, which is equivalent to pushing its arrayref of values onto @Attributes, below, with Object set to the new queue.
@CustomFields
push @CustomFields, { Name => 'Favorite color', Type => 'FreeformSingle', LookupType => 'RT::Queue-RT::Ticket', };
Creates a new RT::CustomField for each hashref. It is the most complex of the initialdata structures. The most commonly used fields are:
Name
The name of this CF as displayed in RT.
Description
A short summary of what this CF is for.
ApplyTo
May be a single value, or an array reference of such; each should be either an ID or Name. If omitted, the CF is applied globally. This should not be used for User or Group custom fields. This argument may also be passed via Queue, for backwards compatibility, which also defaults the LookupType to RT::Queue-RT::Ticket.
Type
One of the following on the left hand side:
SelectSingle # Select one value
SelectMultiple # Select multiple values
FreeformSingle # Enter one value
FreeformMultiple # Enter multiple values
Text # Fill in one text area
Wikitext # Fill in one wikitext area
BinarySingle # Upload one file
BinaryMultiple # Upload multiple files
ImageSingle # Upload one image
ImageMultiple # Upload multiple images
Combobox # Combobox: Select or enter one value
AutocompleteSingle # Enter one value with autocompletion
AutocompleteMultiple # Enter multiple values with autocompletion
Date # Select date
DateTime # Select datetime
IPAddressSingle # Enter one IP address
IPAddressMultiple # Enter multiple IP addresses
IPAddressRangeSingle # Enter one IP address range
IPAddressRangeMultiple # Enter multiple IP address ranges
If you don't specify "Single" or "Multiple" in the type, you must specify MaxValues.
LookupType
Labeled in the CF admin page as "Applies to". This determines whether your CF is for Tickets, Transactions, Users, Groups, or Queues. Possible values:
RT::Queue-RT::Ticket # Tickets
RT::Queue-RT::Ticket-RT::Transaction # Transactions
RT::User # Users
RT::Group # Groups
RT::Queue # Queues
RT::Class-RT::Article # Articles
Ticket CFs are the most common, meaning RT::Queue-RT::Ticket is the most common LookupType.
RenderType
Only valid when Type is "Select". Controls how the CF is displayed when editing it. Valid values are: Select box, List, and Dropdown. List is either a list of radio buttons or a list of checkboxes depending on MaxValues.
MaxValues
Determines whether this CF is a Single or Multiple type. 0 means multiple. 1 means single. Make sure to set the MaxValues field appropriately, otherwise you can end up with unsupported CF types like a "Select multiple dates" (it doesn't Just Work). You can also use old-style Types which end with "Single" or "Multiple", for example: SelectSingle, SelectMultiple, FreeformSingle, etc.
Values
Values should be an array ref (never a single value!) of hashrefs representing new RT::CustomFieldValue objects to create for the new custom field. This only makes sense for "Select" CFs. An example:
my $i = 1;
push @CustomFields, {
  LookupType => 'RT::Queue-RT::Ticket', # for Tickets
  Name => 'Type of food',
  Type => 'SelectSingle', # SelectSingle is the same as: Type => 'Select', MaxValues => 1
  RenderType => 'Dropdown',
  Values => [
    { Name => 'Fruit', Description => 'Berries, peaches, tomatos, etc', SortOrder => $i++ },
    { Name => 'Vegetable', Description => 'Asparagus, peas, lettuce, etc', SortOrder => $i++ },
    # more values as such...
  ],
};
In order to ensure the same sorting of Values, set SortOrder inside each value. A clever way to do this easily is with a simple variable you increment each time (as above with $i). You can use the same variable throughout the whole file, and don't need one per CF.
BasedOn
Name or ID of another Select Custom Field. This makes the named CF the source of categories for your values.
Pattern
The regular expression text (not qr//!) used to validate values.
It also accepts an Attributes key, which is equivalent to pushing its arrayref of values onto @Attributes, below, with Object set to the new custom field. Refer to the documentation and implementation of "Create" in RT::CustomField and "Create" in RT::CustomFieldValue for the full list of available fields and allowed values.
@ACL
@ACL is very useful for granting rights on your newly created records or setting up a standard system configuration. It is one of the most complex initialdata structures.
Pick one or more Rights
All ACL definitions expect a key named Right with the internal right name you want to grant; alternately, it may contain an array reference of right names. The internal right names are visible in RT's admin interface in grey next to the longer descriptions.
Pick a level: on a queue, on a CF, or globally
After picking a Right, you need to specify on what object the right is granted. This is different than the user/group/role receiving the right.
- Granted on a custom field by name (or ID), potentially a global or queue CF:
  CF => 'Name', LookupType => 'RT::User', # optional, in case you need to disambiguate
- Granted on a queue:
  Queue => 'Name',
- Granted on a custom field applied to a specific queue:
  CF => 'Name', Queue => 'Name',
- Granted on a custom field applied to some other object:
  # This finds the CF named "Name" applied to Articles in the "Responses" class
  CF => 'Name', LookupType => RT::Article->CustomFieldLookupType, ObjectId => 'Responses',
- Granted on some other object (article Classes, etc):
  ObjectType => 'RT::Class', ObjectId => 'Name',
- Granted globally: Specifying none of the above will get you a global right.
There is currently no way to grant rights on a group or article class level. Note that you can grant rights to a group; see below. If you need to grant rights on a group or article class level, you'll need to write an @Final subref to handle it using the RT Perl API.
Pick a Principal: User or Group or Role
Finally you need to specify to what system group, system/queue role, user defined group, or user you want to grant the right to.
- An internal user group:
  GroupDomain => 'SystemInternal', GroupType => 'Everyone, Privileged, or Unprivileged'
- A system-level role:
  GroupDomain => 'RT::System-Role', GroupType => 'Requestor, Owner, AdminCc, or Cc'
- A queue-level role:
  GroupDomain => 'RT::Queue-Role', Queue => 'Name', GroupType => 'Requestor, Owner, AdminCc, or Cc',
- A group you created:
  GroupDomain => 'UserDefined', GroupId => 'Name'
- Individual user:
  UserId => 'Name or email or ID'
Common cases
You're probably looking for definitions like these most of the time.
- Grant a global right to a group you created:
  { Right => '...', GroupDomain => 'UserDefined', GroupId => 'Name' }
- Grant a queue-level right to a group you created:
  { Queue => 'Name', Right => '...', GroupDomain => 'UserDefined', GroupId => 'Name' }
- Grant a CF-level right to a group you created:
  { CF => 'Name', Right => '...', GroupDomain => 'UserDefined', GroupId => 'Name' }
Since you often want to grant a list of rights on the same object/level to the same role/group/user, we generally use Perl loops and operators to aid in the generation of @ACL without repeating ourselves.
# Give Requestors globally the right to see tickets, reply, and see the queue their ticket is in
push @ACL, map {
  { Right => $_, GroupDomain => 'RT::System-Role', GroupType => 'Requestor', }
} qw(ShowTicket ReplyToTicket SeeQueue);
Troubleshooting
The best troubleshooting is often to see how the rights you define in @ACL show up in the RT admin interface.
@Scrips
Creates a new RT::Scrip for each hashref. Refer to the documentation of "Create" in RT::Scrip for the fields you can use. Additionally, the Queue field is specially handled to make it easier to set up the same Scrip on multiple queues:
- Globally: Queue => 0,
- Single queue: Queue => 'General', # Name or ID
- Multiple queues: Queue => ['General', 'Helpdesk', 13], # Array ref of Name or ID
@ScripActions
Creates a new RT::ScripAction for each hashref.
Refer to the documentation of "Create" in RT::ScripAction for the fields you can use.
@ScripConditions
Creates a new RT::ScripCondition for each hashref. Refer to the documentation of "Create" in RT::ScripCondition for the fields you can use.
@Templates
Creates a new RT::Template for each hashref. Refer to the documentation of "Create" in RT::Template for the fields you can use.
@Attributes
An array of RT::Attributes to create. You likely don't need to mess with this. If you do, know that the key Object is expected to be an RT::Record object or a subroutine reference that returns an object on which to call AddAttribute. If you don't provide Object or it's undefined, RT->System will be used. Here is an example of using a subroutine reference as a value for Object:
@Attributes = ({ Name => 'SavedSearch', Description => 'New Tickets in SomeQueue', Object => sub { my $GroupName = 'SomeQueue Group'; my $group = RT::Group->new( RT->SystemUser ); my( $ret, $msg ) = $group->LoadUserDefinedGroup( $GroupName ); die $msg unless $ret; return $group; }, Content => { Format => <<' END_OF_FORMAT', .... END_OF_FORMAT Query => "Status = 'new' AND Queue = 'SomeQueue'", OrderBy => 'id', Order => 'DESC' }, });
@Initial and @Final
@Initial and @Final are arrays of code references (subrefs) run before and after, respectively, the rest of the initialdata is inserted; they are the escape hatch for setup the other arrays cannot express. Inside these subrefs, report problems with RT->Logger->error("...") rather than printing!
What's missing?
There is currently no way, short of writing code in @Final or @Initial, to easily create Classes, Topics, or Articles from initialdata files.
Running an initialdata file
/opt/rt4/sbin/rt-setup-database --action insert --datafile /path/to/your/initialdata
This may prompt you for a database password.
Implementation details
All the handling of initialdata files is done in RT::Handle->InsertData. If you want to know exactly what happens, read that method.
https://docs.bestpractical.com/rt/4.4.1/initialdata.html
2017-06-22T14:16:50
CC-MAIN-2017-26
1498128319575.19
[]
docs.bestpractical.com
reboot¶
Use the reboot resource to reboot a node, a necessary step with some installations on certain platforms. This resource is supported for use on the Microsoft Windows, Mac OS X, and Linux platforms.
New in Chef client 12.0
Syntax¶
In a reboot resource block:
- reboot is the resource
- name is the name of the resource block
- action identifies the steps the chef-client will take to bring the node into the desired state
- delay_mins and reason are properties of this resource, with the Ruby type shown. See the “Properties” section below for more information about all of the properties that may be used with this resource.
Actions¶
This resource has the following actions:
- :cancel - Cancel a reboot request.
- :nothing - Define this resource block to do nothing until notified by another resource to take action. When this resource is notified, this resource block is either run immediately or it is queued up to be run at the end of the chef-client run.
- :reboot_now - Reboot a node so that the chef-client may continue the installation process.
- :request_reboot - Reboot a node at the end of a chef-client run.
Properties¶
This resource has the following properties:
- delay_mins Ruby Type: Fixnum
  The amount of time (in minutes) to delay a reboot request.
- reason Ruby Type: String
  A string that describes the reboot action.
For notifications, the following timer is available: :immediate, :immediately - specifies that a notification should be run immediately, per resource notified.
Examples¶
https://docs.chef.io/resource_reboot.html
2017-02-19T14:19:51
CC-MAIN-2017-09
1487501169776.21
[]
docs.chef.io
This is the latest version of the ChemAxon Documentation. To see the available package versions before installing, you may use:
user:~$ npm info marvin-live
- NodeJS v4 or newer is now required to run Marvin Live
- added support for NodeJS 6.9 LTS on Windows 2016 Server
- realtime plugins are now called using a last in, first out queue / a stack, instead of parallel calls, resulting in more even loads on remote services
- Marvin JS updated to 16.10.17
- added the ability to sort and filter snapshots in overview mode on data collected with realtime plugins
- additionalSnapshotFields has been added to attach metadata fields to snapshots (e.g.: Series, Assignee, Status)
- fixed an issue when after changing authenticated domain, comments would appear multiple times in the chat log
- deleteUnusedRooms, databaseCleanupInterval, saveReportOnDelete can now be specified per authentication domain overriding any global setting
- fixed an issue in Firefox for Windows and Safari where comments didn't appear on slow connections
- a new plugin type has been introduced for automatic SQL backup for snapshots: storage plugins. Please review the developer guide for further details.
- secret_key can now be specified as secretKey as well. The old name will continue to be supported.
- the built-in protein-ligand viewer has been greatly improved with visibility toggles and display settings for chains, ligands, key residues and solvents in a scene
- upgraded LDAP authentication library for support of a wider range of Active Directory versions
- minor bug fixes
- the login page and lounge has a new design, with clearly separated invitations and search capability for longer room lists
- improve URL generation throughout the application when the Marvin Live server is behind a proxy server
- the Overview mode now displays reported data on each snapshot's card. Up to 5 data fields can be selected for display
- data points in reports are now ordered the same way as on the user interface
- fixed an issue where the first snapshot’s image was not displayed in some cases
- fixed the layout of dropdowns in realtime plugin configuration panels
- fixed an issue where calculation results would show from the previous room, when switching rooms
- reports with built in chemical file formats (MRV, SDF, SMILES) now include extra property fields: Snapshot ID, room name, room link
- built in PowerPoint reports now include an Overview slide that shows all snapshots in a 4x2 grid
- built in PowerPoint report now includes a link to the original room
- the 3D viewer now supports downloading data from within using the downloadData and downloadFilename attributes
- fixed a number of memory leaks in the 3D viewer, partly regressions with the previous release
- fixed an issue where snapshot images disappeared after reloading the webpage
- realtime plugins can now have dynamic values for their settings (watch: true)
- Marvin JS updated to v16.2.22
- significantly improved the loading speed of large protein structures in the 3D viewer
- added support for Red Hat / CentOS 6.4
- pinned structures now automatically remain in place when creating a snapshot, and a new delete button is available to remove them as needed
- improved the loading speed of the browser application by 40-70%
- make sure the session database respects the databaseLocation configuration setting
- fixed a memory leak when repeatedly leaving and joining rooms
- Marvin JS updated to v16.1.4
- realtime plugins can now have user controlled settings, including toggles, file uploads, combo boxes (developer guide)
- SPDY support has been temporarily disabled, due to ERR_SPDY_INADEQUATE_TRANSPORT_SECURITY errors in Chrome and Firefox
- file and room import dialogs have been redesigned to make it easier to select data fields to keep
- The recommended NodeJS runtime has been changed from 0.10.x to 0.12.x. Please upgrade your NodeJS engine!
- an Overview mode is now available, snapshots can be managed to focus the exported dataset
- blue colors are no longer assigned as user color
- full list of exporter plugins are now exposed to exporter plugins to allow easily chaining them
- Marvin JS updated to v15.11.16
- the attendee list has been moved to just above the chemical editor, its looks have been updated to include user colors, and it switches to show initials only when the number of people reaches a certain threshold
- atoms and bonds are now highlighted to remote users when adding or changing them in the chemical editor
- tooltips are now available throughout the application that explain functionality where buttons only show an icon
- invitations are now available to private rooms with URLs available from the Share & Invite menu, that authorized users of the system can use
- added an option to disable creating public rooms (install guide)
- changed the support email address available in the About dialog
- Marvin JS updated to v15.11.2
- chat messages containing invitations to Gotomeeting, Webex, Livemeeting, Lync, Teamviewer, Hangout and Skype discussions are now highlighted in the toolbar
- fixed a bug introduced in 15.9.16 that prevented Marvin JS’s clean/conversion requests from reaching JChem Web Services
- Marvin JS updated to v15.9.21
- changed the design of the room on the lounge screen when being invited
- added an About dialog to the user menu with version information
- removed requirements for compilers and Python during installation
- added right click option to snapshots with Copy, Load and Pin options
- added option to hide the domain picklist from the login screen to protect the name of various projects or partners (install guide)
- added option to specify file name for export plugins (developer guide)
- snapshots now display their order (e.g. #1) and these numbers can be referenced and highlighted in comments
- added option to configure Marvin JS’s display settings (install guide)
- added option to export plugins that allow returning a message instead of a file (developer guide)
- added option to real time plugins that simplify specifying the HTML template (developer guide)
- made domain and roomName available to plugins in the this execution context variable for export and real time plugins
- fix meeting reports where the comments didn’t show up next to the last snapshot of a report
- fixed the file uploader to allow uploading the same file multiple times
- added structure directive that allows realtime plugin templates to define a right click context menu
- added /status page for administrators to get basic information about the application’s status
- Marvin JS updated to v15.6.1
- fixed the SDF uploader, in some cases it only loaded 1 structure on the GUI
- changed icons used throughout the GUI to a more consistent icon set
- moved the corporate ID resolver button to below the sketcher
- improved the logic of errors displayed on the corporate ID resolver
- after a restart, real time services now only initialize when a user joins a room to prevent overloading external services
- merged the “Copy” and “Upload” menus into a single “Import” menu to clarify their use
- fixed a display issue where meeting participant names would break the layout when not fitting a single line
- fixed a display issue that prevented the LDAP display name from appearing on the login screen
- added JSON upload capability to the upload API (URL integration)
- added a new allowCrossOriginUploads option to enable browsers sending chemical files from different domains (install guide)
- Marvin JS updated to v15.5.4
- added a new exporter option: SDF exporter, available when converterService option is configured
- added a Share button to the top right toolbar to invite others to a meeting room
- fixed a regression, where the Copy from room list wouldn’t display correctly
- Marvin JS updated to v15.4.20
- fixed 3 issues related to license handling
- 3D visualizer updated with new color scheme for alignment results, atom to atom distance measuring, the ability to keep view rotation and zoom settings when switching between molecules, and highlighting of atoms and bonds selected in Marvin JS
- Marvin JS updated to v15.4.13
- added option to specify GUI order of real time plugins with sortOrder property (developer guide)
- added API endpoint to send chemical files from external applications into Marvin Live (URL integration)
- enable bookmarkable URLs when no authentication is configured
- fix highlighting room name in the lounge, when the room name contains “/” characters
- 3D visualizer will display molecule with CPK coloring if no atom/bond sets are defined for red-green coloring
- improve support of Node JS 0.12
- Marvin JS updated to v15.4.6
- raise limit of files that can be uploaded or copied from other rooms, from 20 to 100
- snapshots in PowerPoint reports can now be edited with JChem for Office and indexed with JChem for SharePoint
- print uncaught errors from all plugin types to standard output
- force Internet Explorer on intranet sites to use Edge Mode
- fix multiplied client-server connections after disconnecting temporarily from the server
- Marvin JS updated to v15.3.30
- installer included incorrect version for a bundled tool causing problems on start, this has been corrected
- update URL in the browser address bar to reflect the current room used
- direct user to the room specified in the URL
- add themeOverrides option to customize the theme of web client through an external CSS file (details)
- Marvin JS updated to v15.3.23
- fix progress bar showing sometimes after a property box updated
- fix property box updating twice when a pinned structure is set
- fix creating a new room by hitting Enter
- add pushpin icon to pinned structure's top left corner
- improve layout and visibility by adding depth to pinned structures and the editor
- added the name of authenticated domain and a logout button to the lounge
- changed the design and layout of room selector
- login page now shows Logging in... for lengthy authentication
- reorganized the buttons in the top toolbars
- when running marvin live on https port 443, automatically redirect from http port 80
- when logging in through LDAP, fetch the user's given name when specifying searchAttributes (details)
- add calculation results to MS Word report
- Marvin JS updated to v15.3.9
- added option to load molecules from another meeting room
- pick molecules to load when importing from file or another meeting room
- new configuration option: databaseLocation (details), configure where persistent storage is located
- new configuration option: deleteUnusedRooms (details), automatically remove unused meeting rooms after a period of inactivity
- new configuration option: saveReportOnDelete (details), automatically save a copy of the meeting report on the disk before a meeting room is deleted
- add pinned structure to realtime plugins as a second optional parameter
- improve titles for uploaded structures in ppt and doc reports
- update Marvin JS
- round numbers added to Powerpoint slides in the default exporter
- add embedded database that stores meeting rooms and all data within (details)
- add missing license public key to distributable
- allow importing SDF in resolver plugins
- add SPDY support in HTTPS mode (details)
- add license checking (details)
- add resolver plugin system (details)
- increase firefox support to firefox 10 and newer
- add SAML authentication (details)
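The changelog above mentions a /status page that gives administrators basic information about the application's status. A trivial Python sketch for polling it is shown below; the server URL is a placeholder assumption, and any authentication or proxy in front of the page is deployment-specific.

    import urllib.request

    BASE_URL = "https://marvinlive.example.com"  # assumption: your Marvin Live server URL

    # Fetch and print whatever the /status page reports.
    with urllib.request.urlopen(f"{BASE_URL}/status", timeout=10) as resp:
        print(resp.status, resp.read().decode("utf-8", "replace"))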
https://docs.chemaxon.com/display/docs/Marvin+Live+history+of+changes
2017-02-19T14:21:28
CC-MAIN-2017-09
1487501169776.21
[]
docs.chemaxon.com
Bigcommerce Integration Guide
Introduction
LeadDyno offers an integration with the new Bigcommerce “Single Click” system, making it incredibly easy to get an affiliate program going for your Bigcommerce store. After completing this guide, your LeadDyno account will be set up so that you will have complete visibility into your store’s visitors, leads and purchases, including crediting affiliates for traffic which they send to your store. The Bigcommerce integration also synchronizes custom Affiliate Codes you assign to affiliates as Bigcommerce coupons. This allows affiliates to pass around their affiliate code via offline means and still get credit for those sales.
This guide consists of several parts:
- Adding the LeadDyno Bigcommerce Single Click App
- Instructions on how to add the LeadDyno tracking code to your Bigcommerce store
- Setup Bigcommerce coupon settings (optional)
Adding the LeadDyno Bigcommerce Single Click App
Navigate to the BigCommerce App Store and search for “LeadDyno”. After finding the app, click the “Install” button and you will be prompted to grant LeadDyno permission to access parts of your Bigcommerce store.
Enable LeadDyno tracking in your Bigcommerce store
The LeadDyno tracking integration makes use of the built-in ‘Google Analytics’ tracking setting of Bigcommerce. To enable it, log in to your Bigcommerce control panel, select the Setup & Tools menu, then under Set up your store choose the Web analytics option. Next, check the Google Analytics option in the Providers box, and click Save. A new tab will appear called Google Analytics. Click this tab. Paste the code given to you just after adding the LeadDyno app (or you can also use the following javascript) into the Tracking Code field:
<script type="text/javascript" src=""></script>
<script>
LeadDyno.key = "YOUR_PUBLIC_KEY";
LeadDyno.recordVisit();
LeadDyno.autoWatch();
</script>
If you are not using the code given to you after adding the LeadDyno app, replace the word YOUR_PUBLIC_KEY with your public API key found on the LeadDyno Dashboard. After clicking Save, your Bigcommerce store will now have the tracking code on every page.
Setup Bigcommerce coupon settings
If you want to provide your affiliates a coupon code that will track sales back to them, you can now set that up. On the LeadDyno Integration Settings page, there is a new button called Manage Bigcommerce Discount Program. This is where you configure the default settings for new coupons that LeadDyno creates for each affiliate. On this page you can choose if the coupon should be a percentage or fixed amount, and if a minimum purchase is required for the coupon to work. These are just the default settings for coupons that are created when you assign a discount/affiliate code to an affiliate. You can always change individual coupon settings from within the Bigcommerce manage coupon tool to change many more settings about the coupon.
From this screen you can also manually run a synchronization between LeadDyno and Bigcommerce. This happens several times a day automatically, but if you have a recent Bigcommerce sale that you need to show up immediately in LeadDyno as a purchase, you can manually start the job on this page.
Conclusion
At this point your Bigcommerce shop will be fully integrated with LeadDyno. Good luck selling!
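After saving, you can sanity-check that the snippet is actually being served on your storefront pages. The following Python sketch is a rough way to do that check; the store URL is a placeholder, and this is not part of LeadDyno's official tooling.

    import urllib.request

    STORE_URL = "https://your-store.example.com/"  # assumption: replace with your storefront URL

    html = urllib.request.urlopen(STORE_URL, timeout=15).read().decode("utf-8", "replace")

    # The pasted snippet sets LeadDyno.key and calls recordVisit()/autoWatch(),
    # so those strings should appear in the rendered page source.
    for marker in ("LeadDyno.key", "LeadDyno.recordVisit", "LeadDyno.autoWatch"):
        status = "found" if marker in html else "MISSING"
        print(f"{marker}: {status}")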
http://docs.leaddyno.com/bigcommerce-singleclick-integration-guide.html
2017-02-19T14:15:21
CC-MAIN-2017-09
1487501169776.21
[array(['/img/bc_guide_analytics.png', 'Bigcommerce Analytics Setting'], dtype=object) array(['/img/bc_guide_analytics_enable_google.png', 'Bigcommerce Providers'], dtype=object) array(['/img/bc_guide_analytics_click_google.png', 'Bigcommerce Google Analytics'], dtype=object) array(['/img/bc_guide_analytics_paste_and_save.png', 'Bigcommerce Tracking Code'], dtype=object) array(['/img/bc_guide_ld_coupon_settings.png', 'Bigcommerce Discount Settings'], dtype=object) array(['/img/bc_guide_ld_coupon_settings2.png', 'Bigcommerce Discount Settings Manage'], dtype=object)]
docs.leaddyno.com
ohai (executable)

Examples

The following examples show how to use the Ohai command-line tool:

Run a plugin independently of a chef-client run

An Ohai plugin can be run independently of a chef-client run. First, ensure that the plugin is located in the /plugins directory and then use the -f option when running Ohai from the command line. For example, a plugin named sl_installed may look like the following:

    Ohai.plugin(:Sl) do
      provides "sl"

      collect_data(:default) do
        sl Mash.new
        if ::File.exist?("/usr/games/sl")
          sl[:installed] = true
        else
          sl[:installed] = false
        end
        # sl[:installed] = ::File.exist?("/usr/games/sl")
      end
    end

To run that plugin from the command line, use the following command:

    $ ohai --directory /path/to/directory sl

The command will return something similar to:

    {
      "sl": {
        "installed": true
      }
    }
https://docs.chef.io/ctl_ohai.html
2017-02-19T14:20:54
CC-MAIN-2017-09
1487501169776.21
[]
docs.chef.io
    def geocode(String location) {
        // implementation returns [48.824068, 2.531733] for Paris, France
        [48.824068, 2.531733]
    }

    def (_lat, lon_long) = geocode("Paris, France")

    assert _lat == 48.824068
    assert lon_long == 2.531733

And you can also define the types of the variables in one shot as follows:
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=2769&selectedPageVersions=28&selectedPageVersions=27
2014-08-20T08:50:39
CC-MAIN-2014-35
1408500801235.4
[]
docs.codehaus.org
Administration Guide

Assign software tokens to a user account

You must assign the software tokens that BlackBerry® device users can use to authenticate to a Wi-Fi® network or VPN network to the user accounts. Depending on the number of software token records that are available to you, you can assign up to three software tokens to each user account.

1. In the BlackBerry Administration Service, on the BlackBerry solution management menu, expand User.
2. Click Manage users.
3. Click the display name for the user account.
4. Click Edit user.
5. On the Software tokens tab, type the serial number of the software token.
6. To import the software token seed file for the user account, perform the following actions:
   - If you configured a password in the RSA® Authentication Manager so that you can encrypt the .sdtid file, type and confirm the password.
   - In the Timeout (minutes) field, type the length of time, in minutes, that the Wi-Fi enabled BlackBerry device takes to cache the PIN.
7. Click the Add icon.
8. Click Save all.
http://docs.blackberry.com/en/admin/deliverables/25767/Assign_software_tokens_to_usr_account_602821_11.jsp
2014-08-20T08:56:09
CC-MAIN-2014-35
1408500801235.4
[]
docs.blackberry.com
Translations for Joomla! have a number of technical requirements and guidelines. For Joomla 2.5.x see Making a Language Pack for Joomla.

- All ini files need to be saved as UTF-8 - No BOM. See How to create a language pack.
- See the Joomla! Translation Policy.
- See also Typical Language File Headers.
- Sample data are created by exporting tables from a Joomla! database to a file called sample_data.sql.
http://docs.joomla.org/index.php?title=Technical_Guidelines_for_Translation&oldid=120214
2014-08-20T10:26:12
CC-MAIN-2014-35
1408500801235.4
[]
docs.joomla.org
animation-fill-mode

W3C Working Draft

Summary

Defines what values are applied by the animation outside the time it is executing (before and after the animation).

By default, an animation does not affect property values between the time it is applied (when the animation-name property is set on an element) and the time it begins execution (determined by the animation-delay property). Also, by default an animation does not affect property values after the animation ends (determined by the animation-duration property). The animation-fill-mode property can override this behavior.

Syntax

    animation-fill-mode: backwards
    animation-fill-mode: both
    animation-fill-mode: forwards
    animation-fill-mode: none

Values

- none: Property values do not change before the animation starts, and they return to their original state when the animation ends. This is the default behavior.
- forwards: When the animation ends (as determined by its animation-iteration-count), properties retain the values set by the final keyframe. If animation-iteration-count is zero, apply the values that would start the first iteration.
- backwards: If the animation is delayed by animation-delay, properties assume values set by the first keyframe while waiting for the animation to start. These are either the values of the from keyframe (when animation-direction is normal or alternate) or those of the to keyframe (when animation-direction is reverse or alternate-reverse). When the animation ends, properties revert to their original state.
- both: Values set by the first and last keyframes are applied before and after the animation.

Examples

View live example. An example of a mobile-like interface in which two concurrent animations displace content with a banner header. Without any animations, both elements would overlay the same screen area. In the moveContent animation, the fill mode of forwards means its end state (moved downward) persists after it finishes executing. In the insertBanner animation, the fill mode of backwards means its start state (off-screen) takes precedence over the element's CSS during the delay before the animation executes. (In the subsequent scrollBanner animation, the fill-mode is explicitly set to none to keep its initial state from overriding that of the previous animation.)

CSS

    @keyframes scrollBanner {
      from { transform : translateX(0) }
      17%  { transform : translateX(0%) }
      20%  { transform : translateX(-20%) }
      37%  { transform : translateX(-20%) }
      40%  { transform : translateX(-40%) }
      57%  { transform : translateX(-40%) }
      60%  { transform : translateX(-60%) }
      77%  { transform : translateX(-60%) }
      80%  { transform : translateX(-80%) }
      97%  { transform : translateX(-80%) }
      to   { transform : translateX(0%) }
    }

Usage

Can also be a comma-separated list of fill modes, e.g., forwards, none, backwards, where each fill mode is applied to the corresponding ordinal position value of the animation-name property.

Notes

This is an experimental specification, and therefore not completely finalized. Syntax and behavior are still subject to change in future versions.

See also

Other articles

- Making things move with CSS3 animations
- @keyframes
- animation
- animation-delay
- animation-direction
- animation-duration
- animation-iteration-count
- animation-name
- animation-play-state
- animation-timing-function

External resources

- See also Val Head's examples with tutorial video.

Attribution

This article contains content originally from external sources. Portions of this content come from the Microsoft Developer Network: Windows Internet Explorer API reference.
http://docs.webplatform.org/wiki/css/properties/animation-fill-mode
2014-08-20T08:47:44
CC-MAIN-2014-35
1408500801235.4
[]
docs.webplatform.org
... For the 2.2.x branch the validation module has:

- @completed goal@

Module Status

The validation module is stable.

IP Check

Brent Owens sent the following email to the list on April 17th, 2006 ...

This is a bad idea for two reasons:

- it is not strongly typed
- it causes confusion with the namespace:typeName used during GML output

There are two solutions:

- use the TypeName (extends Name) from GeoAPI, as provided for in 2.3 Feature Model development
- use the Catalog System IGeoResource (an actual resource handle)

Both offer strong typing and avoid the possibility of confusion. ...
http://docs.codehaus.org/pages/diffpages.action?pageId=50255&originalId=228168512
2014-08-20T09:06:45
CC-MAIN-2014-35
1408500801235.4
[]
docs.codehaus.org
The Standard Directory Layout

- For every new project, you need to learn a new way of structuring your files.
- You need to specify a lot of redundant configuration in order to make your build tool work.

Nevertheless, not all projects use the same types of files (Java, XML, HTML, JSP, Groovy, SQL scripts, ...), so it might be hard to standardize the directory layout.

Standard Directory Layout Pattern

Maven standard directory layout strategy:
http://docs.codehaus.org/display/MAVENUSER/The+Standard+Directory+Layout
2014-08-20T08:59:31
CC-MAIN-2014-35
1408500801235.4
[array(['/download/attachments/48030/directory-layout-2.jpg?version=4&modificationDate=1144634059944&api=v2', None], dtype=object) ]
docs.codehaus.org
Crate rusty_green_kernel

Welcome to rusty-green-kernel. This crate contains routines for the evaluation of sums of the form

$$f(\mathbf{x}_i) = \sum_j g(\mathbf{x}_i, \mathbf{y}_j)c_j$$

and the corresponding gradients

$$\nabla_{\mathbf{x}}f(\mathbf{x}_i) = \sum_j \nabla_{\mathbf{x}}g(\mathbf{x}_i, \mathbf{y}_j)c_j.$$

The following kernels are supported.

- The Laplace kernel: $g(\mathbf{x}, \mathbf{y}) = \frac{1}{4\pi|\mathbf{x} - \mathbf{y}|}$
- The Helmholtz kernel: $g(\mathbf{x}, \mathbf{y}) = \frac{e^{ik|\mathbf{x} - \mathbf{y}|}}{4\pi|\mathbf{x} - \mathbf{y}|}$
- The modified Helmholtz kernel: $g(\mathbf{x}, \mathbf{y}) = \frac{e^{-\omega|\mathbf{x} - \mathbf{y}|}}{4\pi|\mathbf{x} - \mathbf{y}|}$

Within the library the $\mathbf{x}_i$ are named targets and the $\mathbf{y}_j$ are named sources. We use the convention that $g(\mathbf{x}_i, \mathbf{y}_j) := 0$ whenever $\mathbf{x}_i = \mathbf{y}_j$.

The library provides a Rust API, C API, and Python API.

Installation hints

The performance of the library strongly depends on being compiled with the right parameters for the underlying CPU. Almost any modern CPU supports AVX2 and FMA. To activate these features compile with

    export RUSTFLAGS="-C target-feature=+avx2,+fma"
    cargo build --release

The activated compiler features can also be tested with cargo rustc -- --print cfg.

To compile and install the Python module, make sure that the desired Python virtual environment is active. The installation is performed using maturin, which is available from PyPI and conda-forge. After compiling the library as described above, use

    maturin develop --release -b cffi

to compile and install the Python module. It is important that the RUSTFLAGS environment variable is set as stated above. The Python module is called rusty_green_kernel.

Rust API

The sources and targets are both arrays of type ndarray<T> with T=f32 or T=f64. For M targets and N sources the sources are a (3, N) array and the targets are a (3, M) array.

To evaluate the kernel matrix of all interactions between a vector of sources and a vector of targets for the Laplace kernel use

    kernel_matrix = make_laplace_evaluator(sources, targets).assemble()

To evaluate $f(\mathbf{x}_i) = \sum_j g(\mathbf{x}_i, \mathbf{y}_j)c_j$ we define the charges as an ndarray of size (ncharge_vecs, nsources), where ncharge_vecs is the number of charge vectors we want to evaluate and nsources is the number of sources. For Laplace and modified Helmholtz problems charges must be of type f32 or f64, and for Helmholtz problems it must be of type Complex<f32> or Complex<f64>. We can then evaluate the potential sum by

    potential_sum = make_laplace_evaluator(sources, targets).evaluate(
        charges, EvalMode::Value, ThreadingType::Parallel)

The result potential_sum is a real ndarray (for Laplace and modified Helmholtz) or a complex ndarray (for Helmholtz). It has the shape (ncharge_vecs, ntargets, 1). For EvalMode::Value the function only computes the values $f(\mathbf{x}_i)$. For EvalMode::ValueGrad the array potential_sum is of shape (ncharge_vecs, ntargets, 4) and returns the function values and the three components of the gradient along the innermost dimension. The value ThreadingType::Parallel specifies that the evaluation is multithreaded. For this the Rayon library is used. For the value ThreadingType::Serial the code is executed single-threaded. The enum ThreadingType is defined in the crate rusty-kernel-tools.

Basic access to sources and targets is provided through the trait DirectEvaluatorAccessor, which is implemented by the struct DirectEvaluator. The Helmholtz kernel uses the trait ComplexDirectEvaluator and the Laplace and modified Helmholtz kernels use the trait RealDirectEvaluator.

C API

The C API in c_api provides direct access to the functionality in a C compatible interface. All functions come in variants for f32 and f64 types. Details are explained in the documentation of the corresponding functions.

Python API

For details of the Python module see the Python documentation in the rusty_green_kernel module.
https://docs.rs/rusty-green-kernel/0.1.0/rusty_green_kernel/
2021-06-12T18:40:55
CC-MAIN-2021-25
1623487586239.2
[]
docs.rs
Admin Guide: Ingest Wizard ~ how it works

This document provides an overview of the various ways you can use the Ingest Wizard to ingest data, and includes the following topics:

- Accessing the Ingest Wizard
- Use case #1: ingest one file
- Use case #2: transform your data
- Use case #3: add a Lookup table
- Use case #4: run a batch job
- Use case #5: ingest from multiple data sources into one table

For information on how to ingest data through the Interana CLI, see the CLI ingest Quick Start.

Accessing the Ingest Wizard

You must have Interana admin role permissions to be able to configure ingest on your Interana cluster. To bring up the Ingest Wizard, do the following:

1. Open a browser window.
2. Navigate to https://<cluster_location>/?import.
3. Log in with your admin credentials, or the Interana default login credentials:
   - Username: root@localhost
   - Password: root

Use case #1: ingest one file

Ingesting one file using the Ingest Wizard follows this basic process:

1. Name your event table and choose a sample file.
2. Configure the table and manage transformations, as necessary.
3. Ingest the sample file.

For Ingest Wizard detailed walkthrough instructions, see the Ingest Wizard walkthrough. For a list of supported file types, see the Data types reference.

Use case #2: transform your data

It's not unusual to look at the preview of your data and realize that it isn't making it into Interana quite the way you would like. That could be because your data needs cleansing, or your logging format needs some transformation before it can be read naturally by Interana. In either case, you can use the Interana Transformer Library to do lightweight transformation and cleansing of your data.

For a reference listing of all transformers available in the system, see Transformer steps and examples, and for a great how-to example, check out Transform your data. See the Intro to Lookup tables for more information.
https://docs.scuba.io/2/Guides/Admin_Guide/Admin_Guide%3A_Ingest_Wizard_~_how_it_works
2021-06-12T16:43:32
CC-MAIN-2021-25
1623487586239.2
[array(['https://docs.scuba.io/@api/deki/files/1838/clipboard_ee3fad50ebcebc0b4c3aeaf746e76c348.png?revision=1', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1840/clipboard_e683237c13a1567c02244d242cece8b01.png?revision=1', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1839/clipboard_e1788da2775158a7e77515d8b8877c8f4.png?revision=1', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1841/clipboard_e69e0e9bcb8e7a29268d7ace0ec122350.png?revision=1', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1843/clipboard_e1b7acc483d553f853e00c5bd9c787295.png?revision=1&size=bestfit&width=400&height=403', None], dtype=object) ]
docs.scuba.io
Event property An event property produces a single value for each event. An event property can exist in your source data, in which case the UI identifies it as a raw event property. Or a user can define a custom event property, in which case the UI identifies it as a manual event property. When you define an event property, you must choose a method. For example, if your log data includes a start and end time, you could define a new event property using the calculate method, "duration", as end time minus start time. You can use "duration" in a more complicated query (in an aggregation, split by, or filter) or to construct another event property. Then the top-level query can compute duration for any event in your data that has both a start and end time. Related terms More information - Building an event property in the User Guide.
https://docs.scuba.io/lexicon/Event_property
2021-06-12T18:31:53
CC-MAIN-2021-25
1623487586239.2
[]
docs.scuba.io
The Trifacta® platform uses multiple SQL databases to manage platform metadata. This section describes how to install and initialize these databases.

DB Installation Pre-requisites

- You must install a supported database distribution. For more information on the supported database versions, see System Requirements.
- If you need to use different ports, additional configuration is required. More instructions are provided later.
- Installation and configuration of the database cannot be completed until the Trifacta software has been installed. You should install the software on the Trifacta node first.
- If you are installing the databases on an instance of Amazon RDS, additional setup is required first. See Install Databases on Amazon RDS.
- Other pre-requisites specific to the database distribution may be listed in the appropriate section below.

If you are concerned about durability and disaster recovery of your Trifacta databases, see Backup and Recovery. Your enterprise backup procedures should include the Trifacta databases.
https://docs.trifacta.com/display/r060/Install+Databases
2021-06-12T18:05:38
CC-MAIN-2021-25
1623487586239.2
[]
docs.trifacta.com
3.4.3.3 <anchorkey>

The contents of the <anchorkey> element are ignored.

Content models

See appendix for information about this element in OASIS document type shells.

Inheritance

+ topic/keyword delay-d/anchorkey

Attributes

outputclass, and the attribute defined below.

@keyref (REQUIRED) - Defines a key that, when possible, is preserved in content generated from the DITA source material. Conref relationships that use this key are not resolved when generating that material, so that @conref might be resolved at run-time when an end user is reading the content.
https://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/part2-tech-content/langRef/base/anchorkey.html
2021-06-12T18:33:48
CC-MAIN-2021-25
1623487586239.2
[]
docs.oasis-open.org
Event Driven API

The Data Integration API Specifications describe the packages, classes, and objects of the Event Driven API in systemlib/cms-admin.jar.

Event Driven API architecture

The API is a collection of Java classes and objects that move data from the institution systems into Blackboard Learn. Concrete data from the institution systems are encapsulated as Java objects. The methods contained in the Java classes determine how the data is input into Blackboard Learn. The data input is controlled by persisters that process the appropriate method, convert the object into data that can be input into the Blackboard Learn database, and then input that data according to the method called.

Entities and persisters

There are two main types of objects in the API: entities (objects) and persisters (actions). Entities include the objects that represent data in the system, such as users. Persisters are behind-the-scenes methods that handle the storage of the entities into a persistent store or transient data format.

Operations

All data classes have methods to handle persistence actions. The following persistence operations are supported:

- Insert: Inserts a record into the Blackboard Learn database.
- Update: Updates an existing record in the Blackboard Learn database.
- Save: Updates an existing record if it already exists. Otherwise, if it does not exist, inserts the record in the Blackboard Learn database.
- Remove: Purges the record from the Blackboard Learn database.
- Change Key: (Person and Group, Course, and Organization) Changes the primary key. This will automatically update any related Memberships of the changed keys.

Create an object

To create an object in the system, instantiate a corresponding entity, set attributes on the object, and then call a persister method (insert, update, save, delete).

Persisters

The following Persisters are found in the Event Driven API:

- CourseSitePersister
- OrganizationPersister
- EnrollmentPersister
- OrganizationMembershipPersister
- StaffStudentPersister
- PersonPersister
- CourseCategoryPersister
- OrganizationCategoryPersister
- CourseCategoryMembershipPersister
- OrganizationCategoryMembershipPersister
- PortalRolePersister

Persist methods include changeKey, insert, remove, save, update, and clone. Change key is not relevant for membership type items. Clone is only relevant for Coursesite/Organization. More information about Persist methods can be found in the API specifications.

Loaders

The following Loaders are found in the Event Driven API:

- CourseSiteLoader
- OrganizationLoader
- EnrollmentLoader
- OrganizationMembershipLoader
- StaffStudentLoader
- PersonLoader
- CourseCategoryLoader
- OrganizationCategoryLoader
- CourseCategoryMembershipLoader
- OrganizationCategoryMembershipLoader
- PortalRoleLoader

Load methods include load by batch_uid and load by template. More information about Load methods can be found in the API specifications.

Data source loader and persister

The following Data Source Loader and Persister are found in the Event Driven API:

DataSourceLoader
- loadAll()
- loadAdminObjectCount()
- loadAllAdminObjectCounts()
- loadByBatchUid()

DataSourcePersister
- create()
- disableAdminObjectsByDataType()
- disableAllAdminObjects()
- modify()
- purgeAdminObjectsByDataType()
- purgeAllAdminObjects()
- purgeSnapshotSessions()
- removeByBatchUid()

Persistence package

Determining relationships

When using the API, there are specific steps that need to be followed to determine the relationships between the entity and the persistence classes.

The following is an example of how to get a Course site:

    blackboard.admin.data.course.CourseSite site = new blackboard.admin.data.course.CourseSite();

- The program must first initialize the BbServiceManager object. The method call is blackboard.platform.BbServiceManager.init( serviceConfig ); where serviceConfig is a java.io.File object. This object represents a link to the configuration file on the operating system. The file is a detailed list of all services that are active on the instance of the service manager as well as any of their configuration files. The BbServiceManager object is only initialized once for each execution of the application. A virtual host is also needed for proper setup. A virtual host can be obtained by first getting the Virtual Installation Manager using BBServiceManager.lookupService(VirtualInstallationManager.class). The Virtual Installation Manager has a getVirtualHost(String id) method which returns the virtual host. Next, the ContextManager must be retrieved using BBServiceManager.lookupService(ContextManager.class). Finally, the context can be set by calling the ContextManager's setContext() method and passing in the virtual host as an argument.

- The following code sample assumes the "SERVICE_CONFIG_DIR" and "VIRTUAL_HOST_KEY" properties will be set, probably through -D parameters if it is used in a command line application. The SERVICE_CONFIG_DIR should be set to the full path of the Blackboard config directory, while the VIRTUAL_HOST_KEY needs to be the virtual installation you want to test; by default it is bb_bb60.

    // Initialize the BbServiceManager and set the context
    // Follow the steps below to determine the relationships between the entity and persistence classes.
    try {
        blackboard.platform.BbServiceManager.init(System.getProperties().getProperty("SERVICE_CONFIG_DIR")
            + "service-config-snapshot-jdbc.properties");

        // The virtual host is needed to establish the proper database context.
        VirtualInstallationManager vm = (VirtualInstallationManager)
            BbServiceManager.lookupService(VirtualInstallationManager.class);
        String vhostUID = System.getProperty("VIRTUAL_HOST_KEY", "bb_bb60");
        VirtualHost vhost = vm.getVirtualHost(vhostUID);
        if (vhost == null) {
            throw new Exception("Virtual Host '" + vhostUID + "' not found.");
        }

        // Now that the vhost is set we can set the context based on that vhost
        ContextManager cm = (ContextManager) BbServiceManager.lookupService(ContextManager.class);
        Context context = cm.setContext(vhost);
    } catch (Exception e) {
        System.out.println("Exception trying to init the BbPersistenceManager\n " + e.toString() + "..exiting.\n");
        System.exit(0);
    }

- The controller creates an entity object and sets its attributes.

- The controller requests a persist action off the Loader / Persister:

    CourseSiteLoader cLoader = (CourseSiteLoader) BbServiceManager.getPersistenceService()
        .getDbPersistenceManager().getLoader(CourseSiteLoader.TYPE);

    CourseSitePersister cPersister = (CourseSitePersister) BbServiceManager.getPersistenceService()
        .getDbPersistenceManager().getPersister(CourseSitePersister.TYPE);

    PersistenceManager.getLoader = PersonPersister.Default.getInstance()

- The controller catches any persistence exceptions that occur. Repeat steps 2, 3, and 4 as needed for different entities and different persist actions.
https://docs.blackboard.com/learn/b2/advanced/event-driven-api
2021-06-12T16:48:44
CC-MAIN-2021-25
1623487586239.2
[]
docs.blackboard.com
LTI Advantage - Names and Roles

Overview

This document gives updates on Names and Roles as new features become available. For the definitive specifications, always refer to the published IMS documentation.

Student Preview User Now Indicated By TestUser Role via Names and Roles Service

This new functionality is seen in the Names and Roles service as implemented in Blackboard Learn. When your LTI 1.3 tool reaches back to Blackboard Learn using the Names and Roles service to get a list of memberships, a Student Preview user listed in the course memberships will have a new role listed in the roles claim. Ex:

    "": [
        ""
    ],

As of this writing, 2020.04.07, this addition to the Names and Roles service has not yet been added to the IMS Names and Role Provisioning Services documentation. It will likely be added as an addendum soon.
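To illustrate how a tool might act on this role, here is a minimal TypeScript sketch that filters a Names and Roles membership payload for a given role URI. It is only a sketch: the members, roles, and user_id field names are assumptions based on the typical NRPS response shape, and the exact TestUser role URI (elided in the example above) should be taken from the IMS documentation.

```ts
// Sketch only. Assumes the NRPS membership payload exposes a `members` array whose
// entries carry `roles` (an array of role URIs) and `user_id`. Verify the field names
// and the actual TestUser role URI against the IMS documentation before relying on this.
interface NrpsMember {
  user_id: string;
  roles: string[];
}

interface NrpsMembershipResponse {
  members: NrpsMember[];
}

// Return the members that carry a given role URI (for example, the TestUser role).
function membersWithRole(payload: NrpsMembershipResponse, roleUri: string): NrpsMember[] {
  return payload.members.filter((member) => member.roles.includes(roleUri));
}

// Usage: pass the TestUser role URI published by IMS to find Student Preview users.
// const previewUsers = membersWithRole(response, TEST_USER_ROLE_URI);
```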
https://docs.blackboard.com/standards/lti/tutorials/names-and-roles
2021-06-12T17:48:23
CC-MAIN-2021-25
1623487586239.2
[]
docs.blackboard.com
Role badges If your community administrator has activated role badges, then you can see small icons next to certain community user names. A role badge is an icon used to indicate the roles or responsibilities of community users. You might use this to find someone who can answer a question or identify people whose answers you can count on. Hover your mouse over the icon to see the name of the role. Here's a list of possible roles for the badges: - Administrator - Champion - Employee - Expert - Moderator - Support
https://docs.jivesoftware.com/9.0_on_prem_int/end_user/jive.help.core/user/RoleBadges.html
2021-06-12T16:36:41
CC-MAIN-2021-25
1623487586239.2
[]
docs.jivesoftware.com
Warning

The DevOps Portal has been deprecated in the Q4`18 MCP release tagged with the 2019.2.0 Build ID.

Note

Configuring notifications through the Push Notification service is deprecated. Mirantis recommends that you configure Alertmanager-based notifications as described in MCP Operations Guide: Enable Alertmanager notifications.

The Push Notification service can route notifications based on the alarms triggered by the issues that are found by Prometheus Alertmanager through email.

Warning

This section describes how to manually configure the Push Notification service Reclass metadata to integrate email routing for notifications in an existing OSS deployment. Therefore, if you want to configure the email routing configuration, perform the procedure below. Otherwise, if you are performing the initial deployment of your MCP environment, you should have already configured your deployment model with the default Simple Mail Transfer Protocol (SMTP) parameters for the Push Notification service as described in OSS parameters and the OSS webhook parameters as described in StackLight LMA product parameters. In this case, skip this section.

Note

The Push Notification service only routes the received notifications to email recipients. Therefore, you must also provide the Prometheus Alertmanager service with a predefined alert template containing an

To configure email integration for OSS manually:

1. Obtain the following data:

   - pushkin_smtp_host: SMTP server host for email routing. Gmail server host is used by default (smtp.gmail.com).
   - pushkin_smtp_port: SMTP server port for email routing. Gmail server port is used by default (587).
   - webhook_from: Source email address for notifications sending.
   - pushkin_email_sender_password: Source email password for notifications sending.
   - webhook_recipients: Comma-separated list of notification recipients.

2. Verify that the following services are properly configured and deployed:

   - Elasticsearch
   - PostgreSQL

   Note: For the configuration and deployment details, see Configure services in the Reclass model and Deploy OSS services manually.

3. In the /srv/salt/reclass/classes/cluster/${_param:cluster_name}/oss/server.yml file, define the following parameters:

       parameters:
         _param:
           pushkin_smtp_host: smtp.gmail.com
           pushkin_smtp_port: 587
           webhook_from: [email protected]
           pushkin_email_sender_password: your_sender_password
           webhook_recipients: "[email protected],[email protected]"

4. Push all changes of the model to the dedicated project repository.

5. Refresh pillars and synchronize Salt modules:

       salt '*' saltutil.refresh_pillar
       salt '*' saltutil.sync_modules

6. If you have the running pushkin docker stack, remove it and apply the following Salt states:

       salt -C 'I@docker:swarm:role:master' state.sls docker.client
https://docs.mirantis.com/mcp/q4-18/mcp-deployment-guide/deploy-mcp-cluster-manually/deploy-devops-portal/configure-email-integration.html
2021-06-12T16:33:00
CC-MAIN-2021-25
1623487586239.2
[]
docs.mirantis.com
From the left menu, select Issues to manage all of the issues for your journal.

Future Issues are all of your unpublished issues. You can create as many of these as you wish, and schedule submissions to any of them.

To create a new issue, use the Create Issue link and fill in the form. There are spaces to add volume, number, year, and title information (e.g., Special Issue #1), as well as a description and a cover image (if needed).

You can also use the Order link to reorder the entries.

Using that same blue arrow will also reveal a Preview link, letting you get a look at the issue before publishing it. Once you are happy with the issue, hit the Publish Issue link to publish it on your journal website.

By default, all registered users will be notified via email when a new issue is published from the list of Future Issues. To not send a notification of a new issue published, uncheck the box beside "Send notification email to all registered users" before pressing OK.

You can use the Delete link to remove the issue. Any assigned articles will revert to their unpublished status.

This tab lists all of your published issues. As with Future Issues, using the blue arrow will reveal similar options as described above (Edit, Preview, etc.).
https://docs.pkp.sfu.ca/learning-ojs/3.1/en/issues.html
2021-06-12T18:19:32
CC-MAIN-2021-25
1623487586239.2
[]
docs.pkp.sfu.ca
Content Security Policy

Streetmix adopts a Content Security Policy (CSP), which permits only approved content to appear or run on the application. The intent is to mitigate certain types of malicious attacks, like cross-site scripting (XSS) and data injection attacks. As we become a bigger platform, with user accounts and access to user data, it's important we adopt good security practices, and CSP is a web standard that is within our reach. However, CSP can also be quite limiting. Third-party service integrations, cloud-hosted assets like fonts and images, and browser plugins can break, if they're not explicitly whitelisted. Currently, our CSP directives are very strict. This section details more information about the way we implement CSP.

- CSP directives are sent in HTTP headers. This is the most secure way to set a CSP directive. We use helmet to help write the directive string. This is set up when the Express server is starting.
- We write the most restrictive directive we can. Currently we avoid allowing "any" content of any type, if possible.
- Inline scripts require a nonce value. Since arbitrary inline scripts are not allowed, we "approve" inline scripts by giving them a unique ID. A nonce value can be anything, but our recommendation is to generate a uuid, and make it available to both the CSP definition, and the HTML templates which can then inject the nonce as a variable. (See the sketch at the end of this page.)
- Note: we avoid using SHA hashes, which is an alternative way to allow inline scripts. We don't have a way to automatically generate them, so if they're manually generated and added to a CSP header, any change to the inline script, including whitespace formatting, can break the allow directive. They're brittle, so we don't use them.
- CSP violations are blocked in production, but allowed in development. While blocking unknown scripts and assets helps secure the production deployment, this can be annoying in development environments, where new scripts and assets may be implemented or experimented with. So resources that trigger a CSP violation are allowed when the NODE_ENV environment variable is set to development. CSP violations are logged to console, so please keep an eye on the console output, which can tell you what directives you will need to update to make new or updated code work in the production environment.
- Note: Another reason why CSP is relaxed in development (violations are allowed, but reported) is because developer extensions, such as the React or Redux inspectors, are disabled in Firefox if CSP is too strict.
- CSP violations are reported to the /services/csp-report endpoint. All reports are logged, in both production and development environments. Some CSP violations are expected, and can be safely ignored. (We don't list the expected CSP violations here because this documentation inevitably lags behind changes in the codebase. You will begin to recognize expected violations as you become familiar with local development.) Remember, in development mode, resources that are allowed can still trigger a CSP violation report. Please remember to update the CSP directive if they are intended for production.
- Note: we do not have automated testing for CSP violations. In other words, continuous integration cannot catch if new or changed code will trigger a violation or issue a report. Do not depend on automated testing to catch CSP violations for you.
- We log all reports in production so that we can see if users are doing anything that we should be adding to the CSP directive.
(For instance: user profile images are hosted on a wide variety of cloud-based cache servers, and we still need to decide whether to allow each of these servers manually, or allow all image resources more liberally.) - Avoid writing separate CSP headers for development versus production environments. We have tried this in the past, and have seen instances where something that “just worked” in development mysteriously stopped working in production, because the CSP headers were different. This “ease of development” strategy became a footgun that made CSP feel worse. This is why we use the strategy of sending CSP headers even in development, and reporting all violations, so that we can catch problems earlier. - The one exception we make for development is to allow websockets for Parcel’s hot reloader. This is considered acceptable because the hot reloader will never be present in production.
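As a rough illustration of the helmet + nonce approach described above, here is a minimal sketch for an Express app. It is not the actual Streetmix implementation: names such as cspNonce are placeholders, and the directive list is deliberately abbreviated.

```ts
import crypto from 'crypto';
import express from 'express';
import helmet from 'helmet';

const app = express();

// Generate a fresh nonce per request and expose it to templates via res.locals,
// so inline scripts can be rendered with the matching nonce attribute.
app.use((req, res, next) => {
  res.locals.cspNonce = crypto.randomUUID();
  next();
});

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      // Only inline scripts carrying this request's nonce are allowed to run.
      scriptSrc: ["'self'", (req, res) => `'nonce-${(res as any).locals.cspNonce}'`],
      // Violations are POSTed to the reporting endpoint mentioned above.
      reportUri: ['/services/csp-report'],
    },
  })
);
```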
https://streetmix.readthedocs.io/en/latest/technical/csp/
2021-06-12T17:42:18
CC-MAIN-2021-25
1623487586239.2
[]
streetmix.readthedocs.io
June 08, 2021 Release

This release notes document describes the enhancements and changes, fixed and known issues that exist for the Citrix ADM service release Build June 08, 2021.

Notes

This release notes document does not include security related fixes. For a list of security related fixes and advisories, see the Citrix security bulletin.

What's New

The enhancements and changes that are available in Build June 08, 2021.

View monitor status to check application health

You can now use the Application dashboard in the ADM GUI under Application > Dashboard, to view the health monitor status of an application and the failure message if any. For details, see Application details. [NSADM-46935]

View app security configuration in app dashboard

When you drill down into an application from Applications > Dashboard, the Security tab now enables you to view if the application is configured with app security. If the WAF or Bot configuration is not enabled for the application, you can use the StyleBook to configure it. [NSADM-66873]

Improvements to RTT calculation

Citrix ADC instances might not be able to calculate the RTT value for some transactions. For such transactions, the web transaction analytics and Web Insight in ADM display the RTT value as < 1 ms. The RTT calculation for such transactions is improved and ADM now displays the RTT values as:

- NA - Displays when the ADC instance cannot calculate the RTT.
- < 1 ms - Displays when the ADC instance calculates the RTT in decimals between 0 ms and 1 ms. For example, 0.22 ms.

[NSADM-65648]

Web Insight - View details for cipher related issues

In Applications > Web Insight, you can now view details for the following SSL parameters under SSL Errors:

- Cipher mismatch
- Unsupported Ciphers

Under SSL errors, click an SSL parameter (Cipher Mismatch or Unsupported Ciphers) to view details such as the SSL cipher name, the recommended actions, and the details of the affected applications and clients. For more information, see Web Insight. [NSADM-62525]

Support for identification and remediation of CVE-2020-8299 and CVE-2020-8300

Citrix ADM security advisory now supports identification and remediation of the newly announced CVE-2020-8299 and CVE-2020-8300. Remediation for CVE-2020-8299 requires an upgrade of the vulnerable ADC instances to a release and build that has the fix. Remediation for CVE-2020-8300 requires a two-step process:

1. Upgrade the vulnerable ADC instance to a release and build that has the fix.
2. Apply configuration jobs.

For details about how to remediate CVE-2020-8300, see Remediate vulnerabilities for CVE-2020-8300. For more information about security advisory and how to remediate other CVEs, see Security Advisory.

Note: It might take a couple of hours for the security advisory system scan to conclude and reflect the impact of CVE-2020-8299 and CVE-2020-8300 in the security advisory module. To see the impact sooner, start an on-demand scan by clicking Scan-Now. [NSADM-71136]

Fixed Issues

The issues that are addressed in Build June 08, 2021.

Management and Monitoring

If you rename a scheduled configuration job and delete it from the ADM GUI, the ADM server does not remove this job. [NSADM-73577]

Miscellaneous

While registering the ADM agent, the default password must be changed. If the new password contains a special character such as an open and/or close bracket, a syntax error occurs. As a result, you cannot log on to the ADM agent. [NSHELP-276.
https://docs.citrix.com/en-us/citrix-application-delivery-management-service/release-notes/citrix-adm-service-june-08-2021.html
2021-06-12T18:05:19
CC-MAIN-2021-25
1623487586239.2
[array(['/en-us/citrix-application-delivery-management-service/media/sec-violation-appdashboard.png', 'StyleBook configure'], dtype=object) array(['/en-us/citrix-application-delivery-management-service/media/ssl-error-details.png', 'SSL error details'], dtype=object) ]
docs.citrix.com
Functional overview The Jive for SharePoint On-Prem integration allows customers to leverage their existing SharePoint investment with their Jive community. With the integration, users can collaborate on documents stored in SharePoint, utilize SharePoint security policies and workflows, and view office documents in Jive utilizing the office web-apps viewer. The integration works by connecting a Jive place (space, group, or project) to a SharePoint site (top-level site or nested sub-site). The connected site can be an existing site or a site that is provisioned by the integration. Within the connected site, the customer defines a document library that is used as the upload target for any file uploaded in the connected Jive group. Also, the customer can select one or more additional document libraries that sync their content to Jive. Once the connection is made, any file action on the connected document libraries (such as upload, delete, or update) will reflect on the Jive connected site. And any file operation on the Jive side will reflect on the SharePoint primary document library. In addition, Jive manages the SharePoint permissions for sites that were provisioned by the integration, by syncing the Jive entitled user with their associated entitlements to the SharePoint document library.
https://docs.jivesoftware.com/cloud_int/comm_mgr/jive.help.jiveforsharepointV5/Admin/FunctionalOverview.html
2021-06-12T17:15:01
CC-MAIN-2021-25
1623487586239.2
[]
docs.jivesoftware.com
SCALAR Int

The Int scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.

Required by

- Account: Account interface shared by all kind of accounts (Bot, Collective, Event, User, Organization)
- AccountCollection: A collection of "Accounts"
- AccountReferenceInput
- AccountStats: Stats for the Account
- AccountWithContributions: An account that can receive financial contributions
- AccountWithHost: An account that can be hosted by a Host
- Amount: A financial amount.
- AmountInput: Input type for an amount with the value and currency
- Bot: This represents a Bot account
- Collection: Collection interface shared by all collection types
- Collective: This represents a Collective account
- CommentCollection: A collection of "Comments"
- CommentCreateInput
- ConnectedAccount: This represents a Connected Account
- ConnectedAccountReferenceInput
- Contributor: A person or an entity that contributes financially or by any other mean to the mission of the collective. While "Member" is dedicated to permissions, this type is meant to surface all the public contributors.
- ContributorCollection: A collection of "Contributor"
- Conversation: A conversation thread
- ConversationCollection: A collection of "Conversations"
- ConversationStats
- Credit: This represents a Credit transaction
- CreditCardCreateInput
- Debit: This represents a Debit transaction
- Event: This represents an Event account
- Expense: This represents an Expense
- ExpenseCollection: A collection of "Expenses"
- ExpenseInvitee
- ExpenseItem: Fields for an expense item
- ExpenseItemCreateInput
- ExpenseItemInput
- ExpenseReferenceInput
- Fund: This represents an Project account
- Host: This represents an Host account
- HostApplicationCollection: A collection of "HostApplication"
- HostCollection: A collection of "Hosts"
- HostMetrics: Host metrics related to collected and pending fees/tips.
- HostPlan: The name of the current plan and its characteristics.
- Individual: This represents an Individual account
- MemberCollection: A collection of "Members" (ie: Organization backing a Collective)
- MemberOfCollection: A collection of "MemberOf" (ie: Collective backed by an Organization)
- Mutation: This is the root mutation
- NewAccountOrReferenceInput
- Order: Order model
- OrderCollection: A collection of "Orders"
- OrderCreateInput: Input to create a new order
- OrderTax
- Organization: This represents an Organization account
- PaymentMethod: PaymentMethod model
- ProcessExpensePaymentParams: Parameters for paying an expense
- Project: This represents an Project account
- Query: This is the root query
- TagStat: Statistics for a given tag
- Tier: Tier model
- TierCollection: A collection of "Tiers"
- TierReferenceInput
- Transaction: Transaction interface shared by all kind of transactions (Debit, Credit)
- TransactionCollection: A collection of Transactions (Debit or Credit)
- TransactionReferenceInput
- TransferWiseFieldGroup
- Update: This represents an Update
- UpdateCollection: A collection of "Updates"
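Because Int is a 32-bit value while JavaScript numbers are double-precision floats, a client may want to check a value against this range before sending it as an Int variable (for example, a limit or offset argument on one of the collection types above). The following TypeScript helper only illustrates the range stated on this page; it is not part of the API:

```ts
// 32-bit signed integer bounds, matching the GraphQL Int scalar described above.
const INT32_MIN = -(2 ** 31);
const INT32_MAX = 2 ** 31 - 1;

// Returns true if `value` can safely be sent as a GraphQL Int variable.
function isGraphQLInt(value: number): boolean {
  return Number.isInteger(value) && value >= INT32_MIN && value <= INT32_MAX;
}

// Example: isGraphQLInt(100) === true, isGraphQLInt(2 ** 31) === false.
```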
https://graphql-docs-v2.opencollective.com/int.doc.html
2021-06-12T18:21:55
CC-MAIN-2021-25
1623487586239.2
[]
graphql-docs-v2.opencollective.com
Turning on Firefox tests for a new configuration

You are ready to go with turning on Firefox tests for a new config. Once you get to this stage, you will have seen a try push with all the tests running (many not green) to verify some tests pass and there are enough machines available to run tests. For the purpose of this document, assume you are tasked with upgrading the Windows 10 OS from 1803 -> 1903. To simplify this we can call this windows_1903, and we need to:

- push to try
- analyze test failures
- disable tests in manifests
- repeat try push until no failures
- land changes and turn on tests
- turn on run only failures
- file bugs for test failures

There are many edge cases, and I will outline them inside each step.

Push to Try server

As you have new machines (or cloud instances) available with the updated OS/config, it is time to push to try.

- In order to run all tests, we would need to execute:

      ./mach try fuzzy --no-artifact -q 'test-windows !-raptor- !-talos-' --rebuild 5

- There are a few exceptions here:
  - perf tests don't need to be run (hence the !-raptor- !-talos-)
  - need to make sure we are not building with artifact builds (hence the --no-artifact)
  - there are jobs hidden behind tier-3, some for a good reason (code coverage is a good example, but fission tests might not be green)
- The last piece to sort out is running on the new config; here are some considerations for new configs:
  - duplicated jobs (i.e. fission, a11y-checks), you can just run those specific tasks:

        ./mach try fuzzy --no-artifact -q 'test-windows fission' --rebuild 5

  - new OS/hardware (i.e. aarch64, OS upgrade), you need to reference the new hardware, typically with --worker-override:

        ./mach try fuzzy --no-artifact -q 'test-windows' --rebuild 5 --worker-override t-win10-64=t-win10-64-1903

  - the risk here is a scenario where hardware is limited; then --rebuild 5 will create too many tasks and some will expire.
  - in low hardware situations, either run a subset of tests (i.e. web-platform-tests, mochitest), or --rebuild 2 and repeat.

Analyze test failures

A try push will take many hours; it is best to push in the afternoon, ensure some jobs are running, then come back the next day. The best way to look at test failures is to use Push Health to avoid misleading data. Push Health will bucket failures into possible regressions, known regressions, etc. When looking at 5 data points (from --rebuild 5), this will filter out intermittent failures.

- There are many reasons you might have invalid or misleading data:
  - tests fail intermittently; we need a pattern to know if a failure is consistent or intermittent
  - we still want to disable high frequency intermittent tests, those are just annoying
  - you could be pushing off a bad base revision (regression or intermittent that comes from the base revision)
  - the machines you run on could be bad, skewing the data
  - infrastructure problems could cause jobs to fail at random places; repeated jobs filter that out
  - some failures could affect future tests in the same browser session or tasks
  - if a crash occurs, or we time out, it is possible that we will not run all of the tests in the task, therefore believing a test was run 5 times, but maybe it was only run once (and failed), or never run at all.

That is a long list of reasons to not trust the data; luckily, most of the time using --rebuild 5 will give us enough data and enough confidence that we found all failures and can ignore random/intermittent failures.

- Knowing the reasons for misleading data, here is a way to use Push Health.
- The main goal here is to know what <path>/<filenames> are failing, and to have a list of those. Ideally you would record some additional information like timeout, crash, failure, etc. In the end you might end up with:

      dom/html/test/test_fullscreen-api.html, scrollbar
      gfx/layers/apz/test/mochitest/test_group_hittest.html, scrollbar
      image/test/mochitest/test_animSVGImage.html, timeout
      browser/base/content/test/general/browser_restore_isAppTab.js, crashed

Disable tests in the manifest files

This part of the process can seem tedious and is difficult to automate without making our manifests easier to access programmatically. The code sheriffs have been using this documentation for training and reference when they disable intermittents. First you need to add a keyword to be available in the manifest (e.g. skip-if = windows_1903).

- There are many exceptions; the bulk of the work will fall into one of 4 categories:
  - manifestparser: *.ini (mochitest*, firefox-ui, marionette, xpcshell); easy to edit by adding a fail-if = windows_1903 # <comment>, a few exceptions here
  - reftest: *.list (reftest, crashtest); need to add a fuzzy-if(windows_1903, A, B), this is more specific
  - web-platform-test: testing/web-platform/meta/**.ini (wpt, wpt-reftest, etc.); need to edit/add testing/web-platform/meta/<path>/<testname>.ini, and add expected results
  - other (compiled tests, jsreftest, etc.); edit source code, ask for help
- Basically we want to take every non intermittent failure found from push health and edit the manifest; this typically means:
  - finding the proper manifest
  - adding the right text to the manifest

To find the proper manifest, it is typically <path>/<harness>.[ini|list]. There are exceptions, and if in doubt use searchfox.org/ to find the manifest which contains the testname. Once you have the manifest, open it in an editor, and search for the exact test name (there could be similar named tests).

Rerun try push, repeat as necessary

It is important to test your changes and, for a new platform that will be sheriffed, to rerun all the tests at scale. With your change in a commit, push again to try with --rebuild 5 and come back the next day. As there are so many edge cases, it is quite likely that you will have more failures; mentally plan on 3 iterations of this, where each iteration has fewer failures.

Once you get a full push to show no persistent failures, it is time to land those changes and turn on the new tests. There is a large risk here that the longer you take to find all failures, the greater the chance of:

- bitrot of your patch
- new tests being added which could fail on your config
- other edits to tests/tools which could affect your new config

Since the new config process is designed to find failures fast and get the changes landed fast, we do not need to ask developers for review; that comes after the new config is running successfully, when we notify the teams of what tests are failing.

land changes and turn on tests

After you have a green test run, it is time to land the patches. There could be changes needed to the taskgraph in order to add the new hardware type and duplicate tests to run on both the old and the new, or create a new variant and denote which tests to run on that variant.

- Using our example of windows_1903, this would be a new worker type that would require these edits:
  - transforms/tests.py (duplicate windows 10 entries)
  - test-platforms.py (copy windows10 debug/opt/shippable/asan entries and make win10_1903)
  - test-sets.py (ideally you need nothing, otherwise copy windows-tests and edit the test list)

In general this should allow you to have tests scheduled with no custom flags on try server, and all of these will be scheduled by default on mozilla-central, autoland, and release-branches.

turn on run only failures

Now that we have tests running regularly, the next step is to take all the disabled tests and run them in the special failures job. We have a basic framework created, but for every test harness (i.e. xpcshell, mochitest-gpu, browser-chrome, devtools, web-platform-tests, crashtest, etc.), there will need to be a corresponding tier-3 job created.

TODO: point to examples of how to add this after we get our first jobs running.

file bugs for test failures

Once the failure jobs are running on mozilla-central, we have full coverage and the ability to run tests on try server. There could be >100 tests that are marked as fail-if and that would take a lot of time to file bugs. Instead we will file a bug for each manifest that is edited; typically this reduces the bugs to about 40% of the total tests (averaging out to 2.5 test failures/manifest). When filing the bug, indicate the timeline, how to run the failure, link to the bug where we created the config, describe briefly the config change (i.e. upgrade Windows 10 from version 1803 to 1903), and finally needinfo the triage owner indicating this is a heads up and these tests are running regularly on mozilla-central for the next 6-7 weeks.
http://docs.mozilla-releng.net/en/latest/gecko_tests/new_config.html
2021-07-23T19:21:53
CC-MAIN-2021-31
1627046150000.59
[]
docs.mozilla-releng.net
Name of the system-defined calendar to be used as the default calendar for the session. The calendar name you specify cannot be user-defined. The default calendar name is Teradata. The other valid system-defined calendar names are ISO and COMPATIBLE. See SQL Data Types and Literals, B035-1143 for more information about these calendars.
https://docs.teradata.com/r/eWpPpcMoLGQcZEoyt5AjEg/tyK5IcnyEnUhSPwCN3RdIQ
2021-07-23T19:07:08
CC-MAIN-2021-31
1627046150000.59
[]
docs.teradata.com
SushiSwap - An AMM Multi-Chain DEX on Moonbeam

Disclaimer: Projects themselves entirely manage the content in this guide. Moonbeam is a permissionless network. Any project can deploy its contracts to Moonbeam.

Introduction

SushiSwap is an automated market-maker (AMM) multi-chain decentralized exchange, which introduced revenue sharing for network participants. SushiSwap is a community-run project governed by the community vote for all significant changes to the protocol. Day-to-day operations, rebalancing of pools and ratios, business strategy, and overall development are ultimately decided on by our Sushi Chef 0xMaki.

The SushiSwap ecosystem offers a few core products:

- SushiSwap Exchange — core exchange to swap ERC-20 tokens based on an AMM model
- SushiSwap Liquidity Pools — offer a series of liquidity pools where anyone can provide liquidity and earn rewards in return
- SushiSwap SushiBar Staking (xSushi) — allows you to stake your Sushi tokens and receive xSushi in return, which you can stake in the xSushi pool
- SushiSwap Bentobox — a new and upcoming lending platform with new lending solutions

You can read more about SushiSwap in the following links:

You can contact the team via the following communication channels:

Moonbase Alpha Implementation

SushiSwap has deployed its core exchange platform to the Moonbase Alpha TestNet, which you can access via this link. In this interface you can do the following:

- Create a pair pool using two ERC-20 tokens (or the TestNet DEV utility token)
- Add liquidity to an already existing pool
- Swap tokens on already existing pools
- Remove liquidity from pools you have added liquidity to
- Earn a 0.25% fee on all trades proportional to your share of the liquidity pool

Getting Started with the Interface

First, make sure you have MetaMask set up so that it connects to the Moonbase Alpha TestNet. To do so, you can follow this guide. You can also get DEV tokens from their faucet by following this tutorial.

With MetaMask properly configured, open the exchange platform using this link. In there, click on the "Connect to a wallet" button, and choose MetaMask. MetaMask might display a pop-up menu requesting permission to connect the exchange with the wallet. Select the accounts you want to connect to the exchange and click "Next" and then "Connect."

If everything was set up correctly, you should have the exchange interface connected to "Moonbase." In addition, your current account (in MetaMask) should be displayed in the top right corner, as well as its balance. Note that the balance is displayed as "GLMR," but this corresponds to "DEV," the TestNet token.

Once connected, you can start playing around with the SushiSwap instance deployed to Moonbase Alpha!

Contract Information

You can find all the contracts relevant to the SushiSwap deployment in this GitHub repo. Addresses that are relevant for the Moonbase Alpha implementation are outlined in the following table:
https://docs.moonbeam.network/dapps-list/defi/sushiswap/
2021-07-23T18:12:49
CC-MAIN-2021-31
1627046150000.59
[array(['../../images/sushiswap/dapps-sushiswap-banner.png', 'SushiSwap Banner'], dtype=object) array(['../../images/sushiswap/dapps-sushiswap-1.png', 'SushiSwap Connect Wallet'], dtype=object) array(['../../images/sushiswap/dapps-sushiswap-2.png', 'SushiSwap Connect Wallet'], dtype=object) ]
docs.moonbeam.network
A summary of the steps for optimizing and deploying a model that was trained with Caffe*: NOTE: It is necessary to specify mean and scale values for most of the Caffe* models to convert them with the Model Optimizer. The exact values should be determined separately for each model. For example, for Caffe* models trained on ImageNet, the mean values usually are 123.68, 116.779, 103.939for blue, green and red channels respectively. The scale value is usually 127.5. Refer to Framework-agnostic parameters for the information on how to specify mean and scale values. To convert a Caffe* model: $INTEL_OPENVINO_DIR/deployment_tools/model_optimizerdirectory. mo.pyscript to simply convert a model, specifying the path to the input model .caffemodelfile and the path to an output directory with write permissions: Two groups of parameters are available to convert your model: The following list provides the Caffe*-specific parameters. prototxtfile. This is needed when the name of the Caffe* model and the .prototxtfile are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input model.caffemodelfile. You must have write permissions for the output directory. CustomLayersMappingfile. This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe* system on the computer. To read more about this, see Legacy Mode for Caffe* Custom Layers. Optional parameters without default values and not specified by the user in the .prototxtfile are removed from the Intermediate Representation, and nested parameters are flattened: data, rois 1,3,227,227. For rois, set the shape to 1,6,1,1: Internally, when you run the Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list of known layers. If your topology contains any layers that are not in this list of known layers, the Model Optimizer classifies them as custom. Refer to Supported Framework Layers for the list of supported standard layers. The Model Optimizer provides explanatory messages if it is unable to run to completion due to issues like typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the Model Optimizer FAQ. The FAQ has instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong. In this document, you learned:
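One of those steps in concrete form: the mo.py invocation with mean and scale values can be scripted for repeatability. A minimal sketch, assuming the default install layout; the model paths and output directory are placeholders, and the ImageNet mean/scale values are the ones quoted above and must be adapted to your model.

import subprocess
import sys

# Assumed location of the Model Optimizer entry point; adjust to your install.
MO = "<INSTALL_DIR>/deployment_tools/model_optimizer/mo.py"

cmd = [
    sys.executable, MO,
    "--input_model", "model.caffemodel",          # placeholder Caffe weights
    "--input_proto", "model.prototxt",            # only needed if names/locations differ
    "--mean_values", "[123.68,116.779,103.939]",  # example ImageNet means from the text above
    "--scale", "127.5",
    "--output_dir", "ir_output",                  # must be writable
]
subprocess.run(cmd, check=True)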
https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe.html
2021-07-23T18:54:32
CC-MAIN-2021-31
1627046150000.59
[]
docs.openvinotoolkit.org
Yuansfer supports many useful SDK's for developers. You can follow the links and refer to the integration process for each platform and language. Please contact the Yuansfer Tech Team for any questions Java SDK PHP SDK Go SDK JavaScript SDK Yuansfer Mobile Pay SDK (Android) Yuansfer Mobile Pay SDK (iOS) C# SDK
https://docs.yuansfer.com/guide/sdk
2021-07-23T19:42:03
CC-MAIN-2021-31
1627046150000.59
[]
docs.yuansfer.com
How To Setup Receipts in Stripe for Subscriber Payments Once the subscription becomes active, you can setup receipts so that they automatically get sent to the subscriber’s email. In your Stripe dashboard you can set up the receipt emails under Business Settings > Customer Emails. Check the “email customers for successful payments” option. Learn more about Stripe Email Receipts.
https://docs.zeen101.com/article/67-how-to-setup-receipts-in-stripe-for-subscriber-payments
2021-07-23T18:23:45
CC-MAIN-2021-31
1627046150000.59
[]
docs.zeen101.com
How to automate nightly Google Play deployments¶ These instructions define how to set up an Android product for nightly deployments to the Google Play store. Throughout this document, wherever the term $product is used, substitute your product’s name in (replacing spaces with hyphens), e.g.: reference-browser or fenix. Ideally, we shouldn’t use the below docs, but instead use pushapk in taskgraph. Note: we don’t need to explicitly “create” scopes in Taskcluster. We’ll simply tell Taskcluster that our hook has some scopes, then later we’ll tell it that we’ll need those scopes to run our builds. Taskcluster will just verify that the scopes we’re using dynamically match between the build and the hook that starts the build. Request signing keys (example bug for reference-browser). Confirm with the app’s team what the “signature algorithm” and “digest algorithm” should be, and include that information in the ticket. You’ll want at least one “dep” key for testing, as well as a separate “release” key for every separate app that will be created (e.g.: nightly, betaand production) Clone the product’s repository Add .taskcluster.ymlin the root of the repository. This file tells Taskcluster what to do upon github events happening (push, pr, release, etc). Since we’re going to want to run taskgraphto decide what tasks to run, we can take a .taskcluster.ymlfrom a similarly-configured repository, like fenix example) Update repository and treeherder references to refer to your project, rather than fenix. Implement taskgraphconfiguration for the repository. See the Fenix configuration. You’ll need to implement the following parts: Define tasks in YAML in taskcluster/ci/ Define transforms in taskcluster/$project_taskgraph/transforms/which operate on the tasks defined in the YAML Define any custom loaders in taskcluster/$project_taskgraph/loader/(this is useful in cases like needing to generate a dynamic number of tasks based on an external source, like gradleor a .buildconfig.ymlfile) Define Dockerfilesin taskcluster/docker/ Create and update permissions in ci-configuration. Install ci-adminif you haven’t already hg clone Set up a virtualenvand install dependencies hg clone Update projects.ymland grants.ymlto add permissions for $project Submit your patch for review with moz-phab Once it’s landed, update to the new revision and apply it ci-admin diff --environment=production If there’s no surprises in the diff, apply it: ci-admin apply --environment=production If the diff contains changes other than the hooks and permissions you added, you can adjust the diffand applyoperations with the --grepflag: ci-admin diff --environment=production --grep "AwsProvisionerWorkerType=mobile-\d-b-firefox-tv" Update scriptworker(example for ``fenix` <>`__) Update scriptworker/constants.pywith entries for your product. 
Search for locations where “fenix” or “firefox-tv” were set up, and add your product accordingly In a separate commit, bump the minor version and add a changelog entry (example) Once these changes are CR’d and merged, publish the new version Update your repository against the mozilla-relengrepository git tag $version, e.g.: git tag 23.3.1 git push --tags && git push upstream --tags(assuming that the originremote is for your fork, and upstreamis the mozilla-relengrepo) Ensure you’re in the Python virtual env for your package (One approach is to share a single virtual env between all scriptworker repos) rm -rf dist && python setup.py sdist bdist_wheelbuild the package Publish to PyPI: gpg --list-secret-keys --keyid-format longto get your GPG identity (it’s the bit after “sec rsaxxxx/”). An example GPG identity would be 5F2B4756ED873D23 twine upload --sign --identity $identity dist/*to upload to Pypi (you may need to pip install twinefirst) - Update configuration in Locate signing secrets (dep signing username and password, prod signing username and password, Google Play service account and password) You should’ve received signing credentials from step 1. Print out the decrypted file you received: gpg -d <file from step 1> We will want to encrypt the “dep” and “rel” credentials for the “prod” autograph instance. They can be identified as lines that contain a “list” where the second item ends with “_dep” or “_prod”, respectively Example: “dep” line would be: ["http://<snip>", "signingscript_fenix_dep", "<snip>", ["autograph_apk"], "autograph"] For these two lines, the secrets we want to put in sops are the username and password (the second and third item) Later, in step 18, you’ll have been emailed a Google Play service account and key. However, for now, we’re going to use a dummy value (the string “dummy”) as placeholders for these values Add secrets to SOPS TODO Commit and push your SOPSand scriptworker-scripts changes, make a PR Once step 8’s PR is approved, merge the scriptworker-scriptsPR Verify with app’s team how versionCodeshould be set up. Perhaps by date like fenix? When the Google Play product is being set up, an officially-signed build with a version code of 1 needs to be built. So, the main automation PR for the product will need to be stunted: it needs to produce APKs with a version code of 1, and it should have pushing to Google Play disabled (so we don’t accidentally push a build before our official version-code-1 build is set up). Change the version code to be set to 1. If the product uses the same version-code-by-date schema as fenix, then edit versionCode.gradle Disable the creation of the task that pushes to Google Play Create the PR Once approved, merge the PR Verify the apk artifact(s) of the signing task Trigger the nightly hook Once the build finishes, download the apks from the signing task Using the prod certificate from step 10.iv.a., create a temporary keystore: keytool -import -noprompt -keystore tmp_keystore -storepass 12345678 -file $product_release.pem -alias $product-rel For each apk, verify that it matches the certificate: jarsigner -verify $apk -verbose -strict -keystore tmp_keystore. 
Check that The “Digest algorithm” matches step 1 The “Signature algorithm” matches step 1 There are no warnings that there are entries “whose certificate chain invalid”, “that are not signed by alias in this keystore” or “whose signer certificate is self-signed” Do the same thing for the dep signing task and certificate and check that the jarsignercommand shows that the “Signed by” CNis “Throwaway Key” Request both the creation of a Google Play product and for the credentials to publish to it. Consult with the product team to fill out the requirements for adding an app to Google Play. This request should be a bug for “Release Engineering > Release Automation: Pushapk”, and should be a combination of this and this As part of the bug, note that you’ll directly send an APK to the release management point of contact via Slack Give the first signed APK to the Google Play admins Perform a nightly build Once the signing task is done, grab the APK with the version code of 1 (if there’s multiple APKs, you probably want the arm one) Send the APK to release management Once the previous step is done and they’ve set up a Google Play product, put the associated secrets in SOPS Perform a new PR that un-stunts the changes from step 15 Fenix example Version code should be generated according to how the team requested in step 14 The task that pushes to Google Play should no longer be disabled Once the PR from the last step is merged, trigger the nightly task, verify that it uploads to Google Play Update the $product-nightlyhook, adding a schedule of 0 12 * * *(make it fire daily) Ensure that the hook is triggered automatically by waiting a day, then checking the hook or indexes How to test release graphs in mobile¶ Use the staging android-components and staging fenix repos, along with staging shipit. How to set up taskgraph for mobile¶ Setting up taskgraph for mobile is similar to setting up taskgraph for any standalone project, especially github standalone projects: install taskgraph in a virtualenv. ⚠️ You shouldn’t install gradle globally on your system. The ./gradlew scripts in each mobile repo define specific gradle versions and are in charge of installing it locally. 
Install jdk8: # On mac with homebrew brew tap homebrew/cask brew cask install homebrew/cask-versions/adoptopenjdk8 # On Ubuntu sudo apt install openjdk-8-jdk Install android-sdk: # On mac with homebrew brew cask install android-sdk # On Ubuntu sudo apt install android-sdk Make sure you’re pointing to the right java: # In your .zshrc or .bashrc: # On mac export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)" # On Ubuntu follow symlinks to find JAVA_HOME ls -l `which java` export JAVA_HOME=<JAVA_HOME> # After sourcing that file, you should get the following version: # > $JAVA_HOME/bin/java -version # openjdk version "1.8.0_265" # OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_265-b01) # OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.265-b01, mixed mode) You’ll also need to set ANDROID_SDK_ROOT: # In your .zshrc or .bashrc: # On mac export ANDROID_SDK_ROOT=/usr/local/Caskroom/android-sdk/4333796 # On Ubuntu export ANDROID_SDK_ROOT=/usr/lib/android-sdk Test it: # In, say, an android-components or fenix clone, this should work: ./gradlew tasks --scan You’ll need a Python 2 virtualenv with taskgraph, glean-parser, and mozilla-version as well: virtualenv fenix # or whatever the repo name pushd ../taskgraph # assuming taskgraph is cloned in the same dir python setup.py install popd pip install mozilla-version glean-parser<1 # Verify taskgraph optimized returns tasks (You need cloned) # android-components taskgraph optimized -p ../braindump/taskcluster/taskgraph-diff/params-android-components/main-repo-release.yml # fenix taskgraph optimized -p ../braindump/taskcluster/taskgraph-diff/params-fenix/main-repo-push.yml To run taskgraph-gen.py: # set $TGDIR to the braindump/taskcluster directory path TGDIR=.. # Fenix $TGDIR/taskgraph-diff/taskgraph-gen.py --halt-on-failure --overwrite --params-dir $TGDIR/taskgraph-diff/params-fenix --full fenix-clean 2>&1 | tee out # Android-Components $TGDIR/taskgraph-diff/taskgraph-gen.py --halt-on-failure --overwrite --params-dir $TGDIR/taskgraph-diff/params-android-components --full ac-clean 2>&1 | tee out
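The two taskgraph-gen.py invocations above differ only in the params directory and the output label, so they can be looped from a small helper. A sketch; the paths assume the braindump checkout layout referenced in the commands.

import subprocess
from pathlib import Path

TGDIR = Path("../braindump/taskcluster")                 # adjust to your checkout
GEN = TGDIR / "taskgraph-diff" / "taskgraph-gen.py"

RUNS = {
    "fenix-clean": TGDIR / "taskgraph-diff" / "params-fenix",
    "ac-clean": TGDIR / "taskgraph-diff" / "params-android-components",
}

for label, params_dir in RUNS.items():
    log = Path(f"out-{label}.log")
    with log.open("w") as fh:
        # Mirrors: taskgraph-gen.py --halt-on-failure --overwrite --params-dir <dir> --full <label>
        subprocess.run(
            [str(GEN), "--halt-on-failure", "--overwrite",
             "--params-dir", str(params_dir), "--full", label],
            stdout=fh, stderr=subprocess.STDOUT, check=True,
        )
    print(f"{label}: wrote {log}")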
http://docs.mozilla-releng.net/en/latest/procedures/mobile_automation_setup.html
2021-07-23T18:51:30
CC-MAIN-2021-31
1627046150000.59
[]
docs.mozilla-releng.net
Asynchronous Inference Request runs an inference pipeline asynchronously in one or several task executors depending on a device pipeline structure. Inference Engine Plugin API provides the base InferenceEngine::AsyncInferRequestThreadSafeDefault class: _pipelinefield of std::vector<std::pair<ITaskExecutor::Ptr, Task> >, which contains pairs of an executor and executed task. _pipelineto finish in a class destructor. The method does not stop task executors and they are still in the running stage, because they belong to the executable network instance and are not destroyed. AsyncInferRequestClass Inference Engine Plugin API provides the base InferenceEngine::AsyncInferRequestThreadSafeDefault class for a custom asynchronous inference request implementation: _inferRequest- a reference to the synchronous inference request implementation. Its methods are reused in the AsyncInferRequestconstructor to define a device pipeline. _waitExecutor- a task executor that waits for a response from a device about device tasks completion NOTE: If a plugin can work with several instances of a device, _waitExecutormust be device-specific. Otherwise, having a single task executor for several devices does not allow them to work in parallel. AsyncInferRequest() The main goal of the AsyncInferRequest constructor is to define a device pipeline _pipeline. The example below demonstrates _pipeline creation with the following stages: inferPreprocessis a CPU compute task. startPipelineis a CPU ligthweight task to submit tasks to a remote device. waitPipelineis a CPU non-compute task that waits for a response from a remote device. inferPostprocessis a CPU compute task. The stages are distributed among two task executors in the following way: inferPreprocessand startPipelineare combined into a single task and run on _requestExecutor, which computes CPU tasks. waitPipelineis sent to _waitExecutor, which works with the device. NOTE: callbackExecutoris also passed to the constructor and it is used in the base InferenceEngine::AsyncInferRequestThreadSafeDefault class, which adds a pair of callbackExecutorand a callback function set by the user to the end of the pipeline. Inference request stages are also profiled using IE_PROFILING_AUTO_SCOPE, which shows how pipelines of multiple asynchronous inference requests are run in parallel via the Intel® VTune™ Profiler tool. ~AsyncInferRequest() In the asynchronous request destructor, it is necessary to wait for a pipeline to finish. It can be done using the InferenceEngine::AsyncInferRequestThreadSafeDefault::StopAndWait method of the base class.
https://docs.openvinotoolkit.org/latest/ie_plugin_api/async_infer_request.html
2021-07-23T20:04:02
CC-MAIN-2021-31
1627046150000.59
[]
docs.openvinotoolkit.org
Using Data Transformations in Server Connect Data Transformations Data Transformation options are available in Server Connect. Using them you can easily combine different data sources, add, rename or remove columns from an existing data set. The options available in the Data Transformations are not editing your actual data set, they clone the data and allow you to manipulate it, so your original data source remains unchanged. You can use the Data Transformations with any collection/array such as - Database Query, API Data Source or any other array. In our example we will show you how to use Data Transformations with an API Data Source. We created an API Data source, pointing to a custom API: You can see its results if you enable the output option for this step and run the API Action in the browser: So that’s the data we are working with. We disable the output for the API Data Source step, as we won’t need to output the original data, but only the transformed one: Rename Column You can easily rename a column returned by your data source. Right click the API Action step: Open the Data Transformations menu and select Rename Column: Select your data source: We select the array element of our API Data Source: And we click the add new button: Enter the name of the column from your data source which you want to rename and enter the new name for it: Enable Output for this step: Save your API Action, and preview the results in the browser. You can see the new data set containing the renamed column: Filter Columns Using the Filter Columns option you can choose whether to show or hide specific columns from your original data set. This is useful when your data source returns many columns, but you need to output or use only a few of them. Right click the API Data Source step: Open the Data Transformations menu and select Filter Columns Select your data source: We select the array element of our API Data Source: And open the Filter Columns menu: In order to remove some of the columns from the data set, just select them from the dropdown: So this way the selected ones will be removed from the data set: If you want to keep only the selected columns, then enable the Keep option. In this case only the columns selected in the list will be displayed: Enable the Output option for this step: Save your API Action and preview the results in the browser. You can see that only the columns selected in the Filter step are displayed: Add Columns Using the Add Columns option you can add columns to the ones returned by your original Data Set. Right click the API Data Source step: Open the Data Transformations menu and select Add Columns: And select your data source: We select the array element of our API Data Source: Click the add new column button: And enter the name of your new column, add a value to it. It can be a dynamic value from your API Action or a static one: If you want to replace an existing column from your original data set with a new value, then enter the name of the existing column and enable the Overwrite option: Enable the output for this step: And preview the results in the browser. You can see the new column is now available in our new data set: Join We created a separate tutorial about joining two data sets. You can find it here:
https://docs.wappler.io/t/using-data-transformations-in-server-connect/31734
2021-07-23T18:51:15
CC-MAIN-2021-31
1627046150000.59
[]
docs.wappler.io
This is a short tutorial on how to convert RMS files to XML with the RMS builder. 1. Extracting RMS Files In the base game, RMS files are located within ambience.rcf and music01.rcf. These files can be opened with Lucas' RCF Explorer or extracted with Lucas' Radcore Cement Library Builder. ambience.rms can be found in ambience.rcf. l1_music.rms to l7_music.rms can be found in music01.rcf. 2. Converting to XML Once you have an RMS file extracted, the simplest way to convert it to XML is to drag it onto the RMS Builder's executable. This will create an XML file of the same name next to the RMS file.
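Dropping a file onto an executable on Windows simply launches it with the file's path as the first argument, so the same conversion can be scripted for batches of RMS files. A sketch; the builder's install path and executable name are placeholders, and the single-argument behaviour is assumed from the drag-and-drop workflow above.

import subprocess
from pathlib import Path

BUILDER = Path(r"C:\Tools\LucasRMSBuilder\RMSBuilder.exe")   # placeholder path and name
EXTRACTED = Path(r"C:\extracted")                            # RMS files from step 1

for rms_file in EXTRACTED.glob("*.rms"):
    # Equivalent of dragging the file onto the executable.
    subprocess.run([str(BUILDER), str(rms_file)], check=True)
    xml_file = rms_file.with_suffix(".xml")
    print("created:", xml_file, xml_file.exists())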
http://docs.donutteam.com/docs/lucasrmsbuilder/tutorials/converting-rms-to-xml
2021-07-23T19:19:44
CC-MAIN-2021-31
1627046150000.59
[]
docs.donutteam.com
Why aren't some of my entries showing up on the map? If you are using the Map View layout for your View or the Multiple Entries Map widget and noticed that some of your entries are not showing up on the map, then the cause might be the Number of entries per page option: Make sure to increase this number to display all the missing entries.
https://docs.gravityview.co/article/379-why-arent-some-of-my-entries-showing-up-on-the-map
2021-07-23T20:26:06
CC-MAIN-2021-31
1627046150000.59
[]
docs.gravityview.co
Method GdkDisplaytranslate_key Declaration [src] gboolean gdk_display_translate_key ( GdkDisplay* display, guint keycode, GdkModifierType state, int group, guint* keyval, int* effective_group, int* level, GdkModifierType* consumed ) Description [src] Translates the contents of a GdkEventKey into a keyval, effective group, and level. Modifiers that affected the translation and are thus unavailable for application use are returned in consumed_modifiers. The effective_group is the group that was actually used for the translation; some keys such as Enter are not affected by the active keyboard group. The level is derived from state. consumed_modifiers gives modifiers that should be masked out from state when comparing this key press to a keyboard shortcut. For instance, on a US keyboard, the plus symbol is shifted, so when comparing a key press to a <Control>plus accelerator <Shift> should be masked out. This function should rarely be needed, since GdkEventKey already contains the translated keyval. It is exported for the benefit of virtualized test environments.
https://docs.gtk.org/gdk4/method.Display.translate_key.html
2021-07-23T19:55:45
CC-MAIN-2021-31
1627046150000.59
[]
docs.gtk.org
Add a command to set the view matrix. View matrix is the matrix that transforms from world space into camera space. When setting both view and projection matrices, it is more efficient to use SetViewProjectionMatrices. Note: The camera space in Unity matches OpenGL convention, so the negative z-axis is the camera's forward. This is different from usual Unity convention, where the camera's forward is the positive z-axis. If you are manually creating the view matrix, for example with an inverse of Matrix4x4.LookAt, you need to scale it by -1 along the z-axis to get a proper view matrix. using UnityEngine; using UnityEngine.Rendering; // Attach this script to a Camera and pick a mesh to render. // When you enter Play mode, a command buffer renders a green mesh at // origin position. [RequireComponent(typeof(Camera))] public class ExampleScript : MonoBehaviour { public Mesh mesh; void Start() { var material = new Material(Shader.Find("Hidden/Internal-Colored")); material.SetColor("_Color", Color.green); var tr = transform; var camera = GetComponent<Camera>(); // Code below does the same as what camera.worldToCameraMatrix would do. Doing // it "manually" here to illustrate how a view matrix is constructed. // // Matrix that looks from camera's position, along the forward axis. var lookMatrix = Matrix4x4.LookAt(tr.position, tr.position + tr.forward, tr.up); // Matrix that mirrors along Z axis, to match the camera space convention. var scaleMatrix = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, new Vector3(1, 1, -1)); // Final view matrix is inverse of the LookAt matrix, and then mirrored along Z. var viewMatrix = scaleMatrix * lookMatrix.inverse; var buffer = new CommandBuffer(); buffer.SetViewMatrix(viewMatrix); buffer.SetProjectionMatrix(camera.projectionMatrix); buffer.DrawMesh(mesh, Matrix4x4.identity, material); camera.AddCommandBuffer(CameraEvent.BeforeSkybox, buffer); } }
https://docs.unity3d.com/ja/2019.4/ScriptReference/Rendering.CommandBuffer.SetViewMatrix.html
2021-07-23T20:40:19
CC-MAIN-2021-31
1627046150000.59
[]
docs.unity3d.com
This method is used mainly to instantiate Webix component of any complexity level, whether it is a control or a complete application layout. It converts a JSON structure passed into it as a parameter to client-side layout objects - this is what Webix is used for:) webix.ui({ view:"", ... }); Normally, you need one webix.ui() constructor for the whole application while any window or popup needs an extra one as windows lie above page layout. webix.ui({ view:"window", ... }).show(); More opportunities of using this method are described in the chapter Rebuilding Application Layout. This method is a cross-browser alternative to the onDocumentReady event and can be used instead of onload() method. Code, which was put inside of it, will be called just after the complete page has parsed, protecting you from potential errors. The thing is optional but we recommend to use it. It can be used multiple times. webix.ready(function(){ webix.ui({ container:"box", view:"window", ... }); }) An easy way to bind a function to an object (inside bound function, this will point to an object instance). var t = webix.bind(function(){ alert(this); }, "master"); t(); //will alert "master" This method allows you to make a full (independent) copy of an object: var new_obj = webix.copy(source_obj)// full copy Adds methods and properties of the source object to the target object: var target_obj = new webix.ui.toolbar(config); webix.extend(target_obj, webix.Movable); Also, it can be used as one more method to get an object copy. In such a case, you must specify an empty object as the second parameter. var new_obj = webix.extend({},source_obj) //object based copy This helper allows executing code string at runtime. webix.exec(" var t = 2; "); It has an advantage over eval: variables defined in code will be defined in the global scope, the same as in case of a normal js code. Delays routine. If you need to delay some code from executing at runtime you are at the right place. webix.delay waits for the specified number of milliseconds and then executes the specified code. webix.delay(webix.animate, webix,[node,animation],10) A tool for getting a unique id (id is unique per session, not globally unique). var name = webix.uid(); webix.event is used to attach event handlers and webix.eventRemove - to remove them. var display_handler = webix.event(document.body,'click',function(e){}) ... webix.removeEvent(display_handler); With the help of this method you can get the position of any HTML element. var offset = webix.html.offset(ev.target) Two methods implementing CSS manipulations. webix.html.addCss allows adding a CSS class to an element, webix.html.removeCss - removes some defined CSS class. webix.html.addCss(this._headbutton, "collapsed"); ... webix.html.removeCss(this._headbutton, "collapsed"); All Webix helpers are listed in the related API Reference section.
https://docs.webix.com/helpers__top_ten_helpers.html
2021-07-23T18:20:18
CC-MAIN-2021-31
1627046150000.59
[]
docs.webix.com
Method GtkGestureDrag get_offset Declaration gboolean gtk_gesture_drag_get_offset ( GtkGestureDrag* gesture, double* x, double* y ) Description Gets the offset from the start point. If the gesture is active, this function returns TRUE and fills in x and y with the coordinates of the current point, as an offset to the starting drag point.
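For reference, the same call from Python via PyGObject, where the boolean return and the two out parameters come back as a tuple (a sketch under that assumption, using GTK 4):

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def on_drag_update(gesture, dx, dy):
    # PyGObject maps the gboolean return plus the two out parameters to (ok, x, y);
    # ok is True only while the gesture is active.
    ok, x, y = gesture.get_offset()
    if ok:
        print(f"offset from start point: ({x:.1f}, {y:.1f})")

def on_activate(app):
    win = Gtk.ApplicationWindow(application=app, title="drag offset demo")
    drag = Gtk.GestureDrag.new()
    drag.connect("drag-update", on_drag_update)
    win.add_controller(drag)
    win.present()

app = Gtk.Application(application_id="org.example.DragOffset")
app.connect("activate", on_activate)
app.run(None)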
https://docs.gtk.org/gtk4/method.GestureDrag.get_offset.html
2021-07-23T19:08:59
CC-MAIN-2021-31
1627046150000.59
[]
docs.gtk.org
Troubleshooting Bogon Network List Updates¶ Make sure the firewall can resolve DNS host names and can reach the bogons host, otherwise the update will fail. To ensure the firewall can resolve the bogon update host via DNS, perform a DNS Lookup: Navigate to Diagnostics > DNS Lookup, enter files.pfsense.org in the Hostname field, and click Lookup. If that fails, troubleshoot DNS resolution for the firewall itself. If that works, then perform a port test: Navigate to Diagnostics > Test Port, enter files.pfsense.org in the Hostname field, enter 80 in the Port field, and click Test. If that fails, troubleshoot connectivity from the firewall. Forcing a Bogon Network List Update from the GUI: Navigate to Diagnostics > Tables, select bogons or bogonsv6 from the Table list, and click Update.
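The same two checks can also be reproduced from a shell on the firewall (or any host on the same path) with a few lines of Python; the hostname and port mirror the GUI steps above.

import socket

HOST, PORT = "files.pfsense.org", 80

# DNS lookup, equivalent of Diagnostics > DNS Lookup.
try:
    addresses = {info[4][0] for info in socket.getaddrinfo(HOST, PORT)}
    print("resolved:", ", ".join(sorted(addresses)))
except socket.gaierror as err:
    raise SystemExit(f"DNS resolution failed: {err}")

# TCP port test, equivalent of Diagnostics > Test Port.
try:
    with socket.create_connection((HOST, PORT), timeout=10):
        print(f"port {PORT} reachable")
except OSError as err:
    raise SystemExit(f"port test failed: {err}")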
https://docs.netgate.com/pfsense/en/latest/troubleshooting/bogons.html
2021-07-23T20:13:23
CC-MAIN-2021-31
1627046150000.59
[]
docs.netgate.com
The Deployment Manager of Intel® Distribution of OpenVINO™ creates a deployment package by assembling the model, IR files, your application, and associated dependencies into a runtime package for your target device. The Deployment Manager is a Python* command-line tool that is delivered within the Intel® Distribution of OpenVINO™ toolkit for Linux* and Windows* release packages and available after installation in the <INSTALL_DIR>/deployment_tools/tools/deployment_manager directory. IMPORTANT: The operating system on the target host must be the same as the development system on which you are creating the package. For example, if the target system is Ubuntu 18.04, the deployment package must be created from the OpenVINO™ toolkit installed on Ubuntu 18.04. There are two ways to create a deployment package that includes inference-related components of the OpenVINO™ toolkit: you can run the Deployment Manager tool in either interactive or standard CLI mode. Click to expand/collapse Interactive mode provides a user-friendly command-line interface that will guide you through the process with text prompts. The script successfully completes and the deployment package is generated in the output directory specified. Click to expand/collapse Alternatively, you can run the Deployment Manager tool in the standard CLI mode. In this mode, you specify the target devices and other parameters as command-line arguments of the Deployment Manager Python script. This mode facilitates integrating the tool in an automation pipeline. To launch the Deployment Manager tool in the standard mode, open a new terminal window, go to the Deployment Manager tool directory and run the tool command with the following syntax: The following options are available: <--targets>— (Mandatory) List of target devices to run inference. To specify more than one target, separate them with spaces. For example: --targets cpu gpu vpu. You can get a list of currently available targets running the tool's help: [--output_dir]— (Optional) Path to the output directory. By default, it set to your home directory. [--archive_name]— (Optional) Deployment archive name without extension. By default, it is set to openvino_deployment_package. [--user_data]— (Optional) Path to a directory with user data (IRs, models, datasets, etc.) required for inference. By default, it's set to None, which means that the user data are already present on the target host machine. The script successfully completes and the deployment package is generated in the output directory specified. After the Deployment Manager has successfully completed, you can find the generated .tar.gz (for Linux) or .zip (for Windows) package in the output directory you specified. To deploy the Inference Engine components from the development machine to the target host, perform the following steps: Unpack the archive into the destination directory on the target host (if your archive name is different from the default shown below, replace the openvino_deployment_package with the name you use). The package is unpacked to the destination directory and the following subdirectories are created: bin— Snapshot of the bindirectory from the OpenVINO installation directory. deployment_tools/inference_engine— Contains the Inference Engine binary files. install_dependencies— Snapshot of the install_dependenciesdirectory from the OpenVINO installation directory. <user_data>— The directory with the user data (IRs, datasets, etc.) you specified while configuring the package. 
To finish the setup on the target host, run the install_openvino_dependencies.sh script from the unpacked install_dependencies directory. Congratulations, you have finished the deployment of the Inference Engine components to the target host.
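In an automation pipeline, the standard CLI mode can be driven from a script. A minimal sketch; the deployment_manager.py script name and the install path are assumptions based on the tool directory given above, and the targets and paths are placeholders.

import subprocess
import sys
from pathlib import Path

INSTALL_DIR = Path("<INSTALL_DIR>")   # your OpenVINO install root
TOOL = INSTALL_DIR / "deployment_tools" / "tools" / "deployment_manager" / "deployment_manager.py"

subprocess.run(
    [sys.executable, str(TOOL),
     "--targets", "cpu", "gpu",
     "--output_dir", str(Path.home() / "packages"),
     "--archive_name", "openvino_deployment_package",
     "--user_data", "./my_models"],   # optional; omit if the data is already on the target
    check=True,
)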
https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_deployment_manager_tool.html
2021-07-23T20:01:13
CC-MAIN-2021-31
1627046150000.59
[]
docs.openvinotoolkit.org
The appRules scheduler service must be installed and set up before scheduling a project. The appRules.SchedulerService is normally created by the appRulesPortalSetup exe. If not, you can install it manually by running the related cmd file (install_sched_service.cmd) as administrator; it is located in the appStrategy\PortalData\Services_Installation folder. You can also run [Install_path]\appRulesPortal\bin\appRules.SchedulerService.exe -i to install (or -u to uninstall).
https://docs.appstrategy.com/installation/scheduler
2021-01-15T20:48:03
CC-MAIN-2021-04
1610703496947.2
[]
docs.appstrategy.com
You can import your Google contacts into Act! 365 by using our /import/google API. Before using the API, make sure you sync your Gmail account with Act! 365. - Visit and log in using your email and password. - Once you're logged in, click the Profile icon to your bottom left corner. - Click Apps & Integrations. - Click on the Gmail icon to sync your Gmail account with Act! 365. - Choose the account you want to sync with Act! 365 from the console that appears. - Click Allow. - Once the connection is made, you can see a check mark across the Gmail icon. You have now synced your Gmail account with Act! 365. You can now import your Google contacts seamlessly.
https://docs.cloud-elements.com/home/how-to-import-your-google-contacts-into-act-365
2021-01-15T21:31:53
CC-MAIN-2021-04
1610703496947.2
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e0eed3f8e121c87106f1fb6/n/image-2020-01-03-at-125807-pm.png', None], dtype=object) ]
docs.cloud-elements.com
Designer Help Once you have saved and pushed your app, you can launch any deployment stage of the app from the Deploy panel. To launch an app: Once you have saved your app in Designer, in the taskbar, click File | Open. Select the app you want to launch and click Open. If you are unable to find the app you want, type the name of it into the Search box and click Search. If you still cannot find the app, in the Filter by menu, select Shared With Me. Click Deploy. Click beside the deployment stage you want to launch. You can only launch a deployment stage if it has been pushed. A menu appears. From the menu, click Launch (Deployment Stage) App. For example, Launch Production App. The app is launched.
https://docs.geocortex.com/webviewer/latest/admin-help/Content/gwv/launch-an-app.htm
2021-01-15T19:52:58
CC-MAIN-2021-04
1610703496947.2
[]
docs.geocortex.com
Shader compiler used to generate player data shader variants. In Unity, shader programs are written in a variant of the HLSL language. Each platform supports one or multiple graphics APIs. For example, Vulkan and Direct3D 12 are both supported on Windows. When building a standalone player, for each supported graphics API, Unity runs a corresponding shader compiler, which generates the shader variants and cross-compiles the shader snippet into the shading language natively supported by that graphics API. See Also: IPreprocessShaders.OnProcessShader, Shader language.
https://docs.unity3d.com/2018.4/Documentation/ScriptReference/Rendering.ShaderCompilerPlatform.html
2021-01-15T20:14:03
CC-MAIN-2021-04
1610703496947.2
[]
docs.unity3d.com
The SunGard Continuity Management Solution (SunGard CMS) is an integrated suite of business continuity applications. It brings information consistency to all planning, testing, and execution-management processes. It supports the full lifecycle of business continuity management.
https://docs.bmc.com/docs/exportword?pageId=586751970
2021-01-15T21:19:04
CC-MAIN-2021-04
1610703496947.2
[]
docs.bmc.com
Licensing FAQ Product-specific licensing informationProduct-specific licensing information - Citrix ADC - Citrix Cloud - Citrix Endpoint Management - Citrix Gateway - Citrix Hypervisor - Citrix Virtual Apps and Desktops The following questions are asked frequently about your licensing environment. Where is the License Administration Console?Where is the License Administration Console? The License Administration Console is not available in License Server 11.16.6 and later. We recommend you use the Citrix Licensing Manager to manage the License Server. LicensesLicenses What is a license file and what do I do with it? The license file is a text file that contains the following: - Product licensing information - License Server name (or other binding identifier) - Customer Success Services membership renewal date - License expiration date (if applicable) - Other system information When you purchase a Citrix product, you are entitled to a license file. The License Server uses this file to determine whether to grant a license to a Citrix product. For more information, see License files. Why would I return a license? An example would be if you’re decommissioning a License Server, but you’re not ready to deploy existing licenses elsewhere. For more information, see Return allocations.:. What happens to returned licenses? Returned licenses are put back in the licensing pool. You can then allocate these licenses in any quantity at any time. After you return a license, remove the old license file from the License Server. For more information, see Return allocations and Modify licenses. What happens when I hide a license? Hiding (previously called archiving) doesn’t remove licenses from an account. Hiding removes them from view. To view hide and unhide licenses, see Hide and unhide licenses. How can I get a copy of my license file? You can obtain a copy of your license file from your License Server. Alternatively, all purchased licenses and allocated license files are available from the secure My Account > Manage Licenses portal at. Allocating licensesAllocating licenses For more information about allocating licenses, see Allocate licenses. What does allocating licenses mean?. What’s the difference between a host ID type and a host ID? The host ID type is the required binding type you supply to allocate licenses. The host ID is the License Server host name, MAC address, or other binding ID required to allocate licenses. Why can’t I partially allocate some licenses? Some licenses do not allow partial allocation. The License ServerThe License Server Can I rename the License Server? License files run only on the License Server for which they were made. License files contain the host name or binding identifier of the License Server you specify when you allocate the licenses. You cannot use the license file that you generated for a particular License Server or a MAC address (for an appliance), on a different License Server or appliance.:. If I upgrade my License Server does it affect my license files? No. The License Server and all product licenses are fully backward compatible and do not introduce any issues into your environment. For more information about upgrading the License Server, see Upgrade the License Server. a specific edition of a license and checks out that license edition. For more information and examples, see Single License Server and different servers using different product editions. 
Disaster recovery and maintenance How do I license my disaster recovery site? You can use the same licenses for your disaster recovery site that you use for your production environment. - Configure and manage your disaster recovery environment independently of your production environment. - Don’t use both the production and backup License Servers to service license checkouts at the same time. - Make only one License Server accessible at any one time. - The License Server in your disaster recovery environment must have an identical host name to the License Server in your production environment. For more information, see Disaster recovery - back up and redundancy.
https://docs.citrix.com/en-us/licensing/current-release/frequently-asked-questions.html
2021-01-15T21:14:48
CC-MAIN-2021-04
1610703496947.2
[]
docs.citrix.com
defined will begin to queue objects whose ILM can no longer be fulfilled in near real time. In the example shown, the chart of the Awaiting—Client indicates that the number of objects awaiting ILM evaluation temporarily increases in an unsustainable manner, then eventually decreases. Such a trend indicates that ILM was temporarily not fulfilled in near real time.
https://docs.netapp.com/sgws-112/topic/com.netapp.doc.sg-admin/GUID-8C84FBA1-550F-497E-9563-21DC2BE9D1AA.html?lang=en
2021-01-15T21:14:52
CC-MAIN-2021-04
1610703496947.2
[]
docs.netapp.com
The: The rules are listed below for each category: When mapping a command parameter, you should use the natural path based on how the command is used in workflows. The following examples show how you can define a natural path: This is important when a workflow creates a volume and then performs an additional step with the created volume by referring to it. The following are similar examples: This category of commands is used for one of the following: You should use the following parameter mapping rules for this category of commands: The following are the exception scenarios for this rule: For example, the ExecutionTimeout parameter in the Create or resize aggregate command is an unmapped parameter. The following certified commands are examples for this category: This category of commands is used to find an object and update the attributes. You should use the following parameter mapping rules for this category of commands: For example, in the Set Volume State command, the Volume parameter is mapped but the new State is unmapped.. You should use the following parameter mapping rules for this category of commands:. For example, in the Move Volume command, Volume is moved from its current parent aggregate to a new destination aggregate. Therefore, Volume parameters are mapped to a Volume dictionary entry and the destination aggregate parameters are mapped separately to the Aggregate dictionary entry but not as volume.aggregate.name..
https://docs.netapp.com/wfa-50/topic/com.netapp.doc.onc-wfa-wdg/GUID-C903EB38-D719-4A8F-92B2-796CB65D2177.html?lang=en
2021-01-15T21:03:02
CC-MAIN-2021-04
1610703496947.2
[]
docs.netapp.com
Confluent REST Proxy¶ The Confluent REST There is a plugin available for Confluent REST Proxy that helps authenticate incoming requests and propagates the authenticated principal to requests to Kafka. This enables Confluent REST Proxy clients to utilize the multi-tenant security features of the Kafka broker. For more information, see Confluent Security Plugins. for the corresponding URLs. - Producers - Instead of exposing producer objects, the API accepts produce requests targeted at specific topics or partitions and routes them all through a small pool of producers. - Producer configuration - Producer instances are shared, so configs cannot be set on a per-request basis. However, you can adjust settings globally by passing new producer settings in the REST Proxy configuration. For example, you might pass in the compression.typeoption to enable site-wide compression to reduce storage and network overhead. - Consumers - Consumers are stateful and therefore tied to specific REST Proxy instances. Offset commit can be either automatic or explicitly requested by the user. Currently limited to one thread per consumer; use multiple consumers for higher throughput. The REST Proxy uses either the high level consumer (v1 api) or the new 0.9 consumer (v2 api) to implement consumer-groups that can read from topics. Note: the v1 API has been marked for deprecation. - Consumer configuration - Although consumer instances are not shared, they do share the underlying server resources. Therefore, limited configuration options are exposed via the API. However, you can adjust settings globally by passing consumer settings in the REST Proxy configuration. - Data Formats - The REST Proxy can read and write data using JSON, raw bytes encoded with base64 or using JSON-encoded Avro, Protobuf, or JSON Schema. With Avro, Protobuf, or JSON Schema, schemas are registered and validated against Schema Registry. - REST Proxy Clusters and Load Balancing - The REST Proxy is designed to support multiple instances running together to spread load and can safely be run behind various load balancing mechanisms (e.g. round robin DNS, discovery services, load balancers) as long as instances are configured correctly. - Simple Consumer - The high-level consumer should generally be preferred. However, it is occasionally useful to use low-level read operations, for example to retrieve messages at specific offsets. See also For an example that shows this in action, see the Confluent Platform demo. Refer to the demo’s docker-compose.yml for a configuration reference. - Admin operations - With the API v3 preview, you can create or delete topics, and to update or reset topic configurations. Just as important, here’s a list of features that aren’t yet supported: - Multi-topic Produce Requests - Currently each produce request may only address a single topic or topic-partition. Most use cases do not require multi-topic produce requests, they introduce additional complexity into the API, and clients can easily split data across multiple requests if necessary - Most Producer/Consumer Overrides in Requests - Only a few key overrides are exposed in the API (but global overrides can be set by the administrator). The reason is two-fold. First, proxies are multi-tenant and therefore most user-requested overrides need additional restrictions to ensure they do not impact other users. Second, tying the API too much to the implementation restricts future API improvements; this is especially important with the new upcoming consumer implementation. 
Installation¶ See the installation instructions for the Confluent Platform. Before starting the REST Proxy you must start Kafka and Schema Registry. The Confluent Platform quickstart explains how to start these services locally for testing. Deployment¶ Starting the Kafka REST Proxy service is simple once its dependencies are running: # Start the REST Proxy. The default settings automatically work with the # default settings for local |zk| and Kafka nodes. <path-to-confluent>/bin/kafka-rest-start etc/kafka-rest/kafka-rest.properties If you installed Debian or RPM packages, you can simply run kafka-rest-start as it will be on your PATH. The kafka-rest.properties file contains Schema Registry at. If you started the service in the background, you can use the following command to stop it: bin/kafka-rest-stop Development¶ To build a development version, you may need a development versions of common, rest-utils, and schema-registry. After installing these, you can build the Kafka REST Proxy with Maven. All the standard lifecycle phases work. During development, use mvn -f kafka-rest/pom.xml compile to build, mvn -f kafka-rest/pom.xml test to run the unit and integration tests, and mvn exec:java to run an instance of the proxy against a local Kafka cluster (using the default configuration included with Kafka). To create a packaged version, optionally skipping the tests: mvn -f kafka-rest/pom.xml package [-DskipTests] This will produce a version ready for production in target/kafka-rest-$VERSION-package containing a directory layout similar to the packaged binary versions. You can also produce a standalone fat jar using the standalone profile: mvn -f kafka-rest/pom.xml package -P standalone [-DskipTests] generating target/kafka-rest-$VERSION-standalone.jar, which includes all the dependencies as well. To run a local ZooKeeper, Kafka and REST Proxy cluster, for testing: ./testing/environments/minimal/run.sh
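With a proxy instance listening (port 8082 by default), a quick smoke test is to produce a JSON record over HTTP using the v2 embedded JSON format. A minimal sketch with Python's requests; the host and topic name are placeholders.

import requests

REST_PROXY = "http://localhost:8082"   # default listener; adjust if configured differently
TOPIC = "test-topic"                   # placeholder topic

response = requests.post(
    f"{REST_PROXY}/topics/{TOPIC}",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json={"records": [{"value": {"greeting": "hello from the REST Proxy"}}]},
)
response.raise_for_status()
print(response.json())  # partition/offset details for the produced record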
https://docs.confluent.io/5.5.0/kafka-rest/index.html
2021-01-15T20:32:38
CC-MAIN-2021-04
1610703496947.2
[]
docs.confluent.io
You can identify cluster performance issues when a cluster component goes into contention. When a component is in contention, I/O requests can spend time waiting, for example for transfer ready transactions to finish before the cluster can respond to an I/O request. If the network component is in contention, it means high wait time at the protocol layer is impacting the latency of one or more workloads. An SSD Aggregate consists of all SSDs (an all-flash aggregate), or a mix of SSDs and a cloud tier (a FabricPool aggregate).
https://docs.netapp.com/ocum-98/topic/com.netapp.doc.onc-um-ag/GUID-6B3E55C3-BD76-4B3D-A9CC-FA0128B60D19.html?lang=en
2021-01-15T21:47:36
CC-MAIN-2021-04
1610703496947.2
[]
docs.netapp.com
The concept of dynamic constraint is to allow parametric calculation of a Constraint Property with another constraint rather than one from its type (Constraint Block). Such constraint can be dynamically obtained during the simulation from typing Constraint Block or one of its subtypes. This section uses the ForwardContractValuation.mdzip sample to demonstrate this concept. To dynamically apply different constraints to a Constraint Property - Create a Constraint Block with all needed Constraint Parameter(s). - Create Constraint Blocks which inherit the Constraint Block created in step 1. - Specify the constraint expression of each Constraint Blocks created in step 2. - Type the Constraint Property with the Constraint Block created in step 1. - Create a behavior, for example State Machine, that will assign the runtime object of different Constraint Blocks created in step 2 to the Constraint Property. In ForwardContractValuation.mdzip, we will dynamically assign a constraint to the Valuation_Rule Constraint Property, which is typed by the Valuation Constraint Block (having two subtypes: Long Valuation and Short Valuation Constraint Blocks with two different constraint expressions) as shown in the following figure. The context of this simulation will be the System Block. A Valuation Constraint Block as the Type of the Valuation_Rule Constraint Property. In the following figure, the state machine is used as the classifier behavior of the System block. The Calculate Contract Value for Long Position and Calculate Contract Value for Short Position states apply different valuations to the System block via its entry activity. System Classifier Behavior State Machine. The Calculate Contract Value for Long Position's entry activity assigns the runtime value of the Long Valuation Constraint Block to the Valuation_Rule Constraint Property (see the first figure below). Consequently, the values of the constraint parameters will be calculated with the constraint expression specified in Long Valuation. Entry Activity of Calculate Contract Value for Long Position State. Similarly, the second figure below demonstrates the Short Valuation dynamic assignment. Entry Activity of Calculate Contract Value for Short Position State. Sample model The model used in the figures on this page is the ForwardContractValuation.mdzip sample model that comes with the modeling tool. To open this sample, do one of the following - Download ForwardContractValuation.mdzip - Find in the modeling tool <modeling tool installation directory>\samples\simulation\Parametrics\ForwardContractValuation.mdzip. Related pages - Specifying the language for the expression - Value binding - Evaluating expressions - Evaluation with causality -
https://docs.nomagic.com/display/CST190/Dynamic+constraint
2021-01-15T20:33:03
CC-MAIN-2021-04
1610703496947.2
[]
docs.nomagic.com
These are the docs for 13.8, an old version of SpatialOS. The docs for this version are frozen: we do not correct, update or republish them. 14.5 is the newest version. spatial cloud upload Upload local assemblies. Multiple uploads to the same assembly name in parallel can result in undefined behavior. spatial cloud upload <assembly name> [flags] Options: --enable_pre_upload_check Run sanity checks and fixes on the assembly before uploading it. (default true)
https://docs.improbable.io/reference/13.8/shared/spatial-cli/spatial-cloud-upload
2021-01-15T20:40:04
CC-MAIN-2021-04
1610703496947.2
[]
docs.improbable.io
How to Capture and Inspect Web Traffic Environment Description Fiddler Everywhere captures and inspects web traffic through HTTP and HTTPS. The captured traffic allows you to debug your web application while using the Fiddler's Request and Response Inspectors. Capturing Web Traffic Fiddler Everywhere can capture web traffic (for example, from a browser) made via HTTP or HTTPS. The captured traffic is listed as sessions in the Live Traffic. Follow the steps below to capture web traffic: Open Fiddler Everywhere and focus the main Live Traffic tab.. Make sure that Live Traffic switch is set to Capturing. To stop capturing the live traffic, switch it back to Paused. By default, Fiddler Everywhere can capture only non-secure traffic (HTTP). If you want to enable capturing of secure traffic (HTTPS), then follow the steps described in the Settings > HTTPS article. With the Live Traffic being set to Capturing, open a browser, enter the HTTP address and make the request. Alternatively, if you do not make the HTTP/HTTPS request through a browser, open the application and make the web request. For example: Open Google Chrome browser and enter Go back to Fiddler Everywhere. In the Live Traffic section, you will notice the live traffic being captured. Any new outgoing requests and upcoming responses (for example, after navigating deeper into a website or opening a new website) are continuously captured in the Live Traffic panel. Switch back to Paused to stop the live capturing and investigate specific sessions without polluting your Live Traffic_. You could also select one or more sessions from the __Live Traffic and save and share them or remove them from the list. Inspect Web Traffic Fiddler Everywhere provides functionality to inspect the already captured sessions. The live traffic sessions are composed of HTTP/HTTPS requests and responses. Each HTTP request can be observed through the Request Inspector. Each HTTP response can be observed through the Response Inspector. Both inspectors are powerful tools that can visualize the received content via different Inspector types. Follow the steps below to inspect a request and its respective response: Open Fiddler Everywhere and capture traffic, as mentioned above. Select a session row in the Live Traffic. To the right, at the top, is located the Request Inspector. You could choose a different inspector type to visualize the requested content. The default one is Headers. To the right, at the bottom, is located the Response Inspector. You could choose a different inspector type to visualize the requested content. The default one is Headers.
https://docs.telerik.com/fiddler-everywhere/knowledge-base/capture-and-inspect-web-traffic
2021-01-15T21:13:18
CC-MAIN-2021-04
1610703496947.2
[]
docs.telerik.com
Upgrading Cumulus Linux This topic describes how to upgrade Cumulus Linux on your switch. Deploying, provisioning, configuring, and upgrading switches using automation is highly recommended,. Consider making the following files and directories part of a backup strategy... 3.7.12 to 4.1.0). 4.0, or if you use third-party applications (package upgrade does not replace or remove third-party applications, unlike disk. sudo -E apt-get update and sudo 4.0.0 and run the sudo -E apt-get upgrade command on that switch, the packages are upgraded to the latest releases contained in the latest 4.1.0, these two files display the release as 4.1.0 after the upgrade. - The /etc/image-releasefile is updated only when you run a disk image install. Therefore, if you run a disk.1 from version 3.7.10 or later. If you are using a version of Cumulus Linux earlier than 3.7.10, you must upgrade to version 3.7.10 first, then upgrade to version 4.1. Version 3.7.10 is available on the downloads page on our website. During upgrade, MLAG bonds stay single-connected while the switches are running different major releases; for example, while leaf01 is running 3.7.12 and leaf02 is running 4.1.1.: -. See Back up and Restore..
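Since a package upgrade is ultimately just the two apt-get commands, it can be driven across a fleet from an automation host. A rough sketch using paramiko; the inventory, user, key-based authentication, passwordless sudo, and the -y flag are all assumptions, and in practice you would fold this into your existing automation tooling (for example Ansible).

import paramiko

SWITCHES = ["leaf01", "leaf02", "spine01"]    # placeholder inventory
COMMANDS = [
    "sudo -E apt-get update",
    "sudo -E apt-get upgrade -y",             # -y assumed for unattended runs
]

for host in SWITCHES:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="cumulus")  # key-based auth and passwordless sudo assumed
    for cmd in COMMANDS:
        _, stdout, stderr = client.exec_command(cmd)
        exit_code = stdout.channel.recv_exit_status()
        print(f"{host}: {cmd!r} -> exit {exit_code}")
        if exit_code != 0:
            print(stderr.read().decode())
            break
    client.close()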
https://docs.cumulusnetworks.com/cumulus-linux-41/Installation-Management/Upgrading-Cumulus-Linux/
2021-01-15T21:19:17
CC-MAIN-2021-04
1610703496947.2
[]
docs.cumulusnetworks.com
Open the Microsoft 365 App launcher, select All apps, and then select Stream, or go to stream.microsoft.com and sign in with your work or school credentials.

Trending videos: The very top of your home page features a few of your company's trending videos. These are the most popular videos that your colleagues are watching, liking, or commenting on. You can click any of them to start watching. Stream is more valuable when your colleagues are also using it, so invite your coworkers from the navigation bar or by clicking the notification. By inviting them, you enable them to view the videos you have already uploaded and give them a chance to upload their videos and share them with you. For more information, see Invite co-workers in Stream.

Learn to use Microsoft Stream: Directly from the home page, use the tutorial videos. These videos cover the most common actions in Microsoft Stream and make it easy for you to get started. After you're familiar with Stream, to hide these videos, click Don't show this again.

My watchlist: This section only shows if you have videos in your watchlist. A watchlist is a convenient way to bookmark videos that you want to come back to. These could be videos you really want to see but don't have time for at that moment, or videos you want to watch again. From any video, click Add to watchlist. To learn how to manage your watchlist, see Manage your watchlist.

Followed channels: Channels are a great way to organize content. To stay in touch with new videos added to a channel or to bookmark it, you can easily follow a channel. If you are following a channel, this part of the home page lets you see the new videos added to the channel and find the channels you follow easily. If you are no longer interested in a channel, to unfollow it, click Following. For more information, see Follow channel.

Trending videos and popular channels: At the bottom of the Home page you will see some more Trending videos and also Popular channels. Trending videos are determined by the number of views, likes and comments. They're a convenient way to keep in touch with what's really popular in your organization. Popular channels highlight channels that are getting attention.

See also Understand privacy settings
https://docs.microsoft.com/en-us/stream/portal-get-started?culture=ar-sa&country=WW
2021-01-15T21:01:32
CC-MAIN-2021-04
1610703496947.2
[array(['media/stream-portal-getting-started/stream-getting-started001.png', 'Getting started icons in Stream'], dtype=object) array(['media/stream-portal-getting-started/stream-getting-started002.png', 'Navigation bar in Stream'], dtype=object) array(['media/stream-portal-getting-started/stream-getting-started003.png', 'Getting started tutorials in Stream'], dtype=object) array(['media/stream-portal-getting-started/stream-getting-started004.png', 'Watchlist in Stream'], dtype=object) array(['media/stream-portal-getting-started/stream-getting-started005.png', 'Follow channels in Stream'], dtype=object) array(['media/stream-portal-getting-started/stream-getting-started006.png', 'Trending videos'], dtype=object) ]
docs.microsoft.com
public static interface Session.SessionLock

Session locks are identified by a lock name. Lock names are arbitrary and chosen at will to suit the application. Each lock is owned by at most one session. Locks are established on demand; there is no separate operation to create or destroy a lock. A session lock is acquired using the Session.lock(String) method. If no other session owns the lock, the server will assign the lock to the calling session immediately. Otherwise, the server will record that the session is waiting to acquire the lock. A session can call lock more than once for a given session lock – if the lock is acquired, all calls will complete successfully with equal SessionLocks. If a session closes, the session locks it owns are automatically released. A session can also release a lock. When a session lock is released and other sessions are waiting to acquire the lock, the server will arbitrarily select one of the waiting sessions and notify it that it has acquired the lock. All of the newly selected session's pending lock calls will complete normally. Other sessions will continue to wait. The Session.lock(String, SessionLockScope) variant of this method takes a scope parameter that provides the further option of automatically releasing the lock when the session loses its connection to the server. The acquisition life cycle of a session lock from the perspective of a session is shown in the following diagram. Unlike the Lock API, there is no association between a lock and a thread. If a session calls this method for a lock it already owns, the call will complete normally and immediately with a SessionLock that is equal to the one returned when the lock was originally acquired. A single call to unlock() will release this session's claim to a lock. A further difference to java.util.concurrent.locks.Lock is that lock ownership can be lost due to an independent event such as loss of connection, and not only due to the use of the locking API by the owner. Consequently, the session should poll using isOwned() to check that it still owns the lock before accessing the protected resource. This session lock API has inherent race conditions. Even if an application is coded correctly to protect a shared resource using session locks, there may be a period where two or more sessions concurrently access the resource. The races arise for several reasons; for example, because ownership is checked by polling isOwned(), the lock can be lost after the check has succeeded but before the resource is accessed. Despite this imprecision, session locks provide a useful way to coordinate session actions.

String getName() – the name of the session lock.
long getSequence() – a value that identifies this acquisition of the lock with the given name. SessionLocks that are acquired later are guaranteed to have bigger sequence values, allowing the sequence number to be used as a fencing token.
boolean isOwned() – whether this session still owns the lock.
Session.SessionLockScope getScope() – the scope of the lock. The scope determines when the lock will be released automatically. If a session makes multiple requests for a lock using different scopes, and the server assigns the lock to the session fulfilling the requests, the lock will be given the weakest scope (UNLOCK_ON_CONNECTION_LOSS). Consequently, an individual request can complete with a lock that has a different scope to that requested. See also Session.lock(String, SessionLockScope).
CompletableFuture<Boolean> unlock() – On completion, this session will no longer own the named session lock. If the CompletableFuture completes normally, a true value indicates this session previously owned the lock and a false value indicates it did not.
If the CompletableFuture completes exceptionally, this session does not own the session lock. Common reasons for failure, indicated by the exception reported as the cause, include: SessionClosedException – if the session is closed. See also Session.lock(String).
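As a rough illustration of the acquire/check/release flow described above, here is a sketch in Java. It assumes an already-connected Session named session and that Session.lock(String) completes asynchronously with a Session.SessionLock, as documented here; the lock name used is arbitrary, as noted above.

// Sketch: runs inside application code that already holds a connected Session.
session.lock("shared-resource-lock").whenComplete((lock, error) -> {
    if (error != null) {
        // The lock was not acquired, for example because the session closed.
        return;
    }
    try {
        // Ownership can be lost independently (e.g. connection loss), so re-check it.
        if (lock.isOwned()) {
            // Access the protected resource here.
            // lock.getSequence() can be passed along as a fencing token.
        }
    } finally {
        // Release this session's claim; completes with true if the session owned the lock.
        lock.unlock();
    }
});

Because of the inherent races noted above, treat this as best-effort coordination rather than a strict mutual-exclusion guarantee.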
https://docs.pushtechnology.com/docs/6.5.2/android/com/pushtechnology/diffusion/client/session/Session.SessionLock.html
2021-01-15T20:31:09
CC-MAIN-2021-04
1610703496947.2
[]
docs.pushtechnology.com
The following steps will also help you to enable Cloud Messaging for your Android app in case you already had GCM configured but your credentials became invalid due to the last Google update. 1. Go to the Firebase console and add Firebase to your Google project. To do so, press the Add project button and select your Google project from the dropdown list as shown below. 2. Make sure you selected the correct project, then select the Country and click Add Firebase. 3. Go to the Project Settings and select Cloud Messaging. 4. Copy the provided Firebase Cloud Messaging Token and Sender ID to your Pushwoosh Control Panel. Important: Check that the Google Project Number (FCM Sender ID) in your application build matches the one in your Firebase Console. It should be located in your AndroidManifest.xml in a native Android app; for cross-platform plugins, please refer to the corresponding plugin integration guides. 5. Here's the API key you will need to configure your application in the Pushwoosh Control Panel. Go to the Pushwoosh Control Panel. 6. In your application, click Android -> Edit to change the configuration for the Android application. 7. Copy your Firebase Cloud Messaging Token to the API Key field. 8. Copy your Sender ID to the FCM Sender ID field. FCM Sender ID is a unique numerical value created when you configure your Google project (given as Project Number in the Google Developers Console and as Sender ID in the Firebase console). In order to locate your Project Number, please go to the Android FAQ. That's it! Easy, isn't it? MismatchSenderID error: If you see "MismatchSenderID" errors in your Message History, please try the following: Check that the Google Project Number in your app matches the one in your Google Developers Console. Firebase is deprecating Server Keys and replacing them with Firebase Cloud Messaging Tokens, and occasionally Server Keys may just stop working. If you still use a Server Key as the API key when configuring your Pushwoosh Android app and get the MismatchSenderID error, please make sure to use the Firebase Cloud Messaging Token instead of the Server Key.
https://docs.pushwoosh.com/platform-docs/pushwoosh-sdk/android-push-notifications/firebase-integration/gcm-to-fcm-migration
2021-01-15T21:44:17
CC-MAIN-2021-04
1610703496947.2
[]
docs.pushwoosh.com
and Arguments window value.
https://docs.trifacta.com/display/r064/ROWNUMBER+Function
2021-01-15T21:24:52
CC-MAIN-2021-04
1610703496947.2
[]
docs.trifacta.com
You can use the App Removal Log page to continue to hold application removal commands, dismiss commands, or release the commands to devices. The command status in the console displays in the application removal log and represents a phase of the protection process. Until an admin acts on the held commands, the system does not remove internal applications. Procedure - Navigate to the App Removal Log page. - Filter, sort, or browse to select data. - Filter results by Command Status to list applications. - Sort by Bundle ID to select data. - Select an application. - You can select the Impacted Device Count link to browse the list of devices affected by actions. This action displays the App Removal Log Devices page that lists the names of the affected devices. You can use the device name to navigate to the devices' Details View. - You can select Release or Dismiss. - The Release option sends the commands to devices and the system removes the internal application off devices. - The Dismiss option purges the removal commands from the queue and the system does not remove the internal application off devices. - For dismissed commands, return to the internal applications area of the console and check the smart group assignments of the application for which you dismissed commands. Ensure that the internal application's smart group assignments are still valid. If the smart group assignment is invalid and you do not check it, the system might remove the application when the device checks in with the system.
https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/services/Application_Management/GUID-AWT-APP-REM-PROT-LOGACTIONS.html
2021-01-15T21:17:56
CC-MAIN-2021-04
1610703496947.2
[]
docs.vmware.com
Data Requests JS Library
This JS library allows you to embed a signing form on your own website. See: Data Requests (Electronic Signatures)
Latest Version: 2.3.0
Release Notes
2.3.0 (Apr 25, 2020) - For inline iframes: Changed the default iframeStyle to display: block; min-height: 500px;. (display: block removes any space at the bottom of inline iframes.) - Added iframeClass and iframeStyle options, so that you can configure the class="" and style="" attributes on the iframe element. (Available for both modal and inline iframes.)
2.2.4 (Apr 25, 2020) - Fixed crash for any browsers configured to block all cookies. Before this fix, the loading spinner would sometimes get stuck and the signing iframe would fail to load.
2.2.1 (Apr 24, 2020) - Improved logging and error reporting to investigate iframe loading issues
2.2.0 (Apr 2, 2020) - Fixed bug where static form data could not be overridden after clearing a field
2.0.0 (May 20, 2019) - Renamed FormAPI to DocSpring
https://docspring.com/docs/embedded_library_releases/data_requests.html
2021-01-15T20:36:56
CC-MAIN-2021-04
1610703496947.2
[]
docspring.com
These classes together provide a framework for editable toolbars based on the GtkAction facility introduced with Gtk+ 2.4. To create editable toolbars with exo, use the ExoToolbarsModel class with the ExoToolbarsView widget. This widget is designed around a Model/View/Controller design and consists of three parts: the toolbar model (ExoToolbarsModel), the toolbar view (ExoToolbarsView), and the toolbar editor (ExoToolbarsEditor). The ExoToolbarsEditorDialog widget is provided for convenience; it simply wraps the ExoToolbarsEditor component into a GtkDialog.
https://docs.bluesabre.org/exo/exo-toolbars.html
2020-02-17T03:12:38
CC-MAIN-2020-10
1581875141653.66
[]
docs.bluesabre.org
June 2017 Volume 32 Number 6 [DevOps] Git Internals for Visual Studio Developers By Jonathan Waldman | June 2017 In my commit to Git DevOps article (msdn.com/magazine/mt767697), I explained how the Git version control system (VCS) differs from centralized VCSes with which you might already be familiar. I then demonstrated how to accomplish certain Git tasks using the Git tooling in Visual Studio. In this article, I’ll summarize relevant changes to how Git works within the newly released Visual Studio 2017 IDE, discuss how the Git repo is implemented in the file system, and examine the topology of its data store and the structure and content of its various storage objects. I’ll conclude with a low-level explanation of Git branches, providing a perspective that I hope will prepare you for more advanced Git operations I’ll present in upcoming articles. Note: I use no servers or remotes in this article—I’m exploring a purely local scenario that you can follow using any Windows machine with Visual Studio 2017 and Git for Windows (G4W) installed (with or without an Internet or network connection). This article is an introduction to Git internals and assumes you’re familiar with Visual Studio Git tooling and basic Git operations and concepts. Visual Studio, Git and You Git refers not only to the repository (“repo”) that contains the version-control data store, but also to the engine that processes commands to manage it: Plumbing commands carry out low-level operations; porcelain commands bundle up plumbing commands in macro-like fashion, making them easier if less granular to invoke. As you master Git, you’ll discover that some tasks require the use of these commands (some of which I’ll use in this article), and that invoking them requires a command-line interface (CLI). Unfortunately, Visual Studio 2017 no longer installs a Git CLI because it uses a new Git engine called MinGit that doesn’t provide one. MinGit (“minimal Git”), introduced with G4W 2.10, is a portable, reduced-feature-set API designed for Windows applications that need to interact with Git repositories. G4W and, by extension, MinGit, are forks of the official Git open source project. This means they both inherit the official Git fixes and updates as soon as they’re available—and it ensures that Visual Studio can do the same. To access a Git CLI (and to follow along with me), I recommend installing the full G4W package. While other Git CLI/GUI tooling options are available, G4W (as MinGit’s official parent) is a wise choice—especially because it shares its configuration files with MinGit. To obtain the latest G4W setup, visit the Downloads section at the site’s official source: git-scm.com. Run the setup program and select the Git Bash Here checkbox (creates a Git command-prompt window) and the Git GUI Here checkbox (creates a Git GUI window)—which makes it easy to right-click a folder in Windows Explorer—and select one of those two options for the current folder (the “Bash” in Git Bash refers to Bourne Again Shell, which presents a Git CLI in a Unix shell for G4W). Next, select Use Git from the Windows Command Prompt, which configures your environment so that you can conveniently run Git commands from either a Visual Studio package manager console (Windows PowerShell) or a command-prompt window. 
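If you want to confirm that the command-line plumbing is wired up before continuing, a quick sanity check from either prompt is to ask Git for its version; the exact version string you see will of course depend on the G4W build you installed:

git --version

If the command is not recognized, re-run the G4W setup and make sure the Use Git from the Windows Command Prompt option is selected.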
If G4W is installed using the options I’ve recommended here, Figure 1 shows the communication pathways that will be in effect when communicating with a Git repo: Visual Studio 2017 uses the MinGit API while PowerShell and command-prompt sessions use the G4W CLI—a very different communication pathway en route to the Git repo. Although they serve as different communication endpoints, MinGit and G4W are derived from the official Git source code—and they share their configuration files. Notice that when you issue porcelain commands, they’re translated into plumbing commands before being processed by the CLI. The point here is to understand that Git experts often resort to—and some thrive on—issuing bare-metal Git plumbing commands to a CLI because doing so is the most direct, lowest-level way with which to manage, query and update a Git repo. In contrast to low-level plumbing commands, higher-level porcelain commands and Git operations exposed by the Visual Studio IDE also can update the Git repo—yet it’s not always clear how, especially because porcelain commands often accept options that change what they do when they’re invoked. I’ve concluded that familiarity with Git plumbing commands is essential to wielding the power of Git and that’s why I strongly recommend installing G4W alongside Visual Studio 2017. (Read details about Git plumbing and porcelain commands at git-scm.com/docs.) Figure 1 Communication Paths to and from the MinGit API and the Git for Windows Command-Line Interface Low-Level Git It’s natural for a Visual Studio developer to try to leverage existing knowledge of a VCS, such as Team Foundation Server (TFS), when transitioning to Git. Indeed, there’s an overlap in the terms and concepts used to describe operations in both systems—such as checking out/checking in code, merging, branching and so on. However, the assumption that a similar vocabulary implies similar underlying operations is downright wrong and dangerous. That’s because the decentralized Git VCS is fundamentally different in how it stores and tracks files and in the way it implements familiar version-control features. In short, when transitioning to Git, it’s probably best to just forget everything you know about centralized VCSes and start afresh. When you’re working on a Visual Studio project that’s under Git source control, the typical edit/stage/commit workflow works something like this: You add, edit and delete (collectively hereafter, “change”) files in your project as needed. When ready, you stage some or all of those changes before committing them to the repo. Once committed, those changes become part of the repo’s complete and transparent history. Now, let’s see how Git manages all that internally. The Directed Acyclic Graph Behind the scenes, each commit ends up as a vertex (node) on a Git-managed directed acyclic graph (“DAG,” in graph-theory parlance). The DAG represents the Git repo and each vertex represents a data element known as a commit object (see Figure 2). Vertices in a DAG are connected with a line called an edge; it’s customary for DAG edges to be drawn as arrows so they can express a parent/child relationship (the head points to the parent vertex; the tail points to the child vertex). The origin vertex represents the repo’s first commit; a terminal vertex has no child. DAG edges express the exact parent-child relationship between each of its vertices.
Because Git commit objects (“commits”) are vertices, Git can leverage the DAG structure to model the parent-child relationship between every commit, giving Git its ability to generate a history of changes from any commit back to the repo’s initial commit. Furthermore, unlike linear graphs, a DAG supports branches (a parent with more than one child), as well as merges (a child with more than one parent). A Git branch is spawned whenever a commit object produces a new child and a merge occurs when commit objects are combined to form a child. Figure 2 A Directed Acyclic Graph Showing Vertex, Edge, Head, Tail, Origin Vertex and Terminal Vertices; Three Branches (A, B and C); Two Branch Events (at A4); and One Merge Event (B3 and A5 Are Merged at A6) I’ve explored the DAG and its associated terminology in great detail because such knowledge is a prerequisite to understanding advanced Git operations, which tend to work by manipulating vertices on the Git DAG. Furthermore, DAGs help to visualize the Git repo, and they’re widely leveraged in teaching materials, during presentations and by Git GUI tools. Git Objects Overview So far I’ve mentioned only the Git commit object. However, Git actually stores four different object types in its repo: commit, tree, blob and tag. To investigate each of these, launch Visual Studio (I’m using Visual Studio 2017, but earlier versions that include Git support will work similarly) and create a new console application using File | New Project. Give your project a name, check the Create new Git repository checkbox and click OK. (If you haven’t configured Git within Visual Studio before, you’ll see a Git User Information dialog box appear. If you do, specify your name and e-mail address—this information is written to the Git repo for each of your commits—and check the “Set in global .gitconfig” checkbox if you want to use this information for every Git repo on your machine.) When done, open a Solution Explorer window (see Figure 3, Marker 1). You’ll see light-blue lock icons next to files that are checked-in—even though I haven’t yet issued a commit! (This is an example showing that Visual Studio sometimes carries out actions against your repo that you might not anticipate.) To see exactly what Visual Studio did, look at the history of changes for the current branch. Figure 3 A New Visual Studio Project with Its Git-Repo History Report Git names the default branch master and makes it the current branch. Visual Studio displays the current branch name at the right edge of its status bar (Marker 2). The current branch identifies the commit object on the DAG that will be the parent of the next commit (more on branches later). To view the commit history for the current branch, click the master label (Marker 2) then select View History (Marker 3) from the menu. The History - master window displays several columns of information. At the left (Marker 4) are two vertices on the DAG; each graphically represents a commit on the Git DAG. The ID, Author, Date and Message columns (Marker 5) show details about the commit. The HEAD for the master branch is indicated by a dark red pointer (Marker 6)—I’ll fully explain the meaning of this toward the end of this article. This HEAD marks the location of the next edge arrow’s head after a commit adds a new vertex to the DAG. The report shows that Visual Studio issued two commits, each with its own Commit ID (Marker 7). The first (earliest) is uniquely identified with ID a759f283; the second with bfeb0957.
These values are truncated from the full 40-character hexadecimal Secure Hash Algorithm 1 (SHA-1). The SHA-1 is a cryptographic hash function engineered to detect corruption by taking a message, such as the commit data, and creating a message digest—the full SHA-1 hash value—such as the commit ID. In short, the SHA-1 hash acts not only like a checksum, but also like a GUID because it provides roughly 1.46 x 10^48 unique combinations. Like many other Git tools, Visual Studio expresses only the first eight characters of the full value because these provide 4.3 billion unique values, enough to avoid collisions in your daily work. If you want to see the full SHA-1 value, hover the mouse over a line in the History report (Marker 8). While the View History report’s message column indicates the stated purpose of each commit (provided by the committer during a commit), it is, after all, just a comment. To examine a commit’s actual changes, right-click a row in the list and select View Commit Details (see Figure 4). Figure 4 Commit Details for the Repo’s First Two Commits The first commit (Marker 1) shows two changes: .gitignore and .gitattributes (I discussed these files in my previous article). The [add] next to each indicates files added to the repo. The second commit (Marker 2) shows five files added and also displays its parent-commit object’s ID as a clickable link. To copy the entire SHA-1 value to the clipboard, simply click the Actions menu and select Copy Commit ID. File-System Implementation of a Git Repo To see how Git stores these files in the repo, right-click the solution (not the project) in Solution Explorer and select Open Folder in File Explorer. In the solution’s root you’ll see a hidden folder called .git (if you don’t see .git, click Hidden items in the File Explorer View menu). The .git folder is the Git repo for your project. Its objects folder defines the DAG: All DAG vertices and all parent-child relationships between each vertex are encoded by files that represent every commit in the repo starting with the origin vertex (refer back to the DAG figure earlier in this article). The .git folder’s HEAD file and refs folder define branches. Let’s look at these .git items in detail. Exploring Git Objects The .git\objects folder stores all Git object types: commit (for commits), tree (for folders), blob (for binary files) and tag (a friendly commit-object alias). Commit Object Now’s the time to launch a Git CLI. You can use whichever tool you prefer (Git Bash, PowerShell or command window)—I’ll use PowerShell. To begin, navigate to the solution root’s .git\objects folder, then list its contents (Figure 5, Marker 1). You’ll see that it contains a number of folders named using two-character hex values. To avoid exceeding the number of files permitted in a folder by the OS, Git removes the first two characters from each 40-character SHA-1 value in order to create a folder name, then it uses the remaining 38 characters as the file name for the object to store. To illustrate, my project’s first commit has ID a759f283, so that object will appear in a folder called a7 (the first two characters of the ID). As expected, when I open that folder, I see a file named 59f283. Remember that all of the files stored in these hex-named folders are Git objects. To save space, Git zlib-compresses the files in the object store. Because this kind of compression produces binary files, you won’t be able to view these files using a text editor.
Instead, you’ll need to invoke Git commands that can properly extract Git-object data and present it using a format you can digest. Figure 5 Exploring Git Objects Using a Git Command-Line Interface I already know that file 59f283 contains a commit object because it’s a commit ID. But sometimes you’ll see a file in the objects folder without knowing what it is. Git provides the cat-file plumbing command to report the type of an object, as well as its contents (Marker 3). To obtain the type, specify the -t (type) option when invoking the command, along with the few unique characters of the Git object’s file name: git cat-file -t a759f2 On my system, this reports the value “commit”—indicating that the file starting with a759f2 contains a commit object. Specifying only the first five characters of the SHA-1 hash value is usually enough, but you can provide as many as you want (don’t forget to add the two characters from the folder name). When you issue the same command with the -p (pretty print) option, Git extracts information from the commit object and presents it in a format that’s human-readable (Marker 4). A commit object is composed of the following properties: Parent Commit ID, Tree ID, Author Name, Author Email Address, Author Commit Timestamp, Committer Name, Committer Email Address, Committer Commit Timestamp and Commit Message (the parent commit ID isn’t displayed for the first commit in the repo). The SHA-1 for each commit object is computed from all of the values contained in these commit-object properties, virtually guaranteeing that each commit object will have a unique commit ID. Tree and Blob Objects Notice that although the commit object contains information about the commit, it doesn’t contain any files or folders. Instead, it has a Tree ID (also an SHA-1 value) that points to a Git tree object. Tree objects are stored in the .git\objects folder along with all other Git objects. Figure 6 depicts the root tree object that’s part of each commit object. The root tree object maps, in turn, to blob objects (covered next) and other tree objects, as needed. Figure 6 Visualization of Git Objects That Express a Commit Because my project’s second commit (Commit ID bfeb09) includes files, as well as folders (see the earlier Figure 4), I’ll use it to illustrate how the tree object works. Figure 7, Marker 1 shows the cat-file -p bfeb09 output. This time, notice that it includes a parent property, which correctly references the SHA-1 value for the first commit object. (Remember that it’s a commit object’s parent reference that enables Git to construct and maintain its DAG of commits.) Figure 7 Using the Git CLI to Explore Tree Object Details The root tree object maps, in turn, to blob objects (zlib-compressed files) and other tree objects, as needed. Commit bfeb09 contains a tree property with ID ca853d. Figure 7, Marker 2 shows the cat-file -p ca853d output. Each tree object contains a permissions property corresponding to the POSIX permissions mask of the object (040000 = Directory, 100644 = Regular non-executable file, 100664 = Regular non-executable group-writeable file, 100755 = Regular executable file, 120000 = Symbolic link, and 160000 = Gitlink); type (tree or blob); SHA-1 (for the tree or blob); and name. The name is the folder name (for tree objects) or the file name (for blob objects). Observe that this tree object is composed of three blob objects and another tree object.
You can see that the three blobs refer to files .gitattributes, .gitignore and DemoConsole.sln, and that the tree refers to folder DemoConsoleApp (Figure 7, Marker 3). Although tree object ca853d is associated with the project’s second commit, its first two blobs represent files .gitattributes and .gitignore—files added during the first commit (see Figure 4, Marker 1)! The reason these files appear in the tree for the second commit is that each commit represents the previous commit object along with changes captured by the current commit object. To “walk the tree” one level deeper, Figure 7, Marker 3 shows the cat-file -p a763da output, showing three more blobs (App.config, DemoConsoleApp.csproj and Program.cs) and another tree (folder Properties). Blob objects are again just zlib-compressed files. If the uncompressed file contains text, you can extract a blob’s entire content using the same cat-file command along with the blob ID (Figure 7, Marker 5). Because blob objects represent files, Git uses the SHA-1 blob ID to determine if a file changed from the previous commit; it also uses SHA-1 values when diffing any two commits in the repo. Tag Object The cryptic alphanumeric nature of SHA-1 values can be a bit unwieldy to communicate. The tag object lets you assign a friendly name to any commit, tree or blob object—although it’s most common to tag only commit objects. There are two types of tag object: lightweight and annotated. Both types appear as files in the .git\refs\tags folder, where the file name is the tag name. The content of a lightweight tag file is the SHA-1 to an existing commit, tree or blob object. The content of an annotated tag file is the SHA-1 to a tag object, which is stored in the .git\objects folder along with all other Git objects. To view the content of a tag object, leverage the same cat-file -p command. You’ll see the SHA-1 value of the object that was tagged, along with the object type, tag author, date-time and tag message. There are a number of ways to tag commits in Visual Studio. One way is to click the Create Tag link in the Commit Details window. Tag names appear in the Commit Details window (Marker 3) and in View History reports (see the earlier View History report, Marker 9). Git populates the info and pack folders in the .git\objects folder when it applies storage optimizations to objects in the repo. I’ll discuss these folders and the Git file-storage optimizations more fully in an upcoming article. Armed with knowledge about the four Git object types, realize that Git is referred to as a content-addressable file system because any kind of content across any number of files and folders can be reduced to a single SHA-1 value. That SHA-1 value can later be used to accurately and reliably recreate the same content. Put another way, the SHA-1 is the key and the content is the value in an exalted implementation of the usually prosaic key-index-driven lookup table. Additionally, Git can economize when file content hasn’t changed between commits because an unchanged file produces the same SHA-1 value. This means that the commit object can reference the same SHA-1 blob or tree ID value used by a previous commit without having to create any new objects—this means no new copies of files! Branching Before truly understanding what a Git branch is, you must master how Git internally defines a branch. Ultimately, this boils down to grasping the purpose of two key terms: head and HEAD.
The first, head (all lowercase), is a reference Git maintains for every new commit object. To illustrate how this works, Figure 8 shows several commits and branch operations. For Commit 01, Git creates the first head reference for the repo and names it master by default (master is an arbitrary name with no special meaning other than it’s a default name—Git teams often rename this reference). When Git creates a new head reference, it creates a text file in its refs\heads folder and places the full SHA-1 for the new commit object into that file. For Commit 01, this means that Git creates a file called master and places the SHA-1 for commit object A1 into that file. For Commit 02, Git updates the master head file in the heads folder by removing the old SHA-1 value and replacing it with the full SHA-1 commit ID for A2. Git does the same thing for Commit 03: It updates the head file called master in the heads folder so that it holds the full commit ID for A3. Figure 8 Two Heads Are Better Than One: Git Maintains Various Files in Its Heads Folder Along with a Single HEAD File You might have guessed correctly that the file called master in the heads folder is the branch name for the commit object to which it points. Oddly, perhaps, at first, a branch name points to a single commit object rather than to a sequence of commits (more on this specific concept in a moment). Observe the Create Branch & Checkout Files section in Figure 8. Here, the user created a new branch for a print-preview feature in Visual Studio. The user named the branch feat_print_preview, based it on master and checked the Checkout branch checkbox in Team Explorer’s New Local Branch From pane. Checking the checkbox tells Git that you want the new branch to become the current branch (I’ll explain this in a moment). Behind the scenes, Git creates a new head file in the heads folder called feat_print_preview and places the SHA-1 value for commit object A3 into it. This now means that two files exist in the heads folder: master and feat_print_preview—both of which point to A3. At Commit 04, Git is faced with a decision: Normally, it would update the SHA-1 value for the file reference in the heads folder—but now it’s got two file references in that folder, so which one should it update? That’s what HEAD does. HEAD (all uppercase) is a single file in the root of the .git folder that points to a “head” (all lowercase) file in the heads folder. (Note that “HEAD” is actually a file that’s always named HEAD, whereas “head” files have no particular name.) The head file that HEAD points to contains the commit ID that will be assigned as the parent ID for the next commit object. In a practical sense, HEAD marks Git’s current location on the DAG, and there can be many heads but there’s always only one HEAD. Going back to Figure 8, Commit 01 shows that HEAD points to the head file called master, which, in turn, points to A1 (that is, the master head file contains the SHA-1 for commit object A1). At Commit 02, Git doesn’t need to do anything with the HEAD file because HEAD already points to the file master. Ditto for Commit 03. However, in the Create and Check-Out New Branch step, the user created a branch and checked out the files for the branch by marking the Checkout branch checkbox. In response, Git updates HEAD so that it points to the head file called feat_print_preview rather than to master. (If the user hadn’t checked the Checkout branch checkbox, HEAD would continue to point to master.)
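To see this plumbing for yourself, you can print the two files directly from a Git Bash or PowerShell prompt at the solution root. The branch name shown assumes the default master branch discussed above, and the SHA-1 you see will of course be specific to your repo:

cat .git/HEAD
# ref: refs/heads/master        (HEAD points to a head file, not directly to a commit)

cat .git/refs/heads/master
# <the full 40-character SHA-1 of the commit the branch currently points to>

Switching branches rewrites only the single line in .git/HEAD, which is why a checkout of another branch is so cheap.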
Armed with knowledge about HEAD, you now can see that Commit 04 no longer requires Git to make any decision: Git simply inspects the value of HEAD and sees that it points to the head file called feat_print_preview. It then knows that it must update the SHA-1 in the feat_print_preview head file so that it contains the commit ID for B1. In the Checkout Branch step, the user accessed the Team Explorer branches pane, right-clicked the master branch and chose Checkout. In response, Git checks out the files for Commit A3 and updates the HEAD file so that it points to the head file called master. At this point, it should be clear why branch operations in Git are so efficient and fast: Creating a new branch boils down to creating one text file (head) and updating another (HEAD). Switching branches involves updating only a single text file (HEAD) and then usually a small performance hit as files in the working directory are updated from the repo. Notice that commit objects contain no branch information! In fact, branches are maintained by only the HEAD file and the various files in the heads folder that serve as references. Yet when developers using Git talk about being on a branch or they refer to a branch, they often are colloquially referring to the sequence of commit objects that originates with master or from a newly formed branch. The earlier Figure 2 shows what many developers would identify as three branches: A, B and C. Branch A follows the sequence A1 through A6. Branch activity at A4 produces two new branches: B1 and C1. So the sequence of commits that begins with B1 and continues to B3 can be referred to as Branch B while the sequence from C1 to C2 can be referred to as Branch C. The takeaway here is not to forget the formal definition of a Git branch: It’s simply a pointer to a commit object. Moreover, Git maintains branch pointers for all branches (called heads) and a single branch pointer for the current branch (called HEAD). In my next article I’ll explore details about checking out and checking in files from and to the repo, the role of the index and how it constructs tree objects during each commit. I’ll also explore how Git optimizes storage and how merges and diffs work. Thanks to the following technical experts for reviewing this article: Kraig Brockschmidt and Ralph Squillace. Discuss this article in the MSDN Magazine forum
https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/june/devops-git-internals-for-visual-studio-developers
2020-02-17T05:08:50
CC-MAIN-2020-10
1581875141653.66
[array(['images/mt809117.waldman_figure1_hires%28en-us%2cmsdn.10%29.png', 'Communication Paths to and from the MinGit API and the Git for Windows Command-Line Interface Communication Paths to and from the MinGit API and the Git for Windows Command-Line Interface'], dtype=object) array(['images/mt809117.waldman_figure2_hires%28en-us%2cmsdn.10%29.png', 'Directed Acyclic Graph Showing Vertex, Edge, Head, Tail, Origin Vertex and Terminal Vertices Directed Acyclic Graph Showing Vertex, Edge, Head, Tail, Origin Vertex and Terminal Vertices'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt809117.Waldman_Figure%2003_hires(en-us,MSDN.10', 'A New Visual Studio Project with Its Git-Repo History Report A New Visual Studio Project with Its Git-Repo History Report'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt809117.Waldman_Figure%2004_hires(en-us,MSDN.10', 'Commit Details for the Repo’s First Two Commits Commit Details for the Repo’s First Two Commits'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt809117.Waldman_Figure%2005_hires(en-us,MSDN.10', 'Exploring Git Objects Using a Git Command-Line Interface Exploring Git Objects Using a Git Command-Line Interface'], dtype=object) array(['images/mt809117.waldman_figure6_hires%28en-us%2cmsdn.10%29.png', 'Visualization of Git Objects That Express a Commit Visualization of Git Objects That Express a Commit'], dtype=object) array(['https://msdn.microsoft.com/en-us/mt809117.Waldman_Figure%2007_hires(en-us,MSDN.10', 'Using the Git CLI to Explore Tree Object Details Using the Git CLI to Explore Tree Object Details'], dtype=object) array(['images/mt809117.waldman_figure8_hires%28en-us%2cmsdn.10%29.png', 'Two Heads Are Better Than One: Git Maintains Various Files in Its Heads Folder Along with a Single HEAD File Two Heads Are Better Than One: Git Maintains Various Files in Its Heads Folder Along with a Single HEAD File'], dtype=object) ]
docs.microsoft.com
Importing Booklets To A Team Account By Scott.Brownlee updated 5 months ago You can import multiple booklets at once to a team member account from your admin account. Open your TEAM MEMBER menu from your admin dashboard. Scroll down to the team member account you want to add booklets into. Tap the IMPORT button. Tap import beside any booklet you want to add from your admin account into the team account. When it has been imported, the booklet button will change to green and say IMPORTED. Continue to do this for every booklet you want to copy. This creates a duplicate copy of your booklet in their account. If you update that booklet, it will not affect the original booklet in your admin account. As well, updates to the admin account booklet will not affect the booklet imported into the team account.
https://docs.simplebooklet.com/article/116-importing-booklets-to-a-team-account
2020-02-17T04:46:13
CC-MAIN-2020-10
1581875141653.66
[array(['https://d258lu9myqkejp.cloudfront.net/users_profiles/14100/medium/guy-smiling-pink.png?1552082292', 'Avatar'], dtype=object) ]
docs.simplebooklet.com
Types of Splunk software licenses
Each Splunk instance requires a license. Splunk licenses specify how much data a given Splunk instance can index and what features you have access to. This topic discusses the various license types and options. In general, there are four types of licenses:
- The Enterprise license enables all enterprise features, such as authentication and distributed search.
- The Free license allows you a limited indexing volume and disables authentication, but is perpetual.
- The Forwarder license allows you to forward, but not index, data and enables authentication.
- The Beta license typically enables enterprise features, but is restricted to Splunk Beta releases.
Also discussed in this topic are some special licensing considerations if your deployment includes distributed search or index replication. For information about upgrading an existing license, see "Migrate to the new Splunk licenser" in the Installation Manual.
Enterprise license: Splunk Enterprise is the standard Splunk license, which enables all enterprise features.
Enterprise trial license: When you download Splunk for the first time, you are asked to register. Your registration authorizes you to receive an Enterprise trial license, which allows a maximum indexing volume of 500 MB/day. The Enterprise trial license expires 60 days after you start using Splunk. If you are running with an Enterprise trial license and your license expires, Splunk requires you to switch to a Splunk Free license. Once you have installed Splunk, you can choose to run Splunk with the Enterprise trial license until it expires, purchase an Enterprise license, or switch to the Free license, which is included. Note: The Enterprise trial license is also sometimes referred to as "download-trial."
Sales trial license: If you are working with Splunk Sales, you can request trial Enterprise licenses of varying size and duration. The Enterprise trial license expires 60 days after you start using Splunk. If you are preparing a pilot for a large deployment and have requirements for a longer duration or higher indexing volumes during your trial, contact Splunk Sales or your sales rep directly with your request.
Free license: The Free license includes 500 MB/day of indexing volume, is free (as in beer), and has no expiration date. The following features that are available with the Enterprise license are disabled in Splunk Free:
- Multiple user accounts and role-based access controls
- Distributed search
- Forwarding in TCP/HTTP formats (you can forward data to other Splunk instances, but not to non-Splunk instances)
- Deployment management (including for clients)
- Alerting/monitoring
- Authentication and user management, including native authentication, LDAP, and scripted authentication.
- There is no login. The command line or browser can access and control all aspects of Splunk with no credentials.
Learn more about the free version of Splunk in this manual.
Licenses for search heads (for distributed search): How you arrange for licensing for the search head depends on the version of Splunk:
- In the past, for versions prior to 4.2, Splunk suggested using a separate forwarder license on each search head. This was simply because forwarder licenses do not allow indexing, but require authentication for access to the search head.
- Now, for versions after 4.2, Splunk recommends a different arrangement, instead of assigning a separate license to each peer.
Licenses for indexer cluster nodes (for index replication): As with any Splunk deployment, your licensing requirements are driven by the volume of data your indexers process.
Contact your Splunk sales representative for more information. Read more about "System requirements and other deployment considerations".
https://docs.splunk.com/Documentation/Splunk/6.1.13/Admin/TypesofSplunklicenses
2020-02-17T05:07:04
CC-MAIN-2020-10
1581875141653.66
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Utilities for implementing APIs

This page covers some utility classes that are useful for implementing WebExtension APIs:

WindowManager: This class manages the mapping between the opaque window identifiers used in the browser.windows API. See the reference docs here.

TabManager: This class manages the mapping between the opaque tab identifiers used in the browser.tabs API. See the reference docs here.

ExtensionSettingsStore: ExtensionSettingsStore (ESS) is used for storing changes to settings that are requested by extensions, and for finding out what the current value of a setting should be, based on the precedence chain or a specific selection made (typically) by the user. When multiple extensions request to make a change to a particular setting, the most recently installed extension will be given precedence. It is also possible to select a specific extension (or no extension, which implies user-set) to control a setting. This will typically only happen via ExtensionPreferencesManager described below. When this happens, precedence control is not used until either a new extension is installed, or the controlling extension is disabled or uninstalled. If user-set is specifically chosen, precedence order will only be returned to by installing a new extension that takes control of the setting. ESS will manage what has control over a setting through any extension state changes (i.e. install, uninstall, enable, disable).

Notifications: "extension-setting-changed": When a setting changes, an event is emitted via the apiManager. It contains the following: - action: one of select, remove, enable, disable - id: the id of the extension for which the setting has changed, may be null if the setting has returned to default or user set. - type: The type of setting altered. This is defined by the module using ESS. If the setting is controlled through the ExtensionPreferencesManager below, the value will be "prefs". - key: The name of the setting altered. - item: The new value, if any, that has taken control of the setting.

ExtensionPreferencesManager: ExtensionPreferencesManager (EPM) is used to manage what extensions may control a setting that results in changing a preference. EPM adds additional logic on top of ESS to help manage the preference values based on what is in control of a setting.

Defining a setting in an API: A preference setting is defined in an API module by calling EPM.addSetting. addSetting allows the API to use callbacks that can handle setting preferences as needed. Since the setting is defined at runtime, the API module must be loaded as necessary by EPM to properly manage settings. In the API module definition (e.g. ext-toolkit.json), the API must use "settings": true so the management code can discover which API modules to load in order to manage a setting. See browserSettings[1] as an example. Settings that are exposed to the user in about:preferences also require special handling. We typically show that an extension is in control of the preference, and prevent changes to the setting. Some settings may allow the user to choose which extension (or none) has control of the setting.

Preferences behavior: To actually set a setting, the module must call EPM.setSetting. This is typically done via an extension API, such as browserSettings.settingName.set({ …value data… }), though it may be done at other times, such as during extension startup or install in a module's onManifest handler.
Preferences are not always changed when an extension uses an API that results in a call to EPM.setSetting. When setSetting is called, the values are stored by ESS (above), and if the extension currently has control, or the setting is controllable by the extension, then the preferences would be updated. The preferences would also potentially be updated when installing, enabling, disabling or uninstalling an extension, or by a user action in about:preferences (or other UI that allows controlling the preferences). If all extensions that use a preference setting are disabled or uninstalled, the prior user-set or default values would be returned to. An extension may watch for changes using the onChange API (e.g. browserSettings.settingName.onChange).

ESS vs. EPM: An API may use ESS when it needs to allow an extension to store a setting value that affects how Firefox works, but does not result in setting a preference. An example is allowing an extension to change the newTab value in the newTab service. An API should use EPM when it needs to allow an extension to change a preference.

Using ESS/EPM with experimental APIs: Properly managing settings values depends on the ability to load any modules that define a setting. Since experimental APIs are defined inside the extension, there are situations where settings defined in experimental APIs may not be correctly managed. This could result in a preference remaining set by the extension after the extension is disabled or uninstalled, especially when that state is updated during safe mode. Extensions making use of settings in an experimental API should practice caution, potentially unsetting the values when the extension is shut down. Values used for the setting could be stored in the extension's local storage, and restored into EPM when the extension is started again.
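To make the addSetting/setSetting flow described above concrete, here is a sketch of how an API module might register a preference-backed setting. The option names (prefNames, setCallback) reflect how settings are commonly registered in the Firefox source, but treat the exact shape as an assumption and check the current ExtensionPreferencesManager source; the setting and pref names here are purely illustrative.

// In an API module (JavaScript), register the setting once:
ExtensionPreferencesManager.addSetting("exampleSetting", {
  prefNames: ["example.pref.one", "example.pref.two"], // hypothetical prefs
  setCallback(value) {
    // Map the extension-supplied value onto concrete pref values.
    let prefs = {};
    for (let pref of this.prefNames) {
      prefs[pref] = value;
    }
    return prefs;
  },
});

// Later, when the extension calls its API (e.g. someApi.exampleSetting.set({...})),
// the implementation would route the request through EPM, for example:
// await ExtensionPreferencesManager.setSetting(extension.id, "exampleSetting", value);

Whether the underlying prefs actually change still depends on the precedence rules above, that is, on which extension currently controls the setting.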
https://firefox-source-docs.mozilla.org/toolkit/components/extensions/webextensions/other.html?highlight=ess
2020-02-17T04:34:26
CC-MAIN-2020-10
1581875141653.66
[]
firefox-source-docs.mozilla.org
Python Handlers

Python file handlers are Python files which the server executes in response to requests made to the corresponding URL. This is hooked up to a route like ("*", "*.py", python_file_handler), meaning that any .py file will be treated as a handler file (note that this makes it easy to write unsafe handlers, particularly when running the server in a web-exposed setting). The Python files must define a function named main with the signature: main(request, response) …where request is a wptserve Request object and response is a wptserve Response object. This function must return a value in one of the following four formats: ((status_code, reason), headers, content) (status_code, headers, content) (headers, content) content Above, headers is a list of (field name, value) pairs, and content is a string or an iterable returning strings. The main function may also update the response manually. For example, one may use response.headers.set to set a response header, and only return the content. One may even use this kind of handler, but manipulate the output socket directly. The writer property of the response exposes a ResponseWriter object that allows writing specific parts of the response or direct access to the underlying socket. If used, the return value of the main function and the properties of the response object will be ignored. The wptserve server implements a number of Python APIs for controlling traffic.

Python3 compatibility

Even though Python3 is not fully supported at this point, some work is being done to add compatibility for it. This is why you can see in multiple places the use of the six Python module, which is meant to provide a set of simple utilities that work for both generations of Python (see docs). The module is vendored in tools/third_party/six/six.py. When a handler is added, it should be at least syntax-compatible with Python3. You can check that by running: python3 -m py_compile <path/to/handler.py>

Example: Dynamic HTTP headers

The following code defines a Python handler that allows the requester to control the value of the Content-Type HTTP response header:

def main(request, response):
    content_type = request.GET.first('content-type')
    headers = [('Content-Type', content_type)]
    return (200, 'my status text'), headers, 'my response content'

If saved to a file named resources/control-content-type.py, the WPT server will respond to requests for resources/control-content-type.py by executing that code. This could be used from a testharness.js test like so:

<!DOCTYPE html>
<meta charset="utf-8">
<title>Demonstrating the WPT server's Python handler feature</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
promise_test(function() {
  return fetch('resources/control-content-type.py?content-type=text/foobar')
    .then(function(response) {
      assert_equals(response.status, 200);
      assert_equals(response.statusText, 'my status text');
      assert_equals(response.headers.get('Content-Type'), 'text/foobar');
    });
});
</script>
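As another illustration of the "update the response manually" option mentioned above, a handler can set headers through response.headers.set and then return only the content. The header name and body below are purely illustrative:

def main(request, response):
    # Set a header directly on the response object...
    response.headers.set('X-Example-Header', 'demo')
    # ...and return only the content (the fourth return format listed above).
    return 'plain response body'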
http://firefox-source-docs.mozilla.org/web-platform/writing-tests/python-handlers/index.html
2020-02-17T05:01:25
CC-MAIN-2020-10
1581875141653.66
[]
firefox-source-docs.mozilla.org
IsolatedStorageFilePermission Class

Definition: Specifies the allowed usage of a private virtual file system. This class cannot be inherited.

public ref class IsolatedStorageFilePermission sealed : System::Security::Permissions::IsolatedStoragePermission
[System.Runtime.InteropServices.ComVisible(true)]
[System.Serializable]
public sealed class IsolatedStorageFilePermission : System.Security.Permissions.IsolatedStoragePermission
type IsolatedStorageFilePermission = class inherit IsolatedStoragePermission
Public NotInheritable Class IsolatedStorageFilePermission Inherits IsolatedStoragePermission

Inheritance: derives from IsolatedStoragePermission (sealed; cannot be inherited further).
Attributes: ComVisibleAttribute, SerializableAttribute.

Remarks: Note: There is no effect if you use Assert, PermitOnly, or Deny to add stack modifiers for usage or quota. Usage and quota are determined from evidence and a stack walk is not performed for demands, making the above operations ineffective.
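The class is typically exercised through the standard code access security verbs it inherits. A minimal C# sketch follows; it is relevant only to the .NET Framework's CAS model, and the fallback behavior shown is illustrative rather than prescribed:

using System.Security;
using System.Security.Permissions;

class IsolatedStorageCheck
{
    static bool HasUnrestrictedIsolatedStorage()
    {
        var permission = new IsolatedStorageFilePermission(PermissionState.Unrestricted);
        try
        {
            // Throws SecurityException if the calling code lacks the permission.
            permission.Demand();
            return true;
        }
        catch (SecurityException)
        {
            // Report that unrestricted isolated storage is not available at this trust level.
            return false;
        }
    }
}

Note that, per the Remarks above, Assert, PermitOnly, or Deny stack modifiers have no effect on usage or quota, which are determined from evidence rather than a stack walk.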
https://docs.microsoft.com/en-us/dotnet/api/system.security.permissions.isolatedstoragefilepermission?redirectedfrom=MSDN&view=netframework-4.8
2020-02-17T04:33:25
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
# Beginner's Guide # Creating a First Draft Manually drafting legal agreements is costly, inefficient, and increases the risk of errors in the drafting process. With our legal automation tools, anyone can turn complex and static legal documents into dynamic, fill-in templates with the click of a button. Create a First Draft and make the most out of your client relationships by adding some of our basic markup language features to your legal agreements. For more information on these features and instructions on how to use all of our markup language, check out the complete guide. # Automate Legal Agreement Terms # Step 1: "Start a New Template" or Go to the "Edit" View of any Template The OpenLaw markup language can be added to your legal agreement by first clicking on "Start a New Template" in the nav menu: or by going to an existing template's "Edit" view, which is accessible in the action toolbar for that template. # Step 2: Bracket Desired Text Easily transform any word or phrase in your legal agreement into a fill-in field by surrounding the text with a set of double brackets. Adding double brackets around legal prose transforms natural language into machine-readable objects called variables. Consider the following basic contractual language from a mutual non-disclosure agreement (NDA): This Mutual Non-disclosure Agreement (this “Agreement”) is entered into as of March 20, 2019 (the “Effective Date”), by and between ABC, Inc., a Delaware corporation with its principal place of business at 123 Street, New York, NY 11111 (the “Company”) . . . Replacing March 20, 2019 with [[Effective Date: Date]] results in: # Date Fields Adding : Date after the defined text name will transform your fill-in field into an easy-to-use date picker, which will display month, day, and year. For example, [[Effective Date: Date]] becomes: # Adding Images To upload an image, add : Image after the defined text. For example, [[Company Logo: Image]] creates a button that permits you to search your files and upload any image. # Drop-down Menus Add drop-down menus into your template by following the steps below. # Step 1: Add Sample Drop-down Copy and paste the following into your template: [[Choice Variable Name: Choice("option 1", "option 2", "option 3")]] [[Selected Option Variable Name: Choice Variable Name]] or Select the boxed arrow in the editor toolbar of any template: # Step 2: Define What You Would Like Your Drop-down Menu to Ask The Choice Variable Name in the sample drop-down menu is how you can define the text name. Remember to choose the same text name for both fields. Selected Option Variable Name is where you add what you would like to include above the drop-down menu. For example, [[Entity: Choice("corporation", "limited liability company", "limited partnership", "public benefit corporation")]] [[Company Entity Type: Entity]] creates a drop-down menu allowing the user to choose what entity type should be included in the template. The example above will result in the following drop-down menu: # Step 3: Include the Drop-down Menu within the Text of an Agreement At the very top of the template, add the following: <% [[Entity: Choice("corporation", "limited liability company", "limited partnership", "public benefit corporation")]] [[Company Entity Type: Entity]] %> When you get to the text where you would like to have a drop-down menu, include the second bracketed item from above. 
Simply add [[Company Entity Type: Entity]] in the template editor similar to the following: The above markup results in the drop-down menu below. # Conditional Responses Adding conditionals allows users to select what language is included in a First Draft. When a user selects "Yes", the conditional will output the selected information that you have decided to include in the conditional. # Step 1: Add Sample Conditional Copy and paste the following into your template: {{Name of Conditional “Question to Prompt User?” => Text that you would like to include if a user selects 'yes'}} or Select the check mark circle in the editor toolbar of any template: # Step 2: Name the Conditional and Add the Desired Language to be Inserted when a User Selects "Yes" If you would like the user to have the option to include a specific provision such as an affiliate provision, simply include the text after the arrow =>. For example, {{Affiliate Definition “Would you like to include a definition for an Affiliate?” => “Affiliate” shall mean an affiliate of, or person affiliated with, a specified person, is a person that directly, or indirectly through one or more intermediaries, controls or is controlled by, or is under common control with, the person specified.}} will result in the following: # Signatures Including electronic signature markup within a template allows any party to electronically sign the agreement through an email identity. An electronic signature can be embedded into an agreement template by including : Identity | Signature after a variable name as shown in the following example: **[[Party A | Uppercase]]** [[Party A Signatory Email: Identity | Signature]] _______________________ By: [[Party A Signatory Name]] Title: [[Party A Signatory Title]] **[[Party B | Uppercase]]** [[Party B Signatory Email: Identity | Signature]] _______________________ By: [[Party B Signatory Name]] Title: [[Party B Signatory Title]] # Formatting # Bold To bold text, simply add two asterisks ** both before and after the relevant language. For example, **This Agreement** becomes "This Agreement" in the agreement text. # Italic If you would like to italicize text, you can simply add a single asterisk * both before and after the relevant language. For example, *however* becomes "however" in the agreement text. # Bold and Italic For this type of formatting, simply surround the relevant text with three asterisks *** both before and after the relevant language. For example, ***emphasized text*** becomes "emphasized text" in the agreement text. # Underline To underline text, just add __ (two underscores) before and after the desired content. For example, __This text is underlined for emphasis__ becomes "This text is underlined for emphasis" in the agreement text. If you would like to apply multiple formatting styles, just apply the relevant syntax around the text. For example, __**This text is underlined and bold for more emphasis**__ becomes "This text is underlined and bold for more emphasis" in the agreement text. # Uppercase To display text in all caps, simply add | Uppercase after the defined text. For example, [[Party A | Uppercase]] becomes "PARTY A" in the agreement text. # Alignment # Centered To center text such as titles and headings, add \centered before the relevant text. For example, \centered **Agreement Title** will center and bold the relevant text in the agreement. # Right Align To right align text, add \right before the relevant text.
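For example (using an illustrative variable name that is not part of the sample agreement above), \right **Date:** [[Date Signed: Date]] will push that date line to the right-hand margin of the generated agreement.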
# Other Alignment Options Adding \right-three-quarters before the relevant text will position the text to be three-quarters aligned to the right. This may be helpful in positioning signature blocks in an agreement. # Page Break If you would like to add a page break to an agreement, such as separating an exhibit from the main body of the agreement, you can simply add \pagebreak where the break should be located in the template. # Sections and Subsections Organizing an agreement into sections and subsections is straightforward. Currently, we offer four section levels, which can be invoked using the appropriate number of ^ before the section heading. For example, ^First Level ^^Second Level ^^^Third Level ^^^^Fourth Level will result in the following:
https://docs.openlaw.io/beginners-guide/
2020-02-17T03:59:43
CC-MAIN-2020-10
1581875141653.66
[array(['/assets/img/new-template-nav.fcf9a832.png', 'New Template'], dtype=object) array(['/assets/img/edit-toolbar.e4084f43.png', 'Toolbar Edit View'], dtype=object) array(['/assets/img/choice.00a66994.png', 'Choice example'], dtype=object) array(['/assets/img/choice-example.834649c8.png', 'Choice Usage example'], dtype=object) array(['/assets/img/choice-example-in-editor.c909aff0.png', 'Choice Usage in Editor example'], dtype=object) array(['/assets/img/choice-in-form.94772f1a.png', 'Choice in Form example'], dtype=object) array(['/assets/img/conditional.7ca3f5a0.png', 'Conditional example'], dtype=object) array(['/assets/img/conditional-in-form.a4c78d54.png', 'Conditional in Form example'], dtype=object) array(['/assets/img/sections.20c7fa57.png', 'Sections example'], dtype=object) ]
docs.openlaw.io
Returns the total height used by this control. Draw a property field for a texture shader property that only takes up a single line height. The thumbnail is shown to the left of the label. Note: for some textures it might use more vertical space than a single line height because of an additional information box.
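A minimal sketch of how this method might be called from a custom ShaderGUI is shown below; the shader property name "_MainTex", the class name, and the label text are assumptions for illustration, not part of the original page.

```csharp
using UnityEngine;
using UnityEditor;

// Hypothetical custom material inspector; a shader would opt into it with
//   CustomEditor "MiniThumbnailShaderGUI"
public class MiniThumbnailShaderGUI : ShaderGUI
{
    public override void OnGUI(MaterialEditor materialEditor, MaterialProperty[] properties)
    {
        // Assumes the shader exposes a texture property named "_MainTex".
        MaterialProperty mainTex = FindProperty("_MainTex", properties);

        // Reserve a single control line and draw the texture property with a small thumbnail.
        Rect line = EditorGUILayout.GetControlRect();
        materialEditor.TexturePropertyMiniThumbnail(line, mainTex, "Albedo", "Base color texture");
    }
}
```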
https://docs.unity3d.com/kr/2018.3/ScriptReference/MaterialEditor.TexturePropertyMiniThumbnail.html
2020-02-17T05:18:52
CC-MAIN-2020-10
1581875141653.66
[]
docs.unity3d.com
DependencyProperty Class

Definition: Represents a dependency property that is registered with the dependency property system. Dependency properties provide support for value expressions, data binding, animation, and property change notification. For more info on how DependencyProperty values serve as identifiers for dependency properties, see Dependency properties overview.

Declarations (C++/CX, C++/WinRT, C#, Visual Basic):
public ref class DependencyProperty sealed
struct winrt::Windows::UI::Xaml::DependencyProperty
public sealed class DependencyProperty
Public NotInheritable Class DependencyProperty

Examples

This example shows a basic usage where a DependencyProperty is established as a public static member of a class. This is done by calling Register and keeping the returned identifier; a sketch of the pattern follows below. The identifier is then used in get/set calls and related APIs such as:
- DependencyObject.ClearValue
- DependencyObject.GetAnimationBaseValue
- DependencyObject.GetValue
- DependencyObject.ReadLocalValue
- DependencyObject.SetValue
- DependencyPropertyChangedEventArgs.Property
- Setter(DependencyProperty, Object) constructor
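The example code itself did not survive extraction; the sketch below shows the standard Register pattern the text describes, using an invented class and property name purely for illustration.

```csharp
using Windows.UI.Xaml;

public class Fish : DependencyObject
{
    // Register the dependency property and keep the returned identifier
    // as a public static member, per the convention described above.
    public static readonly DependencyProperty SpeciesProperty =
        DependencyProperty.Register(
            "Species",                           // property name
            typeof(string),                      // property type
            typeof(Fish),                        // owner type
            new PropertyMetadata("Unspecified")  // default value
        );

    // Conventional CLR wrapper that forwards to GetValue/SetValue.
    public string Species
    {
        get { return (string)GetValue(SpeciesProperty); }
        set { SetValue(SpeciesProperty, value); }
    }
}
```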
https://docs.microsoft.com/de-de/uwp/api/windows.ui.xaml.dependencyproperty
2020-02-17T04:46:35
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Holochain Development Kit for Rust-based Zomes Overview hdk-rust is a library for Rust-based hApps that makes it easier to develop Holochain Zomes. With Holochain, Zome functions and validation code are represented as WASM binaries. This library provides bindings for Rust. Usage First, Rust must be installed on your computer. Being a Rust library, hdk-rust can be added as a dependency to any Rust crate (a minimal example is sketched at the end of this page). When you generate Rust-based Zomes with hc it will automatically be added as a dependency, and imported into your code. To see the documentation for usage, check out Specification for App Development As new features, or changes to the HDK (and the API) are being designed, use cases will be added to an example app and proposed as changes in a pull request to the app_spec directory of this repo. The example app also integrates the feature set available in Holochain's main branch. Please see the Contribute section for our protocol on how we do this. Contribute Holochain is an open source project. We welcome all sorts of participation and are actively working on increasing surface area to accept it. Please see our contributing guidelines for our general practices and protocols on participating in the community. License This program is free software: you can redistribute it and/or modify it under the terms of the license provided in the LICENSE file (GPLv3). This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Note: We are considering other 'looser' licensing options (like MIT license) but at this stage are using GPL while we're getting the matter sorted out. See this article for some of our thinking on licensing for distributed application frameworks.
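A minimal sketch of adding the HDK as a dependency by hand; the version shown is the one this crate page documents and should be matched to the Holochain version you are targeting.

```toml
# Cargo.toml of a zome crate
[dependencies]
hdk = "0.0.43-alpha3"
```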
https://docs.rs/crate/hdk/0.0.43-alpha3
2020-02-17T03:26:44
CC-MAIN-2020-10
1581875141653.66
[]
docs.rs
You should enable only those services that your application needs. Otherwise, you provide more opportunities for attackers to infiltrate your system. Many services are installed by default, so you should take care to disable those you do not actually need. Server security If you run your applications locally on your own servers, then you should check which services run on your server and IIS. Then turn off everything your application does not need. You should also patch your operating system and server regularly. When a serious security issue is announced, you should patch your system as soon as possible, because the attackers are usually able to exploit the flaws within 24 hours. If your applications run on remote servers (webhosting, cloud, etc.), all you can do is trust your provider to ensure the server security. Kentico security We recommend that you install only necessary modules (or that you uninstall unused modules after the installation). You can choose which modules will be installed with Kentico in the Custom installation, and you can also add or remove modules and components after the installation – see Adding and removing components from an installed Kentico web project. You should also restrict public access to unused files located in the /CMSPages and /CMSModules/<some module>/CMSPages directories. An example of restricting public access to the GetCMSVersion.aspx page is sketched at the end of this page. Hotfixing We recommend installing hotfixes only when you need them – in cases when the hotfix repairs bugs that are causing you problems. You should also know that we do not publicly announce our security issues. If we did, we would make it easier for the attackers to determine the issue and attack servers that are not updated yet. If you desire to be informed about security issues in Kentico, sign up for our security newsletter on the client portal (this newsletter is available only if you have prepaid maintenance service).
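The original example code was lost in extraction; the following is only a sketch of one common way to restrict anonymous access through web.config, assuming ASP.NET URL authorization is in effect – verify the exact path and rule against your own installation.

```xml
<!-- Sketch: deny anonymous users access to GetCMSVersion.aspx (adjust the path to your site) -->
<location path="CMSPages/GetCMSVersion.aspx">
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</location>
```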
https://docs.kentico.com/pages/diffpagesbyversion.action?pageId=26313015&selectedPageVersions=16&selectedPageVersions=15
2020-02-17T03:51:03
CC-MAIN-2020-10
1581875141653.66
[]
docs.kentico.com
Your system administrator must configure claims-based authentication before your users can access their Microsoft Dynamics CRM data with their CRM for tablets app. If you have your Microsoft Dynamics CRM website available over the internet but it is not using the Microsoft Dynamics CRM IFD configuration, it is not supported. To verify that your on-premises deployment is configured for IFD, open Microsoft Dynamics CRM Deployment Manager on your Microsoft Dynamics CRM Server. The Authentication Summary section should show that both Claims-Based Authentication and Internet-Facing Deployment are enabled. Authentication Method To ensure you have a smooth login experience from outside of your network, please make sure you have disabled Integrated Windows Authentication (IWA) on your ADFS server so that Forms Authentication is used instead.
https://docs.microsoft.com/en-us/archive/blogs/lystavlen/crm-for-tablets-with-crm-on-premises-ifd-on-and-iwa-off
2020-02-17T05:27:11
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
What is Batch Support? Batch Deposit Support is a method of grouping together payments in QuickBooks Online to help you account for processors who batch all payments for a day and deposit them in a lump sum into your bank account – making it hard to match individual order payments to this lump deposit. Batch Support in our sync will real-time sync order payments from WooCommerce to QuickBooks Online – but will sync them to your Undeposited Funds account. Then, when your processor batches your transactions for the day (and you set the batch support setting to run at this same time), our integration will automatically make a Bank Deposit in QuickBooks Online that will group the payments together from Undeposited Funds and deposit them into the bank account you set in Map > Payment Methods. At this time, it will also deduct transaction fees from this batch, if this is turned on in Map > Payment Methods as well. This results in easily “matching” your credit card deposit to this Bank Deposit in QuickBooks: Batch Support will only be triggered for the current day's sales. This means that our integration cannot retroactively create batches for orders placed/synced over to QuickBooks Online before Batch Support was enabled in your Payment Method Mappings. To learn more about this, visit Batch Support for pre-existing orders. How do I configure Batch Support? - Enable this setting on the Map > Payment Methods page (click Show Advanced Options) – and choose the time of the Batch Support deposit, the Undeposited Funds and Bank Account the funds will be going into. - Visit MyWorks Sync > Connection tab. This lets our integration know what time to create the deposit. Simply loading this page is all that’s needed. Below is an example of this configuration. Once the above has been set, the first deposit will be created in QuickBooks Online within 24 hours. Batch Support only applies to “today’s” orders – and days going forward. It will not retroactively create bank deposits for previous days. Do you use Stripe as a payment processor? Read more here about additional configuration for Stripe. Transaction Fee Handling When our integration syncs payments over to QuickBooks Online for a gateway that has Batch Support enabled in Map > Payment Methods, the transaction fees will not be individually recorded as journal entries in QuickBooks Online. This is because when our integration creates the Bank Deposit in QuickBooks Online at the end of the day, the transaction fees are then added as a separate line at the bottom of the deposit. The reason for this is to ensure that the bank deposit correctly matches the actual deposit that your processor sends to your bank account for that day’s sales. You can see what an actual bank deposit created by our integration would look like here: If you are pushing over old orders, and would like the transaction fees to be synced to QuickBooks Online, read the Batch Support for pre-existing orders article to learn how to do this.
https://docs.myworks.software/woocommerce-sync-for-quickbooks-online/payments/batch-deposit-support-overview
2020-02-17T03:48:05
CC-MAIN-2020-10
1581875141653.66
[array(['https://w6f2r2u8.stackpathcdn.com/wp-content/uploads/2020/01/Screen-Shot-2020-01-24-at-20.09.02-1024x204.png', None], dtype=object) array(['https://w6f2r2u8.stackpathcdn.com/wp-content/uploads/2017/10/Screenshot-2017-10-13-12.55.43-1024x621.png', None], dtype=object) array(['https://w6f2r2u8.stackpathcdn.com/wp-content/uploads/2018/01/Screenshot-2018-01-23-16.22.00-1024x828.png', None], dtype=object) ]
docs.myworks.software
Active Directory Federation Services Integration Guide

Overview

Active Directory Federation Services (AD FS) is a standards-based service that securely shares identity information between applications. This documentation describes how to configure a single sign-on partnership between AD FS as the identity provider and Pivotal Single Sign‑On as the service provider. Single Sign‑On supports the service provider-initiated authentication flow and single logout. It does not support the identity provider-initiated authentication flow. All Single Sign‑On communication takes place over SSL.

Prerequisites

To integrate AD FS with Pivotal Platform, you must have the following:
- AD FS
- Single Sign‑On

Complete both steps below to integrate your deployment with AD FS and Single Sign‑On:
- Configure Active Directory Federation Services as an Identity Provider
- Configure a Single Sign‑On Service Provider
https://docs.pivotal.io/p-identity/1-10/adfs/index.html
2020-02-17T04:10:49
CC-MAIN-2020-10
1581875141653.66
[]
docs.pivotal.io
Askama implements a type-safe compiler for Jinja-like templates. It lets you write templates in a Jinja-like syntax, which are linked to a struct defining the template context. This is done using a custom derive implementation (implemented in askama_derive). For feature highlights and a quick start, please review the README. Creating Askama templates An Askama template is a struct definition which provides the template context combined with a UTF-8 encoded text file (or inline source, see below). Askama can be used to generate any kind of text-based format. The template file's extension may be used to provide content type hints. A template consists of text contents, which are passed through as-is, expressions, which get replaced with content while being rendered, and tags, which control the template's logic. The template syntax is very similar to Jinja, as well as Jinja-derivatives like Twig or Tera. The template() attribute Askama works by generating one or more trait implementations for any struct type decorated with the #[derive(Template)] attribute. The code generation process takes some options that can be specified through the template() attribute. The following sub-attributes are currently recognized: path (as path = "foo.html"): sets the path to the template file. The path is interpreted as relative to the configured template directories (by default, this is a templates directory next to your Cargo.toml). The file name extension is used to infer an escape mode (see below). In web framework integrations, the path's extension may also be used to infer the content type of the resulting response. Cannot be used together with source. source (as source = "{{ foo }}"): directly sets the template source. This can be useful for test cases or short templates. The generated path is undefined, which generally makes it impossible to refer to this template from other templates. If source is specified, ext must also be specified (see below). Cannot be used together with path. ext (as ext = "txt"): lets you specify the content type as a file extension. This is used to infer an escape mode (see below), and some web framework integrations use it to determine the content type. Cannot be used together with path. print (as print = "code"): enable debugging by printing nothing (none), the parsed syntax tree (ast), the generated code (code) or all for both. The requested data will be printed to stdout at compile time. escape (as escape = "none"): override the template's extension used for the purpose of determining the escaper for this template. See the section on configuring custom escapers for more information. syntax (as syntax = "foo"): set the syntax name for a parser defined in the configuration file. The default syntax, "default", is the one provided by Askama. Configuration At compile time, Askama will read optional configuration values from askama.toml in the crate root (the directory where Cargo.toml can be found). Currently, this covers the directories to search for templates, custom syntax configuration and escaper configuration. This example file demonstrates the default configuration: [general] # Directories to search for templates, relative to the crate root.
dirs = ["templates"] Here is an example that defines two custom syntaxes: [general] default_syntax = "foo" [[syntax]] name = "foo" block_start = "%{" comment_start = "#{" expr_end = "^^" [[syntax]] name = "bar" block_start = "%%" block_end = "%%" comment_start = "%#" expr_start = "%{" A syntax block consists of at least the attribute name which uniquely names this syntax in the project. The following keys can currently be used to customize template syntax: block_start, defaults to {% block_end, defaults to %} {# comment_end, defaults to #} expr_start, defaults to {{ expr_end, defaults to }} Values must be 2 characters long and start delimiters must all start with the same character. If a key is omitted, the value from the default syntax is used. Here is an example of a custom escaper: [[escaper]] path = "::tex_escape::Tex" extensions = ["tex"] An escaper block consists of the attributes path and name. path contains a Rust identifier that must be in scope for templates using this escaper. extensions defines a list of file extensions that will trigger the use of that escaper. Extensions are matched in order, starting with the first escaper configured and ending with the default escapers for HTML (extensions html, htm, xml, j2, jinja, jinja2) and plain text (no escaping; md, yml, none, txt, and the empty string). Note that this means you can also define other escapers that match different extensions to the same escaper. Variables Top-level template variables are defined by the template's context type. You can use a dot ( .) to access variable's attributes or methods. Reading from variables is subject to the usual borrowing policies. For example, {{ name }} will get the name field from the template context, while {{ user.name }} will get the name field of the user field from the template context. Assignments Inside code blocks, you can also declare variables or assign values to variables. Assignments can't be imported by other templates. Assignments use the let tag: {% let name = user.name %} {% let len = name.len() %} {% let val -%} {% if len == 0 -%} {% let val = "foo" -%} {% else -%} {% let val = name -%} {% endif -%} {{ val }} Filters Values such as those obtained from variables can be post-processed using filters. Filters are applied to values using the pipe symbol ( |) and may have optional extra arguments in parentheses. Filters can be chained, in which case the output from one filter is passed to the next. For example, {{ "{:?}"|format(name|escape) }} will escape HTML characters from the value obtained by accessing the name field, and print the resulting string as a Rust literal. The built-in filters are documented as part of the filters module documentation. To define your own filters, simply have a module named filters in scope of the context deriving a Template impl. Note that in case of name collision, the built in filters take precedence. Whitespace control Askama considers all tabs, spaces, newlines and carriage returns to be whitespace. By default, it preserves all whitespace in template code, except that a single trailing newline character is suppressed. However, whitespace before and after expression and block delimiters can be suppressed by writing a minus sign directly following a start delimiter or leading into an end delimiter. Here is an example: {% if foo %} {{- bar -}} {% else if -%} nothing {%- endif %} This discards all whitespace inside the if/else block. 
If a literal (any part of the template not surrounded by {% %} or {{ }}) includes only whitespace, whitespace suppression on either side will completely suppress that literal content. Template inheritance Template inheritance allows you to build a base template with common elements that can be shared by all inheriting templates. A base template defines blocks that child templates can override. Base template <!DOCTYPE html> <html lang="en"> <head> <title>{% block title %}{{ title }} - My Site{% endblock %}</title> {% block head %}{% endblock %} </head> <body> <div id="content"> {% block content %}{% endblock %} </div> </body> </html> The block tags define three blocks that can be filled in by child templates. The base template defines a default version of the block. A base template must define one or more blocks in order to enable inheritance. Blocks can only be specified at the top level of a template or inside other blocks, not inside if/else branches or in for-loop bodies. Child template Here's an example child template: {% extends "base.html" %} {% block title %}Index{% endblock %} {% block head %} <style> </style> {% endblock %} {% block content %} <h1>Index</h1> <p>Hello, world!</p> {% endblock %} The extends tag tells the code generator that this template inherits from another template. It will search for the base template relative to itself before looking relative to the template base directory. It will render the top-level content from the base template, and substitute blocks from the base template with those from the child template. Inside a block in a child template, the super() macro can be called to render the parent block's contents. HTML escaping Askama by default escapes variables if it thinks it is rendering HTML content. It infers the escaping context from the extension of template filenames, escaping by default if the extension is one of html, htm, or xml. When specifying a template as source in an attribute, the ext attribute parameter must be used to specify a type. Additionally, you can specify an escape mode explicitly for your template by setting the escape attribute parameter value (to none or html). Askama escapes <, >, &, ", ', \ and /, according to the OWASP escaping recommendations. Use the safe filter to prevent escaping for a single expression, or the escape (or e) filter to escape a single expression in an unescaped context. Control structures For Loop over each item in an iterator. For example: <h1>Users</h1> <ul> {% for user in users %} <li>{{ user.name|e }}</li> {% endfor %} </ul> Inside for-loop blocks, some useful variables are accessible: - loop.index: current loop iteration (starting from 1) - loop.index0: current loop iteration (starting from 0) - loop.first: whether this is the first iteration of the loop - loop.last: whether this is the last iteration of the loop If The if statement is used as you might expect: {% if users.len() == 0 %} No users {% else if users.len() == 1 %} 1 user {% else %} {{ users.len() }} users {% endif %} Match In order to deal with Rust enums in a type-safe way, templates support match blocks from version 0.6. Here is a simple example showing how to expand an Option: {% match item %} {% when Some with ("foo") %} Found literal foo {% when Some with (val) %} Found {{ val }} {% when None %} {% endmatch %} That is, a match block can optionally contain some whitespace (but no other literal content), followed by a number of when blocks and an optional else block.
Each when block must name a list of matches ((val)), optionally introduced with a variant name. The else block is equivalent to matching on _ (matching anything). Include The include statement lets you split large or repetitive blocks into separate template files. Included templates get full access to the context in which they're used, including local variables like those from loops: {% for i in iter %} {% include "item.html" %} {% endfor %} where the included item.html contains: * Item: {{ i }} The path to include must be a string literal, so that it is known at compile time. Askama will try to find the specified template relative to the including template's path before falling back to the absolute template path. Use include within the branches of an if/else block to use includes more dynamically. Expressions Askama supports string literals ("foo") and integer literals (1). It supports almost all binary operators that Rust supports, including arithmetic, comparison and logic operators. The parser applies the same precedence order as the Rust compiler. Expressions can be grouped using parentheses. The HTML special characters &, < and > will be replaced with their character entities unless the escape mode is disabled for a template. Methods can be called on variables that are in scope, including self. Warning: if the result of an expression (a {{ }} block) is equivalent to self, this can result in a stack overflow from infinite recursion. This is because the Display implementation for that expression will in turn evaluate the expression and yield self again. Templates in templates Using expressions, it is possible to delegate rendering part of a template to another template. This makes it possible to inject modular template sections into other templates and facilitates testing and reuse. use askama::Template; #[derive(Template)] #[template(source = "Section 1: {{ s1.render().unwrap() }}", ext = "txt")] struct RenderInPlace<'a> { s1: SectionOne<'a> } #[derive(Template)] #[template(source = "A={{ a }}\nB={{ b }}", ext = "txt")] struct SectionOne<'a> { a: &'a str, b: &'a str, } let t = RenderInPlace { s1: SectionOne { a: "a", b: "b" } }; assert_eq!(t.render().unwrap(), "Section 1: A=a\nB=b") See the example render in place using a vector of templates in a for block. Askama supports block comments delimited by {# and #}. Recursive Structures Recursive implementations should preferably use a custom iterator and use a plain loop. If that is not doable, call .render() directly by using an expression as shown below. Including self does not work, see #105 and #220. use askama::Template; #[derive(Template)] #[template(source = r#" //! {% for item in children %} {{ item.render().unwrap() }} {% endfor %} "#, ext = "html", escape = "none")] struct Item<'a> { name: &'a str, children: &'a [Item<'a>], } Optional functionality Rocket integration Enabling the with-rocket feature appends an implementation of Rocket's Responder trait for each template type. This makes it easy to trivially return a value of that type in a Rocket handler. See the example from the Askama test suite for more on how to integrate. In case a run-time error occurs during templating, a 500 Internal Server Error Status value will be returned, so that this can be further handled by your error catcher. Iron integration Enabling the with-iron feature appends an implementation of Iron's Modifier<Response> trait for each template type. This makes it easy to trivially return a value of that type in an Iron handler.
See the example from the Askama test suite for more on how to integrate. Note that Askama's generated Modifier<Response> implementation currently unwraps any run-time errors from the template. If you have a better suggestion, please file an issue. Actix-web integration Enabling the with-actix-web feature appends an implementation of Actix-web's Responder trait for each template type. This makes it easy to trivially return a value of that type in an Actix-web handler. See the example from the Askama test suite for more on how to integrate. Gotham integration Enabling the with-gotham feature appends an implementation of Gotham's IntoResponse trait for each template type. This makes it easy to trivially return a value of that type in a Gotham handler. See the example from the Askama test suite for more on how to integrate. In case of a run-time error occurring during templating, the response will be of the same signature, with a status code of 500 Internal Server Error, mime */*, and an empty Body. This preserves the response chain if any custom error handling needs to occur. The json filter Enabling the serde-json feature will enable the use of the json filter. This will output formatted JSON for any value that implements the required Serialize trait.
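To tie the derive-and-render workflow described above together, here is a minimal, self-contained sketch; the inline template source and the struct are invented purely for illustration.

```rust
use askama::Template;

// Inline template source, so no file on disk is needed for this sketch.
#[derive(Template)]
#[template(source = "{% for user in users %}* {{ user }}\n{% endfor %}", ext = "txt")]
struct UserList<'a> {
    users: Vec<&'a str>,
}

fn main() {
    let list = UserList { users: vec!["alice", "bob"] };
    // render() comes from the generated Template impl.
    println!("{}", list.render().unwrap());
    // Expected output:
    // * alice
    // * bob
}
```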
https://docs.rs/crate/askama/0.9.0
2020-02-17T04:25:20
CC-MAIN-2020-10
1581875141653.66
[]
docs.rs
Easy to Use Web Portal: A simple portal that gives you the power to utilize Xameel services and provides the key to business success in the logistics department. Powerful API: A set of APIs that allows access to most Xameel services for seamless integration with your applications and business workflow. Chat & Communication: Send chat messages to agents on route, or use the API to allow your staff or clients to communicate directly for the best service. Custom Delivery Charge: Whether you plan to generate additional revenue or offer discounts on your logistics service, Xameel allows customizing the cost to fit the need. Cash Collection: Request cash to be collected on delivery by the agent, giving unlimited flexibility to the business or the agent with complete control. And more: Visit our website to learn about all the great features Xameel offers to your business and the different ways you can customize it. User Guide: This documentation is built to guide all types of users in using Xameel. It is continually maintained and updated to help answer questions and complete tasks with as little effort as possible. You can also find more help on our social media accounts, which can be found on our contact us pages on the web.
https://docs.xameel.com/
2021-10-16T09:16:50
CC-MAIN-2021-43
1634323584554.98
[array(['/img/web-portal.png', None], dtype=object) array(['/img/powerful-apis.png', None], dtype=object) array(['/img/chat-and-communication.png', None], dtype=object) array(['/img/customize-delivery-charge.png', None], dtype=object) array(['/img/agent-control.png', None], dtype=object) array(['/img/mission-and-success.png', None], dtype=object) array(['/img/start-here.png', None], dtype=object)]
docs.xameel.com
Zoetrope: Create Motion from Pictures on Windows 8 I.
https://docs.microsoft.com/en-us/archive/blogs/synergist/zoetrope-create-motion-from-pictures-on-windows-8
2021-10-16T10:41:42
CC-MAIN-2021-43
1634323584554.98
[]
docs.microsoft.com
# Work logs # Overview As employees, we all do work every day. Work is meaningless if it's not shared with the team the employee works with, and the company in general, because everyone needs to know what every other person is up to, every day. Communication is key, and sharing what we will do, or what we did, during the day is extremely important. OfficeLife provides a simple way of helping in this regard. Every day, OfficeLife lets employees record what they will do, or what they've done. We call this feature work logs. # Anatomy of a work log On the dashboard, every day, employees are asked to describe their work. Rules - Adding a work log can be done once per day. - Work logs are reset every day. - The text supports Markdown and is limited to 65555 characters. - Once written, work logs cannot be modified by anyone, not even the employee. # Reading work logs Once logged, work logs are available for everyone to see in different places. Remember, the goal is to share with everyone what's going on. # On the team's dashboard When the employee is part of a team, he or she has access to a tab on his/her dashboard. Inside, there is a summary of every work log of every team member for everyone to see, per day. The summary of work logs lists the 5 working days of the current week, and the entries for Friday of the previous week. Next to each day is a visual indicator of how many employees have written a work log for the day. - If it's red, that means no employees have written for the given day. - Yellow means only some team members have written something. - Green means that 100% of the team members have written a work log. When you click on a day, you can view the details of the work logs for this day. # On the employee's profile page When we view the profile page of an employee, under the Work tab, we can view the history of work logs that the employee has written in the past. We believe that knowing what we've done 2 years ago doesn't have real value. It might be fun to read, but what would be the purpose? We might revisit this decision in the future, but for now, OfficeLife will display the work logs from now up to three weeks ago. Remember: this information is available to everyone in the company, regardless of their roles. # Deleting a work log Work logs can be deleted under certain conditions. Only the following people can delete a work log: - the employee herself, - her manager, - someone with the HR or admin role.
https://docs.officelife.io/documentation/communicate/worklogs.html
2021-10-16T08:20:13
CC-MAIN-2021-43
1634323584554.98
[array(['https://d33wubrfki0l68.cloudfront.net/1f94bcad69fba8b241f4872c32716a6653c60b9b/65a3a/assets/img/dashboard_worklog.0f90c43d.png', 'add a worklog'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/5389e759add1d02c808899429e8e0716980c7aa7/fb249/assets/img/dashboard_worklog_teams.039b12be.png', 'image of the work logs'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/ed152ec077706da5eb4aa8424947ea9df0bdc586/1c8cf/assets/img/employee_worklog.7ccc3e26.png', 'image of work log on employee page'], dtype=object) ]
docs.officelife.io
Run Time Cannot create an object of type 'Telerik.Reporting.Report' from its string representation 'MyNameSpace.MyClass, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' for the 'Report' property. This error might occur if you are using a website project and you have the report class in the website directly. This is due to the fact that when using a website, the generated App_Code assembly name (for example, App_Code.unch8s_n) will always be different after a rebuild, so it will not match the originally assigned report. Thus we recommend following our best practices and keeping the report in a separate class library that is referenced in the application/website. This exception might surface if you try to use objects which do not implement ISerializable for a Report/Data Item data source. For example, if this is an IList, you can try using a List instead as shown in our cars example. You can also use the NeedDataSource event of the report and assign the data source to the "processing report", thus avoiding any need for serialization/deserialization (a short sketch is shown at the end of this article). Another approach is to use a DataSet/DataTable. When deploying a project with Telerik Reporting on a server, you get the following error: Could not load file or assembly 'Telerik.ReportViewer.WebForms, Version=x.x.x.x, Culture=neutral, PublicKeyToken=a9d7983dfcc261be' or one of its dependencies. The system cannot find the file specified. During the installation of Telerik Reporting on a dev machine, the Telerik Reporting assemblies are added to the GAC. When deploying a project using Visual Studio's built-in functionality, the assemblies from the GAC are not copied automatically, so you need to make sure the assemblies physically exist in the bin folder of your application. Full details are available in the Deploying Web Applications help article. The expression contains undefined function call MyUserFunction() There are three cases in which this error message occurs: - The function name is typed in manually in the expression, without building the class library, and it does not exist yet. - The function is used with an incorrect number/type of parameters. The passed fields must match the function signature. - The field specified in the function is null. Make sure such cases are handled in the user function. My ASP.NET Web Forms ReportViewer looks messed up - its styles and images are missing - Check if runat="server" is present in your web page's head tag. - Check if the report viewer's styles are registered on the page. This can be accomplished with an HTTP debugging proxy like Fiddler. - Check for global styles defined in the application which affect the page with the report viewer and conflict with the report viewer's styles. Remove any style declarations from the page. Make sure the global style rules do not affect HTML elements directly, but are applied through the CssClass property.
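A minimal sketch of the NeedDataSource approach mentioned above; the data-access method is a placeholder you would replace with your own.

```csharp
// Wire the event up in the report class constructor (or in designer code):
//   this.NeedDataSource += new System.EventHandler(this.Report_NeedDataSource);
private void Report_NeedDataSource(object sender, System.EventArgs e)
{
    // The sender is the processing counterpart of the report definition.
    var processingReport = (Telerik.Reporting.Processing.Report)sender;

    // GetCars() is a placeholder for your own data access; it could return
    // a DataTable, a List<T>, or any other supported data source.
    processingReport.DataSource = GetCars();
}
```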
https://docs.telerik.com/reporting/troubleshooting-runtime-problems
2021-10-16T08:11:56
CC-MAIN-2021-43
1634323584554.98
[]
docs.telerik.com
The SetDocumentUserText command attaches text information to a Rhino .3dm file. The information is stored in a key/value format. Retrieve the information with the GetDocumentUserText command. This information can also be attached by .NET plug-ins and VisualBasic scripts. This information can be easily accessed in .NET and Visual Basic scripts. Note: Hidden keys will not appear in Document Properties > Document User Text and the Document User Text panel. See: Naming conventions in Rhino The GetDocumentUserText command retrieves text information attached using the SetDocumentUserText command. This information can also be retrieved by .NET plug-ins and VisualBasic scripts. The DocumentText command opens the Document User Text panel that lets you manage the document user text. Type in the search box to only display the User Text that contains the typed text in its key or value. The name of the UserText key. The value of the UserText key. See: Naming conventions in Rhino Adds a User Text key to the 3dm file. Prefix a key with a "." (period) to add a hidden key. Hidden keys will not appear in Document Properties > Document User Text and the Document User Text panel. Retrieves information from the current document or an object. Removes the selected User Text key from the 3dm file. Use the SetDocumentUserText command to remove a hidden key. Reads User Text keys from a .3dm, .csv or .txt file into the current 3dm file. Saves User Text keys in the current 3dm file to a .csv or .txt file. Copies the selected key and value to the Clipboard. Pastes the key and value from the Clipboard. Rhinoceros 6 © 2010-2020 Robert McNeel & Associates. 11-Nov-2020
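As a small illustration of the scripting access mentioned above, here is a sketch using Rhino's Python rhinoscriptsyntax module; the key and value are arbitrary examples.

```python
import rhinoscriptsyntax as rs

# Attach a key/value pair to the current 3dm document...
rs.SetDocumentUserText("ProjectNumber", "2020-014")

# ...and read it back later (returns None if the key does not exist).
value = rs.GetDocumentUserText("ProjectNumber")
print(value)
```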
https://docs.mcneel.com/rhino/6/help/en-us/commands/setdocumentusertext.htm
2021-10-16T08:43:40
CC-MAIN-2021-43
1634323584554.98
[]
docs.mcneel.com
Broadcasting pre-recorded shows What's the purpose of this? In case the show you are planning to broadcast is pre-recorded (e.g. in the form of a .wav or .mp3 file) and you are not planning on having any additional live element (such as live hosting via microphone, etc.), this option of automated streaming might be just the one you're looking for. Embedding your prerecorded show in the Playlists on LibreTime will enable you to broadcast as easily as it gets. Step-by-step instruction How to add your file to LibreTime Method 1: Upload via upload.radioangrezi.de Radio Angrezi has a separate upload interface at upload.radioangrezi.de, which is publicly accessible. The goal of this interface is to provide external show makers without LibreTime access an easy way to upload their shows and files. Uploaded audio files automatically appear in LibreTime. This method also has an advantage with large audio files as they can be uploaded with breaks or interruptions in between. Method 2: Upload via LibreTime Before you can use your show file for a Playlist, you will have to upload it onto the LibreTime server. Sounds easy, and it is, but some of us have so far missed the quite visible Upload Button on the page: Now you can simply drag your show files onto LibreTime ("Drop files here or click to browse your computer"). The files you uploaded will then appear in the Tracks list (you'll find this underneath the Upload button on the left). You will also find our jingle (radioangrezi_jingle.wav) and the corona info line (einspieler_raw.wav) within this list. Add your tracks to your scheduled show Now you have to add all the tracks that shall be played during your broadcast to the show. We assume that your show is already scheduled. Otherwise: How to add a show in the calendar is described here in Step 2. Find your show in the calendar, click on it and use "Schedule Tracks". Now you can find or search (using the search bar above) for your tracks and add them to the show. Use the + button or drag & drop your tracks into your show (on the right-hand side). Keep in mind to add the jingle before and after your show files, plus the corona info line, to your playlist. After all tracks have been added, including the jingle, use the Ok button to save. Please remember to turn off both the master source and the show source if you want to use the fully automated playback in LibreTime. Otherwise the signal from the other sources will be used and overrule your scheduled programme. Alternatively you can use a playlist. This is only necessary if playlists (or parts of playlists) shall be used more than once. How to create a Playlist After uploading your files you will have to create a Playlist. These are as simple as regular playlists; you basically add several audio files and rearrange them in your preferred order. You can create a Playlist by heading to the Playlists page (one further below the Tracks button on the left). "+New" will create a new Playlist, the little pen icon will let you edit an existing one after you select it. The selected Playlist will then pop up on the right or underneath the list of existing Playlists: After creating and naming your Playlist (it doesn't necessarily need a description, but good naming helps reduce chaos!), you will have to return to the Tracks to add the ones you uploaded to your Playlist. The 'edit box' of your Playlist will remain open, but the list of Playlists on the left (or top) will be replaced with the Tracklist. Now, you can simply drag & drop your tracks into your playlist.
Keep in mind to add the jingle before and after your show files, plus the corona info line, to your playlist. You will still be able to rearrange the tracks you added to the playlist by dragging them up and down, to create the order you want. And don't forget to hit the save button when you're finished! How to connect your Playlist to your show slot After creating your Playlist, you will have to let LibreTime know when to play it. For this, you need a show slot in the calendar (left side of the webpage). How to add a show in the calendar is described here in Step 2. When creating or editing your show, there is an option named "Automatic Playlist". You simply need to check the "Auto Schedule Playlist ?" box and select your playlist underneath. That's it. Good to know: If your show is, for example, 30 minutes long but you have a 1.5 h time slot in the calendar, you may also check the "Repeat AutoPlaylist Until Show is Full" box to make your Playlist repeat itself automatically within the time slot. Done! Congratulations, you made it! Besides the steps above, you don't have to do anything else, and your show will simply broadcast itself via LibreTime – and no, you don't have to switch "On Air", LibreTime is doing this for you as well.
https://docs.radioangrezi.de/engineering/automatic-broadcasting
2021-10-16T09:06:08
CC-MAIN-2021-43
1634323584554.98
[]
docs.radioangrezi.de
CloudAware Policy Anatomy Under the section POLICY LIST click the policy name. Click the tab 'Editor' on the left → open the tab 'Code' to review the policy code. 1) // SObject Type Define an input object your policy will be checking (e.g. AWS EC2 instances). 2) // Output SObject Type Select the output object type which will store the policy check results (e.g. CloudAware Policy Violation). You will not be able to make any changes to the input object and the output object type selected once the policy is deployed! As for other changes, you can make updates to unmanaged policies. Managed policies can be updated by Cloudaware only. 3) // How many objects will be processed per job call You can change the batch size (final Integer batchSize = ???). Maximum size is 2000. If exceeded, you can receive the error "Apex CPU time limit exceeded". 4) // Lifecycle configuration Configure the lifecycle to define under what conditions the output objects are created or closed after evaluation of input objects (e.g. incomplianceCreates means that the output object is created only in cases when an input object is incompliant). You can customize your policy either using pre-built lifecycles or writing a lifecycle of your own applying available methods. Use the following methods to define the conditions when an output object is created or closed: incomplianceCreates() - if an input object is considered to be incompliant based on evaluation in Process, the corresponding output object gets the status 'incompliant'; complianceCreates() - if an input object is considered to be compliant based on evaluation in Process, the corresponding output object gets the status 'compliant'; complianceCloses() - if an input object is recognized as compliant, the corresponding output object gets "Close Date" assigned; incomplianceCloses() - if an input object is recognized as incompliant, the corresponding output gets "Close Date" assigned; inapplicabilityCreates() and inapplicabilityCloses() - if an input object is not assigned with any status except 'inapplicable' during Process, the corresponding output object is created or closed as inapplicable (see 6); scopeLossCloses() - if an input object is off the policy scope, e.g. it has been deleted, the corresponding output object gets "Close Date" assigned; deleteAfterDays(Integer value) - this parameter defines the number of days before the deletion of the output object and should be used along with at least one <...Closes()> condition for correct configuration. updateField(String objectFieldName/SObjectField field, String outputKey) - this parameter allows you to store data in corresponding fields of an output object and refer to input objects based on their master-detail relationship, lookups, text Ids, etc. For example, you can save the ARN of the AWS IAM User evaluated by the policy in the output object using .updateField(CA10__CaBenchmarkCheck__c.CA10__awsIamUserArn__c, 'userArn') in the Lifecycle; externalIdField(SObjectField field) - use this parameter to define externalIdField. 5) // Start code Use the variable context to work with a policy context (global void start() {...}). 6) // SOQL Query Define input objects that will enter the policy scope. You can make changes to the SOQL query to define what objects will be evaluated and what will not. 7) // Process Set up the logic your policy will use to check an input object for compliance and assign the corresponding statuses to output objects. Input objects are evaluated one-by-one.
The policy logic may be the following: For each AWS EC2 Instance with a value A in <FIELD1>, assign the status 'incompliant' to the output object. If <FIELD1> is B, assign the status 'compliant'. By default, every object which is evaluated in Process is considered Inapplicable. The policy logic may be customized any way you like; however, Salesforce limits must be observed. Keep in mind that you should re-configure output objects in the policy lifecycle so that they can be saved with the corresponding statuses (see step 3). 8) // Finish Code Customize your policy. This part of the policy is run after all objects are evaluated.
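As a purely hypothetical illustration of the kind of scope query described in step 6 (the sObject and field API names below are invented placeholders, not Cloudaware's actual schema):

```sql
SELECT Id, Name, Field1__c
FROM InputObject__c
WHERE Disabled__c = false
```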
https://docs.cloudaware.com/DOCS/CloudAware-Policy-Anatomy.1174929428.html
2021-10-16T09:30:30
CC-MAIN-2021-43
1634323584554.98
[]
docs.cloudaware.com
This is a setting hack that can be enabled on the "Settings" page of the Mods List. This hack makes the game's window resizable and maximisable. Settings Start Maximised Set whether or not the window will start maximized. Defaults to Disabled. While Resizing TODO Command Line Arguments This hack is affected by certain Command Line Arguments for the Mod Launcher. Version History 1.23 -. 1.19 -. 1.18 Moved this hack to the new "Settings" page. 1.16.3 Made the game maximisable and added a setting for it to start maximised. 1.16 Added this hack.
https://docs.donutteam.com/docs/lucasmodlauncher/hacks/resizable-window
2021-10-16T09:59:53
CC-MAIN-2021-43
1634323584554.98
[]
docs.donutteam.com
Using VisualShaders Just as VisualScript is an alternative for users that prefer a graphical approach to coding, VisualShaders are the visual alternative for creating shaders. As shaders are inherently linked to visuals, the graph-based approach with previews of textures, materials, etc. offers a lot of additional convenience compared to purely script-based shaders. On the other hand, VisualShaders do not expose all features of the shader script and using both in parallel might be necessary for specific effects. Note If you are not familiar with shaders, start by reading Introduction to shaders. Creating a VisualShader VisualShaders can be created in any ShaderMaterial. To begin using VisualShaders, create a new ShaderMaterial in an object of your choice. Then assign a VisualShader resource to the Shader property. Click on the new VisualShader resource and the Visual Shader Editor will open automatically. The layout of the Visual Shader Editor comprises two parts: the upper toolbar and the graph itself. From left to right in the toolbar: The Add Node button displays a popup menu to let you add nodes to the shader graph. The drop-down menu is the shader type: Vertex, Fragment and Light. Like for script shaders, it defines what built-in nodes will be available. The following buttons and number input control the zooming level, grid snapping and distance between grid lines (in pixels). The last icon shows the generated shader code corresponding to your graph. Note Although VisualShaders do not require coding, they share the same logic with script shaders. It is advised to learn the basics of both to have a good understanding of the shading pipeline. The visual shader graph is converted to a script shader behind the scenes, and you can see this code by pressing the last button in the toolbar. This can be convenient to understand what a given node does and how to reproduce it in scripts. Using the Visual Shader Editor By default, every new VisualShader will have an output node. Every node connection ends at one of the output node's sockets. A node is the basic unit to create your shader. To add a new node, click on the Add Node button in the upper left corner or right-click on any empty location in the graph, and a menu will pop up. This popup has the following properties: If you right-click on the graph, this menu will be called at the cursor position and the created node, in that case, will also be placed under that position; otherwise, it will be created at the graph's center. It can be resized horizontally and vertically allowing more content to be shown. The size and the tree content position are saved between calls, so if you accidentally close the popup you can easily restore its previous state. The Expand All and Collapse All options in the drop-down option menu can be used to easily list the available nodes. You can also drag and drop nodes from the popup onto the graph. While the popup has nodes sorted in categories, it can seem overwhelming at first. Try to add some of the nodes, plug them into the output socket and observe what happens. When connecting any scalar output to a vector input, all components of the vector will take the value of the scalar. When connecting any vector output to a scalar input, the value of the scalar will be the average of the vector's components. Visual Shader nodes Below are some special nodes that are worth knowing about. The list is not exhaustive and might be expanded with more nodes and examples.
Expression node The Expression node allows you to write Godot Shading Language (GLSL-like) expressions inside your visual shaders. The node has buttons to add any number of input and output ports as required and can be resized. You can also set up the name and type of each port. The expression you have entered will apply immediately to the material (once the focus leaves the expression text box). Any parsing or compilation errors will be printed to the Output tab. The outputs are initialized to their zero value by default. The node is located under the Special tab and can be used in all shader modes. The possibilities of this node are almost limitless – you can write complex procedures, and use all the power of text-based shaders, such as loops, the discard keyword, extended types, etc. For example: Fresnel node The Fresnel node is designed to accept normal and view vectors and produces a scalar which is the saturated dot product between them. Additionally, you can set up the inversion and the power of the equation. The Fresnel node is great for adding a rim-like lighting effect to objects. Boolean node The Boolean node can be converted to Scalar or Vector to represent 0 or 1 and (0, 0, 0) or (1, 1, 1) respectively. This property can be used to enable or disable some effect parts with one click. If node The If node lets you set up which vector will be returned as the result of the comparison between a and b. There are three vectors which can be returned: a == b (in that case the tolerance parameter is provided as a comparison threshold – by default it is equal to the minimal value, i.e. 0.00001), a > b and a < b. Switch node The Switch node returns one of two vectors depending on whether the boolean condition is true or false. Boolean was introduced above. If you convert a vector to a true boolean, all components of the vector should be above zero. Note The Switch node is only available on the GLES3 backend. If you are targeting GLES2 devices, you cannot use switch statements.
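As a concrete illustration of the Expression node described above, here is a small sketch of the kind of expression body you might enter; the port names uv (a vec2 input) and col (a vec3 output) are assumptions you would set up on the node yourself.

```glsl
// Simple stripe pattern written in the Expression node body.
float stripes = sin(uv.x * 40.0) * 0.5 + 0.5;
col = vec3(stripes, uv.y, 1.0 - stripes);
```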
https://docs.godotengine.org/en/latest/tutorials/shaders/visual_shaders.html
2021-10-16T09:45:37
CC-MAIN-2021-43
1634323584554.98
[array(['../../_images/shader_material_create_mesh.png', '../../_images/shader_material_create_mesh.png'], dtype=object) array(['../../_images/visual_shader_create.png', '../../_images/visual_shader_create.png'], dtype=object) array(['../../_images/visual_shader_editor2.png', '../../_images/visual_shader_editor2.png'], dtype=object) array(['../../_images/vs_popup.png', '../../_images/vs_popup.png'], dtype=object) array(['../../_images/vs_expression.gif', '../../_images/vs_expression.gif'], dtype=object) array(['../../_images/vs_expression2.png', '../../_images/vs_expression2.png'], dtype=object) array(['../../_images/vs_fresnel.png', '../../_images/vs_fresnel.png'], dtype=object) array(['../../_images/vs_boolean.gif', '../../_images/vs_boolean.gif'], dtype=object) array(['../../_images/vs_if.png', '../../_images/vs_if.png'], dtype=object) array(['../../_images/vs_switch.png', '../../_images/vs_switch.png'], dtype=object) ]
docs.godotengine.org
Microsoft Power Platform ISV Studio [This topic is pre-release documentation and is subject to change.] ISV Studio is designed to become the go-to Power Platform destination for Independent Software Vendors (ISVs) to monitor and manage their applications. ISV Studio provides a consolidated cross-tenant view of all the applications an ISV is distributing to customers. Note Unsure about entity vs. table? See Developers: Understand terminology in Microsoft Dataverse. Important - ISV Studio is a preview feature. - Preview features aren’t meant for production use and may have restricted functionality. These features are available before an official release so that customers can get early access and provide feedback. ISV Studio supports applications built on the Microsoft Dataverse that are published to and deployed through AppSource. No telemetry will be provided on side-loaded solutions not deployed through AppSource. The applications currently available on the Dataverse include Power Apps and Dynamics 365 for Sales, Marketing, Service, and Talent. ISV Studio now provides telemetry features in Dynamics 365 Finance and Operations. When an end user installs an application from AppSource, a consent dialog is displayed asking the user to acknowledge that contact, usage, and transactional information may be shared with the application provider. This information is used by the provider to support billing and other transactional activities and to enable telemetry in ISV Studio for the ISV to learn from and act on. A customer can request that data not be shared with the provider, in which case Microsoft will remove all data from that particular tenant within ISV Studio. To access the public preview of ISV Studio, navigate your browser to. Prerequisites for Microsoft Dataverse The ISV must be associated with a Microsoft-registered Partner organization [ISV] that has one or more supported apps published in AppSource. Supported apps include model-driven apps created using Power Apps and Dynamics 365 apps such as Dynamics 365 Sales and Dynamics 365 Customer Service. Prerequisites for Dynamics 365 Finance and Operations - For Dynamics 365 Finance and Operations, update the SolutionID in the descriptors with the ProductId of your offer in Partner Center. The ProductId of your offer can be found in the URL in Partner Center. - Ask customers to install the latest solution with the above. They need to be on version 10.0.16 or above to see the telemetry feature in ISV Studio. More information: ISV Studio solutions Admin access to ISV Studio To be an admin in ISV Studio, your Azure Active Directory account must be configured as an app owner in Partner Center for your publisher account. Once you have admin access, you can give more users access to ISV Studio directly from the studio. Grant read access to users If you want more users within your tenant/organization to get read-only access to ISV Studio, you must have admin access. Once you are an admin for your publisher account, follow the steps below to give more users read-only access: In ISV Studio, select the Settings gear icon at the top right. Select Manage access. Search for a Publisher account in the organization. Select a user within your organization to give access. Select Save. Note You can only search and add users within your tenant/organization. After giving access to the user, a confirmation email is sent to the user with access details.
Remove read access to users You can follow the same steps in the same window to remove a user's read-only access to ISV Studio. Continue reading the App page and Tenant page articles listed below to learn about the capabilities of ISV Studio. How to provide feedback Please send an email to [email protected] with any feedback or questions. Your feedback is important in shaping these experiences moving forward. In this Section App page Tenant page AppSource checker Connector Certification See also Introduction to solutions Publish your app on AppSource
https://docs.microsoft.com/en-us/powerapps/developer/data-platform/isv-app-management
2021-10-16T10:25:11
CC-MAIN-2021-43
1634323584554.98
[array(['media/isv-studio-home-page.png', 'Studio home page Studio home page.'], dtype=object)]
docs.microsoft.com
Trend Micro Vision One supports the SAML 2.0 protocol as defined by the OASIS organization. Any IdP vendor that supports SAML 2.0 should work with Trend Micro Vision One; however, Trend Micro can only guarantee support for the tested IdP providers and does not support other single sign-on technologies. The Trend Micro Vision One Service Provider metadata XML file downloads to your computer. For more information, see the topic below for your IdP: Configuring Active Directory Federation Services or Configuring Azure Active Directory. SAML single sign-on is now configured, and you can add SAML accounts in Trend Micro Vision One. For more information, see Configuring Accounts.
https://docs.trendmicro.com/en-us/enterprise/trend-micro-xdr-online-help/administrative-setti/administration/saml-single-sign-on/configuring-saml-sin.aspx
2021-10-16T09:37:56
CC-MAIN-2021-43
1634323584554.98
[]
docs.trendmicro.com