Columns: content, url, timestamp, dump, segment, image_urls, netloc
My student can't see paid content If one of your students is unable to access our paid courses, there are a few things you can do. Check they have an active subscription Go to your Teacher Dashboard and find the student. Under Subscription, it will either say how much time they have left on their subscription or, if they do not have one, it will say 'Free courses only'. If they don't have an active subscription, you will need to purchase one before they'll be able to access all content. Check they're logged in with the correct account If they do have a subscription, they may have two accounts with different email addresses. A subscription is attached to only one account, so if this happens, their other account does not have access to all content. If the student is logged in to the wrong one, ask them to log out and log back in using the correct email address. You can check their email address by clicking on the student's name in the teacher dashboard. For information on merging two accounts, see My student has multiple accounts.
https://docs.groklearning.io/article/92-student-cant-see-content
2019-04-18T15:01:46
CC-MAIN-2019-18
1555578517682.16
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/59b46e292c7d3a73488cbc3e/images/5b0e4f8a2c7d3a0fa9a267c0/file-iD50riJDku.png', None], dtype=object) ]
docs.groklearning.io
8x8 Virtual Office for NetSuite allows you to define custom codes to collect information on the outcome of each call per user. For example, if you are a customer support agent attending to customer issues, you can indicate whether the issue was solved, escalated, requires a scheduled call back, and so on. If you define call result codes, at the end of each call, you are prompted to select an outcome from the pre-defined list. This information is saved in the call log. To define call result codes:
https://docs.8x8.com/8x8WebHelp/VONetSuiteAdmin2_0/Content/VONetSuite2_0/Settings_CallCodes.htm
2019-04-18T14:26:06
CC-MAIN-2019-18
1555578517682.16
[]
docs.8x8.com
Adding Form Fields If your form is an external form, you'll need to add the fields to your form as well. Make sure that the name attribute of each form field is the same as what you've entered in the "field name" column. Another use for the add fields feature is to add fields only for use within the Form Tools UI. For example, if your form was keeping track of information about individual people, perhaps you'd want a "Comments" field where you could track private information about the person. You could do this by adding the field here, then assigning the new field to one or more Views - perhaps one that's only accessible by you, the administrator.
https://docs.formtools.org/userdoc/form_management/adding_form_fields/
2019-04-18T15:16:14
CC-MAIN-2019-18
1555578517682.16
[]
docs.formtools.org
General, Environment, Options Dialog Box Note This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2019. Download it here. Note The dialog boxes and menu commands you see might differ from those described in Help depending on your active settings or edition. To change your settings, open the Tools menu, and then choose Import and Export Settings. For more information, see Customizing Development Settings in Visual Studio. Visual Experience Color theme Choose the Blue, Light or Dark color theme for the IDE. You can install additional predefined themes, and create custom themes, by downloading and installing the Visual Studio 2015 Color Theme Editor from the Visual Studio Marketplace. After you install this tool, additional color themes appear in the Color theme list box. Apply title casing in menu bar Menus are in Title Casing by default in Visual Studio 2015. Un-check this option to set them to ALL CAPS. Automatically adjust visual experience based on client performance Specifies whether Visual Studio sets the adjustment to the visual experience automatically or you set the adjustment explicitly. This adjustment may change the display of colors from gradients to flat colors, or it may restrict the use of animations in menus or popup windows. Enable rich client experience Enables the full visual experience of Visual Studio, including gradients and animations. Clear this option when using Remote Desktop connections or older graphics adapters, because these features may have poor performance in those cases. This option is available only when you clear the Automatically adjust visual experience based on client option. Use hardware graphics acceleration if available Uses hardware graphics acceleration if it is available, rather than software acceleration. Other Items shown in Window menu Customizes the number of windows that appear in the Windows list of the Window menu. Type a number between 1 and 24. By default, the number is 10. Items shown in recently used lists Customizes the number of most recently used projects and files that appear on the File menu. Type a number between 1 and 24. By default, the number is 10. This is an easy way to retrieve recently used projects and files. Show status bar Displays the status bar. The status bar is located at the bottom of the IDE window and displays information about the progress of ongoing operations. Close button affects active tool window only Specifies that when the Close button is clicked, only the tool window that has focus is closed and not all of the tool windows in the docked set. By default, this option is selected. Auto Hide button affects active tool window only Specifies that when the Auto Hide button is clicked, only the tool window that has focus is hidden automatically and not all of the tool windows in the docked set. By default, this option is not selected. Manage File Associations Displays the Windows Set Program Associations dialog box, where you can view file extensions for items that are typically associated with Visual Studio and the current default program for opening each type of file. To make Visual Studio the default application for types of files that are not already associated with it, choose the file extension, and then choose Save. This option can be useful if you have two versions of Visual Studio installed on the same computer and you later uninstall one of the versions. 
After you uninstall, the icons for Visual Studio files no longer appear in File Explorer. In addition, Windows no longer recognizes Visual Studio as the default application for editing these files. This option restores those associations. See Also Environment Options Dialog Box Customizing window layouts
https://docs.microsoft.com/en-us/visualstudio/ide/reference/general-environment-options-dialog-box?view=vs-2015
2019-04-18T14:49:25
CC-MAIN-2019-18
1555578517682.16
[]
docs.microsoft.com
Installing RPM and Debian Packages Manually or Through a Configuration Management Tool This tutorial describes how to install and start the Platform9 OpenStack host agent using the RPM or Debian packages extracted from the installer. Instructions on extracting the packages from the installer can be found here. Step 1 - (Optional) Add the Packages to Your Yum or Apt Repository Add the packages to your repository and make sure that the repository is accessible to the machine that needs to install the packages. Consult your operating system's documentation to find instructions on creating and using repositories. Step 2 - Install the Packages Install the packages using your operating system's package manager or a configuration management tool such as Chef/Puppet/Ansible. In the case of CentOS/Red Hat/Scientific Linux: yum install -y pf9-hostagent yum install -y pf9-comms If both packages are registered in a yum repository, installing pf9-comms will automatically install pf9-hostagent. If installing from local package files instead of a repository, you will need to specify the full file names of the packages, including version and extension. Step 3 - Start the services service pf9-comms start service pf9-hostagent start
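If you are driving these steps from a Python-based provisioning script rather than typing them by hand, the same commands can be wrapped with the standard subprocess module. This is a minimal sketch under the assumptions above (a yum-based host with the packages reachable from a configured repository); it is not an official Platform9 tool:

import subprocess

def run(cmd):
    # Fail fast if any step returns a non-zero exit code.
    subprocess.run(cmd, check=True)

# Step 2 - install the packages (pf9-comms pulls in pf9-hostagent
# when both are registered in a yum repository).
run(["yum", "install", "-y", "pf9-hostagent"])
run(["yum", "install", "-y", "pf9-comms"])

# Step 3 - start the services.
run(["service", "pf9-comms", "start"])
run(["service", "pf9-hostagent", "start"])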
https://docs.platform9.com/support/installing-rpm-and-debian-packages-manually-or-through-a-configuration-management-tool/
2019-04-18T14:21:33
CC-MAIN-2019-18
1555578517682.16
[]
docs.platform9.com
Introduction¶ MODE-TASK is a collection of tools for analysing normal modes and performing principal component analysis on biological assemblies. Novel coarse-graining techniques allow for analysis of very large assemblies without the need for high-performance computing clusters and workstations. Cite this project¶ Caroline Ross, Bilal Nizami, Michael Glenister, Olivier Sheik Amamuddy, Ali Rana Atilgan, Canan Atilgan, Özlem Tastan Bishop; MODE-TASK: Large-scale protein motion tools, Bioinformatics, May 2018. Contributing¶ To contribute to the documentation, please follow this guide. To contribute to the source code, submit a pull request to our GitHub repository.
https://mode-task.readthedocs.io/en/latest/intro.html
2019-04-18T15:12:04
CC-MAIN-2019-18
1555578517682.16
[]
mode-task.readthedocs.io
Installation The documents in this section outline all the necessary steps for the installation of ApiOmat. All supported operating systems can be found on the requirements page. Then, before running ApiOmat, kindly set up your operating system as outlined on our System preparations page. General definitions and conventions Variables The installation manuals use some variables, which have a different meaning depending on the operating system. The following table explains the specific values: Users and Groups On Linux installations, each service will run under its own user name. Most of the time, the username corresponds to aom-<SERVICENAME>. The user and its group, apiomat, are created during installation and removed after uninstallation. Common paths like /etc/apiomat and /var/log/apiomat are owned by the group, while all other (sub-)directories are owned by the service user. On Windows installations, services started via the services tool run as the system user. Next steps After installation Finally, after completing your installation, review the next steps: Common (all operating systems) - Create a SuperAdmin by sending a POST request to /yambas/rest/initSuperAdmin with the form parameters "email" and "password" (not necessary when using the graphical installer), e.g.: curl -X POST -d "[email protected]&password=secret" - Change the Default Organization password in apidocs and apiomat.yaml - Obtain a license key - Remove or set up firewall rules for the installer and admin webapp from the Tomcat container - Run the graphical installer Linux only You can activate the backup script for the restore module by executing chmod a+x /etc/cron.daily/restoreModule on each application server
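The SuperAdmin bootstrap call above can also be scripted. This is a minimal sketch using Python's requests library; the host and port are assumptions (replace them with your YAMBAS endpoint) and the email and password are placeholders:

import requests

# Hypothetical endpoint; replace with the address of your YAMBAS service.
url = "http://localhost:8080/yambas/rest/initSuperAdmin"

# Form parameters named in the installation notes above (placeholders).
payload = {"email": "admin@example.com", "password": "secret"}

response = requests.post(url, data=payload)

# A 2xx status code indicates the SuperAdmin account was created.
print(response.status_code, response.text)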
http://docs.apiomat.com/32/Installation.html
2019-04-18T14:37:55
CC-MAIN-2019-18
1555578517682.16
[]
docs.apiomat.com
Generating the GPSS Client Classes A newer version of this documentation is available. Click here to view the most up-to-date release of the Greenplum 5.x documentation. Generating the GPSS Client Classes The Greenplum Stream Server service definition defines the messages and services exposed by the GPSS gRPC server. This definition resides in the gpss.proto file. You must run the protocol buffer compiler (protoc) on the gpss.proto file to generate the classes that the GPSS client uses to query metadata information from, and write data to, Greenplum Database. In some build environments such as gradle, you can configure the build to automatically generate the GPSS client classes for you. Refer to the gRPC examples for your programming language for sample build configurations with automatic code generation. For example, the gradle build and settings files for the examples in the grpc-java repository are configured to automatically generate the Java classes for you. $ protoc --plugin=protoc-gen-grpc-java=../grpc-java/compiler/build/exe/java_plugin/protoc-gen-grpc-java \ --proto_path=./src/main/proto --grpc-java_out=./gen/grpc/ \ --java_out=./gen/java/ gpss.proto
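If your client is written in Python rather than Java (an assumption; the command above targets Java), the grpcio-tools package wraps protoc and can compile gpss.proto in a similar way. A minimal sketch, with placeholder output paths:

# pip install grpcio-tools
from grpc_tools import protoc

# --proto_path points at the directory containing gpss.proto; the output
# directories are placeholders for where the generated message and stub
# modules should be written.
exit_code = protoc.main([
    "grpc_tools.protoc",
    "--proto_path=./src/main/proto",
    "--python_out=./gen/python",
    "--grpc_python_out=./gen/python",
    "gpss.proto",
])
print("protoc exit code:", exit_code)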
http://gpdb.docs.pivotal.io/5160/greenplum-stream/api/gen_classes.html
2019-04-18T14:32:41
CC-MAIN-2019-18
1555578517682.16
[array(['/images/icon_gpdb.png', None], dtype=object)]
gpdb.docs.pivotal.io
This risk check evaluates whether the PayPal account used by the shopper is verified; if it is not, the shopper is marked as unverified. This check also triggers when the payer status is missing. Set a negative trust score for trustworthy shoppers to decrease the chance of a transaction being refused, for example shoppers with a history of legitimate purchases. Configuration options - There are no configuration options for this check.
https://docs.adyen.com/developers/risk-management/revenueprotect-engine-rules/consistency-rules/paypal-verified-shopper
2019-04-18T14:52:03
CC-MAIN-2019-18
1555578517682.16
[]
docs.adyen.com
Access preview features in Talent As part of our continuous rollout of product capabilities, we want to let customers experience new features as soon as possible. Administrators can see and use preview features in their environments. These features are almost ready for general availability and have gone through extensive testing. We are just looking for a final round of customer feedback and validation before we generally release them. This topic describes how an administrator can enable or disable preview features. Enable or disable preview features You can use the Preview Features setting in the Microsoft Dynamics 365 for Talent admin center to enable or disable preview features. By default, the setting is turned off. The action of enabling or disabling preview features is environment-specific. Important By turning on the Preview Features setting, you enable preview features for all users in your organization who are in that environment. By turning off the setting, you disable preview features and make them inaccessible to your users. Preview features have limited support in Talent. They might use fewer privacy and security measures, and they aren't included in the Talent service level agreement. You should not use preview features to process personal data (that is, any information that could identify you), or to process other data that is subject to legal or regulatory compliance requirements. Enable or disable preview features for your organization Attract - On the Setup menu (the gear symbol) in the upper-right corner, select Admin settings. - On the Feature management tab, select the option next to Preview features so that it turns blue. - Optionally, you can control individual features by enabling/disabling specific features on this page. - Refresh your browser to start to see the new features. (Any users who are already signed in will see the features the next time that they sign in, or they can refresh their browser to see the features immediately.) Core HR - Sign in to Talent. The core Human resources workspace will open, from which you'll complete the remaining steps. - Select System administration > Links > System parameters. - On the System Parameters page, on the Preview features tab, set the Enable preview mode for all users option to Yes to make preview features available. Note To disable preview features, use the same basic steps. When you disable preview features, they become inaccessible to your users, and errors might occur in processes that are associated with the features. Features that are currently in preview Attract Relevant Candidates in a Job – Recruiters and hiring managers can easily see which candidates may be the most relevant for the job across all applicants. The top 5 applicants are shown based on the relevance of their resume/profile to the job description. Relevant Jobs – Candidates now see a list of other jobs that are relevant to them based on their resume/profile and the job descriptions. Currently this is shown to candidates once they apply as a suggestion for other opportunities. EEO/OFCCP Support – New activity types enable the use of a predefined form for the collection of Equal Employment Opportunity (EEO) and Office of Federal Contract Compliance Program (OFCCP) data from the candidate. This is a predefined form and is not editable. Note Jobs that are posted are visible only to customers who subscribe to one or more LinkedIn job listing products. Otherwise, customers see a job only if they explicitly search for it. There is a delay when jobs are posted to LinkedIn.
A job might take up to a few hours to appear after it's posted from Attract. Candidate apply – Both internal and external candidates can now apply directly from the job page on the career site. Offer management – Users can now create offer letters from templates that include placeholders. As candidates advance to the Offer stage, recruiters and hiring managers can use the Offer tool to prepare a candidate's formal offer via templates, send the offer for internal approval, and finally send the offer to the candidate for signature. Many new capabilities will be added to the Offer tool over time, and the preview feature will be updated with these capabilities as we are ready to release them to preview. Core HR - Open Enrollment – Benefits open enrollment gives employees a simple, self-service experience for selecting their benefits. Human Resource (HR) administrators can configure the benefits open enrollment process for their organization, and the enrollment experience for employees, by using an easy-to-follow guided solution. Feedback Regardless of whether the feedback is positive or negative, we want to hear from you about your use of the preview features. We encourage you to regularly post your feedback on the following sites as you use these or any other features. Community – This site is a great resource where users can discuss use cases, ask questions, and get community help. Use the following sites to suggest product ideas. Let us know about features that you want to see in the product, and also any changes that you think should be made to existing features. Don't include personal data (any information that could identify you) in your feedback or product review submissions. Information that is collected might be analyzed further, and it won't be used to answer requests under applicable privacy laws. Personal data that is collected separately under these programs is subject to the Microsoft Privacy Statement. Tip Bookmark this topic, and check back often to stay up to date about new preview features as we release them. Feedback Send feedback about:
https://docs.microsoft.com/en-us/dynamics365/unified-operations/talent/access-preview-feature
2019-04-18T15:21:25
CC-MAIN-2019-18
1555578517682.16
[]
docs.microsoft.com
The customer portal is a secure place where your customers can manage their own account information. Every PayWhirl account comes with a unique customer portal hosted at "" that can be accessed from any device on the internet. You can ALSO embed a customer login widget (like we do in our portal demo) into a page on your website so your customers can login securely without ever leaving your website. Main Menu > Manage Widgets > New Widget - To embed the PayWhirl customer login widget simply select the Customer Login widget type and then copy the embed code to paste into html of your website: For example you can create a new "Manage Subscriptions" page and embed a customer login widget on the page for customers to access their account... OR... If you don't want to embed the portal login into your website, you can always link directly to the hosted version on the web. The customer login widget looks like this once it's been installed... You can also integrate an existing website login (if you already have one) to work with your PayWhirl customer portal login so customers only have to login once. For example, if you use an ecommerce platform like Shopify or Bigcommerce customers will technically have TWO different places to manage account information. - The native platform login (usually integrated with your theme) - The PayWhirl customer portal login However, because we create customers in those systems when they make a purchase, customers will have the same credentials for both systems and can login to either/both with the same information. SO... What are your options to create a seamless experience? It's actually a simple solution and works like this: - Find your website's "my account" pages (display after customers login). The files are usually in your theme files on Shopify, Bigcommerce, Etc. - Add a link that says "Manage Subscriptions" (or whatever you'd like) - Load the customer login widget when customers click the link and/or load a sub-section at this point with the paywhirl customer portal login widget. When you integrate the PayWhirl login INSIDE the native theme login it just feels like an added security step (which it is because customers are about to access a secure area within your website, with their stored payment information) and most customers don't even think twice about it, as this is becoming more and more common everyday (2 factor authentication). Most customers will have no problems because their credentials will be the same in both systems. It's a quick and inexpensive solution that will work for MOST businesses that use PayWhirl. However, if you need true "single sign-on" (aka SSO or MultiPass) features for your website or application we also support this through our API but it will take some custom development to implement. Here is the api endpoint for SSO / Multi-Auth if you are interested: With our MultiAuth API endpoint you can automatically login customers and direct them to widgets and/or pages within the customer portal if you need. Related Articles: How customers manage their own account information PayWhirl developers & experts PayWhirl widgets & embed codes Please let us know if you have any questions! Team PayWhirl
https://docs.paywhirl.com/PayWhirl/getting-started-with-paywhirl/quick-start-tutorials-and-how-to-guides/how-to-integrate-paywhirls-customer-portal-login-with-your-website
2019-04-18T14:44:46
CC-MAIN-2019-18
1555578517682.16
[array(['https://uploads.intercomcdn.com/i/o/22411431/b64f8a3ac67525acc4002a4b/Screen+Shot+2017-04-18+at+12.43.33+PM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/22413253/bcea9c7082f075165f2be2fb/Screen+Shot+2017-04-18+at+1.06.54+PM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/22413270/ea955ed81dc7aa4219a94460/Screen+Shot+2017-04-18+at+1.11.38+PM.png', None], dtype=object) ]
docs.paywhirl.com
Moving the SentryOne Database to a New SQL Server Instance - Once you have validated that the SentryOne platform is working as planned with the database on the new server, you can remove the old database from the server that was hosting it. Note: Repeat steps seven and eight on all other machines with SentryOne monitoring services. You won't be asked for the license again. You may now start any other SentryOne client and enter the new SentryOne database location when asked.
https://docs.sentryone.com/help/relocating-the-sentryone-database
2019-04-18T14:55:57
CC-MAIN-2019-18
1555578517682.16
[]
docs.sentryone.com
Use the VLAN policy at the uplink port level to propagate a trunk range of VLAN IDs to the physical network adapters for traffic filtering. The physical network adapters drop the packets from the other VLANs if the adapters support filtering by VLAN. Prerequisites To override the VLAN policy at the port level, enable the port-level overrides. See Configure Overriding Networking Policies on Port Level.
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-CE05FC58-CC1C-44D1-AFFC-C10ADA59BC10.html
2019-04-18T14:34:56
CC-MAIN-2019-18
1555578517682.16
[]
docs.vmware.com
0.9.9.0 Start.swift File changes We took the liberty of making the start.swift class file more elegant and removing some boilerplate code. Now, the code is even clearer. The old syntax is on the left, the new on the right: - we added the onFinishLaunching() method. Override this method to add code that should be executed after the app has loaded its internal library and is ready to execute app code.
https://docs.scade.io/docs/breaking-changes
2019-04-18T15:08:04
CC-MAIN-2019-18
1555578517682.16
[array(['https://d5rrulwm8ujxz.cloudfront.net/UGBreakingChanges_startSwiftChanges.png', None], dtype=object) ]
docs.scade.io
Testing¶ The test cases are located in fontParts.test.test_*. Test Structure¶ import unittest from fontParts.base import FontPartsError class TestFoo(unittest.TestObject): # -------------- # Section Header # -------------- def getFoo_generic(self): # code for building the object def test_bar(self): # Code for testing the bar attribute. def test_changeSomething(self): # Code for testing the changeSomething method. Test Definitions¶ The test definitions should be developed by following the FontParts API documentation. These break down into two categories. - attributes - methods These will be covered in detail below. In general follow these guidelines when developing - Keep the test focused on what is relevant to what is being tested. Don’t test file saving within an attribute test in a sub-sub-sub-sub object. - Make the tests as atomic as possible. Don’t modify lots of parts of an object during a single test. That makes the tests very hard to debug. - Keep the code clear and concise so that it is easy to see what is being tested. Add documentation to clarify anything that is ambiguous. Try to imagine someone trying to debug a failure of this test five years from now. Will they be able to tell what is going on in the code? - If testing an edge case, make notes defining where this situation is happening, why it is important and so on. Edge case tests often are hyper-specific to one version of one environment and thus have a limited lifespan. This needs to be made clear for future reference. - Test valid and invalid input. The base implementation’s normalizers define what is valid and invalid. Use this as a reference. - Only test one thing per test case. Tests are not a place to avoid repeated code, it’s much easier to debug an error in a test when that test is only doing one thing. Testing Attributes¶ Attribute testing uses the method name structure test_attributeName. If more than one method is needed due to length or complexity, the additional methods use the name structure test_attributeNameDescriptionOfWhatThisTests. def test_bar_get(self): foo, unrequested = self.getFoo_generic() # get self.assertEqual( foo.bar, "barbarbar" ) def test_bar_set_valid(self): foo, unrequested = self.getFoo_generic() # set: valid data foo.bar = "heyheyhey" self.assertEqual( foo.bar, "heyheyhey" ) def test_bar_set_invalid(self): foo, unrequested = self.getFoo_generic() # set: invalid data with self.assertRaises(FontPartsError): foo.bar = 123 def test_barSettingNoneShouldFail(self): foo, unrequested = self.getFoo_barNontShouldFail() with self.assertRaises(FontPartsError): foo.bar = None Getting¶ When testing getting an attribute, test the following: - All valid return data types. Use the case definitions to specify these. - (How should invalid types be handled? Is that completely the responsibility of the environment?) Setting¶ When testing setting an attribute, test the following: - All valid input data types. For example if setting accepts a number, test int and float. If pos/neg values are allowed, test both. - A representative sample of invalid data types/values. If an attribute does not support setting, it should be tested to make sure that an attempt to set raises the appropriate error. Testing Methods¶ Testing methods should be done atomically, modifying a single argument at a time. For example, if a method takes x and y arguments, test each of these as independently as possible. The following should be tested for each argument: - All valid input data types. 
For example if setting accepts a number, test int and float. If pos/neg values are allowed, test both. - A representative sample of invalid data types/values. def test_changeSomething(self): bar, unrequested = self.getBar_something() bar.changeSomething(x=100, y=100) self.assertEqual( bar.thing, (100, 100) ) def test_changeSomething_invalid_x(self): bar, unrequested = self.getBar_something() with self.assertRaises(FontPartsError): bar.changeSomething(x=None, y=100) def test_changeSomething_invalid_y(self): bar, unrequested = self.getBar_something() with self.assertRaises(FontPartsError): bar.changeSomething(x=100, y=None) Objects for Testing¶ Objects for testing are defined in methods with the name structure getFoo_description. The base object will be generated by the environment by calling self.objectGenerator("classIdentifier"). This will return a fontParts wrapped object ready for population and testing. It will also return a list of objects that were/are required for generating/retaining the requested object. For example, if an environment doesn’t support orphan glyphs, the unrequested list may contain a parent font. The objects in the unrequested list must not be used within tests. def getFoo_generic(self): foo = self.objectGenerator("foo") foo.bar = "barbarbar" return foo, []
https://fontparts.readthedocs.io/en/latest/development/testing.html
2019-04-18T15:30:49
CC-MAIN-2019-18
1555578517682.16
[]
fontparts.readthedocs.io
Metacloud Neutron Networking Overview When you purchase your private cloud, there are three network configurations available: Customer Managed Network, Cisco Managed Network, and Cisco Managed Gateway Network. Both Cisco Managed network configurations let you create and manage network objects using the Neutron Service and the Networking APIs. As of the Metacloud 4.1 release, project members as well as administrators can create and manage network objects. In Cisco Managed Networks, the Metacloud Support team fully manages your Metacloud Network Platform. A Cisco Managed Network ensures that your Metacloud network and Metacloud environment are managed and monitored by the Metacloud Support Team. In a Customer Managed Network, Metacloud customers manage all aspects of the Metacloud Network Platform. The Metacloud team does not manage any aspects of the Metacloud Network Platform but will notify the customer when network issues are detected. These network objects include networks, subnets, and ports, which other Metacloud OpenStack services can use. Neutron, the networking service, provides an API and CLI that let you define network connectivity, including Layer 3 forwarding and Network Address Translation (NAT), and addressing in the cloud. For more information on the Cisco Managed deployment methodologies and use cases, see: Supported Networking Use Cases and Limitations Cisco Managed Network Cisco Managed Gateway Network Neutron, the networking service, includes the following components: API server: the Metacloud Neutron API server. Messaging: Neutron uses OpenStack's messaging platform to accept and route RPC networking requests between Metacloud services to complete API operations. Read more in the OpenStack Networking Guide. To configure network topologies, administrators can create and configure networks and subnets and instruct other services, like Compute, to attach virtual devices to ports on these networks. Refer to the Metacloud Controller Installation Guide for networking diagrams. Compute is a prominent consumer of networking resources to provide connectivity for its instances. Networking supports each tenant having multiple private networks and enables tenants to choose their own IP addressing scheme, even if those IP addresses overlap with those that other tenants use. There are two types of networks: tenant (project) networks and provider networks. It is possible to share any of these types of networks among tenants as part of the network creation process. Tenant Networks A Tenant Network is the network that provides connectivity to a project. An Administrator can create, delete, and modify tenant networks. Each tenant network is isolated from other tenant networks by a VLAN. Administrators are allowed to create multiple provider or tenant networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers, and other networking infrastructure on the same layer 2 VLAN. Provider Networks Provider networks map to existing physical networks in the data center. Networking services like Compute connect to provider networks by requesting virtual ports. Networking supports each tenant having multiple private networks and enables tenants to choose their own IP addressing scheme, even if those IP addresses overlap with those that other tenants use.
To add a provider network to your existing configuration, you must contact Metacloud Support and file a Change-Maintenance Support ticket to edit your configuration files. Include your network configuration settings and the IP addresses for included routers. Subnets Subnets contain a block of IP addresses and associated configuration state. This is also known as the native IPAM (IP Address Management) provided by the networking service for both tenant and provider networks. Subnets are used to allocate IP addresses when new ports are created on a network. Ports A port is a connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. A port also describes the associated network configuration, such as the MAC and IP addresses to be used on that port. Routers A router is a logical component that forwards data packets between networks. It also provides L3 and NAT forwarding to provide external network access for VMs on tenant networks. Security Groups A security group acts as a virtual firewall for your compute instances to control inbound and outbound traffic. Security groups act at the port level, not the subnet level. Each port in a subnet could be assigned to a different set of security groups. If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for that network. Security groups and security group rules give administrators and tenants the ability to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for security group rules. When a port is created, it is associated with a security group. If a security group is not specified, the port is associated with a ‘default’ security group. By default, the default group allows all ingress traffic from all other instances in the same tenant that are assigned to the default group, and allows all egress traffic. Rules can be added to this group to change the behavior.
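As an illustration of the tenant-network workflow described above, the Networking API can be driven from the openstacksdk Python library (an assumption; any Neutron-compatible client or the CLI works as well). The cloud name, network name, and CIDR below are placeholders, and the entry must exist in your clouds.yaml with valid credentials:

import openstack

# 'metacloud' is a placeholder cloud entry defined in clouds.yaml.
conn = openstack.connect(cloud="metacloud")

# Create a tenant network and attach an IPv4 subnet to it.
network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    name="demo-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="192.168.10.0/24",
)

print(network.id, subnet.cidr)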
http://docs.metacloud.com/4.1/admin-guide/networking-overview/
2019-04-18T14:18:53
CC-MAIN-2019-18
1555578517682.16
[array(['/4.1/images/adminguide/cisco-managed-network.png', 'Cisco Managed Network'], dtype=object) array(['/4.1/images/adminguide/cisco-managed-gateway.png', 'Cisco Managed Gateway'], dtype=object) ]
docs.metacloud.com
You can connect your MailChimp account to PayWhirl by following the steps below: Click "Install App" next to the MailChimp logo on your PayWhirl Integrations page: Next, click "Connect" when prompted: On the next screen, you will be asked to login with your MailChimp username and password. Follow the instructions and login to proceed: You can set the list you'd like to add all customers to in the Integration settings page: You'll also notice that you have an extra section on plan settings. If you setup the MailChimp list for a specific plan, then customers who buy this plan will be added to the specified list. That's it! Once you're all setup you'll start seeing customers added to your MailChimp lists!
https://docs.paywhirl.com/PayWhirl/apps-and-integrations/other-apps-and-integrations/how-to-integrate-mailchimp-with-paywhirl
2019-04-18T14:18:51
CC-MAIN-2019-18
1555578517682.16
[array(['https://uploads.intercomcdn.com/i/o/12654180/bc72a63f8884d9b49efd2962/Screen_Shot_2016-03-24_at_3.23.28_PM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12654207/3810ac7de58f6f9332b2b940/Screen_Shot_2016-03-24_at_3.24.17_PM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12654235/8c7eb729304a7c4c6fa5d134/Screen_Shot_2016-03-24_at_3.24.23_PM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12654250/4591607b9dc5161e5f988557/Screen_Shot_2016-03-24_at_3.28.28_PM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/12654279/1d3cbccfd1bd02685b868293/Screen_Shot_2016-03-24_at_3.21.27_PM.png', None], dtype=object) ]
docs.paywhirl.com
SHazaM¶ - Quantification of mutational load SHazaM includes methods for determining the rate of observed and expected mutations under various criteria. Mutational profiling criteria include rates under SHM targeting models, mutations specific to CDR and FWR regions, and physicochemical property-dependent substitution rates. - Statistical models of SHM targeting patterns Models of SHM may be divided into two independent components: (a) a mutability model that defines where mutations occur and (b) a nucleotide substitution model that defines the resulting mutation. Collectively these two components define an SHM targeting model. SHazaM provides empirically derived SHM 5-mer context mutation models for both humans and mice, as well as tools to build SHM targeting models from data. - Analysis of selection pressure using BASELINe The Bayesian Estimation of Antigen-driven Selection in Ig Sequences (BASELINe) method is a novel method for quantifying antigen-driven selection in high-throughput Ig sequence data. BASELINe uses SHM targeting models to estimate the null distribution of expected mutation frequencies, and provides measures of selection pressure informed by known AID targeting biases. - Model-dependent distance calculations SHazaM provides methods to compute evolutionary distances between sequences or sets of sequences based on SHM targeting models. This information is particularly useful in understanding and defining clonal relationships. For help and questions please contact the Immcantation Group or use the issue tracker. Dependencies¶ Depends: ggplot2, stringi Imports: alakazam, ape, diptest, doParallel, dplyr, foreach, graphics, grid, igraph, iterators, kedd, KernSmooth, lazyeval, MASS, methods, parallel, progress, SDMTools, scales, seqinr, stats, tidyr, utils Suggests: knitr, rmarkdown, testthat Authors¶ Mohamed Uduman (aut) Gur Yaari (aut) Namita Gupta (aut) Jason Vander Heiden (aut, cre) Ang Cui (ctb) Susanna Marquez (ctb) Julian Zhou (ctb) Nima Nouri (ctb) Steven Kleinstein (aut, cph) Citing¶ To cite the selection analysis methods, please use: Yaari G, Uduman M, Kleinstein S (2012). “Quantifying selection in high-throughput Immunoglobulin sequencing data sets.” Nucleic Acids Research, 40(17), e134. doi: 10.1093/nar/gks457. To cite the HH_S5F model and the targeting model generation methods, please use: Yaari G, Vander Heiden J, Uduman M, Gadala-Maria D, Gupta N, Stern J, O’Connor K, Hafler D, Lasserson U, Vigneault F, Kleinstein S (2013). “Models of somatic hypermutation targeting and substitution based on synonymous mutations from high-throughput immunoglobulin sequencing data.” Frontiers in Immunology, 4(358), 1-11. doi: 10.3389/fimmu.2013.00358. To cite the HKL_S1F, HKL_S5F, MK_RS1NF, and MK_RS5NF models, please use: Cui A, Di Niro R, Vander Heiden J, Briggs A, Adams K, Gilbert T, O’Connor K, Vigneault F, Shlomchik M, Kleinstein S (2016). “A Model of Somatic Hypermutation Targeting in Mice Based on High-Throughput Ig Sequencing Data.” The Journal of Immunology, 197(9), 3566-3574. doi: 10.4049/jimmunol.1502263. To see these entries in BibTeX format, use ‘print(
https://shazam.readthedocs.io/en/version-0.1.11_a/
2019-04-18T15:24:34
CC-MAIN-2019-18
1555578517682.16
[]
shazam.readthedocs.io
createTargetingModel - Creates a TargetingModel Description¶ createTargetingModel creates a 5-mer TargetingModel. Usage¶ createTargetingModel(db, model = c("S", "RS"), sequenceColumn = "SEQUENCE_IMGT", germlineColumn = "GERMLINE_IMGT_D_MASK", vCallColumn = "V_CALL", multipleMutation = c("independent", "ignore"), minNumMutations = 50, minNumSeqMutations = 500, modelName = "", modelDescription = "", modelSpecies = "", modelCitation = "", modelDate = NULL) Arguments¶ - vCallColumn - name of the column containing the V-segment allele calls. - multipleMutation - string specifying how to handle multiple mutations occurring within the same 5-mer. If "independent" then multiple mutations within the same 5-mer are counted independently. If "ignore" then 5-mers with multiple mutations are excluded from the total mutation tally. - minNumMutations - minimum number of mutations required to compute the 5-mer substitution rates. If the number of mutations for a 5-mer is below this threshold, its substitution rates will be estimated from neighboring 5-mers. Default is 50. - minNumSeqMutations - minimum number of mutations in sequences containing each 5-mer to compute the mutability rates. If the number is smaller than this threshold, the mutability for the 5-mer will be inferred. Default is 500. - modelName - name of the model. - modelDescription - description of the model and its source data. - modelSpecies - genus and species of the source sequencing data. - modelCitation - publication source. - modelDate - date the model was built. If NULL, the current date will be used. Value¶ A TargetingModel object. Examples¶ # Create model using only silent mutations and ignore multiple mutations model <- createTargetingModel(db, model="S", multipleMutation="ignore") Warning: Insufficient number of mutations to infer some 5-mers. Filled with 0. See also¶ See TargetingModel for the return object. See plotMutability for plotting a mutability model. See createSubstitutionMatrix, extendSubstitutionMatrix, createMutabilityMatrix, extendMutabilityMatrix and createTargetingMatrix for component steps in building a model.
https://shazam.readthedocs.io/en/version-0.1.11_a/topics/createTargetingModel/
2019-04-18T15:24:28
CC-MAIN-2019-18
1555578517682.16
[]
shazam.readthedocs.io
Here's what you get with Phone System in Office 365 A PBX is a phone system within a business.
https://docs.microsoft.com/en-us/microsoftteams/here-s-what-you-get-with-phone-system
2019-04-18T15:25:36
CC-MAIN-2019-18
1555578517682.16
[]
docs.microsoft.com
cancel_or_refund_on_terminal_request The cancel_or_refund_on_terminal_request contains the following data elements (field, type, description): - ped (void): if not specified, the PED object will be automatically populated. - merchant_account (char): The merchant account processing this transaction. Transactions can be performed with any of the merchant accounts that were returned when registering the POS. - tender_reference (char): A reference to the tender provided by the PED. - terminal_id (char): The terminal_id of the PED. - handle_receipt (boolean): Specifies that the POS handles and prints receipts. If omitted, it is required that the PED prints the receipt. If there is no printer unit, the transaction will fail. - additional_data_obj (additional_data_struct): Contains key/value pairs that can be used by the merchant to return specific additional data, in particular in the final transaction result.
https://docs.adyen.com/developers/point-of-sale/build-your-integration/classic-library-integrations/c-library-integration/structs/cancel_or_refund_on_terminal_request
2019-04-18T14:17:50
CC-MAIN-2019-18
1555578517682.16
[]
docs.adyen.com
The custom scripts app on PayWhirl has MANY uses. It can be used to CUSTOMIZE any of the pages PayWhirl provides, from widgets, to the cart & checkout flow, to the customer portal, and much more. The app also allows you to include 3rd party CONVERSION or TRACKING scripts on widgets, payment forms or after completed checkouts. POPULAR USES FOR THE CUSTOM SCRIPTS APP - Conversion Tracking - Affiliate Tracking - Customizing PayWhirl To install the custom tracking scripts integration click Apps & Integration in the Main Menu of your account and then click "Install App" next to Custom Tracking Scripts app: NOTE: If you have multiple tracking scripts, you can install as many custom script apps as you need, or add multiple scripts to a single app. Once you have installed the app successfully you will see two different tabs on the settings page. One section will load scripts on EVERY PAGE PayWhirl provides and the other section will only load scripts AFTER CONVERSIONS, when people complete checkout successfully. Typically the service you are using for tracking will provide a snippet of code that needs to get run on "Every Page" or just after "Conversions". Make sure to paste the code in the correct tab, including <script> tags. PayWhirl also provide variables (information from your customers & checkout data) to help you customize your tracking scripts and/or inject data into scripts as needed. You can use any of the following variables to inject customer data into the tracking scripts: Customer Tracking Variables ---------------------------------------------------------------------- {{ customer.id }} Customer ID {{ customer.first_name }} First name {{ customer.last_name }} Last name {{ customer.email }} Email {{ customer.phone }} Phone {{ customer.address }} Street Address {{ customer.city }} City {{ customer.state }} State/Province {{ customer.zip }} Zip/Postal Code {{ customer.country }} Country Code {{ customer.utm_source }} Acquisition Source {{ customer.utm_medium }} Acquisition Medium {{ customer.utm_term }} Acquisition Keyword/Search Term {{ customer.utm_content }} Acquisition Ad Content {{ customer.utm_campaign }} Acquisition Ad Campaign {{ customer.utm_group }} Acquisition Ad Group On the CONVERSION tracking side (after checkout) there are additional variables: Conversion Tracking Variables ---------------------------------------------------------------------- {{ invoice.id }} Invoice ID {{ invoice.paid }} Invoice Paid Status (0 or 1) {{ invoice.subtotal }} Invoice Subtotal {{ invoice.shipping_total }} Invoice Shipping Costs {{ invoice.tax_total }} Invoice Tax Costs {{ invoice.amount_due }} Invoice Grand Total {{ invoice.promo_code }} Invoice Promo Code (if used) {{ invoice.items[X].description }} Invoice Item Description {{ invoice.items[X].quantity }} Invoice Item Quantity {{ invoice.items[X].amount }} Invoice Item Amount {{ invoice.items[X].currency }} Invoice Item Currency {{ invoice.items[X].sku }} Invoice Item SKU {{ address.address }} Invoice Shipping Address {{ address.city }} Invoice Shipping City {{ address.state }} Invoice Shipping State {{ address.zip }} Invoice Shipping Zip/Postal code {{ address.country }} Invoice Shipping Country {{ address.phone }} Invoice Shipping Phone Number {{ subscription.customer_id }} Customer ID {{ subscription.plan_id }} Plan ID {{ subscription.quantity }} Subscription Quantity {{ subscription.current_period_start }} Current Subscription Period Start Date {{ subscription.current_period_end }} Current Subscription Period End Date {{ 
subscription.trial_start }} Trial Start Date {{ subscription.trial_end }} Trial End Date {{ subscription.installment_plan }} Subscription is an Installment Plan {{ subscription.installments_left }} Subscription Installments Left {{ plan.name }} Plan name {{ plan.setup_fee }} Plan Setup Fee {{ plan.installments }} Plan Number of Installments {{ plan.require_shipping }} Plan Require Shipping {{ plan.require_tax }} Plan Require Tax {{ plan.active }} Plan Active {{ plan.image }} Plan Image {{ plan.description }} Plan Description {{ plan.billing_amount }} Plan Billing Amount {{ plan.billing_interval }} Plan Billing Interval {{ plan.billing_frequency }} Plan Billing Frequency {{ plan.trial_days }} Plan Trial Days {{ plan.sku }} Plan SKU {{ plan.currency }} Plan Currency {{ plan.billing_cycle_anchor }} Plan Billing Cycle Anchor {{ plan.file }} Plan File {{ plan.autorenew_plan }} Plan Auto-renew To {{ plan.tags }} Plan Tags {{ plan.enabled }} Plan Enabled You can also use LOGIC inside of the custom scripts app to help you loop through data from a transaction (for example): {% for item in invoiceitems %} trackItem({ 'sku': '{{ item.sku }}', 'quantity': '{{ item.quantity }}', 'price': '{{ item.amount }}' }); {% endfor %} You can also use jQuery to customize, animate and/or style pages within PayWhirl as needed... Finally, after loading conversion or tracking scripts you can optionally forward customers to a specific thank you page if you'd rather not direct customers to your customer portal after a purchase. NOTE: If you are not familiar with how to edit the code to include the variables we provide, you may need the help of a developer to get scripts working correctly. Related Articles: - LeadDyno Affiliate Tracking - Affiliatly Conversion Tracking - ReferralCandy Affiliate Tracking If you have any questions please let us know. Team PayWhirl
https://docs.paywhirl.com/PayWhirl/apps-and-integrations/other-apps-and-integrations/how-to-install-3rd-party-scripts-for-conversion-tracking-customization
2019-04-18T14:18:06
CC-MAIN-2019-18
1555578517682.16
[array(['https://downloads.intercomcdn.com/i/o/78148573/2ba1b1a76b45ff48e28758b6/customscripts.jpg', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/78148574/4b12954f709461bd614d29ee/Screen_Shot_2016-03-11_at_5.27.59_PM.png', None], dtype=object) ]
docs.paywhirl.com
Hyrax Development From OPeNDAP Documentation We are in the process of moving pages from our Trac Wiki on Hyrax to this wiki. More Soon. Until then, here's where a good bit of the Hyrax design docs are living: Hyrax Design Documentation 1 Background Information 2 Features & Projects These are projects currently in their initial stages. - Fixing the (currently) broken Hyrax catalogs - Relational_Database_Handler - AIS Using NcML - Parse NcML - BES Aggregation using NcML - How to build the DataDDX response in/with Hyrax - Configurable BES cmdln client - Configurable Communication Protocol - AMQP instead of PPT for example 3 Projects that are now Released These might not all be complete, but all have passed through the initial planning and implementation stages and are at a stage where we are adding new capabilities to them or are 'in maintenance.' In Hyrax 1.7: In Hyrax 1.6: In Hyrax 1.5:
http://docs.opendap.org/index.php/Hyrax_Development
2017-02-19T19:04:50
CC-MAIN-2017-09
1487501170249.75
[]
docs.opendap.org
Support Vector Machine estimated performance (Performance Vector) This port delivers a performance vector of the SVM model, which gives an estimation of the statistical performance of this model. weights (Attribute Weights) This port delivers the attribute weights. This is possible only when the dot kernel type is used; it is not possible with other kernel types. Parameters In this process the scale parameter is checked. Uncheck the scale parameter and run the process again. You will see that this time it takes a lot longer than the time taken with scaling.
http://docs.rapidminer.com/studio/operators/modeling/predictive/support_vector_machines/support_vector_machine.html
2017-02-19T18:40:47
CC-MAIN-2017-09
1487501170249.75
[]
docs.rapidminer.com
date¶ Formats a date or datetime according to a format string. For example: {{ mydate|date:"Y-m-d" }} If mydate is {2009,6,1} this returns 2009-06-01 as output. To show the year of the current date: {{ now|date:"Y" }} See also the timesince filter to display a human-readable relative time like 10 hours ago. Timezones¶ Dates in Zotonic are stored in UTC. If a date is displayed then it is converted to the timezone of the current request context. This timezone can be one of the following, in order of preference: - Preferred timezone set by the user - Timezone of the user-agent - Default timezone of the site - Default timezone of the Zotonic server - UTC A specific timezone can be enforced by adding a second parameter to the date-filter. For example, to display a date in UTC: {{ mydate|date:"Y-m-d H:i T":"UTC" }} Timezone and all day date ranges¶ If a resource’s date range is set with the date_is_all_day flag then the dates are not converted to or from UTC but stored as-is. This needs to be taken into account when displaying those dates, otherwise a conversion from (assumed) UTC to the current timezone is performed and the wrong date might be displayed. The timezone conversion can be prevented by adding the date_is_all_day flag to the date-filter as the timezone. For example, to display the start date of a resource: {{ id.date_start|date:"Y-m-d":id.date_is_all_day }} Date formatting characters¶
http://docs.zotonic.com/en/latest/ref/filters/filter_date.html
2017-02-19T18:38:06
CC-MAIN-2017-09
1487501170249.75
[]
docs.zotonic.com
Disk I/O¶ Glances displays the disk I/O throughput. The unit is adapted dynamically. There is no alert on this information. It’s possible to define: - a list of disks to hide - aliases for disk name under the [diskio] section in the configuration file. For example, if you want to hide the loopback disks (loop0, loop1, ...) and the specific sda5 partition: [diskio] hide=sda5,loop.*
http://glances.readthedocs.io/en/stable/aoa/disk.html
2017-02-19T18:44:22
CC-MAIN-2017-09
1487501170249.75
[array(['../_images/diskio.png', '../_images/diskio.png'], dtype=object)]
glances.readthedocs.io
Status Page MemSQL Ops has been deprecated. Please follow this guide to learn how to migrate to SingleStore tools. The Status page in MemSQL Ops shows top-level information about the database workload currently running. The metrics shown on this page include:
https://archived.docs.singlestore.com/v5.5/tools/memsql-ops/status-page/
2022-06-25T10:23:57
CC-MAIN-2022-27
1656103034930.3
[array(['/images/r0z6j9hoQ664TMu95nKZ_Screen%20Shot%202016-03-29%20at%209.00.48%20PM.png', 'image'], dtype=object) ]
archived.docs.singlestore.com
.NET Framework problems with Internet Explorer 11 Summary For 64-bit processes on x64 systems: Go to the HKLM\SOFTWARE\MICROSOFT\.NETFramework registry key and change the EnableIEHosting value to 1. For 32-bit processes on x64 systems: Go to the HKLM\SOFTWARE\Wow6432Node\MICROSOFT\.NETFramework registry key and change the EnableIEHosting value to 1. More information IEHost is a Microsoft .NET Framework 1.1-based technology that provides a better model than ActiveX controls to host controls within the browser. The IEHost controls are lightweight and operate under the .NET security model, where they run inside a sandbox. Starting with the .NET Framework 4, we removed the IEHost.dll file for the following reasons: - IEHost/HREF-EXE-style controls are exposed to the Internet. This poses a high security risk, and most customers who install the Framework benefit very little from this capability. - Managed hosting controls and invoking random ActiveX controls may be unsafe, and this risk cannot be countered in the .NET Framework. Therefore, the ability to host is disabled. We strongly suggest that IEHost should be disabled in any production environment. - Potential security vulnerabilities and assembly versioning conflicts in the default application domain. By relying on COM Interop wrappers to load your assembly, it is implicitly loaded in the default application domain. If other browser extensions do the same, they share the risks in the default application domain, such as disclosing information, and so on. If you are not using strong-named assemblies as dependencies, type loading exceptions can occur. You cannot freely configure the common language runtime (CLR), because you do not own the host process, and you cannot run any code before your extension is loaded. For more information about .NET Framework application compatibility, see Application compatibility in the .NET Framework.
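For a scripted change, the same registry value can be set programmatically. A minimal sketch using Python's standard winreg module (Windows only, run from an elevated process; this mirrors the 64-bit case above, and the 32-bit-on-x64 case would target the Wow6432Node path instead):

import winreg

# Key and value named in the summary above; switch to
# r"SOFTWARE\Wow6432Node\Microsoft\.NETFramework" for 32-bit
# processes on x64 systems.
KEY_PATH = r"SOFTWARE\Microsoft\.NETFramework"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 enables IEHosting; 0 disables it again.
    winreg.SetValueEx(key, "EnableIEHosting", 0, winreg.REG_DWORD, 1)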
https://docs.microsoft.com/en-us/internet-explorer/ie11-deploy-guide/net-framework-problems-with-ie11
2022-06-25T12:13:57
CC-MAIN-2022-27
1656103034930.3
[]
docs.microsoft.com
Replay Configuration You can tune the snapshot-trigger-size and snapshots-to-keep engine variables to make efficient use of the logs and snapshots. A large snapshot-trigger-size decreases the frequency at which snapshots are taken, but it increases the time needed to replay the snapshot. A large snapshots-to-keep increases the number of snapshots available, and it increases the amount of space needed to store the snapshots and logs. snapshots-to-keep defaults to 2. The datadir engine variable stores the location of the snapshots and logs. See Managing Disk Space Used by Transaction Logs and Configuration for more information.
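To check the current values on a running cluster, the variables can be queried over the MySQL wire protocol. A minimal sketch using the pymysql driver; the host, port, and credentials are placeholders, and note that the engine variable names are typically spelled with underscores (for example snapshots_to_keep) rather than the hyphenated form shown above:

import pymysql

# Placeholder connection details for the master aggregator.
conn = pymysql.connect(host="127.0.0.1", port=3306, user="root", password="")
try:
    with conn.cursor() as cur:
        # Matches snapshot_trigger_size and snapshots_to_keep, among others.
        cur.execute("SHOW VARIABLES LIKE 'snapshot%'")
        for name, value in cur.fetchall():
            print(name, "=", value)
finally:
    conn.close()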
https://docs.singlestore.com/db/v7.3/en/user-and-cluster-administration/high-availability-and-disaster-recovery/transaction-logs-snapshots/replay-configuration.html
2022-06-25T10:54:29
CC-MAIN-2022-27
1656103034930.3
[]
docs.singlestore.com
Skinning Editor The Skinning Editor is available as a module in the Sprite Editor after you install the 2D Animation package. You can use the available tools in the Skinning Editor to create the bones of the animation skeleton, generate and edit the mesh geometry of your character, and adjust the weights used to bind the bones to the Sprite meshes. To open your imported character in the Skinning Editor: Select the character Prefab created after importing your character with the PSD Importer. Select the Prefab and go to its Inspector window. Select the Sprite Editor button to open the Prefab in the Sprite Editor. In the Sprite Editor, open the drop-down menu at the upper left of the editor window and select the Skinning Editor module. See the Editor tools and shortcuts page for more information about the different features and tools available in the Skinning Editor. How to select a Sprite in the editor To select a Sprite in the Skinning Editor window: Double-click a Sprite to select it in the editor window. An orange outline appears around the Sprite that is selected (you can change the outline color in Tool Preferences). If the Sprite you want to select is behind other Sprites, hover over where the Sprite is, and double-click to cycle through all Sprites at the cursor location until you reach the desired Sprite. Double-click on a blank area in the editor window to deselect all Sprites. How to select bone or Mesh vertices in the editor To select a bone or mesh vertices when using the Bone and Geometry tools: Click a bone or mesh vertex to select it specifically. Draw a selection rectangle over multiple bones or vertices to select them all at once. Right-click to deselect any selected bone or mesh vertices.
https://docs.unity3d.com/Packages/[email protected]/manual/SkinningEditor.html
2022-06-25T10:34:54
CC-MAIN-2022-27
1656103034930.3
[]
docs.unity3d.com
This 16-entry-deep FIFO contains data received by the UART from JTAG. The FIFO bit definitions are shown in the following table. Reading this register results in reading the data word from the top of the FIFO. When a read request is issued to an empty FIFO, a bus error (SLVERR) is generated and the result is undefined. The register is a read-only register. Issuing a write request to the receive data FIFO does nothing but generates a successful write acknowledgment. The following table shows the location for data on the AXI slave interface. The register is only implemented if C_USE_UART is set to 1.
https://docs.xilinx.com/r/en-US/pg115-mdm/UART-Receive-FIFO-Register-UART_RX_FIFO
2022-06-25T10:46:09
CC-MAIN-2022-27
1656103034930.3
[]
docs.xilinx.com
fn:one-or-more( arg as item()* ) as item()+ Returns $arg if it contains one or more items. Otherwise, raises an error [err:FORG0004]. For detailed type semantics, see Section 7.2.16 The fn:zero-or-one, fn:one-or-more, and fn:exactly-one functions[FS]. fn:one-or-more( () ) => XDMP-ZEROITEMS exception fn:one-or-more("hello") => "hello"
https://docs.marklogic.com/fn:one-or-more
2022-06-25T10:28:43
CC-MAIN-2022-27
1656103034930.3
[]
docs.marklogic.com
Hello. I have an Exchange 2013 on-premises server environment like this: - Active Directory: 1 server - Client Access: 1 server (Exchange 2013 CU8) - Mailbox Database: 2 servers (Exchange 2013 CU8) The Exchange version is 2013 CU 8, and I am planning an upgrade to CU 23. The upgrade process includes running the setup.exe /prepareAD command. On which server should I execute this command? Case 1. When upgrading the 3 servers with CU 8 installed to CU 23, execute it 3 times, once on each. Case 2. Among the 3 servers with CU 8 installed, execute setup.exe /prepareAD on only one, then upgrade to CU 23 without executing it on the rest of the servers. Case 3. Execute setup.exe /prepareAD only on the Active Directory server. Which of the above 3 cases should I choose? Additional question: when upgrading Exchange 2013 from CU 8 to CU 23, should I also run setup.exe /prepareschema? As far as I know, the schema version is the same, so I don't think it needs to be run, right?
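For reference, the preparation steps the question refers to are normally run from the folder where the CU package was extracted, together with the license-acceptance switch; a sketch (the extraction location is a placeholder, and this does not answer the which-server question itself):
rem Run from the extracted Exchange 2013 CU23 setup folder
Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms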
https://docs.microsoft.com/en-us/answers/questions/317591/exchange-2013-cu-8-to-cu-23-upgrade.html
2022-06-25T12:37:07
CC-MAIN-2022-27
1656103034930.3
[]
docs.microsoft.com
SQLSweet16!, Episode 1: Backup Compression for TDE-enabled Databases. Figure 1: Backup Compression with TDE (SQL Server 2014). Figure 2: Backup Compression with TDE (SQL Server 2016).
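The article body was lost in extraction; going only by the title and figure captions, it concerns the SQL Server 2016 ability to get effective backup compression on TDE-enabled databases, which is typically achieved by specifying a MAXTRANSFERSIZE larger than 64 KB. A minimal sketch of such a backup (the database name, path, and instance are placeholders, not taken from the article):
sqlcmd -S . -E -Q "BACKUP DATABASE [TDEDemo] TO DISK = N'D:\Backups\TDEDemo.bak' WITH COMPRESSION, MAXTRANSFERSIZE = 131072, STATS = 10;"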
https://docs.microsoft.com/en-us/archive/blogs/sqlcat/sqlsweet16-episode-1-backup-compression-for-tde-enabled-databases
2022-06-25T12:40:11
CC-MAIN-2022-27
1656103034930.3
[]
docs.microsoft.com
5 Overview¶ In this section we present the basic mechanism of the OptServer. Sec. 5.1 (Synchronous Optimization) Sec. 5.2 (Asynchronous Optimization) Sec. 5.3 (With or without the MOSEK API) Sec. 5.4 (Open or encrypted mode) 5.1 Synchronous Optimization¶ The easiest way to submit an optimization problem to the OptServer is in synchronous mode, where the caller is blocked while waiting for the optimization: A submission request is sent over to the OptServer and the problem is transferred. The submitter is put on hold. The OptServer runs the optimizer and waits for the results. When the optimizer terminates, the OptServer collects the result and passes it over to the client. The client receives the solution and resumes. The process can be represented as in Fig. 5.1. This workflow has the following advantages: It is effective for problems where the solution is expected reasonably quickly. The changes to the code compared to a local optimization are almost nonexistent. They boil down to invoking a different method in place of the usual optimize or similar. 5.2 Asynchronous Optimization¶ The OptServer also accepts jobs in asynchronous mode, where the client is not blocked while waiting for the result: A submission request is sent over to the OptServer and the problem is transferred. The client regains control and continues its own execution flow. The client can poll the OptServer at any time about the job status and solution availability. The OptServer runs the optimizer and waits for the results. When the optimizer terminates, the OptServer collects the results, which are available to the client the next time it queries. The process can be represented as in Fig. 5.2. Asynchronous mode is particularly suitable when A job is expected to run for a long time. One wants to submit a set of jobs to run in parallel. The submitter is a short-lived process. 5.3 With or without the MOSEK API¶ Calling OptServer using the MOSEK API The MOSEK API provides an interface to invoke the OptServer from the client, both in synchronous and asynchronous mode. It is currently available for the Optimizer API (synchronous and asynchronous) and Fusion (synchronous). The API is a set of functions such as optimizermt, asyncoptimize, asyncpoll and similar, which form a replacement for the standard optimize call, while the rest of the MOSEK code (creating the task, loading data, retrieving results) remains the same. The details and examples can be found in the manuals for the Optimizer and Fusion APIs. It is possible to retrieve the log via a log handler and to interrupt a solver from a callback handler also during remote optimization. Calling OptServer directly Alternatively it is possible to call the OptServer through a REST API, submitting an optimization problem in one of the formats supported by MOSEK. In this case the caller is responsible for assembling the data, communicating with the solver and interpreting the answer. Details and examples can be found in Sec. 7 (REST API tutorials). Using this approach it is possible to perform optimization from environments that cannot support a MOSEK client, for example from a Web application. 5.4 Open or encrypted mode¶ The server can be used in two modes. Fully open If no SSL keys are provided at installation, the server will run using HTTP only, without any user authentication, with anonymous job submission open to everybody and without the possibility to use the Web GUI. This is the only mode supported by the MOSEK client API up to version 9.1.
Encrypted If SSL keys are provided at installation, the server will run using HTTPS only (also for job submission). Users need to be authenticated (for example via tokens) unless anonymous submission is explicitly enabled. In this mode it is possible to enable the Web GUI. See Sec. 6.3 (Security).
https://docs.mosek.com/10.0/opt-server/overview.html
2022-06-25T11:29:01
CC-MAIN-2022-27
1656103034930.3
[]
docs.mosek.com
To create an IP customization using the IP catalog, select the IP by double-clicking it, select the Customize IP command from the toolbar, or right-click and select the command. The Customize IP dialog box shows the various parameters available for you to customize the IP. This interface varies, depending on the IP you select, and can include one or more tabs in which to enter values. Also, the Customize IP dialog box includes an IP symbol and tabs for setting configuration options for the specific IP. The Vivado IDE writes these configuration options to the <ip_name>.xci file, and stores them as properties on the IP object. The IP symbol supports zoom, re-size, and auto-fit options that are consistent with the schematic viewer canvas in Vivado IDE. The following figure shows the Customize IP interface for the FIFO Generator IP. In the IP dialog box, set the following options: - Documentation > Product Guide: Open the product guide for the selected IP. - IP Location: Specify the location on disk to store the IP. This location can only be adjusted per IP in an RTL-based project. This functionality is not provided in a Managed IP project because Managed IP builds a repository. - Switch to Defaults: Displays a window asking if you want to reset all configuration options back to their default starting point. Customize the IP as needed for your design, and click OK. Xilinx recommends that when specifying a numerical value, you use hexadecimal to speed processing. Tcl Commands for IP The following are example Tcl commands for use with IP. Example for Creating an IP Customization You can also create IP customizations using the create_ip Tcl command. For example: create_ip -name fifo_generator -version 12.0 -vendor xilinx.com -library ip \ -module_name fifo_gen You must specify either -vlnv or all of -vendor, -library, -name, and -version (see the create_ip command). Example for Setting IP Properties To define the different configuration settings of the IP, use the set_property command. For example: set_property CONFIG.Input_Data_Width 12 [get_ips fifo_gen] Example for Reporting IP Properties To get a list of properties available for an IP, use the report_property command. For example: report_property CONFIG.* [get_ips <ip_name>] Configuration properties start with CONFIG. Example of a Query of an IP Customization Property To determine if an IP customization property is set to the default or set by the user, see the following example: # Find the read data count width. get_property CONFIG.Read_Data_Count_Width [get_ips char_fifo] 10 # Determine the source of CONFIG.Read_Data_Count_Width property. # See that this is the default value get_property CONFIG.Read_Data_Count_Width.value_src [get_ips char_fifo] default # Get the output data width. get_property CONFIG.Output_Data_Width [get_ips char_fifo] 8 # Determine the source of CONFIG.Output_Data_Width property. # See that this is set by the user. get_property CONFIG.Output_Data_Width.value_src [get_ips char_fifo] user
https://docs.xilinx.com/r/2021.2-English/ug896-vivado-ip/Creating-an-IP-Customization
2022-06-25T11:04:08
CC-MAIN-2022-27
1656103034930.3
[]
docs.xilinx.com
Creating Custom Drush Commands Creating a new Drush command or porting a legacy command is easy. Follow the steps below. - Run drush generate drush-command-file. - Drush will prompt for the machine name of the module that should "own" the file. - (optional) Drush will also prompt for the path to a legacy command file to port. See tips on porting commands to Drush 9. - The module selected must already exist and be enabled. Use drush generate module-standard to create a new module. - Drush will then report that it created a commandfile, a drush.services.yml file and a composer.json file. Edit those files as needed. - Use the classes for the core Drush commands at /src/Drupal/Commands as inspiration and documentation. - See the dependency injection docs for interfaces you can implement to gain access to Drush config, Drupal site aliases, etc. - Once your two files are ready, run drush cr to get your command recognized by the Drupal container. Specifying the Services File A module's composer.json file stipulates the filename where the Drush services (e.g. the Drush command files) are defined. The default services file is drush.services.yml, which is defined in the extra section of the composer.json file as follows: "extra": { "drush": { "services": { "drush.services.yml": "^9" } } } If for some reason you need to load different services for different versions of Drush, simply define multiple services files in the services section. The first one found will be used. For example: "extra": { "drush": { "services": { "drush-9-99.services.yml": "^9.99", "drush.services.yml": "^9" } } } In this example, the file drush-9-99.services.yml loads commandfile classes that require features only available in Drush 9.99 and later, and drush.services.yml loads an older commandfile implementation for earlier versions of Drush. It is also possible to use version ranges to exactly specify which version of Drush the services file should be used with (e.g. "drush.services.yml": ">=9 <9.99"). In Drush 9, the default services file, drush.services.yml, will be used in instances where there is no services section in the Drush extras of the project's composer.json file. In Drush 10, however, the services section must exist, and must name the services file to be used. If a future Drush extension is written such that it only works with Drush 10 and later, then its entry would read "drush.services.yml": "^10", and Drush 9 would not load the extension's commands. It is nevertheless recommended that Drush 9 extensions explicitly declare their services file with an appropriate version constraint. Altering Drush Command Info Drush command info (annotations) can be altered from other modules. This is done by creating and registering 'command info alterers'. Alterers are class services that are able to intercept and manipulate an existing command annotation. In order to alter an existing command info, follow the steps below: - In the module that wants to alter a command info, add a service class that implements the \Consolidation\AnnotatedCommand\CommandInfoAltererInterface. - In the module's drush.services.yml, declare a service pointing to this class and tag the service with the drush.command_info_alterer tag. - In that class, implement the alteration logic in the alterCommandInfo() method. - Along with the alter code, it's strongly recommended to log a debug message explaining what exactly was altered. This makes things easier on others who may need to debug the interaction of the alter code with other modules.
Also it's a good practice to inject the logger in the class constructor. For an example, see the alterer class provided by the testing 'woot' module: tests/functional/resources/modules/d8/woot/src/WootCommandInfoAlterer.php. Site-Wide Drush Commands Commandfiles that are installed in a Drupal site and are not bundled inside a Drupal module are called 'site-wide' commandfiles. Site-wide commands may either be added directly to the Drupal site's repository (e.g. for site-specific policy files), or via composer require. See the examples/Commands folder for examples. In general, it's better to use modules to carry your Drush commands, as module-based commands may participate in Drupal's dependency injection via the drush.services.yml. If you still prefer using site-wide commandfiles, here are some examples of valid commandfile names and namespaces: - Simple - Filename: $PROJECT_ROOT/drush/Commands/ExampleCommands.php - Namespace: Drush\Commands - Nested in a subdirectory committed to the site's repository - Filename: $PROJECT_ROOT/drush/Commands/example/ExampleCommands.php - Namespace: Drush\Commands\example - Nested in a subdirectory installed via a Composer package - Filename: $PROJECT_ROOT/drush/Commands/contrib/dev_modules/ExampleCommands.php - Namespace: Drush\Commands\dev_modules Installing commands as part of a Composer project requires that the project's type be drupal-drush, and that the installer-paths in the Drupal site's composer.json file contains "drush/Commands/contrib/{$name}": ["type:drupal-drush"]. It is also possible to commit projects with a similar layout using a directory named custom in place of contrib; if this is done, then the directory custom will not be considered to be part of the commandfile's namespace. If a site-wide commandfile is added via a Composer package, then it may declare any dependencies that it may need in its composer.json file. Site-wide commandfiles that are committed directly to a site's repository only have access to the dependencies already available in the site. Site-wide commandfiles should declare their Drush version compatibility via a conflict directive. For example, a Composer-managed site-wide command that works with both Drush 8 and Drush 9 might contain something similar to the following in its composer.json file: "conflict": { "drush/drush": "<8.2 || >=9.0 <9.6 || >=10.0", } Using require in place of conflict is not recommended. A site-wide commandfile should have tests that run with each (major) version of Drush that is supported. You may model your test suite after the example drush extension project, which works on Drush ^8.2 and ^9.6.
If you really need a command or commands that are not part of any Drupal site, consider making a stand-alone script or custom .phar instead. See ahoy, Robo and g1a/starter as potential starting points.
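Taken together, the command-creation steps at the top of this page boil down to a short shell session; a sketch using only the commands mentioned above (the module and generated file names are placeholders):
drush generate drush-command-file   # scaffold the commandfile, drush.services.yml and composer.json for your module
# edit the generated commandfile and drush.services.yml as needed
drush cr                            # rebuild the Drupal container so the new command is recognized
drush list | grep -i my_module      # confirm the new command shows up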
https://docs.drush.org/en/9.x/commands/
2022-06-25T11:56:42
CC-MAIN-2022-27
1656103034930.3
[]
docs.drush.org
Important: The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy. New Features - Web transaction types (seen in tooltips) should now be more meaningful. When you mouse over a transaction, instead of showing Action it will show ASP, MVC, WebAPI, WebService, WCF, etc. Upgrade Notes Important: This improvement to transaction naming may impact the transaction names of existing metrics, including Key Transactions, Alert on Anything metrics, and Insights queries based on transaction names. These would need to be recreated with the new transaction name after the upgrade. Fixes - The agent will now fall back to using the proxy configured for the executing user when talking to New Relic.
https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/net-release-notes/net-agent-44600/?q=
2022-06-25T11:27:01
CC-MAIN-2022-27
1656103034930.3
[]
docs.newrelic.com
From the ITSI main menu, click Configuration > Multi-KPI Alerts to create a composite alert. For background, see Overview of correlation searches in ITSI.
https://docs.splunk.com/Documentation/ITSI/4.9.6/SI/MKA
2022-06-25T11:01:53
CC-MAIN-2022-27
1656103034930.3
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Use the Predictive Analytics dashboard in ITSI The Predictive Analytics dashboard helps predict the future health score of a selected service in IT Service Intelligence (ITSI). Machine learning algorithms use historical KPI and service health score data to model what a service's health might look like in 30 minutes. Perform root cause analysis by viewing the top five KPIs contributing to a potential outage and how those KPIs are likely to change in the next 30 minutes. Based on the findings from your root cause analysis, you can take the necessary steps to prevent an imminent service outage before it happens. For example, if you discover that the top offending KPI is Database Service Requests, you can notify the Database team about the predicted outage and request that they address the problem. Prerequisites To use the Predictive Analytics dashboard, you first need to train and test a predictive model for the selected service. See Overview of Predictive Analytics in ITSI in the Service Insights manual. Steps To predict a service's health score, select a service, then a model. The "Recommended" model is the one that performed the best overall according to industry-standard metrics like R2 and RMSE. After you select a model, the dashboard populates with the predicted health score for that service. Click Cause Analysis to view and analyze the top five KPIs that could be contributing to that service's predicted health score. For information about the algorithms used to generate the models, see Choose a machine learning algorithm in the Service Insights manual. Dashboard panels Use the following panels to aid in root cause analysis:!
https://docs.splunk.com/Documentation/ITSI/4.9.6/SI/PADashboard
2022-06-25T11:04:46
CC-MAIN-2022-27
1656103034930.3
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) array(['/images/thumb/7/77/Usagedashboard.png/800px-Usagedashboard.png', 'Usagedashboard.png'], dtype=object) ]
docs.splunk.com
$ openshift-install create ignition-configs --dir $HOME/testconfig Fedora CoreOS (FCOS) represents the next generation of single-purpose container operating system technology by providing the quality standards of Fedora with automated, remote upgrade features. FCOS is supported only as a component of OKD 4.9. If you install your cluster on infrastructure that the installation program provisions, FCOS images are downloaded to the target platform during installation. Suitable Ignition config files, which control the FCOS configuration, are also downloaded and used to deploy the machines. CRI-O container runtime: CRI-O is the only container engine available within OKD clusters. Set of container tools: For tasks such as building, copying, and otherwise managing containers, FCOS replaces the Docker CLI tool with a compatible set of container tools (such as podman, skopeo, and crictl). bootupd firmware and bootloader updater: Package managers and hybrid systems such as rpm-ostree do not update the firmware or the bootloader. With bootupd, FCOS users have access to a cross-distribution, system-agnostic update tool that manages firmware and boot updates in UEFI and legacy BIOS boot modes that run on modern architectures, such as x86_64, ppc64le, and aarch64. For information about how to install bootupd, see the documentation for Updating the bootloader using bootupd. Updated through the Machine Config Operator: In OKD, the Machine Config Operator handles operating system upgrades; FCOS upgrades are performed during cluster updates. FCOS is designed to deploy on an OKD cluster with a minimal amount of user configuration; after installation, changes are applied through cluster-level mechanisms such as DaemonSet and Deployment workload objects and machine config file system objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up. Differences between FCOS installations: Ignition manipulates disks during initial configuration, while tools such as cloud-init are run by the init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process. Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OKD cluster takes over ongoing configuration; Ignition can also do things that the Machine Config Operator cannot, such as set up systems on bare metal from scratch using features such as PXE boot. In the bare metal case, the Ignition config is injected into the boot partition so that Ignition can find it and configure the system correctly. The Ignition process for an FCOS machine in an OKD cluster involves the following steps: The machine gets its Ignition config file. Control plane machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a control plane machine. Ignition applies the configuration (disk partitions, file systems, files, and users) and then starts the init process of the new machine, which in turn starts up all other services on the machine that run during system boot. At the end of this process, the machine is fully configured and ready to join the cluster. The following is an extract from an example Ignition config file: { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc...." ] } ] }, "storage": { "files": [ { "overwrite": false, "path": "/etc/motd", "user": { "name": "root" }, "append": [ { "source": "data:text/plain;charset=utf-8;base64,...==" } ], "mode": 420 }, ... Decoding the base64-encoded append source (for example, by piping it through base64 --decode) shows the /etc/motd text: This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g.
journalctl -b -f -u release-image.service
$ oc get machineconfigs
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m
00-master-ssh 4.0.0-0.150.0.0-dirty 16m
00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m
00-worker-ssh 4.0.0-0.150.0.0-dirty 16m
01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m
01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m
master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m
worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m
The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine configs, the last one applied for that machine config pool wins. To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example: $ oc describe machineconfigs 01-worker-container-runtime | grep Path: Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf Be sure to give the machine config file a later name (such as 10-worker-container-runtime). Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster.
https://docs.okd.io/4.9/architecture/architecture-rhcos.html
2022-06-25T10:32:25
CC-MAIN-2022-27
1656103034930.3
[]
docs.okd.io
If this view does not exist or no rows are found, user-defined international character sets are not available (see CharSetsV), or have not been assigned as host defaults. In this case, the standard default is used (EBCDIC for IBM mainframe hosts and ASCII for all others). Possible Values for DefaultCharSet - EBCDIC - ASCII - The name of a user-defined character set as displayed in the CharSets view.
https://docs.teradata.com/r/Teradata-VantageTM-Data-Dictionary/July-2021/Views-Reference/HostsInfoV/Usage-Notes
2022-06-25T11:02:35
CC-MAIN-2022-27
1656103034930.3
[]
docs.teradata.com
Get a live account Get a live account and start accepting payments. Step 1: Offer & contracting Before we can onboard you, we require more information about your business and your specific payment requirements. You can also contact our Sales department for further support. Once we have received the request, a dedicated Sales Manager will contact you to align on topics including industry, required solution, payment methods, volumes, and ticket sizes. After this, you will receive a customized offer for your dedicated setup. After receiving the signed offer, our Business Operations team sends you the contract package along with information on all the necessary Know-Your-Customer (KYC) documents (such as commercial registration, passport copies, and shareholder information). If something is missing or outdated, Business Operations will inform you accordingly and assist you during the onboarding process. Step 2: Risk & compliance checks After successful review of all submitted documents, our onboarding and risk department will conduct further risk and compliance checks and trigger the live setup. If further clarification is required, Business Operations will get in contact with you. Step 3: Sign the contract After verifying the documents and other checks, you can sign the contract with Unzer. Step 4: Create Unzer Insights live account After your live account is set up, you will receive an email from Unzer Insights. This starts the registration process for the live version of the application. Some of the operations that you can manage using Unzer Insights are: - monitor your executed live transactions - manage live transactions - create and manage reports - view statistics on the dashboard - create additional users to access the portal Learn more about Unzer Insights Step 5: Get live API keys All transactions are authenticated using the API keys. These are available in Unzer Insights. Using your live API keys, you will be able to perform live transactions and accept real-time payments. Learn more about Authentication
https://docs.unzer.com/get-started/get-live-account/
2022-06-25T11:15:32
CC-MAIN-2022-27
1656103034930.3
[]
docs.unzer.com
Pre- and Post-installation Disk Layout reference¶ Source directory This page describes the intended layout of build and install folders, in particular with respect to the Filesystem Hierarchy standard (FHS). e.g. for package ‘pkg’: src/ pkg/ CMakeLists.txt package.xml # contains inter-package and system dependencies # as specified in `REP 127 <>`_ include/ pkg/ header.hpp otherheader.hpp msg/ PkgMsg.msg src/ pkg/ __init__.py module.py CMakeLists.txt source.cpp srv/ PkgSrv.srv Todo Mention what happens with a manifest.xml file for backward compatibility with rosbuild Build directory build/ CATKIN_IGNORE # an empty file to guide catkin to not search in subfolders for package.xml files CMakeCache.txt cmake_install.cmake Makefile devel/ # the layout of that folder follows the (see install directory) .catkin # identifies folder as a catkin devel/install space # it contains a semicolon separated list of source folders if the workspace is a devel space env.sh setup.bash setup.sh setup.zsh _setup_util.py # functions for the setup shell scripts bin/ # just "anointed" central binaries (i.e. rosrun) etc/ # environment hooks, configuration files catkin/ profile.d/ 10.ros.sh # e.g. defining the ROS_MASTER_URI langs/ # to determine which message generators are available roscpp # contains "C++" rospy # contains "Python" include/ # header files of generated code lib/ # all compiled libraries go here pkgconfig/ # generated .pc files for all packages pythonX.Y/ dist-packages/ pkg/ # generated Python code __init__.py # generated file to relay imports into src directory of that package pkg/ # compiled binaries of that package share/ # all package-specific but architecture independent files pkg/ # one folder per package cmake/ # generated pkgConfig.cmake and pkgConfig-version.cmake for find_package() CMakeFiles/ pkgN/ # the usual CMake-generated stuff catkin_generated/ # files generated by catkin cmake/ CMakeFiles cmake_install.cmake installspace/ # files generated by catkin which will be installed Makefile ... Install directory The layout of the install directory follows the Filesystem Hierarchy Standard (FHS). /opt/ros/groovy/ # defined by the CMAKE_INSTALL_PREFIX # very similar to the devel space folder # therefore in the following only the differences are mentioned .catkin # identifies folder as a catkin devel/install space # the file is empty which indicates that it is an installspace lib/ pythonX.Y/ dist-packages/ pkg/ # besides the generated Python code # contains the Python source code of package include/ # besides the generated header files # contains all header files from the source directories share/ pkg/ # further resources (i.e. icons) copied from source directory manifest.xml # provide export information for legacy rosmake based stacks/packages action/ msg/ Foo.msg Bar.msg something.launch # the rest is as the package installs it stacks/ dry_stack1 # packages built via legacy rosmake dry_stack2 # packages built via legacy rosmake
http://docs.ros.org/en/kinetic/api/catkin/html/dev_guide/layout.html
2022-06-25T11:38:17
CC-MAIN-2022-27
1656103034930.3
[]
docs.ros.org
makevars¶ Format¶ makevars(x, vnames, xnames)¶ - Parameters x (NxK matrix) – columns to be converted into individual vectors vnames (string or Mx1 character vector) – names of global vectors to create. If 0, all names in xnames will be used. xnames (string or Kx1 character vector) – names to be associated with the columns of the matrix x Examples¶ Two global vectors, called age and pay, are created from the columns of x. This is the same as the example above, except that strings are used for the variable names. Remarks¶ If xnames = 0, the prefix X will be used to create names. Therefore, if there are 9 columns in x, the names will be X1-X9, if there are 10, they will be X01-X10, and so on. If xnames or vnames is a string, the individual names must be separated by spaces or commas: vnames = "age pay sex"; Since these new vectors are created at execution time, the compiler will not know they exist until after makevars() has executed once. This means that you cannot access them by name unless you previously clear them or otherwise add them to the symbol table. (See setvars() for a quick interactive solution to this.) This function is the opposite of mergevar(). Source¶ vars.src See also Functions mergevar(), setvars()
https://docs.aptech.com/gauss/makevars.html
2022-06-25T11:02:05
CC-MAIN-2022-27
1656103034930.3
[]
docs.aptech.com
Windows Container Image Scanning [BETA] This doc applies only to the Legacy Scanning engine. Make sure you are using the correct documentation: Which Scanning Engine to Use Overview Sysdig provides a standalone vulnerability scanning and policy engine for Windows containers called the Scanning Inspector. It can be used on both Windows and Linux hosts. This is a standalone scanning engine. There is no centralized UI, management, or historical data. These features are planned for a future release. Features Identify Windows container image vulnerabilities from: - Windows OS CVEs Windows or Linux hosts Reports in JSON and PDF Policy support Severity Fix available Days since fixed Ways to Use The Windows Scanning Inspector can be integrated into the CI/CD pipeline or deployed ad hoc during development. CI/CD Pipeline The image below shows how the Scanning Inspector fits within a development pipeline. A policy can pass or fail the workflow and provide a PDF or JSON report for each CI/CD job. Ad Hoc Scanning Developers can run the Windows Scanning Inspector anywhere Docker can be run: a machine (Mac, Windows, or Linux), VM, or Cloud. It provides immediate feedback on Windows OS or .NET vulnerabilities, allowing quick mitigation of known security vulnerabilities. Installation Prerequisites Request a Quay secret from your Sysdig sales agent. Install Scanning Inspector Use the provided secret to authenticate with Quay: PULL_SECRET="enter secret" AUTH=$(echo $PULL_SECRET | base64 --decode | jq -r '.auths."quay.io".auth' | base64 --decode) QUAY_USERNAME=${AUTH%:*} QUAY_PASSWORD=${AUTH#*:} docker login -u "$QUAY_USERNAME" -p "$QUAY_PASSWORD" quay.io Pull the Scanning Inspector component for Windows or Linux: Windows Host/Kernel: quay.io/sysdig/scanning-inspector-windows:latest Linux Host/Kernel: quay.io/sysdig/scanning-inspector-linux:latest Run the --help command to see the parameters available for the Scanning Inspector. docker run --rm -v $(pwd):/outdir quay.io/sysdig/scanning-inspector-linux:latest --help Parameters for Scanning Inspector The --help command lists the available parameters and their usage. They can be divided into those related to scanning for vulnerabilities and generating a report, and those related to creating policies. Use Cases Scan Remote Image and Save PDF Report In this example, the Inspector should scan a remote image on a Linux host and save the resulting report as a PDF to ./scanResults.pdf docker run --rm -v $(pwd):/outdir quay.io/sysdig/scanning-inspector-linux:latest \ -t pull \ # pull image from remote repo -i mcr.microsoft.com/windows/nanoserver:10.0.17763.1518 \ # inspect container name -f pdf \ # format -o /outdir/scanResults.pdf # output name Scan Local Image Apply Policy Conditions and Generate JSON Report In this example, the Inspector should: Scan a local image on a Windows host Mount the Docker socket to access the local image. This can be done with -v "//./pipe/docker_engine://./pipe/docker_engine" in Windows Apply a policy to specify vulnerabilities with a minimum severity of high and a minimum number of days after the vulnerability fix is available set to 7. If the scan does not pass, the container will have an exit 1 error.
The report is in JSON docker run --rm -v $(pwd):/outdir -v "//./pipe/docker_engine://./pipe/docker_engine" quay.io/sysdig/scanning-inspector-windows:latest \ -t daemon \ # Use local daemon for image scan -i nanoserver:10.0.17763.1518 # local image name -min_severity high # Any sev high or greater CVEs will fail the image scan policy -min_days_fix 7 # Only fail scan if found vulnerabilities have a fix for more than 7 days -f json \ # format -o /outdir/scanResults.json # output name
https://docs.sysdig.com/en/docs/sysdig-secure/scanning/windows-container-image-scanning-beta/
2022-06-25T11:07:19
CC-MAIN-2022-27
1656103034930.3
[array(['/image/win_scan_cicd.png', None], dtype=object)]
docs.sysdig.com
change-root-password MemSQL Helios does not support this command. Change the root password for a node ID --password STRING The new database root password for the node sdb
https://archived.docs.singlestore.com/v7.0/reference/memsql-tools-reference/memsqlctl-reference/memsqlctl-change-root-password/
2022-06-25T10:58:54
CC-MAIN-2022-27
1656103034930.3
[]
archived.docs.singlestore.com
ansible.builtin.yum module – Manages packages with the yum package manager Note This module is part of ansible-core and included in all Ansible installations. In most cases, you can use the short module name yum even without specifying the collections: keyword. However, we recommend you use the FQCN for easy linking to the module documentation and to avoid conflicting with other collections that may have the same module name. Synopsis Installs, upgrades, downgrades, removes, and lists packages and groups with the yum package manager. This module only works on Python 2. If you require Python 3 support see the ansible.builtin.dnf module. Note This module has a corresponding action plugin. Requirements The below requirements are needed on the host that executes this module. yum Parameters Attributes Notes Note When used with a loop: each package will be processed individually; it is much more efficient to pass the list directly to the name option. In versions prior to 1.9.2 this module installed and removed each package given to the yum module separately. This caused problems when packages specified by filename or url had to be installed or removed together. In 1.9.2 this was fixed so that packages are installed in one yum transaction. However, if one of the packages adds a new yum repository that the other packages come from (such as epel-release) then that package needs to be installed in a separate task. This mimics yum’s command line behaviour. Yum itself has two types of groups. “Package groups” are specified in the rpm itself while “environment groups” are specified in a separate file (usually by the distribution). Unfortunately, this division becomes apparent to ansible users because ansible needs to operate on the group of packages in a single transaction and yum requires groups to be specified in different ways when used in that way. Package groups are specified as “@development-tools” and environment groups are “@^gnome-desktop-environment”. Use the “yum group list hidden ids” command to see which category of group the group you want to install falls into.
The yum module does not support clearing yum cache in an idempotent way, so it was decided not to implement it, the only method is to use command and call the yum command directly, namely “command: yum clean all” Examples - name: Install the latest version of Apache ansible.builtin.yum: name: httpd state: latest - name: Install Apache >= 2.4 ansible.builtin.yum: name: httpd>=2.4 state: present - name: Install a list of packages (suitable replacement for 2.11 loop deprecation warning) ansible.builtin.yum: name: - nginx - postgresql - postgresql-server state: present - name: Install a list of packages with a list variable ansible.builtin.yum: name: "{{ packages }}" vars: packages: - httpd - httpd-tools - name: Remove the Apache package ansible.builtin.yum: name: httpd state: absent - name: Install the latest version of Apache from the testing repo ansible.builtin.yum: name: httpd enablerepo: testing state: present - name: Install one specific version of Apache ansible.builtin.yum: name: httpd-2.2.29-1.4.amzn1 state: present - name: Upgrade all packages ansible.builtin.yum: name: '*' state: latest - name: Upgrade all packages, excluding kernel & foo related packages ansible.builtin.yum: name: '*' state: latest exclude: kernel*,foo* - name: Install the nginx rpm from a remote repo ansible.builtin.yum: name: state: present - name: Install nginx rpm from a local file ansible.builtin.yum: name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm state: present - name: Install the 'Development tools' package group ansible.builtin.yum: name: "@Development tools" state: present - name: Install the 'Gnome desktop' environment group ansible.builtin.yum: name: "@^gnome-desktop-environment" state: present - name: List ansible packages and register result to print with debug later ansible.builtin.yum: list: ansible register: result - name: Install package with multiple repos enabled ansible.builtin.yum: name: sos enablerepo: "epel,ol7_latest" - name: Install package with multiple repos disabled ansible.builtin.yum: name: sos disablerepo: "epel,ol7_latest" - name: Download the nginx package but do not install it ansible.builtin.yum: name: - nginx state: latest download_only: true Collection links Issue Tracker Repository (Sources) Communication
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/yum_module.html
2022-06-25T10:24:15
CC-MAIN-2022-27
1656103034930.3
[]
docs.ansible.com
madepaceDataSource" class="org.openspaces.persistency.hibernate.DefaultHibernateSpaceDataSourceFactoryBean"> <property name="sessionFactory" ref="sessionFactory"/> <property name="initialLoadChunkSize" value="2000"/> <.
https://docs.gigaspaces.com/xap/10.2/tut-java/java-tutorial-part7.html
2022-06-25T11:15:02
CC-MAIN-2022-27
1656103034930.3
[]
docs.gigaspaces.com
- Integrate your project with Terraform - GitLab-managed Terraform state - Terraform module registry - Terraform integration in merge requests - The GitLab Terraform provider - Create a new cluster through IaC - Related topics Infrastructure as Code with Terraform and GitLab To manage your infrastructure with GitLab, you can use the integration with Terraform to define resources that you can version, reuse, and share: - Manage low-level components like compute, storage, and networking resources. - Manage high-level components like DNS entries and SaaS features. - Incorporate GitOps deployments and Infrastructure-as-Code (IaC) workflows. - Use GitLab as a Terraform state storage. - Store and use Terraform modules to simplify common and complex infrastructure patterns. Watch a video overview of the features GitLab provides with the integration with Terraform. Integrate your project with Terraform SAST test was introduced in GitLab 14.6. The integration with GitLab and Terraform happens through GitLab CI/CD. Use an include attribute to add the Terraform template to your project and customize from there. To get started, choose the template that best suits your needs: All templates: - Use the GitLab-managed Terraform state as the Terraform state storage backend. - Trigger four pipeline stages: test, validate, build, and deploy. - Run Terraform commands: test, validate, plan, and plan-json. It also runs the apply only on the default branch. - Run the Terraform SAST scanner. Latest Terraform template The latest template is compatible with the most recent GitLab version. It provides the most recent GitLab features, but can potentially include breaking changes. You can safely use the latest Terraform template: - If you use GitLab.com. - If you use a self-managed instance updated with every new GitLab release. Stable and advanced Terraform templates If you use earlier versions of GitLab, you might face incompatibility errors between the GitLab version and the template version. In this case, you can opt to use one of these templates: - The stable template with a skeleton that you can build on top of. - The advanced template to fully customize your setup. Use a Terraform template To use a Terraform template: - On the top bar, select Menu > Projects and find the project you want to integrate with Terraform. - On the left sidebar, select Repository > Files. Edit your .gitlab-ci.yml file, use the include attribute to fetch the Terraform template: include: # To fetch the latest template, use: - template: Terraform.latest.gitlab-ci.yml # To fetch the stable template, use: - template: Terraform/Base.gitlab-ci.yml # To fetch the advanced template, use: - template: Terraform/Base.latest.gitlab-ci.yml Add the variables as described below: variables: TF_STATE_NAME: default TF_CACHE_KEY: default # If your terraform files are in a subdirectory, set TF_ROOT accordingly. For example: # TF_ROOT: terraform/production - (Optional) Override in your .gitlab-ci.yaml file the attributes present in the template you fetched to customize your configuration.
The GitLab Terraform provider The GitLab Terraform provider is a plugin for Terraform that facilitates managing GitLab resources such as users, groups, and projects. Its documentation is available on Terraform. Create a new cluster through IaC - Learn how to create a new cluster on Amazon Elastic Kubernetes Service (EKS). - Learn how to create a new cluster on Google Kubernetes Engine (GKE). Related topics - Terraform images. - Troubleshooting issues with GitLab and Terraform.
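As a rough sketch of how the GitLab-managed Terraform state mentioned above is often wired up from a local machine (the project ID, state name, credentials, and the exact API path are assumptions here, not taken from this page; lock/unlock backend settings are omitted for brevity, so check the GitLab Terraform state docs for your version):
# assumes your configuration declares: terraform { backend "http" {} }
terraform init \
  -backend-config="address=https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/${TF_STATE_NAME}" \
  -backend-config="username=${GITLAB_USERNAME}" \
  -backend-config="password=${GITLAB_ACCESS_TOKEN}"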
https://docs.gitlab.com/14.10/ee/user/infrastructure/iac/
2022-06-25T11:42:31
CC-MAIN-2022-27
1656103034930.3
[]
docs.gitlab.com
The integrator.io bundle for NetSuite v1.19.1.0 was released on October 19, 2021 and will be applied to customers’ Production accounts in phases throughout the coming few weeks. The release is immediately available for Sandbox accounts, and you must manually upgrade the bundle in your NetSuite Sandbox Environment. (In compliance with NetSuite policies, Celigo cannot automatically update bundles in Sandbox accounts, only in Production accounts.) Contents What’s enhanced Support for NetSuite OSS Nexus European Union customers running the One Stop Shop (OSS) tax configuration can take advantage of full integration support: - Importing Sales Orders - The Contact Roles sublist is available for the Customer record Previously, body level imports would reset the line-level fields, generating errors for flows with the OSS setup. What’s fixed Additional fixes, including for customer-reported issues: - NetSuite Contacts and Contact Roles customer sublists were not available for exports or import field mapping - In NetSuite custom forms with radio buttons, discount types were not populating flat and percentage values correctly - Syncing gift certificates from NetSuite to Salesforce Commerce Cloud would update the wrong item, due to a nlapiSubmitField function limitation for inbound shipment on custom fields
https://docs.celigo.com/hc/en-us/articles/4408619310107-Release-notes-for-NetSuite-Bundle-v1-19-1-0-October-2021
2022-06-25T11:24:52
CC-MAIN-2022-27
1656103034930.3
[]
docs.celigo.com
Title Neighborhood Residents' Production of Order: The Effects of Collective Efficacy on Responses to Neighborhood Problems Document Type Article Abstract. Recommended Citation Wells, William, Joseph A. Schafer, Sean P. Varano, and Timothy S. Bynum. 2006. "Neighborhood Residents’ Production of Order: The Effects of Collective Efficacy on Responses to Neighborhood Problems." Crime & Delinquency 52(4): 523-550. In: Crime & Delinquency, Vol. 52, No. 4, 2006.
https://docs.rwu.edu/sjs_fp/33/
2017-12-11T03:43:36
CC-MAIN-2017-51
1512948512121.15
[]
docs.rwu.edu
Running the application on multiple servers, for performance, resilience or failover reasons, is straightforward. The application scales out by simply adding more servers behind the load balancer. Note that each of the servers will connect to the same relational database. While scaling out by adding more servers, don’t forget to also make sure the database can handle the additional load.
http://docs.alfresco.com/process-services1.7/topics/multi_node_clustered_setup.html
2017-12-11T03:46:01
CC-MAIN-2017-51
1512948512121.15
[array(['https://docs.alfresco.com/sites/docs.alfresco.com/files/public/images/docs/defaultprocess_services1_7/multi-node-setup.png', 'images/multi-node-setup.png'], dtype=object) ]
docs.alfresco.com
Formal Hypothesis Testing A study comparing the extent to which Chinese versus Mexicans feel politically disenfranchised reported the following results:[1] The obvious interpretation of such a result is that the Chinese are much more likely to consider themselves to be disenfranchised than are Mexicans. However, the study only spoke to 499 Chinese, yet there are more than a billion Chinese people living in China. Similarly, the 287 Mexican respondents are only a minuscule proportion of the Mexican population. Perhaps if a different 499 Chinese had been interviewed a substantially different result would have been obtained. Possibly even a result lower than the 25% figure reported for the Mexicans. Similarly, the Mexican figure could be found to be completely different with a different study. When analyzing a survey it is important to keep a clear distinction between what has been recorded versus what is true. The recorded results are 56% for China and 25% for Mexico. Thus, the study has found that the extent of disenfranchisement recorded in China is thirty-one percentage points higher than that recorded in Mexico (i.e., 56% - 25% = 31%). In this case, the 'truth' is the actual proportion of Chinese people and Mexicans that feel disenfranchised. We have no way of knowing these true proportions. It is possible that the difference between China and Mexico is 0%. Or, Mexicans may feel even more disenfranchised than the Chinese. The difference between what we have estimated (the difference of 31%) and the truth (the actual difference in the disenfranchisement of Chinese and Mexicans) is known as the total survey error. As we generally have no way of knowing the truth we also generally have no way of knowing the extent of the total survey error. However, a variety of tools have been developed that allow us to approximate the total survey error. If we make some assumptions we can quantify the degree of survey error that can occur as a result of only talking to a fraction of the people in a population. Most commonly, researchers faced with data such as this make the assumption that people have been randomly selected to participate in the study. The technical term for this assumption is simple random sampling. In the case of the study we are discussing, for this assumption to be true requires that: - All the people in China had an equal chance of being selected in the study and that the 499 that did participate were randomly selected from the entire Chinese population. - All the people in Mexico had an equal chance of being selected in the study and that the 287 that did participate were randomly selected from the entire Mexican population. By making this somewhat heroic assumption we can use probability theory to make some conclusions about the extent of total survey error that can be anticipated to occur as a result of the random selection of people to participate in a survey. The basic process is as follows: - Stipulate something that we are trying to disprove. This is usually called the null hypothesis. - Compute the probability that we would have recorded the result that was obtained, or a more extreme result, if the null hypothesis was indeed true. This probability is referred to as the p-value, where p is for probability. - Conclude that the null hypothesis is false if the p-value is very small. Most commonly, 'small' is defined as less than or equal to 0.05 (or, to use the jargon, a 0.05 'significance level' is used).
In our example: - Our null hypothesis is that the true difference between the perceived disenfranchisement of the Chinese and Mexicans is 0% (i.e., they are the same). - The probability that we would observe a difference of 31% or more if the true difference is 0% is, given our sample sizes of 499 Chinese and 287 Mexicans, essentially 0. (How such computations are performed is discussed later at Testing Differences Between Proportions.) - As 0 is a very small p-value we conclude that it is wrong to believe there is no difference between the Chinese and the Mexicans and thus, it seems that the difference is not a fluke and can be relied upon. That is, we can conclude that the difference between the countries is 'statistically significant'. Now let us consider a different example, comparing preference for Coca-Cola based on age: The formal hypothesis test proceeds as follows: - Our null hypothesis is that the true difference between the preference for Coca-Cola amongst the age groups is 0% (i.e., they are the same). - We have observed a difference of 65%-41%=24%. The probability that we would observe a difference of 24% or more if the true difference is 0% is, given our sample sizes of 43 people and 39 people in the two age bands respectively, 0.026 (i.e., a little under 3 in 100). - It is hard to say if 0.026 is truly small. Ultimately what is small will depend upon context. However, it is smaller than the most common significance level of 0.05 and thus we conclude that it seems that preference for Coca-Cola does differ by age. However, with the following table, we compute a p-value of 0.0793 and conclude that we cannot reject the null hypothesis (i.e., there is insufficient evidence to conclude that age is a determinant of Pepsi preference). Some important conceptual points - The significance testing examples presented above are contradictory. That is, one test concludes that preference for Coca-Cola is related to age while another test of the same sample concludes that preference for Pepsi is not related to age. It is impossible for both of these conclusions to be correct. There is no neat resolution of this problem. The formal terminology is that statistical tests are not transitive (i.e., the maths that is used to compute p-values does from time-to-time lead to contradictory results, even when it is done correctly). While in some situations more complicated statistical theories can provide some assistance, most of the time common sense is the only way to reconcile such contradictions (i.e., taking into account other evidence). - Significance tests are designed to take into account the extent of sampling error in the data. Sampling error is only one component of total survey error. Total survey error is also determined by measurement error (e.g., ambiguous wordings of questions). (What is described as sampling error on this page is a broader definition than is typical; the typical definition says that total survey error is the sum of sampling error, non-response error, coverage error and measurement error; on this page and the associated pages a simpler model, in which total survey error is the sum of sampling error and measurement error, is employed). - Lots of different statistical tests have been developed for computing p-values, such as t-tests, z-tests and chi-square tests - please refer to a statistical textbook or website for more information.
In general, testing is left to computers and only a tiny proportion of commercial researchers know or understand the formulas that are used (and this is not written as a criticism; the mechanics of how to perform significance testing are pretty low on the list of things that a commercial researcher needs to understand).
- The p-values computed using the standard formulas all assume that only a single test is conducted. That is, they implicitly assume that the user is not conducting multiple tests in a study. This assumption is rarely true. And, when it is not true, the p-value that is computed is under-estimated and the real p-value is much higher; as a result, results that are concluded to be statistically significant often should not be. Fortunately, tools have been developed so that we do not need to make such a heroic and implausible assumption; see Multiple Comparisons (Post Hoc Testing) for more information on this topic.
- Statistical tests make a host of other assumptions (see Technical Assumptions of Tests of Statistical Significance).
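To make the calculations referred to above concrete, here is a small worked example in Python. It is only an illustrative sketch of a two-proportion z-test, not the exact procedure used by the study or by any particular survey package, and the resulting p-values will differ slightly from those quoted above depending on the test variant used (e.g., whether a continuity correction is applied).

from math import erf, sqrt

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the observed difference
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# China (56% of 499) versus Mexico (25% of 287): the p-value is essentially 0.
print(two_proportion_z_test(0.56, 499, 0.25, 287))

# Coca-Cola preference by age (65% of 43 versus 41% of 39): the p-value is roughly 0.03.
print(two_proportion_z_test(0.65, 43, 0.41, 39))

Running this on the China/Mexico figures gives a p-value that is essentially 0, while the Coca-Cola example gives a p-value of roughly 0.03, in line with the conclusions drawn above.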
http://docs.displayr.com/wiki/Formal_Hypothesis_Testing
2017-12-11T04:02:01
CC-MAIN-2017-51
1512948512121.15
[]
docs.displayr.com
Overview

Sunsky Opencart Product Importer - CedCommerce Sunsky Product Importer is an extension developed by CedCommerce for Opencart. With this 'Sunsky Opencart Product Import Extension', sellers may easily import a huge number of products from the Sunsky Marketplace to their Opencart admin panel and their store.

Key Features are as follows:
- This extension enables you to import your products from the Sunsky store into your Opencart store.
- Import products using different filters.
- Import single as well as configurable products from your Sunsky store to the Opencart store.
- Hassle-free dynamic attribute creation.
- Price markup.

Read on to discover more about the uncomplicated procedure of importing Sunsky products to Opencart. A seller needs to make smart decisions in order to make the business a huge success, with unprecedented results.
https://docs.cedcommerce.com/opencart/sunsky-product-importer-opencart-user-guide/
2019-11-12T02:51:35
CC-MAIN-2019-47
1573496664567.4
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', 'search_box'], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) ]
docs.cedcommerce.com
- If ios.supportsTablet: false is configured, your app will still render at phone resolution on iPads and must be usable.
- Use the app.json file to specify the version of your app; there are a few different fields, each with specific functionality:
  - android.versionCode functions as your internal Android version number. This will be used to distinguish different binaries of your app.
  - ios.buildNumber functions as your internal iOS version number, and corresponds to CFBundleVersion. This will be used to distinguish different binaries of your app.
  - Use Constants.nativeAppVersion to access the version value listed above.
  - Use Constants.nativeBuildVersion to access either the android.versionCode or ios.buildNumber value (depending on the current platform).
- Permissions are listed under the android.permissions key in your app.json file. Without it, an app may request, for example, the CAMERA permission upon installation. Your users may be wary of installing since nothing in the app seems to use the camera, so why would it need that permission? To request only the minimum set of permissions, set "permissions": []. To use those in addition to the CAMERA permission, for example, you'd set "permissions": ["CAMERA"].
- iOS permission usage descriptions are set in app.json, for example:
  "infoPlist": {
    "NSCameraUsageDescription": "This app uses the camera to scan barcodes on event tickets."
  },
  as part of the infoPlist configuration. Because these strings are configured at the native level, they will only be published when you build a new binary with expo build.
- To localize these strings, in app.json you can provide "locales": { "ru": "./languages/russian.json" }, where russian.json looks like: { "NSContactsUsageDescription": "Hello Russian words" }
https://docs.expo.io/versions/latest/distribution/app-stores/
2019-11-12T03:12:33
CC-MAIN-2019-47
1573496664567.4
[]
docs.expo.io
Gateway Plus : Child Theme Download Child themes should be used when direct customizations of the theme files are required. We've prepared a child theme that you can use below to help get you started: Gateway Plus Child Theme Download To use the child theme: - Install the parent theme (Gateway Plus) but don't activate it. - Install and activate the child theme. - Any direct customizations you make should be done in the child theme. You can place parent theme files into the child theme for customization and they will be given priority. If updates are made to the parent theme, then the direct customizations you've made won't be overwritten. More information about using WordPress Child themes.
https://docs.rescuethemes.com/article/225-gateway-plus-child-theme-download
2019-11-12T03:26:12
CC-MAIN-2019-47
1573496664567.4
[]
docs.rescuethemes.com
Catch Categories

As already mentioned in the profile section, the admin has to map the Catch Category to the Magento Category to upload products on Catch.com.

To map the Catch categories to the Magento categories:
- Go to the Magento Admin Panel.
- On the top navigation bar, move the cursor over the Catch menu, and then point to the Developer menu. The menu appears as shown in the following figure:
- Click View Catch Category. The Catch Category Listing page appears as shown in the following figure:

On this page, all the Catch Categories details are listed.
https://docs.cedcommerce.com/magento/catch/catch-integration-magento-user-guide-0-0-1/?section=catch-category
2019-11-12T03:05:22
CC-MAIN-2019-47
1573496664567.4
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', 'search_box'], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) ]
docs.cedcommerce.com
SolrMethods

SolrMethods contains one-liners for executing Solr-related operations in Gloop and Groovy such as:
- creating a Solr schema based on a given Gloop Model and vice versa
- creating (bulk inserts also supported), updating, querying, and deleting documents in the index
- fetching the SolrClient of a given Solr core

Example usages
For example usages, please see the examples package and these documents:
https://docs.torocloud.com/martini/latest/developing/one-liner/solr-methods/
2019-11-12T04:27:39
CC-MAIN-2019-47
1573496664567.4
[]
docs.torocloud.com
This sample demonstrates simple content-based routing of messages, where a message is passed through WSO2 EI in the Dumb Client Mode. WSO2 EI acts as a gateway to accept all messages, and then performs mediation and routing based on message properties or content.

Prerequisites
For a list of prerequisites, see Prerequisites to Start the Samples. The sample configuration files are located in the <EI_HOME>/samples/service-bus directory.

To build the sample
Start WSO2 EI with the sample 1 configuration. For instructions on starting a sample configuration, see Starting a Sample Configuration. Run the following command from the <EI_HOME>/samples/axis2Client directory to execute the Stock Quote Client in the Dumb Client Mode.
ant stockquote -Dtrpurl=

Analyzing the output
Analyze the output debug messages for the actions in the Dumb Client Mode. You will see WSO2 EI receiving a message for which WSO2 EI is set as the ultimate receiver. WSO2 EI performs a match to the path /StockQuote based on the To EPR in the following location:
https://docs.wso2.com/display/EI611/Sample+1%3A+Simple+Content-Based+Routing+%28CBR%29+of+Messages
2019-11-12T02:47:28
CC-MAIN-2019-47
1573496664567.4
[]
docs.wso2.com
Advanced troubleshooting for native_osx strategy¶ Note This document is a work in progress. Each time you encounter the scenario below, please revisit this document and report any new findings. The osx_native sync strategy is the fastest sync strategy for docker-sync under Docker4Mac. Unfortunately a recurring issue has emerged where the sync strategy stops functioning. This page is to guide you on how to debug this situation to provide information so that it can be solved. Step 1 - Prepare: Identify the docker-sync container involved¶ First, open your docker-sync.yml file and find the sync that has to do with the code that appears to be failing to sync. For example, if you have two docker-sync mounts like so: syncs: default-sync: src: './default-data/' fullexample-sync: src: './data1' And your file that is not updating is under default-data, then your sync name is default-sync. Run this in your terminal (substitute in your sync name) for use in the remaining steps: DEBUG_DOCKER_SYNC='default-sync' Step 2 - Prepare: A file path to check¶ Next we’re going to assign a file path to a variable for use in the following steps. - Change into your sync directory (in the example cd default-data/) - Prepare the relative path to your file that does not appear to be updating upon save, example some-dir/another-dir/my-file.ext - Run the following command with your path substituted in: DEBUG_DOCKER_FILE='some-dir/another-dir/my-file.ext' Step 3 - Reproduction: Verify your host mount works (host_sync)¶ Run this to verify that your file changes have been synced by OSXFS to the sync-container diff -q "$DEBUG_DOCKER_FILE" <(docker exec "$DEBUG_DOCKER_SYNC" cat "/host_sync/$DEBUG_DOCKER_FILE") Usually this should never get broken at all, if it does, you see one of the following messages, the so called host_sync is broken: Files some-dir/another-dir/my-file.ext and /dev/fd/63 differ diff: some-dir/another-dir/my-file.ext: No such file or directory Step 4 - Reproduction: Verify your changes have been sync by unison (app_sync)¶ Run this to verify that the changes have been sync from host_sync to app_sync on the container (using unison) diff -q "$DEBUG_DOCKER_FILE" <(docker exec "$DEBUG_DOCKER_SYNC" cat "/app_sync/$DEBUG_DOCKER_FILE") If you see a message one of the messages, this so called app_sync is broken: Files some-dir/another-dir/my-file.ext and /dev/fd/63 differ diff: some-dir/another-dir/my-file.ext: No such file or directory If you do not see a message like one of these, then the issue you are encountering is not related to a sync failure and is probably something like caching or some other issue in your application stack, not docker-sync. Step 5 - Reproduction: Unison log¶ If one of the upper errors occurred, please include the unison logs: docker exec "$DEBUG_DOCKER_SYNC" tail -n70 /tmp/unison.log And paste those on Hastebin and include the link in your report Step 6 - Reproduction: Ensure you have no conflicts¶ Put that into your problematic sync container docker-sync.yml config: sync_args: "-copyonconflict -debug verbose" Restart the stack docker-sync-stack clean docker-sync-stack start Now do the file test above and see, if next to the file, in host_sync or app_sync a conflict file is created, its called something like conflict Also then include the log docker exec "$DEBUG_DOCKER_SYNC" tail -n70 /tmp/unison.log And paste those on Hastebin and include the link in your report
https://docker-sync.readthedocs.io/en/latest/troubleshooting/native-osx-troubleshooting.html
2019-11-12T04:44:10
CC-MAIN-2019-47
1573496664567.4
[]
docker-sync.readthedocs.io
Quickstart: Create and assign a custom role In this Intune quickstart, you'll create a custom role with specific permissions for a security operations department. Then you'll assign the role to a group of such operators. There are several default roles that you can use right away. But by creating custom roles like this one, you have precise access control to all parts of your mobile device management system. If you don’t have an Intune subscription, sign up for a free trial account. Prerequisites - To complete this quickstart, you must create a group. Sign in to Intune as a Global Administrator or an Intune Service Administrator. If you have created an Intune Trial subscription, the account you created the subscription with is the Global administrator. Create a custom role When you create a custom role, you can set permissions for a wide range of actions. For the security operations role, we'll set a few Read permissions so that the operator can review a device's configurations and policies. - In Intune, choose Roles > All roles > Add. - Under Add custom role, in the Name box, enter Security operations. - In the Description box, enter This role lets a security operator monitor device configuration and compliance information. - Choose Configure > Corporate device identifiers > Yes next to Read > OK. - Choose Device compliance policies > Yes next to Read > OK. - Choose Device configurations > Yes next to Read > OK. - Choose Organization > Yes next to Read > OK. - Choose OK > Create. Assign the role to a group Before your security operator can use the new permissions, you must assign the role to a group that contains the security user. - In Intune, choose Roles > All roles > Security operations. - Under Intune roles, choose Assignments > Assign. - In the Assignment name box, enter Sec ops. - Choose Member (Groups) > Add. - Choose the Contoso Testers group. - Choose Select > OK. - Choose Scope (Groups) > Select groups to include > Contoso Testers. - Choose Select > OK > OK. Now everyone in the group is a member of the Security operations role and can review the following information about a device: corporate device identifiers, device compliance policies, device configurations, and organization information. Clean up resources If you don't want to use the new custom role any more, you can delete it. Choose Roles > All roles > choose the ellipses next to the role > Delete. Next steps In this quickstart, you created a custom security operations role and assigned it to a group. For more information about roles in Intune, see Role-based administration control (RBAC) with Microsoft Intune To follow this series of Intune quickstarts, continue to the next quickstart. Feedback
https://docs.microsoft.com/en-us/intune/fundamentals/quickstart-create-custom-role
2019-11-12T04:09:38
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
The cache roster provides a flexible interface to the Salt Masters' minion cache to access regular minions over salt-ssh. grains, pillar, mine data matching SDB URLs IPv6 support roster_order per config key default order changed to industry-wide best practices CIDR range selection This roster supports all matching and targeting of the Salt Master. The matching will be done using only the Salt Master's cache. The roster's composition can be configured using roster_order. In the roster_order you can define any roster key and fill it with a parameter overriding the one in roster_defaults: roster_order: host: id # use the minion id as hostname You can define lists of parameters as well, the first result from the list will become the value. # default roster_order: host: - ipv6-private # IPv6 addresses in private ranges - ipv6-global # IPv6 addresses in global ranges - ipv4-private # IPv4 addresses in private ranges - ipv4-public # IPv4 addresses in public ranges - ipv4-local # loopback addresses This is the default roster_order. It prefers IPv6 over IPv4 addresses and private addresses over public ones. The relevant data will be fetched from the cache in-order, and the first match will fill the host key. Other address selection parameters are also possible: roster_order: host: - global|public|private|local # Both IPv6 and IPv4 addresses in that range - 2000::/3 # CIDR networks, both IPv4 and IPv6 are supported Several cached libraries can be selected using the library: `` prefix, followed by the library key. This can be referenced using the same ``: syntax as e.g. pillar.get. Lists of references are also supported during the lookup, as are Salt SDB URLs. This should be especially useful for the other roster keys: roster_order: host: - grain: fqdn_ip4 # Lookup this grain - mine: network.ip_addrs # Mine data lookup works the same password: sdb://vault/ssh_pass # Salt SDB URLs are also supported user: - pillar: ssh:auth:user # Lookup this pillar key - sdb://osenv/USER # Lookup this env var through sdb priv: - pillar: # Lists are also supported - salt:ssh:private_key - ssh:auth:private_key salt.roster.cache. targets(tgt, tgt_type='glob', **kwargs)¶ Return the targets from the Salt Masters' minion cache. All targets and matchers are supported. The resulting roster can be configured using roster_order and roster_default.
https://docs.saltstack.com/en/2018.3/ref/roster/all/salt.roster.cache.html
2019-11-12T03:25:43
CC-MAIN-2019-47
1573496664567.4
[]
docs.saltstack.com
Use the Devices page to perform the following tasks:
- Getting started with infrastructure monitoring
- Monitoring infrastructure devices
https://docs.bmc.com/docs/display/tsim107/Monitoring+devices+from+the+TrueSight+console
2019-11-12T04:38:26
CC-MAIN-2019-47
1573496664567.4
[]
docs.bmc.com
...strong consistency. A client can indicate the level of consistency it requires for a given read operation.
- Before you can use replication to keep replicas current, you must set the column attribute REGION_MEMSTORE_REPLICATION to false for the HBase table, using HBase Shell or the client API. See Activating Read Replicas On a Table.
https://docs.cloudera.com/documentation/enterprise/5-15-x/topics/admin_hbase_read_replicas.html
2019-11-12T03:40:26
CC-MAIN-2019-47
1573496664567.4
[]
docs.cloudera.com
Key codes returned by Event.keyCode. These map directly to a physical key on the keyboard.

Key codes can be used to detect key down and key up events, using Input.GetKeyDown and Input.GetKeyUp:

using UnityEngine;

public class KeyCodeExample : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            Debug.Log("Space key was pressed.");
        }

        if (Input.GetKeyUp(KeyCode.Space))
        {
            Debug.Log("Space key was released.");
        }
    }
}

Keyboard events can also be captured within OnGUI:

using UnityEngine;

public class KeyCodeOnGUIExample : MonoBehaviour
{
    void OnGUI()
    {
        if (Event.current.Equals(Event.KeyboardEvent(KeyCode.Space.ToString())))
        {
            Debug.Log("Space key is pressed.");
        }
    }
}
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/KeyCode.html
2019-11-12T04:01:25
CC-MAIN-2019-47
1573496664567.4
[]
docs.unity3d.com
Syncing Salesforce account data to Vitally accounts (via Zapier) Both Vitally and Salesforce have powerful Zapier integrations, and in this post, we'll walk through how you can use Zapier to sync data between the 2 platforms. Salesforce -> Vitally Syncing specific Salesforce fields into Vitally is fairly straightforward and simply requires 1) a Workflow Rule in Salesforce and 2) a Zap that uses that rule. Zapier has written a helpful article that documents how to trigger Zaps from updated objects on Salesforce (search for the "Can I Trigger from Updated Objects on Salesforce" section). We'll be using the steps they outline there to detail how to update customer traits in Vitally. Steps - First, create a Zap using the New Outbound Message Salesforce trigger, which creates a webhook you'll use when creating the workflow in Salesforce (the next step). - Next, in your Salesforce setup area, create a new Workflow Rule. This rule will allow you to set the conditions that will trigger the outbound message to be sent out, which is also what triggers your Zap. Here's how to configure your workflow: - Object: In this example, we'll be syncing Salesforce account data, so choose Account. Note you could follow a similar process here for other Salesforce objects though. - Rule Name: Set to whatever you like - e.g. "Updated Account". - Evaluation Criteria: Select created, and every time it's edited. - Rule Criteria: - Make sure the first option says: "Run this rule if the criteria are met". - In the Fields section, select each field you want to push to Vitally. Select the contains operator for each field, and leave Value blank. Click Save once your form looks something like this (note that your Fields will be different - i.e. they'll be the fields you want to sync): - Once you have a created workflow, you'll see a Specify Workflow Actions screen. Click Add Workflow Action -> New Outbound Message. - Here, you'll define the message that Zapier receives. - Name: Set to whatever you like (e.g. "Account outbound") - Endpoint URL: Set this to the URL Zapier gave you in Step #1 above (should be something like "...."). - Available/Selected Fields: In Available Fields, select each field you want to send to Zapier and click the Add arrow. These fields should include 1) the fields you want to add to Vitally and 2) the global ID for the account. Note that for #2, we recommend you track the same ID in both Vitally and Salesforce to guarantee exact lookups. - Once you save your Outbound Message action in Salesforce, navigate back to Zapier. Feel free to test your Salesforce <-> Zapier connection in Zapier by updating a selected field in Salesforce. Zapier should confirm receipt of your new Outbound Message. - In Zapier, add a new Search step for Vitally. Select the Find a Customer search step to have Zapier lookup the Vitally account to update. In the Edit Options step in Zapier, you'll see 2 fields: - Search Field: This defines the field in Vitally to lookup the account by. When possible, we encourage selecting External ID since it is a globally unique field. Alternatively, you can select Name to lookup by account name. However, that can be much less accurate since names aren't unique and can be easily changed. - Search Value: Select the Salesforce field in the New Outbound Message options that (ideally) contains the account's global, unique ID. - Now, add a new Action step in Zapier for Vitally and select the Update Customer Traits action. 
In the Edit Template step, you'll see 2 fields: - Customer and Custom Value for Customer ID: These should already be filled out for you using your Search step above. - Traits: Here, you'll select the fields from Salesforce that you'd like to add to Vitally. Add as many traits as you like here. The end result will look something like this: - That's it! Just activate your Zap and give things a try. You should now be able to update the Salesforce fields you selected in your Workflow Rule and see those changes in the Traits -> Zapier traits tab for your account in Vitally.
https://docs.vitally.io/en/articles/83-syncing-salesforce-account-data-to-vitally-accounts-via-zapier
2019-11-12T03:45:40
CC-MAIN-2019-47
1573496664567.4
[]
docs.vitally.io
Find all 'publishereventnotificationcodevalue' records matching ...

GET Response Notes
List of found 'publishereventnotificationcodevalue' records that matched filter criteria.
Returns: array

Related Models
- Advertisers referenced as advertiser
- AdNetworks referenced as ad_network
https://tune.docs.branch.io/management/advertiser-publishers-event_notifications-code_values-find/
2019-11-12T04:37:30
CC-MAIN-2019-47
1573496664567.4
[]
tune.docs.branch.io
apply : apiVersion: rbac.authorization.k8s.io/v1 You cannot modify which Role or ClusterRole a binding object refers to. Attempts to change the roleRef field of a binding object will result in a validation error. To change the roleRef field on an existing binding object, the binding object must be deleted and recreated. There are two primary reasons for this restriction: roleRefensures the full list of subjects in the binding is intended to be granted the new role (as opposed to enabling accidentally modifying just the roleRef without verifying all of the existing subjects should be given the new role’s permissions). roleRefimmutable allows giving updatepermission on an existing binding object to a user, which lets them manage the list of subjects, without being able to change the role that is granted to those subjects. The kubectl auth reconcile command-line utility creates or updates a manifest file containing RBAC objects, and handles deleting and recreating binding objects if required to change the role they refer to. See command usage and examples for more information.: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: pod-and-pod-logs-reader rules: - apiGroups: [""] resources: ["pods", "pods/log"] verbs: ["get", "list"] Resources can also be referred to by name for certain requests through the resourceNames list. When specified, requests can be restricted to individual instances of a resource. To restrict a subject to only “get” and “update” a single configmap, you would write: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: configmap-updater rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["my-configmap"] verbs: ["update", "get"] Note that create requests cannot be restricted by resourceName, as the object name is not known at authorization time. The other exception is deletecollection.: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole: editing the role is not recommended as changes will be overwritten on API server restart via auto-reconciliation (see above). at least one of the following things is true: ClusterRole, within the same namespace or cluster-wide for a Role) escalateverb on the rolesor clusterrolesresource in the rbac.authorization.k8s.ioAPI group (Kubernetes 1.12 and newer) For example, if “user-1” does not have the ability to list secrets cluster-wide, they cannot create a ClusterRole containing that permission. To allow a user to create/update roles: Roleor ClusterRoleobjects, as desired. Roleor ClusterRolewith permissions they themselves have not been granted, the API request will be forbidden) Roleor ClusterRoleby giving them permission to perform the escalateverb on rolesor clusterrolesresources in the rbac.authorization.k8s.ioAPI group (Kubernetes 1.12 and newer). kubectl create role Creates a Role object defining permissions within a single namespace. Examples: Create a Role named “pod-reader” that allows user to perform “get”, “watch” and “list” object. Examples: Create a ClusterRole named “pod-reader” that allows user to perform “get”, “watch” and “list” name “foo” with nonResourceURL specified: kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/* Create a ClusterRole name “monitoring”, including the apiserver is run with a log level of 5 or higher for the RBAC component ( --vmodule=rbac*=5 or --v=5), Was this page helpful? Thanks for the feedback. 
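As a supplementary illustration (not part of the original page), the pod-reader Role created with kubectl above can also be created programmatically with the official Kubernetes Python client. This is a minimal sketch that assumes a local kubeconfig with permission to create Roles; error handling is omitted.

from kubernetes import client, config

def create_pod_reader_role(namespace: str = "default"):
    """Create a Role equivalent to the 'pod-reader' example above."""
    config.load_kube_config()  # assumes a local kubeconfig with sufficient permissions
    rbac = client.RbacAuthorizationV1Api()
    role = client.V1Role(
        metadata=client.V1ObjectMeta(name="pod-reader", namespace=namespace),
        rules=[
            client.V1PolicyRule(
                api_groups=[""],            # "" selects the core API group
                resources=["pods"],
                verbs=["get", "watch", "list"],
            )
        ],
    )
    return rbac.create_namespaced_role(namespace=namespace, body=role)

if __name__ == "__main__":
    print(create_pod_reader_role().metadata.name)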
https://v1-14.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/
2019-11-12T02:52:40
CC-MAIN-2019-47
1573496664567.4
[]
v1-14.docs.kubernetes.io
OverviewOverview Founded in 2011, Wish is a mobile e-commerce application based company in San Francisco, California operating globally with a large presence in North America, Europe, Brazil, and China. It is the 6th largest e-commerce company in the world. Wish.com is competing with the companies such as Walmart and Amazon. Wish currently works with thousands of merchants serving millions of users globally. Wish Integration for Magento 2 is an advanced API integration that helps the Magento 2 store owners to integrate their store with Wish and to synchronize inventory, price, and other product details for the product creation and its management between the Magento 2 store and the Wish marketplace. The Wish Integration for Magento 2 extension interacts with the Wish Marketplace to integrate the synchronized product listing between the Magento and the Wish retailers. After installing the extension, the merchant can create the Wish Categories and the dependent attributes on the Magento 2 store. The process enables the merchant to configure the desired product category into Magento for automatic submission of the selected product to the same Category on Wish. Key Features are as follows: - Profile-based product upload: Admin can create a profile, map the Wish category and attributes to the Magento category and attributes, and then after assigning the products to the profile can easily upload products to Wish.com - Bulk Upload System: The merchant has the flexibility to upload any number of products on Wish.com using bulk product upload feature. - Synchronized Inventory*: Auto synchronization of the inventory at regular intervals and the listing of the products along with all the details is established between Magento and Wish.com - Enable and Disable Products*: Merchants can close and reopen the products on Wish.com using Enable and Disable feature. - New Order Notifications*: Whenever a new order is fetched from Wish.com, the admin receives a notification. -.
https://docs.cedcommerce.com/magento-2/wish-integration-magento-2-manual/
2019-11-12T03:18:03
CC-MAIN-2019-47
1573496664567.4
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', 'search_box'], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) ]
docs.cedcommerce.com
Jobs via Command Line Interface¶ The present section of the documentation explains how simulation Jobs can be created and executed via the Command Line Interface (CLI) of our platform. Batch script¶ We explain how to compose Batch Scripts (also known as Job Scripts), necessary for job submission via CLI, in this section. Accounting¶ We describe the accounting aspects of Job submission via CLI, such as specifying Projects and inspecting the Account charges and balance, here. Actions¶ The actions pertaining to Jobs submission and execution under the CLI are reviewed in this section of the documentation. Other general actions concerning the CLI, such as the loading of modules, the compilation of new applications or the creation of new python environments, are described separately. Tutorials¶ We provide tutorials guiding the user through the complete procedure for submitting jobs via CLI, and subsequently retrieving the corresponding results under the Web Interface of our platform. These tutorials are introduced here.
https://docs.exabyte.io/jobs-cli/overview/
2019-11-12T02:48:56
CC-MAIN-2019-47
1573496664567.4
[]
docs.exabyte.io
IEnumerator<T> Interface Definition Supports a simple iteration over a generic collection. generic <typename T> public interface class IEnumerator : IDisposable, System::Collections::IEnumerator public interface IEnumerator<out T> : IDisposable, System.Collections.IEnumerator type IEnumerator<'T> = interface interface IDisposable interface IEnumerator Public Interface IEnumerator(Of Out T) Implements IDisposable, IEnumerator Type Parameters - T - The type of objects to enumerate. - Derived - - Implements - Examples; } } } ' Defines the enumerator for the Boxes collection. ' (Some prefer this class nested in the collection class.) Public Class BoxEnumerator Implements IEnumerator(Of Box) Private _collection As BoxCollection Private curIndex As Integer Private curBox As Box Public Sub New(ByVal collection As BoxCollection) MyBase.New() _collection = collection curIndex = -1 curBox = Nothing End Sub Private Property Box As Box Public Function MoveNext() As Boolean _ Implements IEnumerator(Of Box).MoveNext curIndex = curIndex + 1 If curIndex = _collection.Count Then ' Avoids going beyond the end of the collection. Return False Else 'Set current box to next item in collection. curBox = _collection(curIndex) End If Return True End Function Public Sub Reset() _ Implements IEnumerator(Of Box).Reset curIndex = -1 End Sub Public Sub Dispose() _ Implements IEnumerator(Of Box).Dispose End Sub Public ReadOnly Property Current() As Box _ Implements IEnumerator(Of Box).Current Get If curBox Is Nothing Then Throw New InvalidOperationException() End If Return curBox End Get End Property Private ReadOnly Property Current1() As Object _ Implements IEnumerator.Current Get Return Me.Current End Get End Property End Class ' Defines two boxes as equal if they have the same dimensions. Public Class BoxSameDimensions Inherits EqualityComparer(Of Box) Public Overrides Function Equals(ByVal b1 As Box, ByVal b2 As Box) As Boolean If b1.Height = b2.Height And b1.Length = b2.Length And b1.Width = b2.Width Then Return True Else Return False End If End Function Public Overrides Function GetHashCode(ByVal bx As Box) As Integer Dim hCode As Integer = bx.Height ^ bx.Length ^ bx.Width Return hCode.GetHashCode() End Function End Class Remarks. However, if you choose to do this, you should make sure no callers are relying on the Reset functionality. Current property as an explicit interface implementation. This allows any consumer of the nongeneric interface to consume the generic interface. In addition, IEnumerator<T> implements IDisposable, which requires you to implement the Dispose() method. This enables you to close database connections or release file handles or similar operations when using other resources. If there are no additional resources to dispose of, provide an empty Dispose() implementation.
https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.ienumerator-1?view=netframework-4.7
2019-11-12T04:09:48
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
cleanup_plugin_for_restore A newer version of this documentation is available. Click here to view the most up-to-date release of the Greenplum 4.x documentation. cleanup_plugin_for_restore Plugin command to clean up a storage plugin after restore. Synopsis plugin_executable cleanup_plugin_for_restore plugin_config_file local_backup_dir.
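To illustrate the command contract described by the synopsis, below is a hypothetical plugin skeleton in Python showing how a custom storage plugin might handle the cleanup_plugin_for_restore subcommand. The argument order follows the synopsis; everything else (reading the config file, what exactly gets cleaned up) is an assumption and depends entirely on the plugin being written.

#!/usr/bin/env python
"""Hypothetical storage-plugin skeleton; only the cleanup_plugin_for_restore path is sketched."""
import shutil
import sys

def cleanup_plugin_for_restore(plugin_config_file, local_backup_dir):
    # A real plugin would read credentials/settings from plugin_config_file here.
    # For cleanup, this sketch simply removes any temporary staging data left
    # behind in local_backup_dir by the restore operation.
    shutil.rmtree(local_backup_dir, ignore_errors=True)

def main():
    if len(sys.argv) < 2:
        sys.exit("usage: plugin_executable <command> [args...]")
    command, args = sys.argv[1], sys.argv[2:]
    if command == "cleanup_plugin_for_restore":
        cleanup_plugin_for_restore(*args)  # expects: plugin_config_file local_backup_dir
    else:
        sys.exit("unsupported command: %s" % command)

if __name__ == "__main__":
    main()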
https://gpdb.docs.pivotal.io/43250/admin_guide/managing/plugin_api/cleanup_plugin_for_restore.html
2019-11-12T04:03:32
CC-MAIN-2019-47
1573496664567.4
[]
gpdb.docs.pivotal.io
Returns events related to DB instances, DB clusters, DB parameter groups, DB security groups, DB snapshots, and DB cluster snapshots for the past 14 days. Events specific to a particular DB instance, DB cluster, DB parameter group, DB security group, DB snapshot, or DB cluster snapshot can be obtained by providing the name as a parameter.

Note: By default, the past hour of events are returned.

--filters (list)
This parameter isn't currently supported.

The following describe-events example retrieves details for the events that have occurred for the specified DB instance.

aws rds describe-events \
    --source-identifier test-instance \
    --source-type db-instance

Output:

{
    "Events": [
        {
            "SourceType": "db-instance",
            "SourceIdentifier": "test-instance",
            "EventCategories": [
                "backup"
            ],
            "Message": "Backing up DB instance",
            "Date": "2018-07-31T23:09:23.983Z",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
        },
        {
            "SourceType": "db-instance",
            "SourceIdentifier": "test-instance",
            "EventCategories": [
                "backup"
            ],
            "Message": "Finished DB Instance backup",
            "Date": "2018-07-31T23:15:13.049Z",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
        }
    ]
}

Marker -> (string)
An optional pagination token provided by a previous request.
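The same request can be scripted with the AWS SDK for Python (boto3) rather than the CLI. The sketch below mirrors the CLI example above; the Duration value shown is an optional assumption used to cover the full 14-day window instead of the default past hour.

import boto3

rds = boto3.client("rds")

# Equivalent of:
#   aws rds describe-events --source-identifier test-instance --source-type db-instance
response = rds.describe_events(
    SourceIdentifier="test-instance",
    SourceType="db-instance",
    Duration=14 * 24 * 60,  # minutes; omit this to get only the past hour (the default)
)

for event in response["Events"]:
    print(event["Date"], event["Message"])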
https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html
2020-11-24T04:30:40
CC-MAIN-2020-50
1606141171077.4
[]
docs.aws.amazon.com
Styling Pages Style PropertiesStyle Properties You can edit the following properties to customize the style of your prototype's pages. You can access these properties in the Style pane. Page DimensionsPage Dimensions By default, page dimensions are automatically calculated based on the widgets on the canvas, and the canvas itself is not constrained. If desired, you can set a static width and/or height for your page using the Page Dimensions dropdown. Select from a number of popular device dimensions or define your own custom dimensions with the Web and Custom Device options. When you select dimensions for a page, the canvas changes size to match, with grey negative space framing a white viewport region. This framing is reflected in the web browser as well. Tip When viewing a prototype on a mobile device, select Scale to Width in the prototype player's view settings to make the page content fit to the device's viewport. To clear static dimensions from a page, set the Page Dimensions droplist back to Auto. Adaptive ViewsAdaptive Views If you're designing for multiple viewport sizes, click Add Adaptive Views to set up a separate set of page dimensions for each target viewport. You can resize and reposition your content for each viewport size, and the web browser will automatically display the appropriate view for the target device's viewport. Page AlignmentPage Alignment Choose between Align Left and Align Center to determine whether the page's content is aligned to the left edge of the browser window or the center. Note Page alignment does not affect the alignment of widgets on the Axure RP canvas. FillFill Color: Sets a page's background color and opacity. Image: Sets a page's fill image and the image's alignment and cover type. Note Color and image fills are applied both in Axure RP and in the web browser. Low Fidelity ModeLow Fidelity Mode Low Fidelity mode reduces the visual fidelity of your pages to help your stakeholders focus on the UX of a design rather than its visuals. While Low Fidelity mode is active on a page, it is converted to greyscale, and all fonts are replaced with the Axure Handwriting font for a rougher look. Page StylesPage Styles Page styles are reusable, centrally managed sets of style properties. You can apply a single page style to multiple pages in order to unify their styling. If you change one of the property selections in the page style, the change will be applied to all pages using that style. To apply a page style to a page, click the canvas and then select the page style in the page style dropdown in the Style pane. You can view and manage the page styles in a prototype by clicking the Manage Page Styles icon next to the page style dropdown in the Style pane. Page Style HierarchyPage Style Hierarchy Every page's visual appearance is determined by style property selections made in the following locations, in order from lowest to highest priority: The Default style at the top of the Page Style Manager dialog, whose style property selections are applied to every page in the prototype. Tip All pages start out in the Default style. If you need to quickly make a style change to all pages in your prototype, and you're not yet using any other page styles, update the Default style. The page's applied page style, whose style property selections override the selections in the Default style. Style property selections made on the page itself in the Style pane, which override the selections in both the Default style and the page's own applied page style. 
Note When a page's style property selections differ from those of its applied page style, the style's name is followed by an asterisk in the Style pane. Updating and Creating StylesUpdating and Creating Styles Quick UpdateQuick Update To quickly update a style, edit the styling of a page currently using the style. Then in the Style pane, click Update to the right of the style name to update it. The update will apply to all pages in the project currently using that style. It will also apply to any future pages using that page style. Quick CreateQuick Create To quickly create a new style, edit the styling of any page. Then in the Style pane, click Create to the right of the page's currently applied page style. The Page Style Manager dialog will open with the page's styling changes already added to a new style. In the left column of the dialog, rename the new page style. In the right column, you can optionally make additional changes to the style. The Page Style ManagerThe Page Style Manager Click the Manage Page Styles icon, next to the page style dropdown in the Style pane, to open the Page Style Manager. To add a new style, click Add at the top of the dialog. Alternatively, you can click Duplicate to make a new style from an existing style. To delete a style, select it and click Delete. Use the Up and Down arrows to reorganize the styles in the dialog. To edit the style properties of a page style, select it in the left column. In the right column, check the box next to a style property to have that property override the Default style, and then make your selection for the property in the applicable field. You can also edit multiple styles at once. Hold CTRL or CMD while selecting styles in the left column and then make changes in the right column. Styles in Team ProjectsStyles in Team Projects Styles in a team project can be edited without checking out the project styles. To update style changes for all users of the project, send changes via the Team → Send Changes to Team Directory menu command.
https://docs.axure.com/axure-rp/reference/styling-pages/
2020-11-24T04:00:02
CC-MAIN-2020-50
1606141171077.4
[array(['/assets/screenshots/axure-rp/styling-pages-dimensions.png', None], dtype=object) array(['/assets/screenshots/axure-rp/styling-pages-adaptive-views.png', None], dtype=object) array(['/assets/screenshots/axure-rp/styling-pages-align.png', None], dtype=object) array(['/assets/screenshots/axure-rp/styling-pages-fill.png', None], dtype=object) array(['/assets/screenshots/axure-rp/styling-pages-low-fi-mode.gif', None], dtype=object) array(['/assets/screenshots/axure-rp/styling-pages-page-styles1.png', None], dtype=object) array(['/assets/screenshots/axure-rp/styling-pages-page-styles3.png', None], dtype=object) array(['/assets/screenshots/axure-rp/styling-pages-page-styles4.png', None], dtype=object) ]
docs.axure.com
How to make the most of the Full Page Zoom app The Full Page Zoom app has multiple features which you can tune to integrate it perfectly with your Shopify store. However, the most important thing to make the most of the app is curating high quality product images for your store. If your product vendor provides high quality images, use them. If it doesn't or if you build your own products, invest in preparing awesome pictures. Shopify provides a fantastic guide on how to do it at where it also explains the reasons why having good images is critical to improve your sales. In both cases, don't worry about the image sizes: always upload the largest image available to Shopify. Shopify will automatically take care of the product image sizes, making copies of the images in different resolutions. Full Page Zoom can use those different image sizes, just select in the app preferences page the one which looks best in your store: On the other hand, if for some reason you cannot upload high quality images to your store, you may want to select a smaller image size and enable the following option to prevent blurring: In that case, you can configure different 'Overlay' options to make it look more integrated with your Shopify theme. Please feel free to play with the different app options and don't hesitate to contact us if you need any assistance with the app.
https://docs.codeblackbelt.com/article/1377-how-to-make-the-most-of-the-full-page-zoom-app
2020-11-24T03:39:21
CC-MAIN-2020-50
1606141171077.4
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594a823404286305c68d3fa9/images/5d1494e52c7d3a6ebd229f19/file-bJDqulk0uT.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594a823404286305c68d3fa9/images/5d14957804286305cb87c6d4/file-2zj78PEBbr.png', None], dtype=object) ]
docs.codeblackbelt.com
Provides access to data and methods related to an agent's social force state. This can be used in custom social force fields to get/set forces on the agent. It can also be used in other contexts to retrieve information about the current social forces acting upon the agent. The agent's current center point. The agent's current force. You can only set this force as part of a custom social force. The scalar value of the agent's current friction force. This is only applicable if the agent has a friction force applied. The agent's current goal location. 1 if the agent currently has a goal, 0 otherwise. The agent's current velocity.
https://docs.flexsim.com/en/20.2/Reference/CodingInFlexSim/FlexScriptAPIReference/Agent/Agent.SocialForceState.html
2020-11-24T03:32:50
CC-MAIN-2020-50
1606141171077.4
[]
docs.flexsim.com
What Happens When a Credential Fails? Each time one of the credentials doesn't work, it shows up as a failed login attempt in the system logs. Bruteforce attacks are therefore "loud" or "noisy," and can result in locking user accounts if your target has configured a limit on the number of login attempts. Did this page help you?
https://docs.rapid7.com/metasploit/what-happens-when-a-credential-fails/
2020-11-24T03:36:12
CC-MAIN-2020-50
1606141171077.4
[]
docs.rapid7.com
You can configure SDK installations to use a variety of levels of security. Higher levels of security require more administrative effort, as well as extra effort on the part of users of the system, but give you better control of the use of SDK software. The appropriate level of security for your site is determined by the policies and procedures of your organization. This section describes two levels of security, and their advantages and disadvantages. You can choose one of these levels, or combine characteristics of several.
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.0/foundation-system-admin-guide-101/GUID-CD0A317C-979D-4CDB-919B-99377F8EE564.html
2020-11-24T04:23:35
CC-MAIN-2020-50
1606141171077.4
[]
docs.vmware.com
Client Actions: Machine Policy Retrieval and Evaluation Cycle What it does:. How it does it: This tool completes this action by using remote WMI. Navigation: Navigate to the Machine Policy Retrieval and Evaluation Cycle Tool by right clicking on a device, selecting Recast RCT > Client Actions > Machine Policy Retrieval and Evaluation Cycle: Screenshot: When the action is run, the following dialog box will open: Permissions: The Machine Policy Retrieval and Evaluation Cycle tool requires the following permissions: Recast Permissions: - Requires the Machine Policy Retrieval and.
https://docs.recastsoftware.com/features/Device_Tools/Client_Actions/Machine_Policy_Retrieval_and_Evaluation_Cycle/index.html
2020-11-24T03:49:17
CC-MAIN-2020-50
1606141171077.4
[array(['media/SS_NAV.png', 'Machine Policy Retrieval and Evaluation Cycle ScreenShot'], dtype=object) ]
docs.recastsoftware.com
Removing quick filters You can remove an applied quick filter in one of the following ways: To remove a quick filter applied to a single field: While displaying a filtered list, click Quick Filter icon in the column header where you applied the filter and choose All from the drop-down list. To remove quick filters applied to any field: While displaying a filtered list, in the Condition panel, click . Besides, you can remove all applied filters from the grid, including quick and advanced ones, at once.
https://docs.alloysoftware.com/alloydiscovery/8/help/using-grids/removing-quick-filters.htm
2020-11-24T04:17:29
CC-MAIN-2020-50
1606141171077.4
[]
docs.alloysoftware.com
If. For Alfresco One 4.2 and 4.2.1, the Solr server is supported only when running in a Tomcat application server. From Alfresco One 4.2.2 onwards, Solr server is supported on JBoss as well. This file contains the following artifacts: The following instructions use <ALFRESCO_TOMCAT_HOME> to refer to the tomcat directory where Alfresco is installed and <SOLR_TOMCAT_HOME> to the tomcat directory where Solr is installed. These may be the same or different directories, depending on whether you have chosen to install Solr on a standalone server. For example: <?xml version="1.0" encoding="utf-8"?> <Context docBase="<SOLR-ARCHIVE>\apache-solr-1.4.1.war" debug="0" crossContext="true"> <Environment name="solr/home" type="java.lang.String" value="<SOLR-ARCHIVE>" override="true"/> </Context> Set the data.dir.root property to the location where the Solr indexes will be stored. You can set the same value for the both cores, and the cores will create the sub-directories. For Unix use: mkdir -p <ALFRESCO_HOME>/alf_data/keystore cp <ALFRESCO_TOMCAT_HOME>/webapps/alfresco/WEB-INF/classes/alfresco/keystore/* <ALFRESCO_HOME>/alf_data/keystore For Windows use: mkdir <ALFRESCO_HOME>\alf_data\keystore copy <ALFRESCO_TOMCAT_HOME>\webapps\alfresco\WEB-INF\classes\alfresco\keystore\* <ALFRESCO_HOME>\alf_data\keystore For example: <Connector port="8443" URIEncoding="UTF-8" protocol="org.apache.coyote.http11.Http11Protocol" SSLEnabled="true" maxThreads="150" scheme="https" keystoreFile="/opt/alfresco/keystore/ssl.keystore" keystorePass="kT9X6oe68t" keystoreType="JCEKS" secure="true" connectionTimeout="240000" truststoreFile="/opt/alfresco/keystore/ssl.truststore" truststorePass="kT9X6oe68t" truststoreType="JCEKS" clientAuth="want" sslProtocol="TLS" allowUnsafeLegacyRenegotiation="true" maxHttpHeaderSize="32768" /> For example: dir.keystore=<SOLR_ARCHIVE>/alf_data/keystore For example: <user username="CN=Alfresco Repository, OU=Unknown, O=Alfresco Software Ltd., L=Maidenhead, ST=UK, C=GB" roles="repository" password="null"/> For example: <user username="CN=Alfresco Repository Client, OU=Unknown, O=Alfresco Software Ltd., L=Maidenhead, ST=UK, C=GB" roles="repoclient" password="null"/> If you are applying these instructions to a clustered installation, the steps should be carried out on a single host and then the generated .keystore and .truststore files must be replicated across all other hosts in the cluster. You should see the message Certificate update complete and another message reminding you what dir.keystore should be set to in the alfresco-global.properties file. d_dictionary.datatype.d_text.analyzer=org.alfresco.repo.search.impl.lucene.analysis.AlfrescoStandardAnalyser ### Solr indexing ### index.subsystem.name=solr dir.keystore=${dir.root}/keystore solr.port.ssl=8443 For example, some example properties for activating Solr are: index.subsystem.name=solr solr.host=localhost solr.port=8080 solr.port.ssl=8443 The subsystems have their own related properties. The managed - solr instance exposes the solr.base.url property. The lucene subsystem exposes all the properties that had to be set at start up. Alfresco 4 with Solr subsystem, does not to include any transactional indexing operation. In other words, Alfresco 4 removes the requirement to have the database and indexes in perfect sync at any given time and relies on an index that gets updated on a configurable interval (default: 15s) by Solr itself. The index tracker will take care of polling Alfresco for new transactions and will proceed to update its index. 
In this sense, indexes will eventually be consistent with the database. You may see a message on the Tomcat console similar to the following (and may find that Solr search and/or the Solr tracking is not working):: [19]. Refer also to the following links: [20] o [21]. If your version of Java does not have the fix, you need to re-enabled renegotiation by performing the following steps: Links: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21]
https://docs.alfresco.com/print/book/export/html/1924899
2020-11-24T03:14:04
CC-MAIN-2020-50
1606141171077.4
[]
docs.alfresco.com
Account for congestion Out of the box, driving times in Conveyal do not generally reflect congested conditions. Conveyal sets a default driving speed for each OpenStreetMap way based on the way's highway tag values (analogous to roadway functional classifications), with the code here. These speed values can be overridden in a custom network bundle configuration or routing engine version (more details coming soon). To account for congestion with more refined speed values (e.g. varying by link within a given functional classification), options include: - Prepare a CSV with columns for OpenStreetMap way id and a representative speed value. Speed values can then be to joined the corresponding ways as maxspeed:motorcartags (using an adapted version of this code), which will override the default driving speeds. This option works best when you can access the specific OSM extract to which your custom speed values are associated (e.g. through the vendor of your speed dataset), or at least the date of the OSM extract, because OSM identifiers are not guaranteed to be stable over time. The Conveyal team can join the CSV to an OSM file and upload the result as a network bundle for your use. - Prepare a shapefile with representative speed adjustment factors within polygons. This option is suitable when you want to model congestion in certain areas (e.g. downtown cores), or if you can create narrow buffers around a line layer generated by other tools (e.g. exported from other travel modeling software). The Conveyal team will load polygons you supply as a custom congestion modification, which you can then activate in scenarios like any other Conveyal modification. The speed for any link within a polygon will be multiplied by the polygon's adjustment factor. If a link is within multiple polygons, an optional priority field can be used to indicate which takes precedence. - Prepare your own OSM upload, adapting scripts such as this (ignoring the custom generalized cost tags, and just setting maxspeed:motorcartags). This option gives you the most control, but it may require substantial effort. Furthermore, it relies on tooling authored by other parties, so the Conveyal team cannot guarantee support.
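The CSV-join option in the first bullet above can be prototyped with nothing more than the Python standard library if your OSM extract is in XML (.osm) format. The snippet below is a rough sketch under that assumption; the file names, column names, and the choice to write maxspeed:motorcar tags are illustrative, not part of Conveyal's tooling.

import csv
import xml.etree.ElementTree as ET

# Hypothetical inputs: an OSM XML extract and a CSV with columns osm_way_id,speed_kph.
speeds = {}
with open("way_speeds.csv", newline="") as f:
    for row in csv.DictReader(f):
        speeds[row["osm_way_id"]] = row["speed_kph"]

tree = ET.parse("extract.osm")
root = tree.getroot()

for way in root.iter("way"):
    speed = speeds.get(way.get("id"))
    if speed is None:
        continue
    # Replace any existing maxspeed:motorcar tag with the custom speed value.
    for tag in [t for t in way.findall("tag") if t.get("k") == "maxspeed:motorcar"]:
        way.remove(tag)
    ET.SubElement(way, "tag", k="maxspeed:motorcar", v=speed)

tree.write("extract_with_speeds.osm", encoding="utf-8", xml_declaration=True)

Because OSM way identifiers are not guaranteed to be stable over time, this kind of join is only reliable when the CSV and the extract come from the same OSM snapshot, as noted above.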
https://docs.analysis.conveyal.com/guides/account-for-congestion
2020-11-24T04:12:42
CC-MAIN-2020-50
1606141171077.4
[]
docs.analysis.conveyal.com
How to assign specific legal hold policies to legal administrators? Overview A Legal administrator in inSync has access to all the legal hold policies by default. Using this procedure, you can restrict Legal administrators' access to only specific legal hold policies. Assign specific legal hold policies to Legal administrators Perform the following steps on the inSync Management Console: - Create a custom role that you want to assign to the Legal administrator. Roles allow you to define the access that you want to provide to a Legal adminstrator. For more information on how to create a role, see Create roles for a Legal Administrator. For information on the accesses available for each Legal adminstrator right, see Administrator roles and responsibilities. - On the inSync Management Console menu bar, click > Administrators. The Administrators and Roles page opens. - Click Create Administrator. - Enter the name and email ID of the Legal administrator. - In the Assign Role list, select the role that you created for the Legal administrator. - Click Next. - Under Access Control, select the legal hold policies that you want this Legal administrator to access. - Click Finish.
https://docs.druva.com/Knowledge_Base/inSync/How_To/How_to_assign_specific_legal_hold_policies_to_legal_administrators%3F
2020-11-24T03:28:09
CC-MAIN-2020-50
1606141171077.4
[]
docs.druva.com
;). To understand the problems with half precision, let's look briefly at what an FP16 looks like (more information here).rw. To address those three problems, we don't fully train in FP16 precision. As the name mixed training implies, some of the operations will be done in FP16, others in FP32. This is mainly to take care of the first problem listed above. for instance).). The only annoying thing with the previous implementation of mixed precision training is that it introduces one new hyper-parameter to tune, the value of the loss scaling. Fortunately for us, there is a way around this. We want the loss scaling to be as high as possible so that our gradients can use the whole range of representation, so let's first try a really high value. In all likelihood, this will cause our gradients or our loss to overflow, and we will try again with half that big value, and again, until we get to the largest loss scale possible that doesn't make our gradients overflow. This value will be perfectly fitted to our model and can continue to be dynamically adjusted as the training goes, if it's still too high, by just halving it each time we overflow. After a while though, training will converge and gradients will start to get smaller, so we al so need a mechanism to get this dynamic loss scale larger if it's safe to do so. The strategy used in the Apex library is to multiply the loss scale by 2 each time we had a given number of iterations without overflowing. Before going in the main Callback we will need some helper functions. We use the ones from the APEX library. We will need a function to convert all the layers of the model to FP16 precision except the BatchNorm-like layers (since those need to be done in FP32 precision to be stable). In Apex, the function that does this for us is convert_network. We can use it to put the model in FP16 or back to FP32. model = nn.Sequential(nn.Linear(10,30), nn.BatchNorm1d(30), nn.Linear(30,2)).cuda() model = convert_network(model, torch.float16) for i,t in enumerate([torch.float16, torch.float32, torch.float16]): test_eq(model[i].weight.dtype, t) test_eq(model[i].bias.dtype, t) model = nn.Sequential(nn.Linear(10,30), BatchNorm(30, ndim=1), nn.Linear(30,2)).cuda() model = convert_network(model, torch.float16) for i,t in enumerate([torch.float16, torch.float32, torch.float16]): test_eq(model[i].weight.dtype, t) test_eq(model[i].bias.dtype, t) From our model parameters (mostly in FP16), we'll want to create a copy in FP32 (master parameters) that we will use for the step in the optimizer. Optionally, we concatenate all the parameters to do one flat big tensor, which can make that step a little bit faster. We can't use the FP16 util function here as it doesn't handle multiple parameter groups, which is the thing we use to - do transfer learning and freeze some layers - apply discriminative learning rates - don't apply weight decay to some layers (like BatchNorm) or the bias terms After the backward pass, all gradients must be copied to the master params before the optimizer step can be done in FP32. The corresponding function in the Apex utils is model_grads_to_master_grads but we need to adapt it to work with param groups. After the step, we need to copy back the master parameters to the model parameters for the next update. The corresponding function in Apex is master_params_to_model_params. For dynamic loss scaling, we need to know when the gradients have gone up to infinity. It's faster to check it on the sum than to do torch.isinf(x).any(). 
x = torch.randn(3,4)
assert not test_overflow(x)
x[1,2] = float('inf')
assert test_overflow(x)

Then we can use it in the following function that checks for gradient overflow:

class ModelToHalf[source]

ModelToHalf(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, after_backward=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

Use with MixedPrecision callback (but it needs to run at the very beginning).

learn = synth_learner(cuda=True)
learn.model = nn.Sequential(nn.Linear(1,1), nn.Linear(1,1)).cuda()
learn.opt_func = partial(SGD, mom=0.)
learn.splitter = lambda m: [list(m[0].parameters()), list(m[1].parameters())]
learn.to_fp16()
learn.fit(3, cbs=[TestAfterMixedPrecision(), TestBeforeMixedPrecision()])

# Check the model did train
for v1,v2 in zip(learn.recorder.values[0], learn.recorder.values[-1]):
    assert v2<v1

learn = learn.to_fp32()
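To make the dynamic loss scaling and master-parameter flow described above more concrete, here is a minimal sketch in plain PyTorch. It is not the fastai/Apex implementation: the scaler class, its initial scale and growth interval, the single-layer model, and the manual FP32 master copy are all illustrative assumptions, and a CUDA device is assumed as in the examples above.

```python
import torch
import torch.nn as nn

class SimpleDynamicLossScaler:
    """Illustrative dynamic loss scaler: halve the scale on overflow,
    double it after `growth_interval` consecutive steps without overflow."""
    def __init__(self, init_scale=2.**16, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def grads_overflow(self, params):
        # Checking the sum is cheaper than torch.isinf(g).any() element-wise
        for p in params:
            if p.grad is not None and not torch.isfinite(p.grad.float().sum()):
                return True
        return False

    def update(self, overflow):
        if overflow:
            self.scale /= 2            # too high: back off
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= 2        # stable for a while: try a larger scale
                self._good_steps = 0

# Hypothetical FP16 model with an FP32 master copy of its parameters
model = nn.Linear(10, 2).cuda().half()
master = [p.detach().float().requires_grad_(True) for p in model.parameters()]
opt = torch.optim.SGD(master, lr=1e-2)
scaler = SimpleDynamicLossScaler()

x = torch.randn(8, 10).cuda().half()
y = torch.randn(8, 2).cuda()
loss = nn.functional.mse_loss(model(x).float(), y)   # loss computed in FP32
(loss * scaler.scale).backward()                     # scaled backward pass in FP16

overflow = scaler.grads_overflow(model.parameters())
if not overflow:
    # copy and unscale grads to the master params, step in FP32
    for p, mp in zip(model.parameters(), master):
        mp.grad = p.grad.float() / scaler.scale
    opt.step()
    # copy the updated master params back into the FP16 model
    for p, mp in zip(model.parameters(), master):
        p.data.copy_(mp.data.half())
opt.zero_grad()
model.zero_grad()
scaler.update(overflow)
```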
https://docs.fast.ai/callback.fp16.html
2020-11-24T03:23:31
CC-MAIN-2020-50
1606141171077.4
[array(['/images/half.png', 'half float'], dtype=object) array(['/images/half_representation.png', 'half float representation'], dtype=object) array(['/images/Mixed_precision.jpeg', 'Mixed precision training'], dtype=object) ]
docs.fast.ai
ebook deal of the week: Exam Ref 70-768 Developing SQL Data Models This offer expires on Sunday, June 11 at 7:00 AM GMT...
https://docs.microsoft.com/en-us/archive/blogs/microsoft_press/ebook-deal-of-the-week-exam-ref-70-768-developing-sql-data-models
2020-11-24T04:24:18
CC-MAIN-2020-50
1606141171077.4
[]
docs.microsoft.com
DeleteBranch Deletes a branch from a repository, unless that branch is the default branch for the repository. Request Syntax { "branchName": " string", "repositoryName": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - branchName The name of the branch to delete. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Required: Yes - repositoryName The name of the repository that contains the branch to be deleted. Type: String Length Constraints: Minimum length of 1. Maximum length of 100. Pattern: [\w\.-]+ Required: Yes Response Syntax { "deletedBranch": { "branchName": "string", "commitId": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - deletedBranch Information about the branch deleted by the operation, including the branch name and the commit ID that was the tip of the branch. Type: BranchInfo object Errors For information about the errors that are common to all actions, see Common Errors. - BranchNameRequiredException A branch name is required, but was not specified. HTTP Status Code: 400 - DefaultBranchCannotBeDeletedException The specified branch is the default branch for the repository, and cannot be deleted. To delete this branch, you must first set another branch as the default branch. Examples Example This example illustrates one usage of DeleteBranch. Sample Request HTTP/1.1 Host: codecommit.us-east-1.amazonaws.com Accept-Encoding: identity Content-Length: 57 X-Amz-Target: CodeCommit_20150413.DeleteBranch X-Amz-Date: 20151028T224659", "branchName": "MyNewBranch" } Sample Response HTTP/1.1 200 OK x-amzn-RequestId: 0728aaa8-EXAMPLE Content-Type: application/x-amz-json-1.1 Content-Length: 88 Date: Wed, 28 Oct 2015 22:47:03 GMT { "deletedBranch": { "branchName": "MyNewBranch", "commitId": "317f8570EXAMPLE" } } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
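For a quick illustration of how such an SDK call might look, here is a minimal sketch using boto3, the AWS SDK for Python. The repository and branch names are placeholders, and only one of the documented error cases is handled; the exception class name mirrors the error listed above.

```python
import boto3

# Placeholder names -- substitute your own repository and branch.
REPO = "MyDemoRepo"
BRANCH = "MyNewBranch"

client = boto3.client("codecommit", region_name="us-east-1")

try:
    response = client.delete_branch(repositoryName=REPO, branchName=BRANCH)
    deleted = response["deletedBranch"]
    print(f"Deleted {deleted['branchName']} (tip commit {deleted['commitId']})")
except client.exceptions.DefaultBranchCannotBeDeletedException:
    # The default branch cannot be deleted; set another default branch first.
    print(f"{BRANCH} is the default branch of {REPO}; nothing was deleted.")
```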
https://docs.aws.amazon.com/codecommit/latest/APIReference/API_DeleteBranch.html
2020-11-24T04:41:09
CC-MAIN-2020-50
1606141171077.4
[]
docs.aws.amazon.com
Topics Welcome to Topics! We have organized our content resources into topics to get you started on areas of your interest. Each topic page consists of an index listing all related content. It guides you through better understanding GitLab’s concepts through our regular docs, and, when available, through articles (guides, tutorials, technical overviews, blog posts) and videos.
https://docs.gitlab.com/ee/topics/
2020-11-24T04:13:30
CC-MAIN-2020-50
1606141171077.4
[]
docs.gitlab.com
GDScript style guide¶ Description¶ This styleguide lists conventions to write elegant GDScript. The goal is to encourage writing clean, readable code and promote consistency across projects, discussions, and tutorials. Hopefully, this will also encourage development of auto-formatting tools. Since GDScript is close to Python, this guide is inspired by Python’s PEP 8 programming styleguide. Note Godot’s built-in script editor uses a lot of these conventions by default. Let it help you. Code structure¶ Indentation¶ Indent type: Tabs (editor default) Indent size: 4 (editor default)) Blank lines¶ Surround functions and class definitions with a blank line. Use one blank line inside functions to separate logical sections. One statement per line¶ Never combine multiple statements on a single line. No, C programmers, not with a single line conditional statement (except with the ternary operator)! Good: if position.x > width: position.x = 0 if flag: print("flagged") Bad: if position.x > width: position.x = 0 if flag: print("flagged") Avoid unnecessary parentheses¶ Avoid parentheses in expressions and conditional statements. Unless necessary for order of operations, they only reduce readability. Good: if is_colliding(): queue_free() Bad: if (is_colliding()): queue_free() Whitespace¶ Always use one space around operators and after commas. Avoid extra spaces in dictionary references and function calls, or to create “columns.” Good: position.x = 5 position.y = mpos.y + 10 dict['key'] = 5 myarray = [4, 5, 6] print('foo') Bad: position.x=5 position.y = mpos.y+10 dict ['key'] = 5 myarray = [4,5,6] print ('foo') NEVER: x = 100 y = 100 velocity = 500 Naming conventions¶ These naming conventions follow the Godot Engine style. Breaking these will make your code clash with the built-in naming conventions, which is ugly. Classes and nodes¶ Use PascalCase: extends KinematicBody Also when loading a class into a constant or variable: const MyCoolNode = preload('res://my_cool_node.gd') Functions and variables¶ Use snake_case: get_node() Prepend a single underscore (_) to virtual methods (functions the user must override), private functions, and private variables: func _ready()
https://godot-es-docs.readthedocs.io/en/latest/getting_started/scripting/gdscript/gdscript_styleguide.html
2020-11-24T04:30:01
CC-MAIN-2020-50
1606141171077.4
[]
godot-es-docs.readthedocs.io
Customer Balance Report: Default Summarizing

The basic Customer balance report summarizes opening balances for each displayed customer as of the start date of the report (in the Opening amount column) and outstanding balances as of the end date of the report (in the Balance amount column). The total amount of all invoices issued for a given period is summarized for each customer (in the Amount Dt column), as well as the total amount of all invoices paid by the customer together with the amounts returned to the customer according to issued credit notes (in the Amount Ct column). The amounts are worked out by summing the original values (using the SUM function).

Summary values are therefore presented in 4 columns - Opening amount, Balance amount, Amount Dt and Amount Ct - which are grouped by currencies, with the total amounts being shown in a customer's currency and in your system currency. The final total amount of all transactions and balances for each customer for a given period is presented in the Total column in your system currency.

The default summarizing settings are defined in the values section of the Customer balance report.
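As an illustration of the summarizing rule (a plain SUM over original values, grouped per customer and currency), here is a small pandas sketch. The table layout and column names below are invented for the example and are not the actual report schema; the opening balance is assumed to be zero.

```python
import pandas as pd

# Hypothetical transaction rows: positive amounts = issued invoices (debit),
# negative amounts = payments and credit notes (credit).
tx = pd.DataFrame({
    "customer": ["Acme", "Acme", "Acme", "Globex"],
    "currency": ["EUR", "EUR", "EUR", "USD"],
    "amount":   [1000.0, -400.0, 250.0, 500.0],
})

amount_dt = tx[tx.amount > 0].groupby(["customer", "currency"])["amount"].sum()
amount_ct = -tx[tx.amount < 0].groupby(["customer", "currency"])["amount"].sum()

summary = pd.DataFrame({"Amount Dt": amount_dt, "Amount Ct": amount_ct}).fillna(0.0)
# Balance = opening + debits - credits; opening is taken as zero in this sketch.
summary["Balance amount"] = summary["Amount Dt"] - summary["Amount Ct"]
print(summary)
```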
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427397671
2020-11-24T04:08:10
CC-MAIN-2020-50
1606141171077.4
[]
docs.codejig.com
With application pools, you can deliver a single application to many users. The application runs on a farm of RDS hosts. When you create an application pool, you deploy an application in the data center that users can access from anywhere on the network. For an introduction to application pools, see Farms, RDS Hosts, and Desktop and Application Pools. An application pool has a single application and is associated with a single farm. To avoid errors, you must install the application on all of the RDS hosts in the farm. When you create an application pool, View automatically displays the applications that are available to all users rather than individual users from the Start menu on all the RDS hosts in the farm. You can select one or more applications from the list. If you select multiple applications from the list, a separate application pool is created for each application. You can also manually specify an application that is not on the list. If an application that you want to manually specify is not already installed, View displays a warning message. When you create an application pool, you cannot specify the access group in which to place the pool. For application pools and RDS desktop pools, you specify the access group when you create a farm. An application supports the PCoIP and VMware Blast display protocols. To enable HTML Access, see "Prepare Desktops, Pools, and Farms for HTML Access," in the "Setup and Installation" chapter in the Using HTML Access document, available from.
https://docs.vmware.com/en/VMware-Horizon-7/7.0/com.vmware.horizon-view.desktops.doc/GUID-BE58C60A-3B2E-4C01-9DF1-486A0C877093.html
2020-11-24T04:11:20
CC-MAIN-2020-50
1606141171077.4
[]
docs.vmware.com
Does Full Page Zoom work with multiple images? Yes, Full Page Zoom will work with multiple images. If you have configured multiple images for your product the zoom effect will be applied to all of them. If for some reason the zoom effect is not working with multiple images please contact us and we will fix it.
https://docs.codeblackbelt.com/article/154-does-full-page-zoom-work-with-multiple-images
2020-11-24T04:24:31
CC-MAIN-2020-50
1606141171077.4
[]
docs.codeblackbelt.com
Provides access to data and methods related to Agents who are members of an Agent System. Gets access to the Agent associated with an Object. To access the Agent associated with an object, you must either use this constructor, or call Agent.System.getAgent. If an object is a member of multiple agent systems, then calling this constructor will access the first system the object is a member of. Casting from a treenode is not correct. You should use this constructor instead of the as() casting method. Using as() on an object will not work properly. Agent agent = Agent(object); // correct Agent agent = object; // auto-downcast: incorrect Agent agent = object.as(Agent); // explicit cast: incorrect
https://docs.flexsim.com/en/20.2/Reference/CodingInFlexSim/FlexScriptAPIReference/Agent/Agent.html
2020-11-24T04:09:47
CC-MAIN-2020-50
1606141171077.4
[]
docs.flexsim.com
This topic will explain how the Mass Flow Conveyor works and how to use it in your models. The Mass Flow Conveyor (MFC) is similar to the standard conveyor objects except for a few distinct differences. The biggest difference is that each MFC section (straight or curved) can define how many lanes flow items can move along. If the MFC's output rate is not sufficient to move all of the flow items off the end of the conveyor, the MFC will back up and all of the lanes will be filled with flow items. Common uses of the MFC would be for bottling lines.

The MFC uses its own drawing logic for displaying its contents. When flow items enter the MFC they are hidden and a virtual flow item is drawn. This virtual flow item may have a completely different 3D shape from the original flow item. The size of the virtual flow item is defined by the MFC and all flow items are drawn at the same size. Once a flow item exits the MFC, it will be at its original size and original 3D shape. The MFC does not modify the flow items. Since the MFC draws its own contents, it greatly improves the drawing speed of the object. The MFC is designed to handle large numbers of flow items.

There are several drawing options for how flow items should look as they move along the MFC. The fastest method is to draw an ordinary cylinder. However, a custom 3D shape or the original 3D shape of the flow item can also be drawn. These methods will be slower than drawing a simple cylinder, but can be useful when taking videos or presenting the model. The MFC can draw all flow items the same, or sections can draw their own 3D shape or color of the flow items in order to show changes in the flow item as it moves down the line. This can be useful for showing the state the flow item is in. For example, a flow item may start as a small disk, or piece of plastic, that is then blown into a plastic bottle. The bottle can then be filled with liquid and a cap placed on top. Finally, a label could be added to the bottle. This can all be done in a single MFC object with multiple sections, one for each state of the flow item.

The MFC is made up of different sections. Each section can be straight or curved. It has its own length and vertical rise; its own speed and visualization properties. Since each section can have its own speed, it may move flow items along its length at a different rate than other sections. Each section can define any number of appended sections. An appended section does not have its own speed, and as such, cannot be connected to a motor. Instead, it inherits its speed from its parent section. Additionally, appended sections cannot have decision points. Appended sections are specifically designed to increase the speed of moving flow items along the MFC. This is because the flow items on an appended section move with the flow items on the parent section. Appended sections can still define their physical layout: length, vertical rise, number of lanes and so on. If you have a section of conveyor that is simply moving flow items from point A to point B at a constant speed, appended sections are the way to go.

This next section is going to explain how the underlying logic of the MFC works. For most modellers, understanding this logic may not be necessary. However, it will help to explain how flow items are moved along the MFC and how to ensure greater accuracy or faster run speeds. As explained above in Sections and Appended Sections, each section can have its own speed.
This means that each section will create its own set of events that will move flow items along the MFC, like a ticker. The more sections there are, the more ticks or events will be created, and the more the FlexSim engine has to compute. Appended sections move at the same time as their parent section, so no events will be created by appended sections.

When the model is reset, the MFC loops through all of its sections and calculates the physical length of that section. It then takes the Bottle Diameter, as defined in the MFC properties, and divides the length by the bottle diameter. The result is the number of crosslines for that section. Each crossline can hold one flow item per lane. Below is an image showing a series of crosslines. Each colored line represents a different crossline. You'll note that the crosslines are not straight across. Instead, they pack the flow items in order to fit the maximum number of flow items in each section. When a section's timer event fires, each crossline's contents move to the next crossline. If there is insufficient space in the next crossline, only some or none of the flow items will be moved. This then causes the flow items to back up and fill additional lanes.

Each section can define a Crossline Multiplier (appended sections inherit from their parent section). The crossline multiplier gives you control over how large or small each crossline is. By default the crossline multiplier is set to one, meaning each crossline is the width of one flow item. If speed is important, increasing the crossline multiplier to two or more will reduce the number of events for that section. It will also decrease the accuracy of moving flow items from one section to another. This accuracy is determined by the speed of the section and the bottle diameter. To better explain, start with a crossline multiplier of one, a section length of one meter and a speed of 0.1 meters per second. If the bottle diameter is set to 0.1 meters there will be 10 crosslines in the section. If the flow item enters the section at time 0, it will take 10 seconds and 10 ticks (events) for the flow item to move all of the way across the section. In this simple case the accuracy is 100%. Now take this same case and change the crossline multiplier to three. This will cause each crossline to be the width of three bottle diameters. Each tick, or timer event, will move the flow item 0.3 meters. It will take the flow item three ticks to leave the section, which will occur at 9 seconds. The accuracy of the flow item's movement down the MFC has decreased, but we have also decreased the number of events the engine has to execute, and so decreased the execution time. On the other hand, you can increase the accuracy of your section by setting the crossline multiplier to something less than one. If we use the example above and change the crossline multiplier to 1/2, then each crossline will be the width of half a bottle diameter, or 0.05 meters. Each tick will occur every 0.5 seconds and will move the bottle 0.05 meters. This will double the number of events it takes to move flow items from the start of the section to the end, but it will also increase the accuracy and ensure that the flow items exit the section at the right time.

Motors can be used to control the speeds of multiple MFC sections. When the motor's speed is changed, all connected sections will change to the motor's speed. If the motor is stopped either through code, a Time Table or MTBF/MTTR object, all of the motor's connected sections will also stop.
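The crossline arithmetic above can be captured in a few lines. The sketch below is not FlexSim code (FlexSim uses FlexScript internally); it simply reproduces the worked example of a 1 m section, 0.1 m bottle diameter and 0.1 m/s speed with a configurable crossline multiplier.

```python
def crossline_timing(section_length, bottle_diameter, speed, multiplier=1.0):
    """Reproduce the tick/accuracy arithmetic from the text (illustrative only)."""
    crossline_width = bottle_diameter * multiplier           # meters per crossline
    num_crosslines = int(section_length / crossline_width)   # crosslines in the section
    tick_interval = crossline_width / speed                  # seconds between ticks
    traverse_time = num_crosslines * tick_interval           # time to cross the section
    return num_crosslines, tick_interval, traverse_time

for m in (0.5, 1.0, 3.0):
    n, dt, total = crossline_timing(1.0, 0.1, 0.1, m)
    print(f"multiplier {m}: {n} crosslines, tick every {dt:.1f}s, exits after {total:.1f}s")
# multiplier 1.0 -> 10 crosslines, 1s ticks, 10s to cross (the 100%-accuracy case)
# multiplier 3.0 -> 3 crosslines, 3s ticks, exits at 9s (fewer events, less accuracy)
# multiplier 0.5 -> 20 crosslines, 0.5s ticks, 10s to cross (more events, more accuracy)
```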
Decision points can be used to execute logic as flow items move through the MFC. When a decision point is positioned along the section, it uses the data from the crossline located at the decision point for calculating its state and firing triggers. Custom code can be written for the triggers or there is a standard list of pick options as well as some MFC specific pick options for controlling the decision point's color as well as section/motor speeds. Decision points fire events as their state changes which allows you to tie Process Flow activities to the decision points like the Event-Triggered Source and Wait for Event.
https://docs.flexsim.com/en/20.2/modules/Bottling/help/MassFlowConveyor/MassFlowConveyor.html
2020-11-24T03:52:53
CC-MAIN-2020-50
1606141171077.4
[]
docs.flexsim.com
ChangelogChangelog 1.8.14 - 2015-08-261.8.14 - 2015-08-26 AddedAdded -Fixed - - 2015-04-281.8.13 - 2015-04-28 ChangedChanged - Updated for compatibility with ExpressionEngine 2.10+. FixedFixed - Fixed a bug with Calendar:ICS_Update where interval comparisons might not correctly update. - Fixed a bug where an Assets field could trigger errors when used with a Calendar channel. 1.8.12 - 2015-02-021.8.12 - 2015-02-02 AddedAdded -. ChangedChanged - Made some minor adjustments and improvements to the Demo Templates. 1.8.11 - 2014-12-101.8.11 - 2014-12-10 ChangedChanged - Updated and refreshed demo templates. 1.8.10 - 2014-11-211.8.10 - 2014-11-21 FixedFixed - Fixed a bug where Calendar could display errors in PHP 5.6.0. - Fixed a bug where caching of some data items could take up a lot of system memory. 1.8.9 - 2014-06-181.8.9 - 2014-06-18 FixedFixed - Fixed an issue with date conditionals in Occurrences tag and variable pairs that could occur in ExpressionEngine 2.9+. 1.8.8 - 2014-05-201.8.8 - 2014-05-20 FixedFixed - Fixed an issue with date conditionals that could occur in ExpressionEngine 2.9+. 1.8.7 - 2014-05-081.8.7 - 2014-05-08 FixedFixed - Fixed an issue with date conditionals that could occur in ExpressionEngine 2.8.2+. 1.8.6 - 2014-04-131.8.6 - 2014-04-13 ChangedChanged - Updated for compatibility with EE 2.8. FixedFixed - Fixed a bug with ICS import where missing elements could possibly cause a PHP error. - Fixed a potential bug with permissions and incomplete channel data. - Fixed a bug where ICS imports/updates could malform 'end_by' rules on repeating events. 1.8.5 - 2013-09-121.8.5 - 2013-09-12 ChangedChanged - Updated iCalendar class to 2.16.12. FixedFixed - Fixed a bug where ICS updates would error and not create entries due to an entry_date error. - Fixed a bug where new ICS imports in new Calendars could result in a PHP error on publish. - Verified compatibility with EE 2.7. 1.8.4 - 2013-05-141.8.4 - 2013-05-14 FixedFixed - Fixed a bug where pagination would fail to work in some situations. - Fixed a critical bug with installing Calendar on ExpressionEngine 2.6+. 1.8.3 - 2013-05-011.8.3 - 2013-05-01 AddedAdded -. ChangedChanged - Updated for compatibility with ExpressionEngine 2.6. FixedFixed - - 2013-03-081.8.2 - 2013-03-08 AddedAdded - Added internal Demo Templates tab in control panel (replaces older "code pack" approach). ChangedChanged - Updates are now automatic and no longer require a manual update screen. FixedFixed - - 2012-12-291.8.1 - 2012-12-29 FixedFixed - Fixed a bug where Calendar would error on upgrades or new installs. 1.8.0 - 2012-12-111.8.0 - 2012-12-11 AddedAdded -. ChangedChanged - Updated the 'date_range_end' parameter to have a default value of end of the day instead of the current date and time as the 'date_range_start' parameter. FixedFixed -. RemovedRemoved - Removed support for ExpressionEngine 1.x (Calendar is now EE2 only). 1.7.0 - 2012-01-251.7.0 - 2012-01-25 AddedAdded - Added member group permissions feature to Calendar, which includes a Permission tab in the Calendar module CP, and permissions blocks for Calendar events in the EE Edit page Publish area (EE2-only). ChangedChanged - Updated pagination to support ExpressionEngine 2.4. FixedFixed - Fixed a bug where the 'rules' variable pair would not get the proper the hour and minutes for 'rule_start_date' and 'rule_end_date' variables. - Fixed a bug where calculating date ranges would possibly add the timezone offset twice, resulting in incorrect ranges. 
1.6.4 - 2011-11-161.6.4 - 2011-11-16 FixedFixed - Fixed a bug where the update function could run more than once if fired automatically by EE during the manual running of the module update. - Fixed a bug where the {occurrence_count}conditional was not working in the {occurrences}{/occurrences}variable pair. - Fixed a bug where using an invalid item with the event_name=""parameter in Calendar:Events would show all events instead of no results. - Fixed a bug where date formatting variable '%G' did not render the hour in 24-hour format without leading 0. - Fixed a bug where date formatting variable '%h' did not render the hour in 12-hour format with leading 0. - Fixed a bug where the {occurrence_duration_hours}variable was showing "0" for all-day events instead of "24 hours". - Fixed a bug where Calendar data would not save on submit/update of an entry if the site has a third party addon that modifies the Submit button area in the EE CP Publish page. 1.6.3 - 2011-09-141.6.3 - 2011-09-14 FixedFixed - Fixed a bug where using show_days/weeks/months/years parameters would not work in Calendar:Cal tag. - Fixed a bug where category_url_title's would not work with the category parameter. 1.6.2 - 2011-08-231.6.2 - 2011-08-23 FixedFixed - Fixed a bug where events could not be deleted from the Calendar CP if their parent channel entry had been deleted by other means. - Fixed a bug where dates could show before they were supposed to exist due to a calculation bug in time advancement. - Fixed a bug where a SQL error would be shown when trying to publish an entry in any channel on another MSM site. - Fixed a bug where inclusive ('&') category searching was not working. - Fixed a bug where the adjustment words 'begin' and 'end' were not being parsed from phonetic date range parameters. - Fixed a bug where the default end range for Calendar:Cal tag was "1 day from start date", and not "today". - Fixed a bug where events with just 1 occurrence would not show up in the Occurrences tag. - Fixed a bug where the calendar icon and "selected" day style in the datepickers were missing. 1.6.1 - 2011-08-121.6.1 - 2011-08-12 AddedAdded - Added defaults to all NOT NULL type columns in Calendar's SQL tables to support MySQL with Strict Mode enabled. ChangedChanged - Modified code in event checking to improve performance for repeating events that began far before the start date range. - Updated Exclude rules to now default to "Select Dates" for repeat type. FixedFixed - Fixed a bug where the updating the iCalcreator library to 2.8.x no longer fetched URIs in its parse stage. - Fixed a bug where repeat events across multiple days would not show up on the days in between the start and end dates. - Fixed a bug where showing events over multiple months in Calendar:Cal tag that spanned across years would incorrectly display the month data once a new year was hit. - Fixed a bug where event counts in month, week, and day were off. - Fixed a bug where 'U%' date formatting would output from the current month instead of the date in question. - Fixed a bug where repeat events would stop showing up after a period of time. - Fixed a visual bug in EE 2.2.0+ where days overlapped one another in calendar publish widget. - Fixed a bug where multiple dates in the same day were not respected by upcoming_occurrences_limitand prior_occurrences_limitparameters in the Calendar:Occurrences tag. - Fixed a bug where pagination would not parse {pagination_links}variable in Calendar:Events when the {occurrences}variable pair was present. 
- Fixed a bug where if the Calendar Event Channel's Dates & Options field were set to 'required', the publish form would never be able to be submitted, even if rules were created. - Fixed a bug where the 'event_never_ends' conditional would not properly parse when another rule in the same event as a never ending item had an end date. - Fixed a bug where adding a new rule when editing an event would sometimes throw an error. - Fixed a bug where the 'exclude' rule type in the Calendar Event publish area was allowing a date range to be selected when it is not supported for excludes. - Removed some expensive 'table_exists' queries where they were not needed. 1.6.0 - 2011-07-111.6.0 - 2011-07-11 AddedAdded - Converted Calendar to use Solspace Add-on Builder Framework, and Solspace Bridge for EE 1.x. - Added event_offset=""parameter in Events and Cal tags to offset events by the given number. - Added offset=""and limit=""parameters to Calendars tag. - Added {event_timeframe_total}variable in Events and Cal tags to show the number all events returned before event_limit is added. - Added pagination to all Calendar tags. - Added Calendar:Date_Widget tag in order to allow editing functionality with SafeCracker in EE 2.x. - Added category=""parameter to Calendar:Cal tag. - Added sorting by 'event_title' and 'event_start_time' to Calendar:Events tag. - Added {event_author_id},{event_author}, {event_status}variables in the Calendar:Events tag and {events}variable pair in Calendar:Cal tag. - Added {calendar_author_id},{calendar_author}, {calendar_status}variables in the Events, Cal, and Calendars tags. - Added {occurrence_author_id},{occurrence_author}, {occurrence_status}variables in the Calendar:Occurrences tag and {occurrences}variable pair in Calendar:Events tag. - Added occurrences_limit="", occurrences_offset="", and paginate=""params to the Calendar:Occurrences tag. - Added support for 'not' in event_id and calendar_id parameters. ChangedChanged - Changed themes folder paths to now be themes/calendar in EE 1.x and themes/third_party/calendar in EE 2.x. - Changed {display_each_month}current month calculation so the removal of {display_each_week}from between it would not affect it. FixedFixed - Fixed a bug where calendar fields didn't get assigned order numbers on install. - Fixed a bug where the installed channels/weblogs were not specifically using the default status_group, which could cause issues if default statuses change in the future. - Fixed a bug in ICS export where unique ID's were unique for the ICS instead of per event, making it where only 1 event per file would properly export. - Fixed an iCalendar error where multiple days weekly repeating would not properly export. Third party iCalendar bug. - Fixed a bug with iCalendar where all day events would go in as 12am to 12pm instead of actually all day. - Fixed a bug in the Calendar Event publish JavaScript that conflicted with jQuery 1.6.1 (Default for ExpressionEngine 2.2.0). - Fixed a bug where month/week/day/hour_event_count were being counted as compounding totals when wrapped in parent display_each_year/month/week tag pairs instead of resetting at the beginning of each's respective loops. - Fixed a bug where event_last_datevariable would parse as the current day instead of the last date of the event in the Calendar:Events tag. 1.5.4 - 2011-03-101.5.4 - 2011-03-10 AddedAdded - Added NSM Add-on Updater support. 
- Added extra filtering to the Edit Occurrences page in the Control Panel to allow proper editing of never ending repeating events after 100 events has been reached. - Added reverse="true" parameter to the Calendar:Occurrences tag to allow reversing of the order of the results displayed. ChangedChanged - Modified the behavior of the show_years=, show_months=, show_weeks=, and show_days=parameters across the module. They now count the current year, month, week, and day (respectively) in their counts and go to the END day of the point of direction. - Modified the {date format=""}to now parse within any {display_each_variable pair. Deprecated the {year format=""}, {month format=""}, {week format=""}, {day format=""}variable pairs. - Modified the all ICS import entry dates for events to be pre-dated 2 days from the date of import in order to avoid future-dated event channel entries from not showing up. This will positively affect the displaying of ICS imported events and will have no bearing on the actual event dates. FixedFixed - Fixed a bug where a blank action was being inserted on update or install. - Fixed a bug where never ending events would stop showing up after 100 events had occurred. - Fixed a bug where phonetic times like 'midnight' and 'now' were ONLY parsing locale instead of default English and locale. - Fixed a bug where using the phonetic time 'now' was inaccurately getting the current localized time. - Fixed a bug where subsequent rules in the Publish area was not getting properly set when the first rule contained the date picker. - Fixed a bug where checking for leap year for phonetic dates would only work in the current year. - Fixed a bug where recurring dates on a second rule when an event was edited would not show up correctly. - Fixed a bug where the event_last_datevariable on repeat events was not being parsed correctly. - Fixed a bug where a rule with a single day would be skipped over during event display parsing if the second rule was more complex. - Fixed a bug with the Calendar:Mini tag where time_range_end was not set and caused all events to always be skipped. - Fixed a bug where creating an event with an end date past 2012-12-31 would result in an 'end of world' error when the calendar was set to 'Mayan'. - Fixed a bug with {exception_style tags in the Events loop which was causing them to never parse. - Fixed a bug where exception parsing was incorrectly caching the formatted date. - Fixed a bug where exclusions (EXDATE) were never getting inserted into items exported as iCalendar files. - Fixed a bug where previous_occurrences_limit=""parameter was not being respected when showing events that never ended. - Fixed a bug where occurrences where not getting all event data or conditionals. - Fixed a bug where the Channel Entries API was not always being loaded in EE 2.x. - Removed the unused CALENDAR_ACTIONS constant as it was unused. 1.5.3 - 2011-01-121.5.3 - 2011-01-12 AddedAdded - Added an {if event_never_ends}conditional to the Calendar:Events tag to check to see if an event repeats endlessly. ChangedChanged - Updated and enhanced Calendar module CP area (in EE 2.x) to conform completely to native EE styling. - Updated and enhanced Calendar UI in Publish page to match the rest of native EE 1.x styling. - Modified the Calendar JS and CSS output to no longer display on non-Calendar channels. FixedFixed - Fixed a bug where filtering, sorting, and pagination did not work on the Events and Occurrences Edit page in Calendar CP. 
- Fixed a bug where the Calendar Publish UI would not properly respect the "Datepicker Date Format" preference when adding rules. - Fixed a bug where deleting a Calendar channel entry would not remove all Event entries assigned to it. - Fixed a bug where ICS importing would not work correctly in EE 2.x. - Fixed a bug where the SAEF date pickers would appear behind some fields/divs in EE 1.x. - Fixed a bug where the hour, day, week, month, and year _event_countand _event_totalvariables and conditionals did not always parse correctly. - Fixed a bug where the "First Day of the Week" preference was not being respected on date pickers. 1.5.2 - 2010-11-241.5.2 - 2010-11-24 AddedAdded - Added a helper function because gregoriantojdmay not exist if the php-calendar module isn't installed with PHP. FixedFixed - Fixed a bug where the Calendar channels site_idwas being incorrectly called on sites with MSM installed. - Fixed a bug where some event with a date that did not belong in the requested range where showing up in the Cal tag. - Fixed a bug on install where an expected constant was missing for installing preference defaults. - Fixed a bug where occurrence data did not properly get removed on edit. - Fixed a bug where the ICS_Update tag would not work on the front end because the DSP class was not yet initialized. - Fixed a bug where live auto url_title creation was not working in the Publish page in EE 2.x. - Fixed a bug where an occurrence edit would not receive the correct 'all day' information on publish edit. - Fixed a bug where retrieval of occurrence data that was all day would give the end time as 0000 instead of 2400 Fixed a bug where the end_by_date of an event would be set to the current date if it was set to an earlier date than the current day. - Fixed a bug where the all day checkbox was not respected for occurrence includes if the select dates date picker was used. - Fixed a bug occurrences that lasted more than one day were being lumped into a select date widget and only respected the starting day when editing an entry with include dates. - Fixed a bug where flexible Date parsing (ex: start_date="8 months ago") wouldn't work when alternate language files were loaded. - Fixed a bug where the {event_first_date}variable would output the incorrect time. - Fixed a bug where the end_by_date dropdown would appear incorrectly in some instances in EE 2.x. DeprecatedDeprecated - Deprecated the {event_start_time}, {event_start_year}, {event_start_month}, {event_start_day}, {event_start_hour}, {event_start_minute}variables in favor of {event_start_date format=""}. - Deprecated the {event_end_time}, {event_end_year}, {event_end_month}, {event_end_day}, {event_end_hour}, {event_end_minute}variables in favor of {event_end_date format=""}. - Deprecated the {occurrence_start_time}, {occurrence_start_year}, {occurrence_start_month}, {occurrence_start_day}variables in favor of {occurrence_start_date format=""}. - Deprecated the {occurrence_end_time}, {occurrence_end_year}, {occurrence_end_month}, {occurrence_end_day}variables in favor of {occurrence_end_date format=""}. 1.5.1 - 2010-11-031.5.1 - 2010-11-03 AddedAdded - Added new "calendar_delete_events" extension hook for Calendar to better handle deleting of event data. ChangedChanged - Modified the channel/weblog default limit to be "500" instead of "100" for sites with a larger amount of events. FixedFixed - Fixed a bug where saving Preferences in the Calendar module CP would erase the default Calendar channels/weblogs from the prefs. 
NOTE: If you changed the channel/weblog shortnames, you MUST change them back to defaults (Calendars: calendar_calendars, Events: calendar_events) BEFORE you upgrade so the upgrade script can fix it for you. You can change them back afterwards. - Fixed a bug where ICS input into a Calendar entry would trigger errors for calendar entries in EE 2.x Publish page. - Fixed a bug where the dynamic=""parameter was defaulting to ON instead of OFF. - Fixed a bug where the Occurrences area in the Calendar module CP was triggering a PHP error in some cases. 1.5.0 - 2010-10-281.5.0 - 2010-10-28 AddedAdded - Added a "calendar_submit" class to the list of things that, upon click, can trigger Calendar's JS to kick into action for saving entries in the SAEF. Just add class="calendar_submit" to a submit button or whatever. Useful for when name="submit" is used by some other script on the page. - Added the {event_first_date}variable to the Calendar:Cal tag. - Added ability to use "not " in calendar_id and calendar_name parameters in the Calendar:Calendars tag. - Added {if calendar_no_results}conditional (to replace 'no_results') to all template tags to avoid variable collisions. - Added {if edited_occurrence}conditional to Calendar:Cal tag, which evaluates to TRUE if the event occurrence has been edited. Also paired this with a subsequent {event_parent_id}to parse the parent event entry ID. - Added include_jqui="off" parameter to {exp:calendar:datepicker_js}and {exp:calendar:datepicker_css}tags to stop Calendar's jQuery UI and CSS from loading in the event you have another instance of jQuery UI loading. ChangedChanged - Updated Calendar to support jQuery 1.4.2 and jQuery UI 1.8.4. - Modified Calendar to look at the weblog ID of the special "Calendars" and "Events" weblogs instead of short_name. - Modified the Calendar UI to contain a checkbox option for events with custom edited occurrences that lets the user delete them all automatically if they make DATE changes to an event (rather than have those occurrences be orphaned). - Modified dynamic=""parameter to now default to "off" in the Calendar:Cal tag. FixedFixed - Fixed a bug that caused modified occurrences to be wiped out if the parent entry was edited. - Fixed a bug where the {occurrence_start_time}and {occurrence_end_time}variables (without formatting parameters) outputted a date in YMD format rather than a 24-hour time. - Fixed a bug where events with multiple rules applying to the same day would only have one of those rules actually applied each day. - Fixed a bug where all-day events didn't display correctly in the absence of an {events}tag pair in the Calendar:Cal tag. - Fixed a bug where specifying more than one calendar in the calendar_name="" parameter would behave as though no calendar name had been specified. - Fixed a bug where the word "tomorrow" as a date parameter might be parsed incorrectly. - Fixed a bug where if a calendar with a {display_each_month}variable pair did not end on the last day of the month, the last month's {month}variables would be set to the previous month. - Fixed a bug where some icon images in the CP were referencing an incorrect URL path. - Fixed a bug that caused all day events to not be counted for the purposes of variables like {month_event_total}. - Fixed a bug where the event_limit=""parameter incorrectly sort the event array before applying the limit. - Fixed a bug where a multi-day, repeating event that spans across two or more months will not display after the initial occurrence in some circumstances. 
- Fixed a bug in the Calendar:Events tag where no results may be returned in cases where start and end date parameters are not provided. - Fixed a bug where the {event_duration_minutes}variable would output "0" instead of expected minutes. - Fixed a bug where %U date variable formatting character did not work in date variables. - Fixed a bug where literal newline indicators ( \\n) in incoming ICS files were being trimmed to "n", rather than being treated as actual newlines. - Fixed a bug where carriage returns ( \\r) in incoming ICS files were resulting in newlines in incoming data where there shouldn't have been newlines. - Fixed a JS bug where fiddling with one rule's "End" settings would affect other rules' "End" settings as well. - Fixed a bug where the {time}variable with date formatting did not parse. - Fixed a bug with Calendar:Week tag where the output of the {week}variable was often incorrect. - Fixed a bug where Calendar would have some theme issue if the EE install was inside a subdirectory. - Fixed a bug where when using an alternate language on your site, and using the Calendar SAEF Form, the words in the UI on additional rules are incorrectly translated. - Fixed a bug where the {week_event_count}and {week_event_total}variables did not parse. - Fixed a bug where clicking on the label for the "All Day Event" checkbox could check/uncheck the wrong rule's checkbox. 1.0.4 - 2010-05-241.0.4 - 2010-05-24 FixedFixed - Fixed a bug where {day_is_today}wasn't always true on the current day when using the Mini, Week, Day, and Month tags. - Fixed a bug where events with a stop_by rule and over 100 occurrences would be truncated at the 100th occurrence thanks to an incorrectly set last_date value in the DB. - Fixed a bug where the dynamic="on" parameter did not work. - Fixed a couple bugs that sometimes prevented the enable=""parameter from working correctly. - Fixed a bug where the {event_last_date}variable could cause PHP errors to be displayed. - Fixed a bug where PHP errors would display when using the iCalendar export feature. - Fixed a bug where the calendar_name=""parameter was not working properly in the Calendars tag. - Fixed a bug where the date_range_start= and date_range_end= parameters did not work with " ...ago" in the timeframes. - Fixed a bug where the specifying Time in date range parameters had no effect on the Cal and Events tags. - Fixed a bug where dates that didn't fall within the proscribed range were being added to an event object's dates array. - Fixed a bug where the status=""parameter would not work with "not" syntax. 1.0.3 - 2010-05-131.0.3 - 2010-05-13 AddedAdded - Added event_limit=""parameter to Cal, Day, Today, Week, Month, and Mini tags. Limits the number of events that will be displayed. Defaults to no limit. - Added the {event_first_date}date variable to Calendar:Events tag. ChangedChanged - Modified the {rule_days_of_week}and {rule_days_of_month}variable pairs to output the full day of the week (ex "Monday"), and day of month (ex: "15") rather than a shorthand value. - Modified Calendar to play nicer with data for other addons such as Playa. Added {calendar_ignore_title}and {calendar_ignore_url_title}variables for use in these circumstances, which display the equivalent of {title}and {url_title}. FixedFixed - Fixed a bug where the first_day_of_week=""parameter was not working in the Calendar:Cal tag. - Fixed a bug where some extensions could cause submit_new_entry_end to run without a valid entry ID, causing blank events. 
- Fixed a bug where two or more multi-day events would be merged into a single rule if their start and end times were the same. - Fixed a bug where rule data would sometimes not display correctly after Quick Save. - Fixed a bug where repeating events that spanned across years (ex: December 31 - January 1) would sometimes not display in the latter year. - Fixed a bug where events that start before 10am would sometimes not show up in the Calendar:Cal tag. - Fixed a bug where specifying multiple statuses in the Calendar:Cal tag would result in no results being returned. - Fixed a bug where the {event_start_hour}, {event_start_minute}, {event_end_hour}, {event_end_minute}variables would not parse in the Calendar:Events tag. - Fixed a bug where PHP would display when using the iCalendar export feature. 1.0.2 - 2010-04-191.0.2 - 2010-04-19 AddedAdded - Added support for Quick Save in the EE CP publish/edit form. - Added a "date format" preference for customizing the date as displayed in the datepicker. - Added event deletion capabilities to Events tab in module CP. - Added event_limit=""and event_offset=""parameters to {exp:calendar:events}. ChangedChanged - Changed how EE's Extensions class is called (requires Bridge 1.0.3+). FixedFixed - Removed a couple leftover references to $TMPL. - Fixed a bug where a masked CP path (i.e. not index.php) caused problems. - Fixed a bug where in certain circumstances the month data returned by {exp:calendar:cal}would be off by one month. - Fixed a bug in iCalendar parsing where an over-enthusiastic find/replace had mangled some variable names. 1.0.1 - 2010-04-131.0.1 - 2010-04-13 AddedAdded - Added a feature where changing an event's calendar back to "Select a Calendar" will delete that event's calendar data. ChangedChanged - Modified Flow UI CSS to stop the Sites dropdown menu from hiding behind Flow UI elements in MSM installs. - Modified how "multi-day" is defined in the code so that late-night events (i.e. 11:00pm - 12:30am) are treated correctly. - Changed the default timezone for new calendars to UTC. - Improved time and timezone handling in iCalendar import process. - Improved date caching for improved performance. FixedFixed - Fixed a bug where the show_years=""parameter did not work. - Fixed a bug where editing an event that didn't have calendar/date details set would throw a MySQL error. - Fixed a JS bug that led to rules not being removed correctly in some circumstances. - Fixed a bug where an extra event occurrence might be output if a rule's first occurrence comes after the start date of the rule. - Fixed a bug where related events weren't parsing correctly inside the {events}{/events}variable pair. - Fixed a JS bug that allowed an event to be submitted with only exclude rules. 1.0.0 - 2010-04-091.0.0 - 2010-04-09 AddedAdded - Initial release.
https://docs.solspace.com/expressionengine/calendar/v1/setup/changelog.html
2020-11-24T04:29:36
CC-MAIN-2020-50
1606141171077.4
[]
docs.solspace.com
Updating Repeater Rows In this tutorial, you'll learn how to dynamically update data in a repeater's dataset using the Update Rows action. Note Click here to download the completed RP file for this tutorial. 1. Widget Setup1. Widget Setup Open a new RP file and open Page 1 on the canvas. Drag a repeater widget onto the canvas and double-click it to open its item for editing. 2. Update the Row's Column0 Value When the Rectangle Is Clicked2. Update the Row's Column0 Value When the Rectangle Is Clicked Select the rectangle widget and click New Interaction in the Interactions pane. Select the Click or Tap event in the list that appears, and then select the Update Rows action. Select the repeater widget in the Target dropdown. Leave the This radio button selected. Click +Select Column and select Column0 in the list. Leave Value selected in the dropdown that appears, and enter Clicked!in the text field next to it. Click OK to save the action. 3. Preview the Interaction3. Preview the Interaction Preview the page in your web browser and click a rectangle in the repeater. Its text should update to "Clicked!".
https://docs.axure.com/axure-rp/reference/updating-repeater-rows/
2020-11-24T04:06:41
CC-MAIN-2020-50
1606141171077.4
[array(['/assets/screenshots/tutorials/repeaters-updating-rows-setup.png', None], dtype=object) array(['/assets/screenshots/tutorials/repeaters-updating-rows-onclick.png', None], dtype=object) ]
docs.axure.com
Viewing alert sequence clusters The Alert Sequence Clusters window provides details of the alert sequences detected from the existing alert data and sequences related to an inference. The detected alert sequences are unmodified sequences fetched from the existing alert data. Similar alert sequences are grouped together. The grouping provides a count that explains the number of times alerts are triggered in a certain sequence. The alert sequence clusters window serves as a verification of the ML correlation. For example, if ML (machine learning) correlates alerts cpu.utilization and system.ping together, use the Alert Sequence Clusters window to find the sequences that have cpu.utilization and system.ping together. Viewing alert sequences detected from the existing alert data To view the alert sequences detected from existing alert data: - From All Clients, select a client. - Go to Setup > Alert Management > Alert Correlation. - Click on an ML-based alert correlation policy. Note: To easily identify an ML-based policy, check for the status. The status would be one of these; Training Started, Ready, and so on. - From the Policy Definition field, click Detected alert sequence patterns in alert data. The alert sequences displayed on the Alert Sequence Clusters window are the top alert sequences. Expand an alert sequence to view the sub-sequence clusters. Alert Sequence Cluster Metrics Enter the required alert metric in the search box to fetch the results of a particular alert sequence. The alerts sequences that match the entered alert metric are highlighted in blue. Alert Sequence Cluster Window Viewing alert sequences related to inferences To view alert sequences related to an inference: - From All Clients, select Alerts and click on the required inference name. - Click the Correlated Alerts tab. - From the list of correlated alerts, click Show detected alert sequence patterns. Detected Sequences of an Inference Viewing ML status Machine Learning (ML) status describes the various stages of machine learning implementation in a policy from analyzing a sequence to correlating alerts. Viewing processed inferences To view the number of inferences associated with a policy: - From All Clients, select a client. - Go to Setup > Alert Management > Alert Correlation and select the required policy. - Click on the number in the Processed Inferences column to view the details of the inferences. Number of Processed Inferences The list of processed inferences appears on the Alerts Browser page. List of processed inferences Alert correlation timing The time gap between each adjacent alert is 5 minutes. Only those alerts taking place within a 5-minute interval are correlated. If alerts are continuously generated for every 5 minutes, the overall time of a correlation can be much longer than 5 minutes. Take these example alert correlations: - A1: 10:00 - A2: 10:04 - A3: 10:07 - A4: 10:14 A1, A2, A3 are correlated, as the gap between adjacent alerts is less than 5 minutes. A4 is excluded since the gap between A4 and A3 is more than 5 minutes. In this example, the overall correlation time is 7 minutes.
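The 5-minute adjacency rule can be sketched as a simple grouping over sorted alert timestamps. This is only an illustration of the timing rule described above, not OpsRamp's actual correlation engine, and the alert times are the ones from the A1-A4 example.

```python
from datetime import datetime, timedelta

GAP = timedelta(minutes=5)

def correlate(timestamps):
    """Group sorted alert times so adjacent alerts within 5 minutes share a cluster."""
    clusters = []
    for t in sorted(timestamps):
        if clusters and t - clusters[-1][-1] <= GAP:
            clusters[-1].append(t)    # within 5 minutes of the previous alert
        else:
            clusters.append([t])      # gap too large: start a new cluster
    return clusters

alerts = [datetime(2020, 1, 1, 10, 0),   # A1
          datetime(2020, 1, 1, 10, 4),   # A2
          datetime(2020, 1, 1, 10, 7),   # A3
          datetime(2020, 1, 1, 10, 14)]  # A4
for cluster in correlate(alerts):
    print([t.strftime("%H:%M") for t in cluster])
# -> ['10:00', '10:04', '10:07'] and ['10:14'], matching the example above,
#    with an overall correlation time of 7 minutes for the first cluster.
```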
https://docs.opsramp.com/solutions/events-incidents/alert-correlation/machine-learning-reference/
2020-11-24T03:05:15
CC-MAIN-2020-50
1606141171077.4
[array(['https://docsmedia.opsramp.com/screenshots/Alerts/Alert-Correlation/alert-sequence-clusters-metrics.png', 'Alert Sequence Cluster Metrics'], dtype=object) array(['https://docsmedia.opsramp.com/screenshots/Alerts/Alert-Correlation/alert-sequence-clusters-window-search.png', 'Alert Sequence Cluster Window'], dtype=object) array(['https://docsmedia.opsramp.com/screenshots/Alerts/Alert-Correlation/detected-sequences-of-an-inference.png', 'Detected Sequences of an Inference'], dtype=object) array(['https://docsmedia.opsramp.com/screenshots/Alerts/Alert-Correlation/processed-inferences.png', 'Number of Processed Inferences'], dtype=object) array(['https://docsmedia.opsramp.com/screenshots/Alerts/Alert-Correlation/list-of-processed-inferences.png', 'List of processed inferences'], dtype=object) ]
docs.opsramp.com
Tutorial

Welcome to the Hy tutorial!

In a nutshell, Hy is a Lisp dialect, but one that converts its structure into Python ... literally a conversion into Python's abstract syntax tree! (Or to put it in more crude terms, Hy is lisp-stick on a Python!)

Basic intro to Lisp for Pythonistas

Okay, maybe you've never used Lisp before, but you've used Python!

A "hello world" program in Hy is actually super simple. Let's try it:

(print "hello world")

See? Easy! As you may have guessed, this is the same as the Python version of:

print "hello world"

To add up some super simple math, we could do:

(+ 1 3)

which would return 4 and would be the equivalent of:

1 + 3

What you'll notice is that the first item in the list is the function being called and the rest of the arguments are the arguments being passed in. In fact, in Hy (as with most Lisps) we can pass in multiple arguments to the plus operator:

(+ 1 3 55)

which would return 59.

Maybe you've heard of Lisp before but don't know much about it. Lisp isn't as hard as you might think, and Hy inherits from Python, so Hy is a great way to start learning Lisp. The main thing that's obvious about Lisp is that there are a lot of parentheses. This might seem confusing at first, but it isn't so hard. Let's look at some simple math that's wrapped in a bunch of parentheses that we could enter into the Hy interpreter:

(setv result (- (/ (+ 1 3 88) 2) 8))

This would return 38. But why? Well, we could look at the equivalent expression in Python:

result = ((1 + 3 + 88) / 2) - 8

If you were to try to figure out how the above works in Python, you'd of course figure out the result by solving each inner parenthesis. That's the same basic idea in Hy. Let's try this exercise first in Python:

result = ((1 + 3 + 88) / 2) - 8
# simplified to...
result = (92 / 2) - 8
# simplified to...
result = 46 - 8
# simplified to...
result = 38

Now let's try the same thing in Hy:

(setv result (- (/ (+ 1 3 88) 2) 8))
; simplified to...
(setv result (- (/ 92 2) 8))
; simplified to...
(setv result (- 46 8))
; simplified to...
(setv result 38)

As you probably guessed, this last expression with setv means to assign the variable "result" to 38.

See? Not too hard!

This is the basic premise of Lisp. Lisp stands for "list processing"; this means that the structure of the program is actually lists of lists. (If you're familiar with Python lists, imagine the entire same structure as above but with square brackets instead, and you'll be able to see the structure above as both a program and a data structure.) This is easier to understand with more examples, so let's write a simple Python program, test it, and then show the equivalent Hy program:

def simple_conversation():
    print "Hello!  I'd like to get to know you.  Tell me about yourself!"
    name = raw_input("What is your name? ")
    age = raw_input("What is your age? ")
    print "Hello " + name + "!  I see you are " + age + " years old."

simple_conversation()

If we ran this program, it might go like:

Hello!  I'd like to get to know you.  Tell me about yourself!
What is your name? Gary
What is your age? 38
Hello Gary!  I see you are 38 years old.

Now let's look at the equivalent Hy program:

(defn simple-conversation []
  (print "Hello!  I'd like to get to know you.  Tell me about yourself!")
  (setv name (raw-input "What is your name? "))
  (setv age (raw-input "What is your age? "))
  (print (+ "Hello " name "!  I see you are " age " years old.")))

(simple-conversation)

If you look at the above program, as long as you remember that the first element in each list of the program is the function (or macro... we'll get to those later) being called and that the rest are the arguments, it's pretty easy to figure out what this all means. (As you probably also guessed, defn is how you define functions in Hy.)

Still, lots of people find this confusing at first because there are so many parentheses, but there are plenty of things that can help make this easier: keep indentation nice and use an editor with parenthesis matching (this will help you figure out what each parenthesis pairs up with), and things will start to feel comfortable.

There are some advantages to having a code structure that is itself the very simple data structure Lisp is based on. For one thing, it means that your programs are easy to parse and that the entire actual structure of the program is very clearly exposed to you. (There's an extra step in Hy where the structure you see is converted to Python's own representations ... in "purer" Lisps such as Common Lisp or Emacs Lisp, the data structure you see in the code and the data structure that is executed are much closer to being literally the same thing.)

Another implication of this is macros: if a program's structure is a simple data structure, that means you can write code that writes code very easily, meaning that implementing entirely new language features can be very fast. Before Hy, this wasn't very possible for Python programmers ... now you too can make use of macros' incredible power (just be careful to not aim them footward)!

Hy is a Lisp-flavored Python

Hy converts to Python's own abstract syntax tree, so you'll soon start to find that all the familiar power of Python is at your fingertips. You have full access to Python's data types and standard library in Hy. Let's experiment with this in the Hy interpreter:

=> [1 2 3]
[1, 2, 3]
=> {"dog" "bark"
...  "cat" "meow"}
{'dog': 'bark', 'cat': 'meow'}
=> (, 1 2 3)
(1, 2, 3)
=> #{3 1 2}
{1, 2, 3}
=> 1/2
Fraction(1, 2)

Notice the last two lines: Hy has a fraction literal like Clojure.

If you start Hy like this (a shell alias might be helpful):

$ hy --repl-output-fn=hy.contrib.hy-repr.hy-repr

the interactive mode will use hy-repr instead of Python's native repr function to print out values, so you'll see values in Hy syntax rather than Python syntax:

=> [1 2 3]
[1 2 3]
=> {"dog" "bark"
...  "cat" "meow"}
{"dog" "bark" "cat" "meow"}

If you are familiar with other Lisps, you may be interested that Hy supports the Common Lisp method of quoting:

=> '(1 2 3)
(1L 2L 3L)

You also have access to all the built-in types' nice methods:

=> (.strip " fooooo ")
"fooooo"

What's this? Yes indeed, this is precisely the same as:

" fooooo ".strip()

That's right---Lisp with dot notation! If we have this string assigned as a variable, we can also do the following:

(setv this-string " fooooo ")
(this-string.strip)

What about conditionals?:

(if (try-some-thing)
  (print "this is if true")
  (print "this is if false"))

As you can tell above, the first argument to if is a truth test, the second argument is the body if true, and the third argument (optional!) is the body if false (i.e., else).

If you need to do more complex conditionals, you'll find that you don't have elif available in Hy. Instead, you should use something called cond. In Python, you might do something like:

somevar = 33
if somevar > 50:
    print "That variable is too big!"
elif somevar < 10:
    print "That variable is too small!"
else:
    print "That variable is jussssst right!"

In Hy, you would do:

(setv somevar 33)
(cond
  [(> somevar 50)
   (print "That variable is too big!")]
  [(< somevar 10)
   (print "That variable is too small!")]
  [True
   (print "That variable is jussssst right!")])

What you'll notice is that cond alternates between a condition that is evaluated and checked for truth or falsity, and a bit of code to execute if that condition turns out to be true. You'll also notice that the else is implemented at the end simply by checking for True -- that's because True will always be true, so if we get this far, we'll always run that one!

You might notice above that if you have code like:

(if some-condition
  (body-if-true)
  (body-if-false))

But wait! What if you want to execute more than one statement in the body of one of these?

You can do the following:

(if (try-some-thing)
  (do
    (print "this is if true")
    (print "and why not, let's keep talking about how true it is!"))
  (print "this one's still simply just false"))

You can see that we used do to wrap multiple statements. If you're familiar with other Lisps, this is the equivalent of progn elsewhere.

Comments start with semicolons:

(print "this will run")
; (print "but this will not")
(+ 1 2 3)  ; we'll execute the addition, but not this comment!

Hashbang (#!) syntax is supported:

#! /usr/bin/env hy
(print "Make me executable, and run me!")

Looping is not hard but has a kind of special structure. In Python, we might do:

for i in range(10):
    print "'i' is now at " + str(i)

The equivalent in Hy would be:

(for [i (range 10)]
  (print (+ "'i' is now at " (str i))))

You can also import and make use of various Python libraries. For example:

(import os)

(if (os.path.isdir "/tmp/somedir")
  (os.mkdir "/tmp/somedir/anotherdir")
  (print "Hey, that path isn't there!"))

Python's context managers (with statements) are used like this:

(with [f (open "/tmp/data.in")]
  (print (.read f)))

which is equivalent to:

with open("/tmp/data.in") as f:
    print f.read()

And yes, we do have list comprehensions! In Python you might do:

odds_squared = [
  pow(num, 2)
  for num in range(100)
  if num % 2 == 1]

In Hy, you could do this like:

(setv odds-squared
  (list-comp
    (pow num 2)
    (num (range 100))
    (= (% num 2) 1)))

; And, an example stolen shamelessly from a Clojure page:
; Let's list all the blocks of a chessboard:

(list-comp
  (, x y)
  (x (range 8)
   y "ABCDEFGH"))

; [(0, 'A'), (0, 'B'), (0, 'C'), (0, 'D'), (0, 'E'), (0, 'F'), (0, 'G'), (0, 'H'),
;  (1, 'A'), (1, 'B'), (1, 'C'), (1, 'D'), (1, 'E'), (1, 'F'), (1, 'G'), (1, 'H'),
;  (2, 'A'), (2, 'B'), (2, 'C'), (2, 'D'), (2, 'E'), (2, 'F'), (2, 'G'), (2, 'H'),
;  (3, 'A'), (3, 'B'), (3, 'C'), (3, 'D'), (3, 'E'), (3, 'F'), (3, 'G'), (3, 'H'),
;  (4, 'A'), (4, 'B'), (4, 'C'), (4, 'D'), (4, 'E'), (4, 'F'), (4, 'G'), (4, 'H'),
;  (5, 'A'), (5, 'B'), (5, 'C'), (5, 'D'), (5, 'E'), (5, 'F'), (5, 'G'), (5, 'H'),
;  (6, 'A'), (6, 'B'), (6, 'C'), (6, 'D'), (6, 'E'), (6, 'F'), (6, 'G'), (6, 'H'),
;  (7, 'A'), (7, 'B'), (7, 'C'), (7, 'D'), (7, 'E'), (7, 'F'), (7, 'G'), (7, 'H')]

Python has support for various fancy arguments and keyword arguments. In Python we might see:

>>> def optional_arg(pos1, pos2, keyword1=None, keyword2=42):
...     return [pos1, pos2, keyword1, keyword2]
...
>>> optional_arg(1, 2)
[1, 2, None, 42]
>>> optional_arg(1, 2, 3, 4)
[1, 2, 3, 4]
>>> optional_arg(keyword1=1, pos2=2, pos1=3, keyword2=4)
[3, 2, 1, 4]

The same thing in Hy:

=> (defn optional-arg [pos1 pos2 &optional keyword1 [keyword2 42]]
...  [pos1 pos2 keyword1 keyword2])
=> (optional-arg 1 2)
[1 2 None 42]
=> (optional-arg 1 2 3 4)
[1 2 3 4]

If you're running a version of Hy past 0.10.1 (e.g., git master), there's also a nice new keyword argument syntax:

=> (optional-arg :keyword1 1
...              :pos2 2
...              :pos1 3
...              :keyword2 4)
[3, 2, 1, 4]

Otherwise, you can always use apply. But what's apply?

Are you familiar with passing in *args and **kwargs in Python?:

>>> args = [1, 2]
>>> kwargs = {"keyword2": 3,
...           "keyword1": 4}
>>> optional_arg(*args, **kwargs)

We can reproduce this with apply:

=> (setv args [1 2])
=> (setv kwargs {"keyword2" 3
...              "keyword1" 4})
=> (apply optional-arg args kwargs)
[1, 2, 4, 3]

There's also a dictionary-style keyword arguments construction that looks like:

(defn another-style [&key {"key1" "val1" "key2" "val2"}]
  [key1 key2])

The difference here is that since it's a dictionary, you can't rely on any specific ordering to the arguments.

Hy also supports *args and **kwargs. In Python:

def some_func(foo, bar, *args, **kwargs):
    import pprint
    pprint.pprint((foo, bar, args, kwargs))

The Hy equivalent:

(defn some-func [foo bar &rest args &kwargs kwargs]
  (import pprint)
  (pprint.pprint (, foo bar args kwargs)))

Finally, of course we need classes! In Python, we might have a class like:

class FooBar(object):
    """ Yet Another Example Class """

    def __init__(self, x):
        self.x = x

    def get_x(self):
        """ Return our copy of x """
        return self.x

And we might use it like:

bar = FooBar(1)
print bar.get_x()

In Hy:

(defclass FooBar [object]
  "Yet Another Example Class"

  (defn --init-- [self x]
    (setv self.x x))

  (defn get-x [self]
    "Return our copy of x"
    self.x))

And we can use it like:

(setv bar (FooBar 1))
(print (bar.get-x))

Or using the leading dot syntax!

(print (.get-x (FooBar 1)))

You can also do class-level attributes. In Python:

class Customer(models.Model):
    name = models.CharField(max_length=255)
    address = models.TextField()
    notes = models.TextField()

In Hy:

(defclass Customer [models.Model]
  [name (models.CharField :max-length 255)
   address (models.TextField)
   notes (models.TextField)])

Macros

One really powerful feature of Hy is macros. They are small functions that are used to generate code (or data). When a program written in Hy is started, the macros are executed and their output is placed in the program source. After this, the program starts executing normally. Here is a very simple example:

=> (defmacro hello [person]
...  `(print "Hello there," ~person))
=> (hello "Tuukka")
Hello there, Tuukka

The thing to notice here is that the hello macro doesn't output anything on screen. Instead, it creates a piece of code that is then executed and prints to the screen. This macro writes a piece of the program that looks like this (provided that we used "Tuukka" as the parameter):

(print "Hello there," "Tuukka")

We can also manipulate code with macros:

=> (defmacro rev [code]
...  (setv op (last code) params (list (butlast code)))
...  `(~op ~@params))
=> (rev (1 2 3 +))
6

The code that was generated with this macro just switched around some of the elements, so by the time the program starts executing, it actually reads:

(+ 1 2 3)

Sometimes it's nice to be able to call a one-parameter macro without parentheses. Sharp macros allow this. The name of a sharp macro must be only one character long, but since Hy handles Unicode well, we aren't going to run out of characters any time soon:

=> (defsharp ↻ [code]
...  (setv op (last code) params (list (butlast code)))
...  `(~op ~@params))
=> #↻(1 2 3 +)
6

Macros are useful when one wishes to extend Hy or write their own language on top of it. Many features of Hy are macros, like when, cond and ->.

What if you want to use a macro that's defined in a different module? The special form import won't help, because it merely translates to a Python import statement that's executed at run-time, and macros are expanded at compile-time, that is, during the translation from Hy to Python. Instead, use require, which imports the module and makes macros available at compile-time. require uses the same syntax as import.

=> (require tutorial.macros)
=> (tutorial.macros.rev (1 2 3 +))
6

Hy <-> Python interop

Using Hy from Python

You can use Hy modules in Python!

If you save the following in greetings.hy:

(defn greet [name] (print "hello from hy," name))

Then you can use it directly from Python, by importing Hy before importing the module. In Python:

import hy
import greetings

greetings.greet("Foo")

Using Python from Hy

You can also use any Python module in Hy!

If you save the following in greetings.py:

def greet(name):
    print("hello, %s" % (name))

You can use it in Hy (see import):

(import greetings)
(.greet greetings "foo")

More information on Hy <-> Python interop.

Protips!

Hy also features something known as the "threading macro", a really neat feature of Clojure's. The "threading macro" (written as ->) is used to avoid deep nesting of expressions. The threading macro inserts each expression into the next expression's first argument place. Let's take the classic:

(loop (print (eval (read))))

Rather than write it like that, we can write it as follows:

(-> (read) (eval) (print) (loop))

Now, using python-sh, we can show how the threading macro (because of python-sh's setup) can be used like a pipe:

=> (import [sh [cat grep wc]])
=> (-> (cat "/usr/share/dict/words") (grep "-E" "^hy") (wc "-l"))
210

Which, of course, expands out to:

(wc (grep (cat "/usr/share/dict/words") "-E" "^hy") "-l")

Much more readable, no? Use the threading macro!
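To tie a couple of these ideas together, here is a minimal sketch that combines the dot notation and the threading macro shown above. It is written against the same older Hy syntax used throughout this tutorial, so it may need adjusting for newer releases:

;; A minimal sketch, assuming the same Hy version as the rest of this tutorial.
;; Each form's result becomes the first argument of the next form.
(-> " fooooo "
    (.strip)    ; expands to (.strip " fooooo "), giving "fooooo"
    (.upper)    ; then (.upper ...), giving "FOOOOO"
    (print))    ; finally (print ...), which prints FOOOOO

That rewrite is exactly what happened in the python-sh pipeline above: the threading macro turned a flat, readable chain into the nested call you would otherwise have to write by hand.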
https://hy.readthedocs.io/en/stable/tutorial.html
2017-12-11T03:38:22
CC-MAIN-2017-51
1512948512121.15
[]
hy.readthedocs.io
OnClientRequestStart

The OnClientRequestStart event occurs when a request to the Web Service is about to be sent. The event is raised only when the Gantt is bound to a Web Service, and it is raised for every request sent to the service, including all CRUD operations on tasks, dependencies, resources and assignments.

The event handler receives two parameters:
- The instance of the Gantt control firing the event.
- An eventArgs parameter containing the following methods:
  - get_type returns the type of the request.
  - get_context returns a context object (implements IDictionary) that is passed to the Web Service method that handles the request.
  - set_cancel lets you cancel the event and stop the request.

<telerik:RadGantt runat="server" OnClientRequestStart="OnClientRequestStart">
</telerik:RadGantt>

function OnClientRequestStart(sender, eventArgs) {
    eventArgs.set_cancel(true);
}

The event can be used to send additional information to the Web Service.
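As a sketch of that last point, a handler could attach extra data to the context dictionary so that it reaches the Web Service method. The key and value below are made up for illustration and are not part of the Gantt API:

function OnClientRequestStart(sender, eventArgs) {
    // get_context() returns the dictionary that is passed to the Web Service method,
    // so any value added here travels with the request to the server.
    var context = eventArgs.get_context();
    context["currentUser"] = "jsmith"; // hypothetical key and value
}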
https://docs.telerik.com/devtools/aspnet-ajax/controls/gantt/client-side-programming/events/onclientrequeststart
2017-12-11T04:13:47
CC-MAIN-2017-51
1512948512121.15
[]
docs.telerik.com
OP5 Monitor appliance

Over 200,000 IT staff across medium to large enterprises worldwide are currently using OP5 Monitor as their preferred network monitoring software. OP5 Monitor allows you to take control of your IT, enabling your network to be more responsive, more reliable and even faster than ever before. With unparalleled scalability, OP5 Monitor grows as your company grows, so you'll understand why we say this is the last network monitor you'll ever need to purchase.

More information on

Interface eth0 is set to DHCP.

Default credentials:
- CLI: root / monitor
- Web access: admin / monitor
- Logserver Extension: admin / admin

RAM: 1024 MB

You need KVM enabled on your machine or in the GNS3 VM.

Documentation for using the appliance is available on
http://docs.gns3.com/appliances/op5-monitor.html
2017-12-11T04:08:16
CC-MAIN-2017-51
1512948512121.15
[]
docs.gns3.com