Dataset columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars)
The key to selling any NFT (item) is two-fold: have good content on the NFT, such as high-quality artwork or a valuable domain, and be descriptive, giving the buyer a reason to buy your item. A few ideas: include unlockable content that is only revealed once the item is purchased, so you don't give away the good version up front; grant buyers access to a special part of your forums; or make the item part of a collectible series, for example so that collecting 5 of the items grants the buyer something special such as some merch from your brand. Beautiful Make sure whatever you make is beautiful or high quality. Even if you're selling a domain name, a beautiful image or video explaining why that domain name is so valuable greatly increases your chances of selling your item! Write a good description, and include a video or images to explain why your item is valuable. If you just put one or two words for your item, it probably won't sell. Bad example: a listing with only a word or two of description. Good example: a description with a YouTube video, a detailed message about the item, and an explanation of what you get access to when you purchase it.
https://docs.mintable.app/ethereum-version/faq/how-to-sell-successfully
2021-02-24T20:41:37
CC-MAIN-2021-10
1614178347321.0
[]
docs.mintable.app
We can see some of the arguments, in particular the names of the compiled functions, e.g. _ZN5numba4cuda5tests6cudapy13test_constmem19cuconstRecAlign$247.
https://numba.readthedocs.io/en/stable/developer/debugging.html
2021-02-24T20:39:27
CC-MAIN-2021-10
1614178347321.0
[]
numba.readthedocs.io
Finding and subscribing to container products You can find container products by browsing the AWS Marketplace website. Using the Amazon ECS console You can also find container products in the Amazon ECS console. The navigation pane has links to discover new products from AWS Marketplace and to see existing subscriptions. Browsing product details Once you have found a product you are interested in, choose the title to browse to the product detail page. Here you can find information on the software including product description, supported Amazon services (for example, Amazon ECS or Amazon EKS), pricing details, usage information, support information, and available fulfillment options. Products may be free, bring your own license (BYOL), or pay as you go (PAYG), with either a fixed monthly price or an hourly price that is charged per Amazon ECS task. Each product will have at least one fulfillment option, which is a set of one or more container images that are required to run the software. You can also read and write reviews for the product from this page. Choose Continue to Subscribe to proceed. Subscribing to products If you want to use a product, you will need to subscribe to it first. On the subscription page you can view pricing information for paid products, and access the end-user license agreement (EULA) for the software. Choose Accept Terms to proceed. This will create a subscription to the product, which provides an entitlement to use the software. It will take a minute or two for the subscription to complete. Once you receive an entitlement to a paid product, if you start using the software you will be charged. If you cancel your subscription without terminating all running instances of the software, you will continue to be charged for any software usage. You may also incur infrastructure charges related to using the product. For example, if you create a new Amazon EKS cluster to host the software product, you will be charged for that service.
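The entitlement created by a subscription can also be inspected programmatically. The following is a minimal, hedged sketch using Python and boto3 against the AWS Marketplace Entitlement Service; the product code and printed fields are placeholders, and in practice this API is typically called by the product software or its operator rather than during the console flow described above.

import boto3

# Hedged sketch: the product code is a placeholder, not a value from this guide.
client = boto3.client("marketplace-entitlement", region_name="us-east-1")
response = client.get_entitlements(ProductCode="example-product-code")

for entitlement in response.get("Entitlements", []):
    # Each entitlement ties a customer identifier to a product dimension (e.g. a usage tier).
    print(entitlement["CustomerIdentifier"], entitlement["Dimension"], entitlement.get("ExpirationDate"))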
https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-finding-and-subscribing-to-container-products.html
2021-02-24T21:15:12
CC-MAIN-2021-10
1614178347321.0
[]
docs.aws.amazon.com
Onegini IdP topic guides Welcome! This is the topic guides section for the Onegini IdP. In the topic guides you find detailed information on the configuration of important components of the Onegini Software. Here you see the available Topic Guides on the Onegini IdP: - Attribute mappings - Learn how user attributes returned by the external IdP can be mapped to person attributes managed by Onegini IdP. - Attributes lifecycle - See how attributes' values from various sources are merged and managed in the profile and SAML responses. - Authentication - all authentication related topic guides - Configuring QR Code Login Method - Mobile Login - learn how to configure mobile login - Mobile Step-up Authentication - learn how to configure mobile step-up authentication - Step-up Authentication - learn how to configure step-up authentication - Token based login - Learn how to use token based login. - Using one time login link - See how to use one time login links to migrate and couple your social accounts with Onegini identity - Identity Provider - Identity Providers related guides - Google Identity Providers - Learn how to allow users to authenticate using Google IdP - SAML Identity Providers - Learn how to allow users to authenticate using external SAML IdP. - Configure SAML Keys - Learn how to generate and configure keys used in SAML flows. - OpenID Connect - Learn how to set up and use OpenID Connect identity provider. - DigiD Identity Providers - Learn how to allow users to authenticate using DigiD. - eIDAS Identity Providers - Learn how to allow users to authenticate using eIDAS. - Configure JWT Keys - Learn how to manage JWT keys used by OIDC Identity Providers. - Using one time login link - See how to use one time login links to migrate. - Authentication post process actions - Learn how to use post process actions to modify the flow of the application - Brand-specific messages with locale variant code - See how to set up different messages per brand - Configure reCAPTCHA V2 - Learn how to configure and use ReCaptcha v2. - Connect Onegini IdP with extension over https - Learn how to connect Onegini IdP with extension over https by configuring truststore and keystore. - Create person with Person API - Learn different ways of creating a person via API calls. - Custom email validation - Learn how to enable and configure email validation for non-standard domains. - Email notifications - Learn about email sending capabilities of the Onegini IdP - Flow Context - See how to exchange data between Onegini IdP and its extensions. - Idp Application Config via REST API - See how to get application configuration via REST API. - Automatic Migration - Just-in-time migration will allow your users to seamlessly migrate from a legacy system to Onegini IdP, learn how the migration works. - Automatic SignUp - Just-in-time sign-up helps users create accounts in Onegini IdP without additional screens and interactions, read more details about this feature. - Person's partitioning - Learn how person's partitioning works and how to configure it. - Identity Providers partitioning - Learn how Identity Providers partitioning works and how to configure it. - Profile management with Person APIs - Learn how to update user attributes with API.
- Sign up without invitation validation - See how to configure signup without invitation validation - Session API - Learn how to obtain session information through the API - Token Server configuration - Learn how to set up the connection with the Onegini Token Server - Transforming Profile Attributes - Learn how to set up profile attributes transformation and which attributes can be transformed. - Messages resolution order - Get to know how localized messages are being resolved by the Onegini IdP - Person pre creation process - Read about extension points in the flow of creating an account. - Returning SAML authentication information to Service Provider - Read about controlling authentication information at the Service Provider level. - Using custom parameters from the SAML authentication request - Sending events to AWS event bridge - Identity Assurance Level - Hooks - QR Device Registration
https://docs.onegini.com/cim/stable/idp/topic-guides/index.html
2021-02-24T20:56:08
CC-MAIN-2021-10
1614178347321.0
[]
docs.onegini.com
Once Doc's creative juices get flowing there's really no limit to what can happen. Visit our dealership and you'll see evidence of Doc's limitless imagination all around the store. Check back here often to see what new oddities Doc is creating. The photo to the left is a "Camshaft Cobra" statue that Doc built in 2008. The cobra is entirely constructed of genuine Harley-Davidson® camshafts & valves. Each year, Doc creates several gifts to be given away at our local H.O.G. Chapter Christmas party. Each time a member of the Wolf River Chapter volunteers to help with different events throughout the year, they get a ticket to be entered into the drawing for these awesome works of art. In past years, Doc has built lamps, tables, towel bars, flower pots, candy dishes, and much more. Almost all components of these cool conversation pieces are genuine Harley-Davidson® parts.
https://docshd.com/docs-creations
2021-02-24T20:45:33
CC-MAIN-2021-10
1614178347321.0
[]
docshd.com
A connection consists of a driver and a data source. A connection handle identifies each connection. The connection handle defines not only which driver to use but which data source to use with that driver. Within a segment of code that implements ODBC (the Driver Manager or a driver), the connection handle identifies a structure that contains connection information, such as the following: The state of the connection The current connection-level diagnostics The handles of statements and descriptors currently allocated on the connection The current settings of each connection attribute ODBC does not prevent multiple simultaneous connections, if the driver supports them. Therefore, in a particular ODBC environment, multiple connection handles might point to a variety of drivers and data sources, to the same driver and a variety of data sources, or even to multiple connections to the same driver and data source. Some drivers limit the number of active connections they support; the SQL_MAX_DRIVER_CONNECTIONS option in SQLGetInfo specifies how many active connections a particular driver supports. Connection handles are primarily used when connecting to the data source (SQLConnect, SQLDriverConnect, or SQLBrowseConnect), disconnecting from the data source (SQLDisconnect), getting information about the driver and data source (SQLGetInfo), retrieving diagnostics (SQLGetDiagField and SQLGetDiagRec), and performing transactions (SQLEndTran). They are also used when setting and getting connection attributes (SQLSetConnectAttr and SQLGetConnectAttr) and when getting the native format of an SQL statement (SQLNativeSql). Connection handles are allocated with SQLAllocHandle and freed with SQLFreeHandle.
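As a minimal illustration of this lifecycle from Python, the sketch below uses the pyodbc wrapper rather than the raw ODBC C API; pyodbc allocates and frees the underlying environment and connection handles for you. The DSN, user name, and password are placeholders, and the availability of the SQL_MAX_DRIVER_CONNECTIONS GetInfo constant in pyodbc is an assumption based on the ODBC option named above.

import pyodbc

# Connecting allocates a connection handle and calls SQLDriverConnect under the hood.
# The DSN, user name, and password below are placeholders.
conn = pyodbc.connect("DSN=MyDataSource;UID=myuser;PWD=mypassword")
try:
    # SQLGetInfo: ask the driver how many active connections it supports (0 means no stated limit).
    max_connections = conn.getinfo(pyodbc.SQL_MAX_DRIVER_CONNECTIONS)
    print("SQL_MAX_DRIVER_CONNECTIONS:", max_connections)
finally:
    # Disconnects and frees the connection handle (SQLDisconnect plus SQLFreeHandle).
    conn.close()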
https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/connection-handles
2017-07-20T20:06:40
CC-MAIN-2017-30
1500549423320.19
[]
docs.microsoft.com
When you connect the vSphere Web Client to vCenter Server, you can view the health status from the Monitor tab. Before you begin Verify that you have logged in to the vSphere Web Client. Procedure - Select a host in the object navigator. - Click the Monitor tab, and click Hardware Status. - Select the type of information to view.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.monitoring.doc/GUID-3832B0AF-A54B-4DE2-8082-CC104EEEA612.html
2017-07-20T18:55:13
CC-MAIN-2017-30
1500549423320.19
[]
docs.vmware.com
When you install ESXi or use Auto Deploy to provision hosts, you can enable the auto-partitioning boot option to create partitions on your host. You have several options to prevent auto-partitioning from formatting local SSDs as VMFS. Problem By default, auto-partitioning deploys VMFS file systems on any unused local storage disks on your host, including SSD disks. However, an SSD formatted with VMFS becomes unavailable for such features as virtual flash and Virtual SAN. Both features require an unformatted SSD and neither can share the disk with any other file system. Results To use auto-partitioning while ensuring that local SSDs remain unpartitioned, adjust the host's auto-partitioning settings from the Manage tab by clicking Settings. If SSDs that you plan to use with Flash Read Cache and Virtual SAN already have VMFS datastores, remove the datastores.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.troubleshooting.doc/GUID-8435617B-8B8A-4CC9-8C00-B135313A8AAF.html
2017-07-20T18:54:59
CC-MAIN-2017-30
1500549423320.19
[]
docs.vmware.com
If you remove a virtual machine from a datastore but you do not delete the virtual machine from the host that you are managing, you can register the virtual machine on the datastore. About this task The New virtual machine wizard allows you to select one or more virtual machines that you would like to register. By selecting a datastore or a directory, you choose to register all virtual machines on that datastore or in that directory. Procedure - Click Select one or more virtual machines, a datastore, or a directory, locate the virtual machine or virtual machines that you would like to register, and click Select. - (Optional): To remove a virtual machine from the list, select the name of the file and click Remove selected. - (Optional): To clear your selection and start again, click Remove all. - Click Next.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.html.hostclient.doc/GUID-66AEE3A0-61E4-40CF-8509-37941F4B9AC4.html
2017-07-20T18:54:51
CC-MAIN-2017-30
1500549423320.19
[]
docs.vmware.com
In this paper we will compare two famous corporations from the soft drinks industry: PepsiCo Inc. and The Coca-Cola Company. These two companies, which have been in tough competition for years, are the leaders of this industry. But the soft drinks industry is progressively changing, and trends show a shift toward health consciousness among consumers in Western countries while new markets are emerging in continents such as Asia and South America. As a result we will try to see, through the financial statements of both companies, whether one could overcome those changes in the industry better than the other. I- The Soft Drinks Industry II- The Coca Cola Company III- PepsiCo Inc. IV- Financial Analysis 1- Income Statements 2- Coca Cola and Pepsi Key Ratios 3- Analysis of the Return on Equity [...] The higher this ratio is, the more profitable the investment is for stockholders. ROE Ratio = (Net Income - Preferred Dividends) / Average Common Shareholders' Equity. Here, for the Average Common Shareholders' Equity, we will use the average between 2008 and 2009. Pepsi: ROE Ratio = (5,946 - 41) / ((16,763 + 12,203) / 2) = 0.41. Coca: ROE Ratio = 6,824 / ((24,799 + 20,472) / 2) ≈ 0.30. Pepsi has the greatest ROE. Coca Cola's ROE is fair enough at 30%, but far from the profitability Pepsi is able to offer its investors. [...] [...] Pepsi 2009: Debt to Assets Ratio = 23,044 / 39,848 = 0.59. Coca 2009: Debt to Assets Ratio = 23,872 / 48,671 = 0.49. Pepsi's assets are financed at 59% with debt whereas Coca Cola's assets are financed at 49% with debt. Pepsi clearly holds the riskier financial position here; however, it still looks reasonable for a large corporation such as Pepsi. Time Interest Earned Ratio The Time Interest Earned ratio measures the company's ability to fulfill its interest payment obligations when they come due. If the ratio falls below 1, it means that the company no longer produces enough income to meet its obligations, which is a critical situation. [...] [...] Price Earnings Ratio = Market Price per Common Share / Earnings Per Share. Pepsi 2009: Price Earnings Ratio = 60.80 / 3.73 = 16.3. Coca 2009: Price Earnings Ratio = 57 / 2.95 ≈ 19.3. With the Price Earnings Ratio we can notice that Pepsi (16.3 paid per unit of profit) and Coca Cola (about 19.3 paid per unit of profit) are not too far from each other. If we follow the rule of thumb that a lower ratio indicates a riskier firm, we can assume that Pepsi is an investment slightly riskier than Coca Cola. [...] [...] Alexandre CARCIENTE, Analyzing Financial Statements, due in December, PepsiCo / Coca Cola, 4186 words. INTRODUCTION. In this paper, we are going to perform a financial analysis of the two big American-based beverage companies: Coca Cola and Pepsi Cola. In this document, we will explain the relative strengths and weaknesses of the two companies by using the companies' latest annual reports (2009). In the first part, we will talk about the beverage industry and, more precisely, the soft drinks industry. [...] [...] III) The Coca Cola Company John Stith Pemberton created the Coca Cola beverage in 1886. Pemberton was a pharmacist located in Georgia. John Pemberton rapidly sold his business to another pharmacist from Atlanta, Asa Griggs Candler. The business was sold along with all the exclusivity rights concerning the Coca Cola formula. It is thus Asa Griggs Candler who really developed Coca Cola as a brand. As the product met with increasing success, he decided to refocus his activity on Coca Cola. [...]
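Restating the ROE comparison above in cleaner notation, using only the figures quoted in the excerpt (fiscal-2009 net income and preferred dividends, and the average of 2008 and 2009 common shareholders' equity):

\[ \text{ROE} = \frac{\text{Net Income} - \text{Preferred Dividends}}{\text{Average Common Shareholders' Equity}} \]

\[ \text{ROE}_{\text{PepsiCo, 2009}} = \frac{5{,}946 - 41}{(16{,}763 + 12{,}203)/2} = \frac{5{,}905}{14{,}483} \approx 0.41 \qquad \text{ROE}_{\text{Coca-Cola, 2009}} = \frac{6{,}824}{(24{,}799 + 20{,}472)/2} = \frac{6{,}824}{22{,}635.5} \approx 0.30 \]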
https://www.docs-en-stock.com/business-comptabilite-gestion-management/assignment-analysing-financial-statement-128462.html
2017-07-20T18:30:44
CC-MAIN-2017-30
1500549423320.19
[]
www.docs-en-stock.com
Legacy Lua API¶ Welcome to the Legacy Lua API’s documentation! The Legacy mod is the default mod shipped with ET: Legacy. It supports server-side modifications via the Lua scripting language, with the Legacy Lua API being the interface for communication between them. The embedded Lua 5.3 interpreter will load user-defined scripts if present in the legacy directory. The Lua API provides an “et” library of function calls that allow access to the server engine, and also provides callbacks so a server-side mod may trigger on specific server events. In some cases, values can be returned to the Legacy mod whenever something (e.g. a command) is intercepted and prevented from being handled further. This way, new commands can easily be defined or existing ones can be altered. Through special functions, it is also possible to alter internal structures or entities (manipulate client XP, set and read cvars, remap shaders, etc.). For example, if a player dies the et_Obituary( victim, killer, meansOfDeath ) function is executed, and the Lua API allows you to manipulate and control this information. Note Like qagame, Lua modules are unloaded and reloaded on map_restart and map changes, which means that all global variables and other information is lost. Persistent data can be stored in cvars, external files or a database. Legacy’s Lua API mostly follows the ETPub implementation with partial code of the NoQuarter implementation. Since the ETPub implementation was built to be compatible with ETPro’s Lua, all scripts written against ETPro’s documentation should be valid and more or less compatible with the Legacy mod’s Lua API. Important As Legacy uses the newer Lua 5.3, you might want to check the Incompatibilities with the Previous Version sections of the Lua 5.1, Lua 5.2, and Lua 5.3 manuals while porting scripts written for other mods. - Standard libraries - Cvars - Commands - Functions - Callbacks - Fields - Constants - Miscellaneous - Database - Sample code
https://legacy-lua-api.readthedocs.io/en/2.76/
2019-06-16T01:48:51
CC-MAIN-2019-26
1560627997508.21
[]
legacy-lua-api.readthedocs.io
Logging The Cluster object allows registering loggers; it then uses these to log the driver’s actions. The library’s Cassandra::Logger can be used, and it includes the timestamp, thread id, severity, and message in its output. Background - Given - a running cassandra cluster Logging is enabled using the internal logger - Given - the following example: require 'cassandra' logger = Cassandra::Logger.new($stderr) cluster = Cassandra.cluster(logger: logger) session = cluster.connect - When - it is executed - Then - its output should contain: DEBUG: Host 127.0.0.1 is found and up - And - its output should contain: INFO: Schema refreshed - And - its output should contain: INFO: Session created
https://docs.datastax.com/en/developer/ruby-driver/3.2/features/debugging/logging/
2019-06-16T00:59:39
CC-MAIN-2019-26
1560627997508.21
[]
docs.datastax.com
Connection Request Policies Applies to: Windows Server (Semi-Annual Channel), Windows Server 2016 You can use this topic to learn how to use NPS connection request policies to configure the NPS as a RADIUS server, a RADIUS proxy, or both. Note In addition to this topic, further connection request policy documentation is available. You can create connection request policies so that some RADIUS request messages sent from RADIUS clients are processed locally (NPS is used as a RADIUS server) and other types of messages are forwarded to another RADIUS server (NPS is used as a RADIUS proxy). RADIUS Access-Request messages are processed or forwarded by NPS only if the settings of the incoming message match at least one of the connection request policies configured on the NPS. If the policy settings match and the policy requires that the NPS process the message, NPS acts as a RADIUS server, authenticating and authorizing the connection request. If the policy settings match and the policy requires that the NPS forwards the message, NPS acts as a RADIUS proxy and forwards the connection request to a remote RADIUS server for processing. If the settings of an incoming RADIUS Access-Request message do not match at least one of the connection request policies, an Access-Reject message is sent to the RADIUS client and the user or computer attempting to connect to the network is denied access. Configuration examples The following configuration examples demonstrate how you can use connection request policies. NPS as a RADIUS server The default connection request policy is the only configured policy. In this example, NPS is configured as a RADIUS server and all connection requests are processed by the local NPS. The NPS can authenticate and authorize users whose accounts are in the domain of the NPS and in trusted domains. NPS as both RADIUS server and RADIUS proxy In addition to the default connection request policy, a new connection request policy is created that forwards connection requests to an NPS or other RADIUS server in an untrusted domain. RADIUS server with remote accounting servers In this example, the local NPS performs authentication and authorization while accounting requests are forwarded to remote RADIUS accounting servers. Connection request policy conditions Connection request policy conditions are one or more RADIUS attributes that are compared to the attributes of the incoming RADIUS Access-Request message. If there are multiple conditions, then all of the conditions in the connection request message and in the connection request policy must match in order for the policy to be enforced by NPS. Following are the available condition attributes that you can configure in connection request policies. Connection Properties attribute group The Connection Properties attribute group contains the following attributes. - Framed Protocol. Used to designate the type of framing for incoming packets. Examples are Point-to-Point Protocol (PPP), Serial Line Internet Protocol (SLIP), Frame Relay, and X.25. - Service Type. Used to designate the type of service being requested. Examples include framed (for example, PPP connections) and login (for example, Telnet connections). For more information about RADIUS service types, see RFC 2865, "Remote Authentication Dial-in User Service (RADIUS)." - Tunnel Type. Used to designate the type of tunnel that is being created by the requesting client.
Tunnel types include the Point-to-Point Tunneling Protocol (PPTP) and the Layer Two Tunneling Protocol (L2TP). Day and Time Restrictions attribute group The Day and Time Restrictions attribute group contains the Day and Time Restrictions attribute. With this attribute, you can designate the day of the week and the time of day of the connection attempt. The day and time is relative to the day and time of the NPS. Gateway attribute group The Gateway attribute group contains the following attributes. - Called Station ID. Used to designate the phone number of the network access server. This attribute is a character string. You can use pattern-matching syntax to specify area codes. - NAS Identifier. Used to designate the name of the network access server. This attribute is a character string. You can use pattern-matching syntax to specify NAS identifiers. - NAS IPv4 Address. Used to designate the Internet Protocol version 4 (IPv4) address of the network access server (the RADIUS client). This attribute is a character string. You can use pattern-matching syntax to specify IP networks. - NAS IPv6 Address. Used to designate the Internet Protocol version 6 (IPv6) address of the network access server (the RADIUS client). This attribute is a character string. You can use pattern-matching syntax to specify IP networks. - NAS Port Type. Used to designate the type of media used by the access client. Examples are analog phone lines (known as async), Integrated Services Digital Network (ISDN), tunnels or virtual private networks (VPNs), IEEE 802.11 wireless, and Ethernet switches. Machine Identity attribute group The Machine Identity attribute group contains the Machine Identity attribute. By using this attribute, you can specify the method with which clients are identified in the policy. RADIUS Client Properties attribute group The RADIUS Client Properties attribute group contains the following attributes. - Calling Station ID. Used to designate the phone number used by the caller (the access client). This attribute is a character string. You can use pattern-matching syntax to specify area codes. In 802.1X authentication the MAC address is typically populated here and can be matched against the client. This field is typically used for MAC Address Bypass scenarios when the connection request policy is configured for 'Accept users without validating credentials'. - Client Friendly Name. Used to designate the name of the RADIUS client computer that is requesting authentication. This attribute is a character string. You can use pattern-matching syntax to specify client names. - Client IPv4 Address. Used to designate the IPv4 address of the network access server (the RADIUS client). This attribute is a character string. You can use pattern-matching syntax to specify IP networks. - Client IPv6 Address. Used to designate the IPv6 address of the network access server (the RADIUS client). This attribute is a character string. You can use pattern-matching syntax to specify IP networks. - Client Vendor. Used to designate the vendor of the network access server that is requesting authentication. A computer running the Routing and Remote Access service is the Microsoft NAS manufacturer. You can use this attribute to configure separate policies for different NAS manufacturers. This attribute is a character string. You can use pattern-matching syntax. User Name attribute group The User Name attribute group contains the User Name attribute, which designates the user name provided by the access client in the RADIUS message; this typically includes a realm name and a user account name, and you can use pattern-matching syntax. Connection request policy settings Connection request policy settings are a set of properties that are applied to an incoming RADIUS message.
Settings consist of the following groups of properties. - Authentication - Accounting - Attribute manipulation - Forwarding request - Advanced The following sections provide additional detail about these settings. Authentication By using this setting, you can override the authentication settings that are configured in all network policies and you can designate the authentication methods and types that are required to connect to your network. Important If you configure an authentication method in a connection request policy that is less secure than the authentication method you configure in network policy, the more secure authentication method that you configure in network policy is overridden. For example, if you have one network policy that requires the use of Protected Extensible Authentication Protocol-Microsoft Challenge Handshake Authentication Protocol version 2 (PEAP-MS-CHAP v2), which is a password-based authentication method for secure wireless, and you also configure a connection request policy to allow unauthenticated access, the result is that no clients are required to authenticate by using PEAP-MS-CHAP v2. In this example, all clients connecting to your network are granted unauthenticated access. Accounting By using this setting, you can configure a connection request policy to forward accounting information to an NPS or other RADIUS server in a remote RADIUS server group so that the remote RADIUS server group performs accounting. Note If you have multiple RADIUS servers and you want accounting information for all servers stored in one central RADIUS accounting database, you can use the connection request policy accounting setting in a policy on each RADIUS server to forward accounting data from all of the servers to one NPS or other RADIUS server that is designated as an accounting server. Connection request policy accounting settings function independently of the accounting configuration of the local NPS. In other words, if you configure the local NPS to log RADIUS accounting information to a local file or to a Microsoft SQL Server database, it will do so regardless of whether you configure a connection request policy to forward accounting messages to a remote RADIUS server group. If you want accounting information logged remotely but not locally, you must configure the local NPS to not perform accounting, while also configuring accounting in a connection request policy to forward accounting data to a remote RADIUS server group. Attribute manipulation You can configure a set of find-and-replace rules that manipulate the text strings of one of the following attributes. - User Name - Called Station ID - Calling Station ID Find-and-replace rule processing occurs for one of the preceding attributes before the RADIUS message is subject to authentication and accounting settings. Attribute manipulation rules apply only to a single attribute. You cannot configure attribute manipulation rules for each attribute. In addition, the list of attributes that you can manipulate is a static list; you cannot add to the list of attributes available for manipulation. Note For examples of how to manipulate the realm name in the User Name attribute, see the section "Examples for manipulation of the realm name in the User Name attribute" in the topic Use Regular Expressions in NPS. Forwarding request You can set the following forwarding request options that are used for RADIUS Access-Request messages: Authenticate requests on this server.
By using this setting, NPS uses a Windows NT 4.0 domain, Active Directory, or the local Security Accounts Manager (SAM) user accounts database to authenticate the connection request. This setting also specifies that the matching network policy configured in NPS, along with the dial-in properties of the user account, are used by NPS to authorize the connection request. In this case, the NPS is configured to perform as a RADIUS server. Forward requests to the following remote RADIUS server group. By using this setting, NPS forwards connection requests to the remote RADIUS server group that you specify. If the NPS receives a valid Access-Accept message that corresponds to the Access-Request message, the connection attempt is considered authenticated and authorized. In this case, the NPS acts as a RADIUS proxy. Accept users without validating credentials. By using this setting, NPS does not verify the identity of the user attempting to connect to the network and NPS does not attempt to verify that the user or computer has the right to connect to the network. When NPS is configured to allow unauthenticated access and it receives a connection request, NPS immediately sends an Access-Accept message to the RADIUS client and the user or computer is granted network access. This setting is used for some types of compulsory tunneling where the access client is tunneled before user credentials are authenticated. Note This authentication option cannot be used when the authentication protocol of the access client is MS-CHAP v2 or Extensible Authentication Protocol-Transport Layer Security (EAP-TLS), both of which provide mutual authentication. In mutual authentication, the access client proves that it is a valid access client to the authenticating server (the NPS), and the authenticating server proves that it is a valid authenticating server to the access client. When this authentication option is used, the Access-Accept message is returned. However, the authenticating server does not provide validation to the access client, and mutual authentication fails. For examples of how to use regular expressions to create routing rules that forward RADIUS messages with a specified realm name to a remote RADIUS server group, see the section "Example for RADIUS message forwarding by a proxy server" in the topic Use Regular Expressions in NPS. Advanced You can set advanced properties to specify the series of RADIUS attributes that are: - Added to the RADIUS response message when the NPS is being used as a RADIUS authentication or accounting server. When there are attributes specified on both a network policy and the connection request policy, the attributes that are sent in the RADIUS response message are the combination of the two sets of attributes. - Added to the RADIUS message when the NPS is being used as a RADIUS authentication or accounting proxy. If the attribute already exists in the message that is forwarded, it is replaced with the value of the attribute specified in the connection request policy. In addition, some attributes that are available for configuration on the connection request policy Settings tab in the Advanced category provide specialized functionality. For example, you can configure the Remote RADIUS to Windows User Mapping attribute when you want to split the authentication and authorization of a connection request between two user accounts databases. 
The Remote RADIUS to Windows User Mapping attribute specifies that Windows authorization occurs for users who are authenticated by a remote RADIUS server. In other words, a remote RADIUS server performs authentication against a user account in a remote user accounts database, but the local NPS authorizes the connection request against a user account in a local user accounts database. This is useful when you want to allow visitors access to your network. For example, visitors from partner organizations can be authenticated by their own partner organization RADIUS server, and then use a Windows user account at your organization to access a guest local area network (LAN) on your network. Other attributes that provide specialized functionality are: - MS-Quarantine-IPFilter and MS-Quarantine-Session-Timeout. These attributes are used when you deploy Network Access Quarantine Control (NAQC) with your Routing and Remote Access VPN deployment. - Passport-User-Mapping-UPN-Suffix. This attribute allows you to authenticate connection requests with Windows Live™ ID user account credentials. - Tunnel-Tag. This attribute designates the VLAN ID number to which the connection should be assigned by the NAS when you deploy virtual local area networks (VLANs). Default connection request policy A default connection request policy is created when you install NPS. This policy has the following configuration. - Authentication is not configured. - Accounting is not configured to forward accounting information to a remote RADIUS server group. - Attribute manipulation rules are not configured. - Forwarding Request is configured so that connection requests are authenticated and authorized on the local NPS. - Advanced attributes are not configured. The default connection request policy uses NPS as a RADIUS server. To configure a server running NPS to act as a RADIUS proxy, you must also configure a remote RADIUS server group. You can create a new remote RADIUS server group while you are creating a new connection request policy by using the New Connection Request Policy Wizard. You can either delete the default connection request policy or verify that the default connection request policy is the last policy processed by NPS by placing it last in the ordered list of policies. Note If NPS and the Remote Access service are installed on the same computer, and the Remote Access service is configured for Windows authentication and accounting, it is possible for Remote Access authentication and accounting requests to be forwarded to a RADIUS server. This can occur when Remote Access authentication and accounting requests match a connection request policy that is configured to forward them to a remote RADIUS server group.
https://docs.microsoft.com/en-us/windows-server/networking/technologies/nps/nps-crp-crpolicies
2019-06-16T01:37:33
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Application Portfolio Management release notes ServiceNow® Application Portfolio Management product enhancements and updates in the Kingston release. Kingston upgrade information Application Portfolio Management integrates with Service Mapping differently from the way it did in the Jakarta release. The application Instances tab has been removed and the application instances [apm_app_instance] table is no longer used to store application instances data. The application instances table is replaced with the Business Services [cmdb_ci_service] table (if Service Mapping is not installed) or the Discovered Business Service [cmdb_ci_discovered_service] table (if the Service Mapping product is installed). Any data in the application instances table for service mapping integration must be migrated to the business service table. If you are upgrading to the Kingston release, then contact ServiceNow personnel for help with migrating the data. New in the Kingston release Business capability planning Map your strategic plans to your IT investments in business applications. As a business planner, you can plan the business entities of your organization and establish capabilities on par with the industry to yield effective and better results. As an enterprise architect, you can identify the business applications that impact the business capabilities and address them so that the business applications support the business capabilities effectively. The enterprise architect also looks at the existing capabilities and aims for better allocation of IT investments. Technology Portfolio Management (TPM) Manage and monitor the underlying technologies of each business application. With TPM, the enterprise architect (EA) can track manufacturer support dates and software versions, set an internal lifecycle for the software, plan to retire them, and support the upgrade process by creating ideas and projects. To leverage the Product Classification (samp_sw_product_classification) table used in TPM and retrieve technology data from the software product model, subscribe to Software Asset Management Premium. Identify the technology risks that are calculated at the software model level and at the business application level. At the software model level, the risk is calculated based on lifecycle stage and ageing parameters. At the business application level, the risk is calculated based on the risk of the software model on which the application runs. Data certification Maintain business applications records and regularly update the applications inventory. Schedule or request on-demand that system owners certify the business applications data in the business applications table. Domain separation Secure and protect the sensitive data of each customer from other customers. As a managed service provider (MSP), you can control what your customers can access and view. Create all application portfolio data entities within the domain, specific to the enterprise and not at the global level. The administrator can configure and execute scheduled jobs, certification schedules, and assessments of indicators and scores at the domain level. To enable the domain separation feature for APM, activate the Domain Support – Domain Extensions Installer system plugin.
My Applications From the My Applications module, view the applications for which you are the designated application owner. Application Portfolio Management portal An enterprise architect can view all the APM modules and perform all actions from the new Application Portfolio Management home screen sections such as Business Architecture, Application Architecture, Technology Architecture, and Opportunities & Solutions. Business Planning Portal A business planner can access the Business Planning portal to view and create business capabilities, business units, enterprise strategies, business unit strategies and goals. The business planner can also view the capability hierarchy map. The Business Planning portal is activated with the Business Planner (com.snc.apm.business_planner) plugin. Changed in this release Application Portfolio Management home: The look and feel of the Application Portfolio Management home page has been changed to help the enterprise architect (EA) view all APM modules and perform all tasks from a single portal. Select the application owner for each business application record who can authenticate the record information. Each business application record has a mandatory IT application owner field. The application owner has an exclusive right to certify the data fields of the business application. Auditing is enabled for the User base field. Whenever the field is updated, the update is recorded in the Activities field. Name changes: Menu items and modules have been renamed, and menu items were rearranged to improve their usability. Changed plugins: The Performance Analytics - Content Pack - Application Portfolio Management (com.snc.pa.apm) and Performance Analytics - Content Pack - Application Portfolio Management and Incident Management (com.snc.pa.apm.incident) plugins have been merged with the Application Portfolio Management (com.snc.apm) plugin. Integration with Service Mapping: Application Portfolio Management integrates with Service Mapping by creating direct relationships through the CI relationship editor, and no longer integrates by creating application instances from a business application. The application instances tab has been removed and the business services table replaces the application instances table. APM Guided Setup: Application Portfolio Management guided setup is updated with new categories and related tasks. The Capabilities, Applications & Business Services Inventory category helps you to relate the CMDB configuration items such as business capability, business application, business service, and finally link the software model to this relationship. The Data Certification and APM Jobs category lists the tasks and jobs that you can schedule and run as a system administrator to get the latest indicator scores. Technology Portfolio Management helps you track the threats that your business capabilities may face from risks involved in the underlying technologies of your business applications, and address them in a timely manner. Technology risk score calculation: The risk on a software model is calculated using parameters such as external ageing risk, internal ageing risk, external stage risk, and internal stage risk. The risk of the business service is derived from the software model risk, as the business service is dependent on the underlying software models. Suggested relationship between configuration items (CI): Use the suggested relationship type to relate two configuration items.
The suggested relationship feature of CMDB not only helps to select the relationship type automatically but also ensures consistency in the relationship. The suggested relationship is established between capability and application and between application and service. Access all business applications in the organization and view them in read-only mode whether APM is activated or not. From the All Business Applications module in the application navigator, you can access the applications on logging in. Removed in this release Business process capability map. The business service applications map has been replaced with the more comprehensive capability map. The business service capability map has been replaced with the capability map. Application roadmap. The Application Portfolio Management with Business Service mapping (com.snc.apm_sm) plugin has been deprecated. Application Rollout [apm_business_app_rollout] table. Application Instance [apm_app_instance] table. Business Entity [apm_rollout_entity] table. Activation information You can activate the Application Portfolio Management (com.snc.apm) plugin if you have the admin role. Browser requirements Internet Explorer version 10 and later.
https://docs.servicenow.com/bundle/kingston-release-notes/page/release-notes/business-management/application-portfolio-managmnt-rn.html
2019-06-16T01:29:28
CC-MAIN-2019-26
1560627997508.21
[]
docs.servicenow.com
Manual ceph-iscsi Installation¶ Requirements To complete the installation of ceph-iscsi, there are 4 steps: - Install common packages from your Linux distribution’s software repository - Install Git to fetch the remaining packages directly from their Git repositories - Ensure a compatible kernel is used - Install all the components of ceph-iscsi and start associated daemons: - tcmu-runner - rtslib-fb - configshell-fb - targetcli-fb - ceph-iscsi 1. Install Common Packages¶ The following packages will be used by ceph-iscsi and target tools. They must be installed from your Linux distribution’s software repository on each machine that will be an iSCSI gateway: - libnl3 - libkmod - librbd1 - pyparsing - python kmod - python pyudev - python gobject - python urwid - python pyparsing - python rados - python rbd - python netifaces - python crypto - python requests - python flask - pyOpenSSL 2. Install Git¶ In order to install all the packages needed to run iSCSI with Ceph, you need to download them directly from their repository by using Git. On CentOS/RHEL execute: > sudo yum install git On Debian/Ubuntu execute: > sudo apt install git To learn more about Git and how it works, see the Git documentation. 3. Ensure a compatible kernel is used¶ Ensure you use a supported kernel that contains the required Ceph iSCSI patches: - all Linux distributions with a kernel v4.16 or newer, or - Red Hat Enterprise Linux or CentOS 7.5 or later (in these distributions ceph-iscsi support is backported) If you are already using a compatible kernel, you can go to the next step. However, if you are NOT using a compatible kernel then check your distro’s documentation for specific instructions on how to build this kernel. The only Ceph iSCSI specific requirements are that the following build options must be enabled: CONFIG_TARGET_CORE=m CONFIG_TCM_USER2=m CONFIG_ISCSI_TARGET=m 4. Install ceph-iscsi¶ Finally, the remaining tools can be fetched directly from their Git repositories and their associated services started. tcmu-runner¶ Installation:> git clone > cd tcmu-runner Run the following command to install all the needed dependencies:> ./extra/install_dep.sh Now you can build the tcmu-runner. To do so, use the following build command:> cmake -Dwith-glfs=false -Dwith-qcow=false -DSUPPORT_SYSTEMD=ON -DCMAKE_INSTALL_PREFIX=/usr > make install Enable and start the daemon:> systemctl daemon-reload > systemctl enable tcmu-runner > systemctl start tcmu-runner rtslib-fb¶ Installation:> git clone > cd rtslib-fb > python setup.py install configshell-fb¶ Installation:> git clone > cd configshell-fb > python setup.py install targetcli-fb¶ Installation:> git clone > cd targetcli-fb > python setup.py install > mkdir /etc/target > mkdir /var/target Warning The ceph-iscsi tools assume they are managing all targets on the system. If targets have been set up and are being managed by targetcli, the target service must be disabled. ceph-iscsi¶ Installation:> git clone > cd ceph-iscsi > python setup.py install --install-scripts=/usr/bin > cp usr/lib/systemd/system/rbd-target-gw.service /lib/systemd/system > cp usr/lib/systemd/system/rbd-target-api.service /lib/systemd/system Enable and start the daemon:> systemctl daemon-reload > systemctl enable rbd-target-gw > systemctl start rbd-target-gw > systemctl enable rbd-target-api > systemctl start rbd-target-api Installation is complete. Proceed to the setup section in the main ceph-iscsi CLI page.
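As a small convenience related to step 3 above, the following Python sketch (not part of the ceph-iscsi tooling) checks whether the running kernel's config enables the three required build options. It assumes the config is shipped at /boot/config-<release>, which is a distribution convention rather than a ceph-iscsi requirement.

import platform
from pathlib import Path

# The three options required by ceph-iscsi, as listed in step 3.
REQUIRED_OPTIONS = ("CONFIG_TARGET_CORE", "CONFIG_TCM_USER2", "CONFIG_ISCSI_TARGET")

# Assumption: the kernel config is available at /boot/config-<release>;
# some distributions expose it at /proc/config.gz instead.
config_path = Path(f"/boot/config-{platform.release()}")
config_text = config_path.read_text() if config_path.exists() else ""

for option in REQUIRED_OPTIONS:
    # Either built-in (=y) or module (=m) satisfies the requirement.
    enabled = f"{option}=m" in config_text or f"{option}=y" in config_text
    print(f"{option}: {'enabled' if enabled else 'not found'}")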
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
2019-06-16T01:49:45
CC-MAIN-2019-26
1560627997508.21
[]
docs.ceph.com
Delete your content You can delete any of your contributions, such as a question, blog, or comment. Use search or navigation to find the content you want to remove. Click the corresponding ... icon. Click Delete.
https://docs.servicenow.com/bundle/jakarta-customer-service-management/page/product/customer-communities/task/delete-content.html
2019-06-16T01:08:28
CC-MAIN-2019-26
1560627997508.21
[]
docs.servicenow.com
Author: Thierry Moreau This is an introduction tutorial on how to use TVM to program the VTA design. In this tutorial, we will demonstrate the basic TVM workflow to implement a vector addition on the VTA design’s vector ALU. This process includes specific scheduling transformations necessary to lower computation down to low-level accelerator operations. To begin, we need to import TVM which is our deep learning optimizing compiler. We also need to import the VTA python package which contains VTA specific extensions for TVM to target the VTA design. from __future__ import absolute_import, print_function import os import tvm import vta import numpy as np VTA is a modular and customizable design. Consequently, the user is free to modify high-level hardware parameters that affect the hardware design layout. These parameters are specified in the vta_config.json file by their log2 values. These VTA parameters can be loaded with the vta.get_env function. Finally, the TVM target is also specified in the vta_config.json file. When set to sim, execution will take place inside of a behavioral VTA simulator. If you want to run this tutorial on the Pynq FPGA development platform, follow the VTA Pynq-Based Testing Setup guide. env = vta.get_env() FPGA Programming¶ When targeting the Pynq FPGA development board, we need to configure the board with a VTA bitstream. # We'll need the TVM RPC module and the VTA simulator module from tvm import rpc from tvm.contrib import util from vta.testing import simulator # We read the Pynq RPC host IP address and port number from the OS environment host = os.environ.get("VTA_PYNQ_RPC_HOST", "192.168.2.99") port = int(os.environ.get("VTA_PYNQ_RPC_PORT", "9091")) # We configure both the bitstream and the runtime system on the Pynq # to match the VTA configuration specified by the vta_config.json file. if env.TARGET == "pynq": # Make sure that TVM was compiled with RPC=1 assert tvm.module.enabled("rpc") remote = rpc.connect(host, port) # Reconfigure the JIT runtime vta.reconfig_runtime(remote) # Program the FPGA with a pre-compiled VTA bitstream. # You can program the FPGA with your own custom bitstream # by passing the path to the bitstream file instead of None. vta.program_fpga(remote, bitstream=None) # In simulation mode, host the RPC server locally. elif env.TARGET == "sim": remote = rpc.LocalSession() Computation Declaration¶ As a first step, we need to describe our computation. TVM adopts tensor semantics, with each intermediate result represented as multi-dimensional array. The user needs to describe the computation rule that generates the output tensors. In this example we describe a vector addition, which requires multiple computation stages, as shown in the dataflow diagram below. First we describe the input tensors A and B that are living in main memory. Second, we need to declare intermediate tensors A_buf and B_buf, which will live in VTA’s on-chip buffers. Having this extra computational stage allows us to explicitly stage cached reads and writes. Third, we describe the vector addition computation which will add A_buf to B_buf to produce C_buf. The last operation is a cast and copy back to DRAM, into results tensor C. Input Placeholders¶ We describe the placeholder tensors A, and B in a tiled data format to match the data layout requirements imposed by the VTA vector ALU. For VTA’s general purpose operations such as vector adds, the tile size is (env.BATCH, env.BLOCK_OUT). 
The dimensions are specified in the vta_config.json configuration file and are set by default to a (1, 16) vector. In addition, A and B’s data types also needs to match the env.acc_dtype which is set by the vta_config.json file to be a 32-bit integer. # Output channel factor m - total 64 x 16 = 1024 output channels m = 64 # Batch factor o - total 1 x 1 = 1 o = 1 # A placeholder tensor in tiled data format A = tvm.placeholder((o, m, env.BATCH, env.BLOCK_OUT), name="A", dtype=env.acc_dtype) # B placeholder tensor in tiled data format B = tvm.placeholder((o, m, env.BATCH, env.BLOCK_OUT), name="B", dtype=env.acc_dtype) Copy Buffers¶ One specificity of hardware accelerators, is that on-chip memory has to be explicitly managed. This means that we’ll need to describe intermediate tensors A_buf and B_buf that can have a different memory scope than the original placeholder tensors A and B. Later in the scheduling phase, we can tell the compiler that A_buf and B_buf will live in the VTA’s on-chip buffers (SRAM), while A and B will live in main memory (DRAM). We describe A_buf and B_buf as the result of a compute operation that is the identity function. This can later be interpreted by the compiler as a cached read operation. # A copy buffer A_buf = tvm.compute((o, m, env.BATCH, env.BLOCK_OUT), lambda *i: A(*i), "A_buf") # B copy buffer B_buf = tvm.compute((o, m, env.BATCH, env.BLOCK_OUT), lambda *i: B(*i), "B_buf") Vector Addition¶ Now we’re ready to describe the vector addition result tensor C, with another compute operation. The compute function takes the shape of the tensor, as well as a lambda function that describes the computation rule for each position of the tensor. No computation happens during this phase, as we are only declaring how the computation should be done. # Describe the in-VTA vector addition C_buf = tvm.compute( (o, m, env.BATCH, env.BLOCK_OUT), lambda *i: A_buf(*i).astype(env.acc_dtype) + B_buf(*i).astype(env.acc_dtype), name="C_buf") Casting the Results¶ After the computation is done, we’ll need to send the results computed by VTA back to main memory. Note Memory Store Restrictions One specificity of VTA is that it only supports DRAM stores in the narrow env.inp_dtype data type format. This lets us reduce the data footprint for memory transfers (more on this in the basic matrix multiply example). We perform one last typecast operation to the narrow input activation data format. # Cast to output type, and send to main memory C = tvm.compute( (o, m, env.BATCH, env.BLOCK_OUT), lambda *i: C_buf(*i).astype(env.inp_dtype), name="C") This concludes the computation declaration part of this tutorial. Scheduling the Computation¶ While the above lines describes the computation rule, we can obtain C in many ways. TVM asks the user to provide an implementation of the computation called schedule. A schedule is a set of transformations to an original computation that transforms the implementation of the computation without affecting correctness. This simple VTA programming tutorial aims to demonstrate basic schedule transformations that will map the original schedule down to VTA hardware primitives. 
Default Schedule¶ After we construct the schedule, by default the schedule computes C in the following way: # Let's take a look at the generated schedule s = tvm.create_schedule(C.op) print(tvm.lower(s, [A, B, C], simple_mode=True)) Out: // attr [A_buf] storage_scope = "global" allocate A_buf[int32 * 1024] // attr [B_buf] storage_scope = "global" allocate B_buf[int32 * 1024] produce A_buf { for (i1, 0, 64) { for (i3, 0, 16) { A_buf[((i1*16) + i3)] = A[((i1*16) + i3)] } } } produce B_buf { for (i1, 0, 64) { for (i3, 0, 16) { B_buf[((i1*16) + i3)] = B[((i1*16) + i3)] } } } produce C_buf { for (i1, 0, 64) { for (i3, 0, 16) { A_buf[((i1*16) + i3)] = (A_buf[((i1*16) + i3)] + B_buf[((i1*16) + i3)]) } } } produce C { for (i1, 0, 64) { for (i3, 0, 16) { C[((i1*16) + i3)] = int8(A_buf[((i1*16) + i3)]) } } } Although this schedule makes sense, it won’t compile to VTA. In order to obtain correct code generation, we need to apply scheduling primitives and code annotation that will transform the schedule into one that can be directly lowered onto VTA hardware intrinsics. Those include: - DMA copy operations which will take globally-scoped tensors and copy those into locally-scoped tensors. - Vector ALU operations that will perform the vector add. Buffer Scopes¶ First, we set the scope of the copy buffers to indicate to TVM that these intermediate tensors will be stored in the VTA’s on-chip SRAM buffers. Below, we tell TVM that A_buf, B_buf, C_buf will live in VTA’s on-chip accumulator buffer which serves as VTA’s general purpose register file. Set the intermediate tensors’ scope to VTA’s on-chip accumulator buffer s[A_buf].set_scope(env.acc_scope) s[B_buf].set_scope(env.acc_scope) s[C_buf].set_scope(env.acc_scope) DMA Transfers¶ We need to schedule DMA transfers to move data living in DRAM to and from the VTA on-chip buffers. We insert dma_copy pragmas to indicate to the compiler that the copy operations will be performed in bulk via DMA, which is common in hardware accelerators. # Tag the buffer copies with the DMA pragma to map a copy loop to a # DMA transfer operation s[A_buf].pragma(s[A_buf].op.axis[0], env.dma_copy) s[B_buf].pragma(s[B_buf].op.axis[0], env.dma_copy) s[C].pragma(s[C].op.axis[0], env.dma_copy) ALU Operations¶ VTA has a vector ALU that can perform vector operations on tensors in the accumulator buffer. In order to tell TVM that a given operation needs to be mapped to the VTA’s vector ALU, we need to explicitly tag the vector addition loop with an env.alu pragma. # Tell TVM that the computation needs to be performed # on VTA's vector ALU s[C_buf].pragma(C_buf.op.axis[0], env.alu) # Let's take a look at the finalized schedule print(vta.lower(s, [A, B, C], simple_mode=True)) Out: // attr [A_buf] storage_scope = "local.acc_buffer" // attr [iter_var(vta, , vta)] coproc_scope = 2 produce A_buf { VTALoadBuffer2D(tvm_thread_context(VTATLSCommandHandle()), A, 0, 64, 1, 64, 0, 0, 0, 0, 0, 3) } produce B_buf { VTALoadBuffer2D(tvm_thread_context(VTATLSCommandHandle()), B, 0, 64, 1, 64, 0, 0, 0, 0, 64, 3) } // attr [iter_var(vta, , vta)] coproc_uop_scope = "VTAPushALUOp" produce C_buf { VTAUopLoopBegin(64, 1, 1, 0) VTAUopPush(1, 0, 0, 64, 0, 2, 0, 0) VTAUopLoopEnd() } vta.coproc_dep_push(2, 3) // attr [iter_var(vta, , vta)] coproc_scope = 3 vta.coproc_dep_pop(2, 3) produce C { VTAStoreBuffer2D(tvm_thread_context(VTATLSCommandHandle()), 0, 4, C, 0, 64, 1, 64) } vta.coproc_sync() This concludes the scheduling portion of this tutorial. TVM. 
my_vadd = vta.build(s, [A, B, C], "ext_dev", env.target_host, name="my_vadd") Saving the Module¶ TVM lets us save our module into a file so it can loaded back later. This is called ahead-of-time compilation and allows us to save some compilation time. More importantly, this allows us to cross-compile the executable on our development machine and send it over to the Pynq FPGA board over RPC for execution. # Write the compiled module into an object file. temp = util.tempdir() my_vadd.save(temp.relpath("vadd.o")) # Send the executable over RPC remote.upload(temp.relpath("vadd.o")) We can load the compiled module from the file system to run the code. f = remote.load_module("vadd.o") Running the Function¶ The compiled TVM function uses a concise C API and can be invoked from any language. TVM provides an array API in python to aid quick testing and prototyping. The array API is based on DLPack standard. - We first create a remote context (for remote execution on the Pynq). - Then tvm.nd.arrayformats the data accordingly. f()runs the actual computation. asnumpy()copies the result array back in a format that can be interpreted. # Get the remote device context ctx = remote.ext_dev(0) # Initialize the A and B arrays randomly in the int range of (-128, 128] A_orig = np.random.randint( -128, 128, size=(o * env.BATCH, m * env.BLOCK_OUT)).astype(A.dtype) B_orig = np.random.randint( -128, 128, size=(o * env.BATCH, m * env.BLOCK_OUT)).astype(B.dtype) # Apply packing to the A and B arrays from a 2D to a 4D packed layout A_packed = A_orig.reshape( o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3)) B_packed = B_orig.reshape( o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3)) # Format the input/output arrays with tvm.nd.array to the DLPack standard A_nd = tvm.nd.array(A_packed, ctx) B_nd = tvm.nd.array(B_packed, ctx) C_nd = tvm.nd.array(np.zeros((o, m, env.BATCH, env.BLOCK_OUT)).astype(C.dtype), ctx) # Invoke the module to perform the computation f(A_nd, B_nd, C_nd) Verifying Correctness¶ Compute the reference result with numpy and assert that the output of the matrix multiplication indeed is correct # Compute reference result with numpy C_ref = (A_orig.astype(env.acc_dtype) + B_orig.astype(env.acc_dtype)).astype(C.dtype) C_ref = C_ref.reshape( o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3)) np.testing.assert_equal(C_ref, C_nd.asnumpy()) print("Successful vector add test!") Out: Successful vector add test! Summary¶ This tutorial provides a walk-through of TVM for programming the deep learning accelerator VTA with a simple vector addition example. The general workflow includes: - Programming the FPGA with the VTA bitstream over RPC. - Describing the vector add computation via a series of computations. - Describing how we want to perform the computation using schedule primitives. - Compiling the function to the VTA target. - Running the compiled module and verifying it against a numpy implementation. You are more than welcome to check other examples out and tutorials to learn more about the supported operations, schedule primitives and other features supported by TVM to program VTA. Total running time of the script: ( 0 minutes 0.229 seconds) Gallery generated by Sphinx-Gallery
https://docs.tvm.ai/vta/tutorials/vta_get_started.html
2019-06-16T01:43:49
CC-MAIN-2019-26
1560627997508.21
[array(['https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/vadd_dataflow.png', 'https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/vadd_dataflow.png'], dtype=object) ]
docs.tvm.ai
This integration collects data from Traefik in order to check its health and monitor: To install the Traefik check on your host: ddev release build traefik to build the package. datadog-agent integration install -w path/to/traefik/dist/<ARTIFACT_NAME>.whl. Edit the traefik.d/conf.yaml file in the conf.d/ folder at the root of your Agent’s configuration directory to start collecting your Traefik metrics: Checks ====== [...] traefik ------- - instance #0 [OK] - Collected 2 metrics, 0 events & 1 service check [...]
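If you want to sanity-check what the Agent will scrape before enabling the check, you can poll Traefik's health endpoint yourself. The short Python sketch below is illustrative only: it assumes Traefik's API entrypoint is enabled and that a /health endpoint is reachable on port 8080, which may not match your deployment.

# Minimal sanity check against an assumed Traefik health endpoint.
# Adjust HEALTH_URL to match the entrypoint configured for your Traefik instance.
import json
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # assumption, not a fixed default

def fetch_health(url=HEALTH_URL):
    """Return the decoded JSON payload served by the health endpoint."""
    with urllib.request.urlopen(url, timeout=5) as response:
        return json.load(response)

if __name__ == "__main__":
    payload = fetch_health()
    # Print whatever keys the endpoint exposes; exact fields vary by version.
    for key, value in sorted(payload.items()):
        print(f"{key}: {value}")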
https://docs.datadoghq.com/integrations/traefik/
2019-06-16T01:14:30
CC-MAIN-2019-26
1560627997508.21
[]
docs.datadoghq.com
The OpenStack dashboard project. Also the name of the top-level Python object which handles registration for the app. A Python class representing a top-level navigation item (e.g. “project”) which provides a consistent API for Horizon-compatible applications. A Python class representing a sub-navigation item (e.g. “instances”) which contains all the necessary logic (views, forms, tests, etc.) for that interface. Used in user-facing text in place of the term “Tenant” which is Keystone’s word. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/horizon/latest/glossary.html
2019-06-16T01:07:52
CC-MAIN-2019-26
1560627997508.21
[]
docs.openstack.org
Creating Required Users and Groups This page is supplemental to main article: Creating a Virtual Mail Server with Postfix, Dovecot and MySQL You will need to create a few special users and groups to be able to build and/or run your mail server components. We will use the SBo assigned uid and gid for each user and group. To prepare to build or install postfix, execute the following commands on the target machine: groupadd -g 200 postfix useradd -u 200 -d /dev/null -s /bin/false -g postfix postfix groupadd -g 201 postdrop To prepare to build or install dovecot, execute the following commands on the target machine: groupadd -g 202 dovecot useradd -d /dev/null -s /bin/false -u 202 -g 202 dovecot groupadd -g 248 dovenull useradd -d /dev/null -s /bin/false -u 248 -g 248 dovenull Additionally, we must create a user, group and target directory for the virtual mail functions. There is no recommended uid/gid for these but a common choice for both is 5000. You may wish to change these to suit your own environment. Execute the following commands on the install machine (these are not necessary when building): groupadd -g 5000 vmail useradd -d /var/vmail -s /bin/false -u 5000 -g 5000 vmail mkdir -p /var/vmail/vhosts chown -R vmail:vmail /var/vmail To test whether these users or groups already exist on a machine, substitute the user or group names into the respective commands: cat /etc/passwd |grep ^postfix cat /etc/group |grep ^postdrop If the corresponding name exists it will be shown, otherwise you will see only an empty prompt in response. For troubleshooting you should verify that each user and group is defined on your machine as shown here. Return to main article page
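If you would rather script the existence checks above instead of grepping /etc/passwd and /etc/group by hand, the following Python sketch performs the same lookups through the standard pwd and grp modules. Adjust the name lists if you deviated from the users and groups created on this page.

# Check that the mail-server users and groups described above exist locally.
# Only the Python standard library is used, so this runs on any Slackware box.
import grp
import pwd

USERS = ["postfix", "dovecot", "dovenull", "vmail"]
GROUPS = ["postfix", "postdrop", "dovecot", "dovenull", "vmail"]

def report(names, lookup, kind, id_attr):
    for name in names:
        try:
            entry = lookup(name)
            print(f"{kind} {name}: present (id {getattr(entry, id_attr)})")
        except KeyError:
            print(f"{kind} {name}: MISSING")

if __name__ == "__main__":
    report(USERS, pwd.getpwnam, "user", "pw_uid")
    report(GROUPS, grp.getgrnam, "group", "gr_gid")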
http://docs.slackware.com/howtos:network_services:postfix_dovecot_mysql:uid_gid
2019-06-16T00:37:11
CC-MAIN-2019-26
1560627997508.21
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
Connecting a DEP Account What is the Apple Device Enrollment Program? The Apple Device Enrollment Program (DEP) "Devices" link on the left hand side of the screen. - Click the "Enrollments" sub-menu option. - On the Enrollments page, click the "Apple DEP" tab. - Click "Add Account". - Follow the on-screen steps based on whether you are using Apple Business Manager or a legacy Apple DEP account. These steps will guide you through the certificate exchange process. Once you have uploaded your Apple server token, SimpleMDM will link to your Apple DEP / Business Manager account.
https://docs.simplemdm.com/article/23-connecting-a-dep-account
2019-06-16T01:04:16
CC-MAIN-2019-26
1560627997508.21
[]
docs.simplemdm.com
Breaking: #61802 - deprecated isLocalconfWritable function removed¶ See Issue #61802 Description¶ The function isLocalconfWritable() from \TYPO3\CMS\Core\Utility\ExtensionManagementUtility has been removed. The bootstrap now just checks for the existence of the file and redirects to the install tool if it doesn’t exist. Affected installations¶ A TYPO3 instance is affected if a 3rd party extension uses the removed function.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.0/Breaking-61802-IsLocalconfWritableFunctionRemoved.html
2019-06-16T01:48:31
CC-MAIN-2019-26
1560627997508.21
[]
docs.typo3.org
All content with label 2lcache+api+datagrid+faq+gridfs+infinispan+installation+repeatable_read. Related Labels: expiration, publish, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, schema, listener, cache, amazon, s3, grid, test, jcache, xsd, maven, documentation, wcm, write_behind, ec2, s, hibernate, getting, aws, interface, clustering, setup, eviction, out_of_memory, concurrency, examples, jboss_cache, import, index, events, configuration, hash_function, batch, buddy_replication, loader, xa, cloud, mvcc, tutorial, notification, read_committed, jbosscache3x, xml, distribution, started, cachestore, data_grid, cacheloader,, batching, store, whitepaper, jta, as5, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - 2lcache, - api, - datagrid, - faq, - gridfs, - infinispan, - installation, - repeatable_read ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/2lcache+api+datagrid+faq+gridfs+infinispan+installation+repeatable_read
2019-06-16T01:37:23
CC-MAIN-2019-26
1560627997508.21
[]
docs.jboss.org
All content with label async+development+faq+grid+hot_rod+infinispan+jboss_cache+listener+release+store+user_guide+xaresource. Related Labels: podcast, expiration, publish, datagrid, interceptor, server, recovery, transactionmanager, dist, partitioning, deadlock, intro, archetype, pojo_cache,, jbosscache3x, distribution, jira, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, websocket, transaction, build, searchable, demo, scala, cache_server, installation, client, non-blocking, migration, jpa, filesystem, tx, article, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, hotrod, webdav, docs, batching, consistent_hash, whitepaper, jta, 2lcache, as5, jsr-107, jgroups, lucene, locking, rest more » ( - async, - development, - faq, - grid, - hot_rod, - infinispan, - jboss_cache, - listener, - release, - store, - user_guide, - xaresource ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/async+development+faq+grid+hot_rod+infinispan+jboss_cache+listener+release+store+user_guide+xaresource
2019-06-16T02:16:17
CC-MAIN-2019-26
1560627997508.21
[]
docs.jboss.org
All content with label cloud+gridfs+hash_function+infinispan+installation+jsr-107+out_of_memory+repeatable_read+standalone. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, intro, archetype, jbossas, lock_striping, nexus, guide, schema, cache, amazon, s3, grid, memcached, jcache, test, api, xsd, wildfly, maven, documentation, wcm, youtube, write_behind, ec2, s, hibernate, getting, aws, getting_started, interface, custom_interceptor, setup, clustering, eviction, concurrency, examples, jboss_cache, import, index, events, configuration, batch, buddy_replication, loader, write_through, remoting, mvcc, notification, tutorial, presentation, murmurhash2, xml, read_committed, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, resteasy, cluster, development, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, mod_cluster, client, non-blocking, migration, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, murmurhash, webdav, hotrod, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jgroups, lucene, locking, rest, hot_rod more » ( - cloud, - gridfs, - hash_function, - infinispan, - installation, - jsr-107, - out_of_memory, - repeatable_read, - standalone ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/cloud+gridfs+hash_function+infinispan+installation+jsr-107+out_of_memory+repeatable_read+standalone
2019-06-16T01:33:27
CC-MAIN-2019-26
1560627997508.21
[]
docs.jboss.org
This form is for faculty and instructors to request class access to the shared research computing resources managed by the Center for Research Computing. For all other help issues regarding these services, please see our Getting Help form. For Faculty/Instructors Only! This form is for faculty and instructors only to request access to a cluster for their class. Students should not fill out this form. Please log in to confluence (link on the top right of the page) before filling out this form. For best service please request access prior to the beginning of the semester. Email address for all ticket correspondence. Use your [email protected] for best results Name of class being requested, e.g. COMP101 Please include all this information in your request: Without a reservation any jobs submitted will still compete with all other jobs on the system for CPU resources regardless of the wall-time value. User Agreement At the conclusion of the course please send a brief summary of usage including the number of students and a statement of impact regarding access to the shared computing resource for your class to Dr. Jan Odegard, Executive Director, K2I ([email protected]). Submission of a request for a class account constitutes acceptance of this agreement. Please type the word appearing in the picture.
https://docs.rice.edu/confluence/display/CD/Request+Class+Access+to+Shared+Research+Computing+Resources
2019-06-16T01:54:40
CC-MAIN-2019-26
1560627997508.21
[]
docs.rice.edu
Table of Contents Product Index.
http://docs.daz3d.com/doku.php/public/read_me/index/17600/start
2020-01-17T21:28:15
CC-MAIN-2020-05
1579250591234.15
[]
docs.daz3d.com
Hosting with Ansible¶ All components of Drutopia hosting are public as free/libre open source software as part of our commitment to LibreSaaS. But you don’t need to know about any of this to have a site on the Drutopia platform. Nor do you need to use this approach to hosting your own Drutopia site(s), but you can! As befits technical documentation, the main resource is the Drutopia Host README. Every site on the platform has the concept of a custom repository—which is used only to get configuration, templates, and other non-PHP theme files like CSS and JavaScript—and a drutopia_version, which is where all the code is defined. For a truly custom site, as part of Drutopia’s platform as a service or on your own server, you could have your site’s repository available as a drutopia_version within the host definition. Here is the documentation for Drutopia’s private repository. This is for example only (or for Drutopia maintainers), as the private hosting list of sites is indeed private. Setup¶ In order to run builds for the official server(s): - If not already, clone drutopia_host - Clone this project into a subfolder of drutopia_host - Also clone the build_artifacts site under the drutopia_host project The above can be done with these commands: git clone [email protected]:drutopia-platform/drutopia_host.git cd drutopia_host git clone [email protected]:drutopia-platform/hosting_private.git git clone [email protected]:drutopia-platform/build_artifacts.git The contents of drutopia_host should look like this: ansible.cfg build_artifacts host_vars hosting_private inventory_sample provision.yml README.md roles Quick start:¶ You will have a deploy.log created automatically in the current folder. You may wish to occasionally move this out of the way as it will get rather large. The commands here assume the hosting_private folder is the current working folder. cd hosting_private Ensure you have connectivity and have correctly configured your vault key, etc. with: ansible -i inventory_live -m ping all Update the entire system (base software, all builds, all sites!). Diff is optional, but I like having it: ansible-playbook -i inventory ../provision.yml --diff Update a single member, using whatever builds already exist on the server: ansible-playbook -i inventory_live ../provision.yml --diff -e "target_member=family_home_test" --tags=members Individual builds can be updated and pushed to the server. Any builds marked not active will not build, but an individual one can also be targeted (i.e. skip all others) just like target_member: ansible-playbook -i inventory_live ../provision.yml --diff -e "target_build=stable" --tags="build,push" Configuration checks are performed prior to import. If a configuration was changed on the server, the site update will stop. To force it to continue, either set the config_import_force option for the site, or force ALL sites to import config (recommended to use only when combined with target_member). ansible-playbook -i inventory_live ../provision.yml --diff -e "config_import_force_all=true,target_member=family_home_test" --tags=members
http://docs.drutopia.org/en/latest/hosting-with-ansible.html
2020-01-17T23:01:43
CC-MAIN-2020-05
1579250591234.15
[]
docs.drutopia.org
Data Contract Versioning As applications evolve, you may also have to change the data contracts the services use. This topic explains how to version data contracts. This topic describes the data contract versioning mechanisms. For a complete overview and prescriptive versioning guidance, see Best Practices: Data Contract Versioning. Breaking vs. Nonbreaking Changes Changes to a data contract can be breaking or nonbreaking. When a data contract is changed in a nonbreaking way, an application using the older version of the contract can communicate with an application using the newer version, and an application using the newer version of the contract can communicate with an application using the older version. On the other hand, a breaking change prevents communication in one or both directions. Any changes to a type that do not affect how it is transmitted and received are nonbreaking. Such changes do not change the data contract, only the underlying type. For example, you can change the name of a field in a nonbreaking way if you then set the Name property of the DataMemberAttribute to the older version name. The following code shows version 1 of a data contract. // Version 1 [DataContract] public class Person { [DataMember] private string Phone; } ' Version 1 <DataContract()> _ Public Class Person <DataMember()> _ Private Phone As String End Class The following code shows a nonbreaking change. // Version 2. This is a non-breaking change because the data contract // has not changed, even though the type has. [DataContract] public class Person { [DataMember(Name = "Phone")] private string Telephone; } ' Version 2. This is a non-breaking change because the data contract ' has not changed, even though the type has. <DataContract()> _ Public Class Person <DataMember(Name := "Phone")> _ Private Telephone As String End Class Some changes do modify the transmitted data, but may or may not be breaking. The following changes are always breaking: Changing the Name or Namespace value of a data contract. Changing the order of data members by using the Order property of the DataMemberAttribute. Renaming a data member. Changing the data contract of a data member. For example, changing the type of data member from an integer to a string, or from a type with a data contract named "Customer" to a type with a data contract named "Person". The following changes are also possible. Adding and Removing Data Members In most cases, adding or removing a data member is not a breaking change, unless you require strict schema validity (new instances validating against the old schema). When a type with an extra field is deserialized into a type with a missing field, the extra information is ignored. (It may also be stored for round-tripping purposes; for more information, see Forward-Compatible Data Contracts). When a type with a missing field is deserialized into a type with an extra field, the extra field is left at its default value, usually zero or null. (The default value may be changed; for more information, see Version-Tolerant Serialization Callbacks.) For example, you can use the CarV1 class on a client and the CarV2 class on a service, or you can use the CarV1 class on a service and the CarV2 class on a client. // Version 1 of a data contract, on machine V1. [DataContract(Name = "Car")] public class CarV1 { [DataMember] private string Model; } // Version 2 of the same data contract, on machine V2. 
[DataContract(Name = "Car")] public class CarV2 { [DataMember] private string Model; [DataMember] private int HorsePower; } ' Version 1 of a data contract, on machine V1. <DataContract(Name := "Car")> _ Public Class CarV1 <DataMember()> _ Private Model As String End Class ' Version 2 of the same data contract, on machine V2. <DataContract(Name := "Car")> _ Public Class CarV2 <DataMember()> _ Private Model As String <DataMember()> _ Private HorsePower As Integer End Class The version 2 endpoint can successfully send data to the version 1 endpoint. Serializing version 2 of the Car data contract yields XML similar to the following. <Car> <Model>Porsche</Model> <HorsePower>300</HorsePower> </Car> The deserialization engine on V1 does not find a matching data member for the HorsePower field, and discards that data. Also, the version 1 endpoint can send data to the version 2 endpoint. Serializing version 1 of the Car data contract yields XML similar to the following. <Car> <Model>Porsche</Model> </Car> The version 2 deserializer does not know what to set the HorsePower field to, because there is no matching data in the incoming XML. Instead, the field is set to the default value of 0. Required Data Members A data member may be marked as being required by setting the IsRequired property of the DataMemberAttribute to true. If required data is missing while deserializing, an exception is thrown instead of setting the data member to its default value. Adding a required data member is a breaking change. That is, the newer type can still be sent to endpoints with the older type, but not the other way around. Removing a data member that was marked as required in any prior version is also a breaking change. Changing the IsRequired property value from true to false is not breaking, but changing it from false to true may be breaking if any prior versions of the type do not have the data member in question. Note Although the IsRequired property is set to true, the incoming data may be null or zero, and a type must be prepared to handle this possibility. Do not use IsRequired as a security mechanism to protect against bad incoming data. Omitted Default Values It is possible (although not recommended) to set the EmitDefaultValue property on the DataMemberAttribute attribute to false, as described in Data Member Default Values. If this setting is false, the data member will not be emitted if it is set to its default value (usually null or zero). This is not compatible with required data members in different versions in two ways: A data contract with a data member that is required in one version cannot receive default (null or zero) data from a different version in which the data member has EmitDefaultValueset to false. A required data member that has EmitDefaultValueset to falsecannot be used to serialize its default (null or zero) value, but can receive such a value on deserialization. This creates a round-tripping problem (data can be read in but the same data cannot then be written out). Therefore, if IsRequiredis trueand EmitDefaultValueis falsein one version, the same combination should apply to all other versions such that no version of the data contract would be able to produce a value that does not result in a round trip. Schema Considerations For an explanation of what schema is produced for data contract types, see Data Contract Schema Reference. The schema WCF produces for data contract types makes no provisions for versioning. 
That is, the schema exported from a certain version of a type contains only those data members present in that version. Implementing the IExtensibleDataObject interface does not change the schema for a type. Data members are exported to the schema as optional elements by default. That is, the minOccurs (XML attribute) value is set to 0. Required data members are exported with minOccurs set to 1. Many of the changes considered to be nonbreaking are actually breaking if strict adherence to the schema is required. In the preceding example, a CarV1 instance with just the Model element would validate against the CarV2 schema (which has both Model and Horsepower, but both are optional). However, the reverse is not true: a CarV2 instance would fail validation against the CarV1 schema. Round-tripping also entails some additional considerations. For more information, see the "Schema Considerations" section in Forward-Compatible Data Contracts. Other Permitted Changes Implementing the IExtensibleDataObject interface is a nonbreaking change. However, round-tripping support does not exist for versions of the type prior to the version in which IExtensibleDataObject was implemented. For more information, see Forward-Compatible Data Contracts. Enumerations Adding or removing an enumeration member is a breaking change. Changing the name of an enumeration member is breaking, unless its contract name is kept the same as in the old version by using the EnumMemberAttribute attribute. For more information, see Enumeration Types in Data Contracts. Collections Most collection changes are nonbreaking because most collection types are interchangeable with each other in the data contract model. However, making a noncustomized collection customized or vice versa is a breaking change. Also, changing the collection's customization settings is a breaking change; that is, changing its data contract name and namespace, repeating element name, key element name, and value element name. For more information about collection customization, see Collection Types in Data Contracts. Naturally, changing the data contract of contents of a collection (for example, changing from a list of integers to a list of strings) is a breaking change.
https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/data-contract-versioning?redirectedfrom=MSDN
2020-01-17T22:08:02
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
CommCell Group Dashboard The CommCell Group Dashboard displays a preview of the most critical information gathered from all of the CommCell computers in the selected group. Use the CommCell Group Dashboard to monitor CommCell performance from a high level. Many windows on the Dashboard open more detailed reports that you can use to analyze the collected statistics. For instructions on creating a CommCell Group, see Creating a CommCell Group for Reports. Usage Overview The Usage Overview table provides a list of all entities in all CommCells in the CommCell Group. SLA Percentage The percentage of all CommCell computers that met or missed Service Level Agreement (SLA) at the last time data was collected. The formula used to calculate SLA is: Number of Clients that Met SLA / Total Number of Clients, at the time of data collection, regardless of the SLA time period configured for that particular client. You can click the Details button to open the SLA Details Report and view information for each CommCell. Current Capacity Usage The Current Capacity Usage chart indicates the maximum amount of storage space used to back up and archive data from all CommCell computers in the last month. You can click the Details button to view the Capacity License Usage report. Storage - Data Retention Storage - Data Retention estimates the amount of data that will be eligible for pruning, based solely on the retention days and extended retention settings configured in the storage policy copy. Other parameters related to data retention, such as retention cycles and dependencies amongst backup types are not considered in this calculation. For a more detailed calculation of data retention, see the Data Retention Forecast and Compliance Report. Use this chart to determine the amount of stored data that is old enough to be archived. Categories in the chart include data that is 0 - 90 days old, data that is 90 - 365 days old, and data that is more than one year old. Last Week Backup Job Summary The Last Week Backup Job Summary chart displays the number of jobs that succeeded or failed during each day in the last week. Click the Details button to view the Worldwide Backup Statistics Report. Strikes The Strikes chart indicates the number of subclients in all CommCell computers in the group that have one or more failed backup jobs since the last successful backup job. Click the Details button to view the Strike Count Report. Deduplication Saving Deduplication Saving displays the percentage of storage space saved through deduplication for all CommCell environments in this group. Click the Details button to view the CommCell Dedupe Savings Report.
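To make the SLA formula above concrete, here is a small worked example in Python. The client counts are invented sample numbers, not values taken from any real CommCell.

# Worked example of the SLA formula from this page:
# SLA percentage = clients that met SLA / total clients, at collection time.
def sla_percentage(clients_met_sla, total_clients):
    if total_clients == 0:
        return 0.0
    return 100.0 * clients_met_sla / total_clients

# Hypothetical group: 180 of 200 clients met SLA when data was last collected.
print(f"SLA: {sla_percentage(180, 200):.1f}%")  # prints "SLA: 90.0%"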
http://docs.snapprotect.com/netapp/v11/article?p=features/reports/custom/commcell_group_dashboard.htm
2020-01-17T22:56:18
CC-MAIN-2020-05
1579250591234.15
[array(['images/reports/metrics/commcell_group_dashboard.png', None], dtype=object) ]
docs.snapprotect.com
TOPICS× Use cases: creating overviews In the following example, we will create overview-type Web applications to display all the Web applications in your database. Configure the following elements: - a filter on the folder (refer to Adding a filter on a folder ), - a button for creating a new Web application (refer to Adding a button to configure a new Web application ), - detail display for each entry in the list (refer to Adding detail to a list ), - one filter per link editing tool (refer to Creating a filter using a link editor ), - a refresh link (refer to Creating a refresh link ). Creating a single-page Web application - Create a single Page Web application and disable outbound transitions and transitions to the next page. - Changing the page title.This title will appear in the overview header and in the Web application overview. - In the Web application properties, modify the rendering of your application by selecting the Single-page Web application template. - Open the Page activity of your Web application and open a list ( Static element > List ). - In the Data tab of your list, select the type of Web applications document and the Label , Creation date and Type of application output columns. - In the Filter sub-tab, create the following filter as shown below in order to display Web applications only and exclude templates from your view. - Close the configuration window of your page and click Preview .The list of Web applications available in your database is displayed. Adding a filter on a folder In an overview, you can choose to access data depending on its location in the Adobe Campaign tree. This is a filter on a folder. Apply the following process to add it to your overview. - Place your cursor on the Page node of your Web application and add a Select folder element ( Advanced controls > Select folder ). - In the Storage window which comes up, click the Edit variables link. - Change the variable label to suit your needs. - Change the variable name with the folder value.The name of the variable must match the name of the element linked to the folder (defined in the schema), i.e. folder in this case. You must re-use this name when you reference the table. - Apply the XML type to the variable. - Select the Refresh page interaction. - Place your cursor on your list, and in the Advanced tab, reference the variable previously created in the Folder filter XPath tab of the list. You must use the name of the element concerned by the folder link, i.e. folder .At this stage, the Web application is not within its application context, the filter can therefore not be tested on the folder. Adding detail to a list When you configure a list in your overview, you can choose to display additional details for each entry on your list. - Place your cursor on the previously created list element. - In the General tab, select the Columns and additional detail display mode in the drop-down list. - In the Data tab, add the Primary key , Internal name and Description column and select the Hidden field option for each one.This way, this information will only be visible in the detail of each entry. 
- In the Additional detail tab, add the following code: <div class="detailBox"> <div class="actionBox"> <span class="action"><img src="/xtk/img/fileEdit.png"/><a title="Open" class="linkAction" href="xtk://open/?schema=nms:webApp&form=nms:webApp&pk= <%=webApp.id%>">Open...</a></span> <% if( webApp.@appType == 1 ) { //survey %> <span class="action"><img src="/xtk/img/report.png"/><a target="_blank" title="Reports" class="linkAction" href="/xtk/report.jssp?_context=selection& _schema=nms:webApp&_selection=<%=webApp.@id%> &__sessiontoken=<%=document.controller.getSessionToken()%>">Reports</a></span> <% } %> </div> <div> Internal name: <%= webApp.@internalName %> </div> <% if( webApp.desc != "" ) { %> <div> Description: <%= webApp.desc %> </div> <% } %> </div> JavaScript libraries take five minutes to refresh on the server. You can restart the server to avoid waiting for this delay. Filtering and updating the list In this section, you will create a filter for displaying the overview of Web applications created by a specific operator. This filter is created with a link editor. Once you have selected an operator, refresh the list to apply your filter; this requires creating a refresh link. These two elements will be grouped in the same container in order to be graphically grouped in the overview. - Place your cursor on the Page element and select Container > Standard . - Set the number of columns to 2 , so that the link editor and the link are next to each other.For information on element layout, refer to this section . - Apply dottedFilter .This style is referred to in the Single-page Web applicatio n template selected previously. Creating a filter using a link editor - Place your cursor on the container created during the previous stage and insert a link editor via the Advanced controls menu. - In the storage window which opens automatically, select the Variables option, then click the Edit variables link and create an XML variable for filtering data. - Modify the label.It will appear next to the Filter field in the overview. - Choose the Operator table as an application schema. - Place your cursor on the list element and create a filter via the Data > Filter tab: - Expression: Foreign key of the 'Created by' link - Operator: equals to - Value: Variables (variables) - Taken into account if: '$(var2/@id)'!='' The Web application user must be an identified operator with the appropriate Adobe Campaign rights to access the information. This type of configuration will not work for anonymous Web applications. Creating a refresh link - Place the cursor on the container and insert a Link via the Static elements menu. - Modify the label. - Select Refresh data in a list . - Add the previously created list. - Add the refresh icon on the Image field: **/xtk/img/refresh.png **. - Using the sort-order arrows, reorganize the various elements of your Web application as shown below. The Web application is now configured. You can click the Preview tab to preview it.
https://docs.adobe.com/content/help/en/campaign-classic/using/designing-content/web-applications/use-cases--creating-overviews.html
2020-01-17T22:33:20
CC-MAIN-2020-05
1579250591234.15
[array(['/content/dam/help/campaign-classic.en/help/web/using/assets/s_ncs_configuration_webapp_overview.png', None], dtype=object) array(['/content/dam/help/campaign-classic.en/help/web/using/assets/s_ncs_configuration_webapp_result.png', None], dtype=object) ]
docs.adobe.com
Galaxy User Guide¶ collections - Finding roles on Galaxy - Installing roles from Galaxy Finding collections on Galaxy¶ To find collections on Galaxy: - Click the Search icon in the left-hand navigation. - Set the filter to collection. - Set other filters and press enter. Galaxy presents a list of collections that match your search criteria. Installing collections¶ Installing a collection from Galaxy¶ You can also directly use the tarball from your build: ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz . play.yml ├── collections/ │ └── ansible_collections/ │ └── my_namespace/ │ └── my_collection/<collection structure lives here> See Collection structure for details on the collection directory structure. Downloading a collection from Automation Hub¶¶. Install multiple collections with a requirements file¶ You can also setup)' The version key can take in the same range identifier format documented above. Roles can also be specified and placed under the roles key. The values follow the same format as a requirements file used in older Ansible releases. ---. Configuring the ansible-galaxy client¶. See API token for details. [galaxy_server.automation_hub] url= auth_url= token=my_ah_token [galaxy_server.my_org_hub] url= username=my_user password=my_pass [galaxy_server.release_galaxy] url= token=my_token [galaxy_server.test_galaxy] url= token=my_test conjuction with username, for basic authentication. auth_url: The URL of a Keycloak server ‘token_endpoint’ if using SSO authentication (for example, Automation Hub). Mutually exclusive with username. Requires token.¶. Get more information about a role¶ Installing roles from Galaxy¶ see GALAXY_SERVER. Installing roles¶ Use the ansible-galaxy command to download roles from the Galaxy website $ ansible-galaxy install namespace.role_name Setting where to install roles¶. - Define roles_pathin an ansible.cfgfile. - Use the --roles-pathoption for the ansible-galaxycommand. The following provides an example of using --roles-path to install the role into the current working directory: $ ansible-galaxy install --roles-path . geerlingguy.apache See also - Configuring Ansible - All about configuration files Installing a specific version of a role¶,v1.0.0 It is also possible to point directly to the git repository and specify a branch name or commit hash as the version. For example, the following will install a specific commit: $ ansible-galaxy install git+ Installing multiple roles from a file¶ You namespace.role_name, if downloading from Galaxy; otherwise, provide a URL pointing to a repository within a git based SCM. See the examples below. This is a required attribute. - scm - Specify the SCM. As of this writing only git or hg are allowed. See the examples below. Defaults to git. - version: - The version of the role to download. Provide a release tag value, commit hash, or branch name. Defaults to the branch set as a default in the repository, otherwise defaults to Installing roles and collections from the same requirements.yml file¶ You can install roles and collections from the same requirements files, with some caveats. ---. Installing multiple roles from multiple files¶ - src: yatesr.timezone - include: <path_to_requirements>/webserver.yml To install all the roles from both files, pass the root file, in this case requirements.yml on the command line, as follows: $ ansible-galaxy install -r requirements.yml Dependencies¶ namespace.role_name. 
You can also use the more complex format in requirements.yml, allowing you to provide src, scm, version, and name. depending on what tags and conditionals are applied. If the source of a role is Galaxy, specify the role in the format namespace.role_name: dependencies: - geerlingguy.apache - geerlingguy.ansible Alternately, you can specify the role dependencies in the complex form used in requirements.yml Galaxy expects all role dependencies to exist in Galaxy, and therefore dependencies to be specified in the namespace.role_name format. If you import a role with a dependency where the src value is a URL, the import process will fail. List installed roles¶¶ Use remove to delete a role from roles_path: $ ansible-galaxy remove namespace.role_name See also - Using collections - Sharable collections of modules, playbooks and roles - Roles - Reusable tasks, handlers, and other files in a known directory structure
https://docs.ansible.com/ansible/devel/galaxy/user_guide.html
2020-01-17T21:18:38
CC-MAIN-2020-05
1579250591234.15
[]
docs.ansible.com
While Class Definition Executes a contained activity while a condition evaluates to true. public ref class While sealed : System::Activities::NativeActivity [System.Windows.Markup.ContentProperty("Body")] public sealed class While : System.Activities.NativeActivity type While = class inherit NativeActivity Public NotInheritable Class While Inherits NativeActivity - Inheritance - While - Attributes - Examples The reference includes a code sample that demonstrates creating a While activity.
https://docs.microsoft.com/en-gb/dotnet/api/system.activities.statements.while?view=netframework-4.8
2020-01-17T21:52:09
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Units¶ HOOMD-blue stores and computes all values in a system of generic, fully self-consistent set of units. No conversion factors need to be applied to values at every step. For example, a value with units of force comes from dividing energy by distance. Fundamental Units¶ The three fundamental units are: - distance - \(\mathcal{D}\) - energy - \(\mathcal{E}\) - mass - \(\mathcal{M}\) All other units that appear in HOOMD-blue are derived from these. Values can be converted into any other system of units by assigning the desired units to \(\mathcal{D}\), \(\mathcal{E}\), and \(\mathcal{M}\) and then multiplying by the appropriate conversion factors. The standard Lennard-Jones symbols \(\sigma\) and \(\epsilon\) are intentionally not referred to here. When you assign a value to \(\epsilon\) in hoomd, for example, you are assigning it in units of energy: \(\epsilon = 5 \mathcal{E}\). \(\epsilon\) is NOT the unit of energy - it is a value with units of energy. Temperature (thermal energy)¶ HOOMD-blue accepts all temperature inputs and provides all temperature output values in units of energy: \(k T\), where \(k\) is Boltzmann’s constant. When using physical units, the value \(k_\mathrm{B}\) is determined by the choices for distance, energy, and mass. In reduced units, one usually reports the value \(T^* = \frac{k T}{\mathcal{E}}\). Most of the argument inputs in HOOMD take the argument name kT to make it explicit. A few areas of the code may still refer to this as temperature. Charge¶ The unit of charge used in HOOMD-blue is also reduced, but is not represented using just the 3 fundamental units - the permittivity of free space \(\varepsilon_0\) is also present. The units of charge are: \((4 \pi \varepsilon_0 \mathcal{D} \mathcal{E})^{1/2}\). Divide a given charge by this quantity to convert it into an input value for HOOMD-blue. Common derived units¶ Here are some commonly used derived units: - time - \(\tau = \sqrt{\frac{\mathcal{M} \mathcal{D}^2}{\mathcal{E}}}\) - volume - \(\mathcal{D}^3\) - velocity - \(\frac{\mathcal{D}}{\tau}\) - momentum - \(\mathcal{M} \frac{\mathcal{D}}{\tau}\) - acceleration - \(\frac{\mathcal{D}}{\tau^2}\) - force - \(\frac{\mathcal{E}}{\mathcal{D}}\) - pressure - \(\frac{\mathcal{E}}{\mathcal{D}^3}\) Example physical units¶ There are many possible choices of physical units that one can assign. One common choice is: - distance - \(\mathcal{D} = \mathrm{nm}\) - energy - \(\mathcal{E} = \mathrm{kJ/mol}\) - mass - \(\mathcal{M} = \mathrm{amu}\) Derived units / values in this system: - time - picoseconds - velocity - nm/picosecond - k = 0.00831445986144858 kJ/mol/Kelvin
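As a cross-check of the example above, the short Python script below converts the chosen units (nm, kJ/mol, amu) to SI and recovers both the derived time unit and the Boltzmann constant quoted in the list. It uses only standard physical constants and is not part of HOOMD-blue.

# Recover the derived time unit and k for the example unit system
# (distance = nm, energy = kJ/mol, mass = amu), using tau = sqrt(M * D^2 / E).
import math

AVOGADRO = 6.02214076e23      # 1/mol
BOLTZMANN_SI = 1.380649e-23   # J/K

distance = 1.0e-9             # 1 nm in meters
energy = 1.0e3 / AVOGADRO     # 1 kJ/mol in joules
mass = 1.66053907e-27         # 1 amu in kilograms

tau = math.sqrt(mass * distance**2 / energy)
print(f"time unit: {tau * 1e12:.4f} ps")      # ~1.0000 ps

k_reduced = BOLTZMANN_SI / energy             # energy units per Kelvin
print(f"k = {k_reduced:.8f} kJ/mol/Kelvin")   # ~0.00831446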
https://hoomd-blue.readthedocs.io/en/stable/units.html
2020-01-17T23:02:55
CC-MAIN-2020-05
1579250591234.15
[]
hoomd-blue.readthedocs.io
How to create facets Once the Posterno search forms extension has been installed and activated, a "Search forms" menu will appear under the "Listings" menu of your WordPress admin panel. Add new facet To add a new facet, click on the "Add new facet" button. You'll then be taken to a new page where you can configure your new facet. Once you've selected the type of facet you wish to create, additional settings will appear within the "Configuration" metabox. Each facet type has different settings. The data source option The data source setting is what tells the facet where to grab the data from. Posterno provides 4 groups of sources: listing data, metadata, taxonomies and custom fields. - Listing data: the listing data group allows the facet to grab data from the title, published data, last modified date and listing author. - Metadata: the metadata group allows the facet to grab data from private data that is stored in the background by Posterno. This currently only provides access to the listing's coordinates. - Taxonomies: this group allows you to create facets capable of filtering by using all the taxonomies registered for the listings post type. The default taxonomies are type, category, location and tags. Developers can register custom taxonomies and they'll be automatically available for filtering through the form. - Custom fields: the majority of listings custom fields that you create through Posterno can be searched through facets. Index update After your create, delete or change certain settings of some facet types, you'll be required to update the database index. When needed, you'll see a message at the top of the screen notifying you. Navigate to the "Indexer" tab and then press the "re-index" button.
https://docs.posterno.com/article/599-how-to-create-facets
2020-01-17T22:00:35
CC-MAIN-2020-05
1579250591234.15
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55911f12e4b01a224b42f596/images/5d82087e2c7d3a7e9ae14bf5/file-xFxkmA2H31.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55911f12e4b01a224b42f596/images/5d82097f2c7d3a7e9ae14c03/file-3bAEhG3Cwp.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55911f12e4b01a224b42f596/images/5d820aba04286364bc8f3ff6/file-xxTR9L6SwM.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55911f12e4b01a224b42f596/images/5d8212992c7d3a7e9ae14c61/file-qv8bzMVaik.png', None], dtype=object) ]
docs.posterno.com
Using Docker From time to time you may need to have a look at the internal workings of wolkenkit. As wolkenkit is built on Docker it helps to be familiar with it. Anyway, there are a few recurring situations you will probably find yourself in, so it is useful to have the following commands at hand. Listing containers If you need to verify the status of the containers of an application, e.g. to verify whether they are actually running, run: docker ps Monitoring containers If you want to continuously monitor which containers are being started and stopped use one of the following commands, depending on your operating system. On macOS Run the following command: while :; do clear; docker ps; sleep 1; done On Linux Run the following command: watch -n 1 docker ps Entering a running container Sometimes it is useful to be able to enter a running container, e.g. to inspect the files within the container. Depending on which container you want to enter, run one of the following commands: Write model docker exec -it <application>-core sh Read model docker exec -it <application>-broker sh Flows docker exec -it <application>-flows sh To exit from a container, just type exit. Freeing disk space If you are using Docker on a virtual machine, the virtual machine may run out of disk space eventually. To free disk space run the following command: docker system prune
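If you would rather not remember the platform-specific while/watch loops from the monitoring section above, a small cross-platform Python wrapper around docker ps does the same job. This is only a convenience sketch on top of the Docker CLI, not part of wolkenkit itself.

# Cross-platform equivalent of `watch -n 1 docker ps`: clear the screen and
# re-run `docker ps` once per second until interrupted with Ctrl+C.
import os
import subprocess
import time

def monitor_containers(interval=1.0):
    try:
        while True:
            os.system("cls" if os.name == "nt" else "clear")
            subprocess.run(["docker", "ps"], check=False)
            time.sleep(interval)
    except KeyboardInterrupt:
        pass

if __name__ == "__main__":
    monitor_containers()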
https://docs.wolkenkit.io/latest/reference/debugging-an-application/using-docker/
2020-01-17T21:05:59
CC-MAIN-2020-05
1579250591234.15
[]
docs.wolkenkit.io
Tailon¶ Tailon is a self-hosted web application for looking at and searching through log files. It is little more than a fancy web wrapper around the following commands: tail -f tail -f | grep tail -f | awk tail -f | sed Installation¶ The latest stable version of tailon can be installed from pypi: $ pip install tailon The development version is available on github and can also be installed with the help of pip: $ pip install git+git://github.com/gvalkov/tailon.git Tailon works with Python 2.7 and newer. Using it with Python >= 3.3 is encouraged. Quick start¶ Tailon¶ Tailon is a command-line tool that spawns a local http server that serves your logfiles. It can be configured entirely from its command-line interface or through the convenience of a yaml config file. To get started, run tailon with the list of files that you wish to monitor: $ tailon -f /var/log/nginx/* /var/log/apache/{access,error}.log If at least one of the specified files is readable, tailon will start listening on. Tailon’s server-side functionality is summarized entirely in its help message: Usage: tailon [-c path] [-f path [path ...]] [-h] [-d] [-v] [--output-encoding enc] [--input-encoding enc] [-b addr:port] [-r path] [-p type] [-u user:pass] [-a] [-f] [-t num] [-m [cmd [cmd ...]]] [--no-wrap-lines] Tailon is a web app for looking at and searching through log files. Required options: -c, --config path yaml config file -f, --files path [path ...] list of files or file wildcards to expose General options: -h, --help show this help message and exit -d, --debug show debug messages -v, --version show program's version number and exit --output-encoding enc encoding for output --input-encoding enc encoding for input and output (default utf8) Server options: -b, --bind addr:port listen on the specified address and port -r, --relative-root path web app root path -p, --http-auth type enable http authentication (digest or basic) -u, --user user:pass http authentication username and password -a, --allow-transfers allow log file downloads -F, --follow-names allow tailing of not-yet-existent files -t, --tail-lines num number of lines to tail initially -m, --commands [cmd [cmd ...]] allowed commands (default: tail grep awk) User-interface options: --no-wrap-lines initial line-wrapping state (default: true) Example config file: bind: 0.0.0.0:8080 # address and port to bind on allow-transfers: true # allow log file downloads follow-names: false # allow tailing of not-yet-existent files relative-root: /tailon # web app root path (default: '') commands: [tail, grep] # allowed commands tail-lines: 10 # number of lines to tail initially wrap-lines: true # initial line-wrapping state files: - '/var/log/messages' - '/var/log/nginx/*.log' - '/var/log/xorg.[0-10].log' - '/var/log/nginx/' # all files in this directory - 'cron': # it's possible to add sub-sections - '/var/log/cron*' http-auth: basic # enable authentication (optional) users: # password access (optional) user1: pass1 Example command-line: tailon -f /var/log/messages /var/log/debug -m tail tailon -f '/var/log/cron*' -a -b localhost:8080 tailon -f /var/log/ -p basic -u user1:pass1 -u user2:pass2 tailon -c config.yaml -d Please note that if the file list includes wildcard characters, they will be expanded only once at server-start time. Reverse proxy configuration¶ Nginx¶ - Run tailon, binding it to localhost and specifiying a relative root of your liking. 
For example: $ tailon -f /var/log/nginx/* -b localhost:8084 -r '/tailon/' - Add the following location directives to nginx.conf: location /tailon/ws { proxy_pass; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location /tailon { proxy_pass; } Security¶ Tailon runs commands on the server it is installed on. While commands that accept a script argument (such as awk, sed and grep) should be invulnerable to shell injection, they may still allow for arbitrary command execution and unrestricted access to the filesystem. To clarify this point, consider the following input to the sed command: s/a/b'; cat /etc/secrets This will result in an error, as tailon does not invoke commands through a shell. On the other hand, the following command is a perfectly valid sed script that has the same effect as the above attempt for shell injection: r /etc/secrets The default set of enabled commands - tail, grep and awk - should be safe to use. GNU awk is run in sandbox mode, which prevents scripts from accessing your system, either through the system() builtin or by using input redirection. By default, tailon is accessible to anyone who knows the server address and port. One way to restrict access is by using the built-in basic and digest http authentication. This can be enabled on the command-line with: $ tailon -p basic -u joe:secret1 -u bob:secret2 $ tailon -p digest -u joe:secret1 -u bob:secret2 Development¶ Code, bug reports and feature requests are kindly accepted on tailon’s github page. Please refer to the development document for more information on developing tailon. License¶ Tailon is released under the terms of the Revised BSD License.
https://tailon.readthedocs.io/en/latest/
2020-01-17T21:28:44
CC-MAIN-2020-05
1579250591234.15
[]
tailon.readthedocs.io
removeIngestPoint Removes an RTMP ingest point. This function has the following parameters: Example: API Call: removeIngestPoint privateStreamName=theIngestPoint JSON Response: { "data":{ "privateStreamName":"theIngestPoint", "publicStreamName":"useMeToViewStream" }, "description":"Ingest point removed", "status":"SUCCESS" } The JSON response contains the following details: - data – The data to parse - privateStreamName – The privateStreamName of the deleted Ingest Point - publicStreamName – The publicStreamName of the deleted Ingest Point - description – Describes the result of parsing/executing the command - status – SUCCESS if the command was parsed and executed successfully, FAIL if not.
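When calling this API from a script, the JSON response shown above is all you need to inspect. The Python sketch below parses such a response and raises on failure; the payload is the sample from this page, and how you transport the call (HTTP, telnet, or the runtime API) depends on your EMS configuration.

# Parse a removeIngestPoint response like the sample above and report the result.
import json

SAMPLE_RESPONSE = """
{
  "data": {
    "privateStreamName": "theIngestPoint",
    "publicStreamName": "useMeToViewStream"
  },
  "description": "Ingest point removed",
  "status": "SUCCESS"
}
"""

def handle_response(raw):
    reply = json.loads(raw)
    if reply.get("status") != "SUCCESS":
        raise RuntimeError(f"removeIngestPoint failed: {reply.get('description')}")
    data = reply.get("data", {})
    print("removed private stream:", data.get("privateStreamName"))
    print("its public name was:", data.get("publicStreamName"))

if __name__ == "__main__":
    handle_response(SAMPLE_RESPONSE)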
http://docs.evostream.com/ems_api_definition/removeingestpoint
2020-01-17T22:00:55
CC-MAIN-2020-05
1579250591234.15
[]
docs.evostream.com
Start memcached using another port The default port for accessing memcached is 11211 and is automatically configured as an allowed port for remote connections. You can change it using these commands: $ cd /opt/bitnami/memcached/scripts/ $ sudo sed -i 's/11211/11212/g' ctl.sh In this example, 11211 is your original Memcached port and 11212 (example) is the new port to use. Once done, restart the memcached server for the change to take effect. $ sudo /opt/bitnami/ctlscript.sh restart
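To confirm that memcached really answers on the new port after the restart, you can issue a stats command over a plain TCP socket. The Python sketch below assumes the server listens on localhost:11212, as configured above.

# Connectivity check: open a TCP socket to memcached on the new port and send
# the text-protocol "stats" command, reading until the terminating END line.
import socket

HOST, PORT = "127.0.0.1", 11212  # the new port configured above

def memcached_stats(host=HOST, port=PORT):
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"stats\r\n")
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
            if b"END\r\n" in chunk:
                break
        return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    print(memcached_stats())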
https://docs.bitnami.com/google/infrastructure/memcached/configuration/change-memcached-port/
2020-01-17T21:50:43
CC-MAIN-2020-05
1579250591234.15
[]
docs.bitnami.com
Greenturtle Mag Pro is a simple, clean and elegant WordPress theme for your news or magazine site. The theme comes with unique widgets, a promo section, copyright options and social options. In addition, it includes custom widgets for recent posts, author and social menu. Use this WordPress theme for your blog site and you will never look for an alternative. “Need help with installation” or new to WordPress? Our Pro version includes unlimited support while you build your site. Buy Pro Now You can install the theme either through the admin panel or using an FTP client like FileZilla. After successfully installing Greenturtle Mag, first make sure the recommended plugins are installed and activated. It may take a few minutes, so be patient. That's it. Please follow the instructions below to get a home page like our demo site.
https://docs.greenturtlelab.com/greenturtle-mag-pro/
2020-01-17T21:21:37
CC-MAIN-2020-05
1579250591234.15
[]
docs.greenturtlelab.com
Transaction Builder Setup Options This document summarizes the setup options for the transaction builder, JustOn's mechanism to convert usage data to billable items. The options to control the transaction builder's behavior include: - Defining fields on custom objects - Using transaction and transaction details - Aggregating transactions by criterion - Periodically aggregating transaction details - Aggregating additional fields - Displaying transaction records on invoices - Generating CSV attachments for transaction records - Enabling transaction table rebuild - Assigning itemized data to specific subscriptions - Enabling usage data billing to multiple parties - Building transactions Info For an introduction to the transaction builder and details about how to set up usage data billing, see Usage Data Billing. For further help with setting up usage data billing in general or the transaction builder in particular, contact JustOn Support. Fields on Custom Objects The transaction builder requires a number of ON fields on objects that are to be converted to transactions. There are two types of required fields: Depending on your business use case, you can also add (multiple) price and quantity fields. Controlling Fields Controlling fields hold meta data for assignments or process control. Note JustOn checks the controlling fields when starting the transaction builder. If a required field is missing, it aborts the transaction building process. Info Depending on your use case and its requirements, you can add more fields in addition to the fields listed above. Using the ON Field Mechanism, JustOn fills and copies them to the generated invoices or invoice line items. Data Fields Data fields hold the actual usage data. Info Depending on your use case and its requirements, you can add more fields in addition to the fields listed above. Using the ON Field Mechanism, JustOn fills and copies them to the generated invoices or invoice line items. Note Be aware that although JustOn does create "tangible" transaction records, it still uses transactions to itemize the usage data. That is, any used data fields must be available on the custom object and on the transaction. Data fields on the custom object must have the same API name as the target field on the transaction, prefixed with ON_. Make sure that the data types are compatible. For an overview of available fields on the transaction, see the Transaction object reference. If there is a field available for the piece of data you require, you must not create a new ON field on the transaction or the invoice line item, only on the custom object. Example: Pass custom usage information to invoice This example assumes that you want to display a custom piece of information ("SKU") retrieved from your usage data in the invoice line item table. To this end, you - create the field ON_SKUon your custom usage data object, the Transaction object and the Invoice Line Item object - set the field ON_SKUto be aggregated via the field Transaction Aggregation Fieldsof the corresponding subscription item ( {"ON_SKU":"LAST"}) - add the field ON_SKUto the field Table Columnsof the invoice template to be used Doing so - copies the value from ON_SKUon the custom usage data object to ON_SKUon the (temporary) transaction - aggregates the values of ON_SKUof all transactions - copies the aggregated value to ON_SKUon the invoice line item - prints the value of ON_SKUof the invoice line item to the invoice line item table. 
Info If there are no field names defined on the item, the transaction builder falls back to the standard ON_Quantity and ON_Price fields on the source object. The following subscription item fields are available to specify individual price or quantity source fields for usage data: Note You must use the prefix transaction. when referring to the transaction fields. Using individual price and quantity fields Assume you want to use customized price and quantity information for usage data. If your custom usage data object holds this data in ON_ClickRevenue__c and ON_Impressions__c, respectively, the Transaction object requires the fields ClickRevenue__c and Impressions__c. Consequently, the corresponding item fields must specify the following values:. Info The field ON_Type, which is required on the usage data source object, controls whether to produce a transaction or a transaction detail, setting either Transaction or Detail, respectively. For aggregating values, JustOn makes use of roll-up summary fields that sum up specific fields over all transactions details that belong to a specific transaction. The field Transaction on the transaction detail shows the parent transaction. The following transaction fields are relevant for the transaction detail aggregation: Note If Is Aggregated is true, the roll-up fields (suffixed (A)) are used during the invoice run. The values of the original fields Date, Price, Price Tier Quantity, Quantity, Service Period Start and Service Period End are ignored in this case. For a complete list of available transaction and transaction detail fields, see Transaction and Transaction Detail. Aggregating Transactions by Criterion Certain business use cases require transactions to be aggregated by a common criterion. To this end, add the field ON_Criterion to the custom object. This can be a Formula (with the return type Text) or a Text field, which is then used to hold the actual distinguishing criterion. Later, JustOn aggregates all usage data records with the same criterion to the same transaction and creates a single invoice line item. Info You can explicitly disable the aggregation feature for a subscription item by selecting the checkbox Ignore Item Criterion. When using price tiers, JustOn by default determines the price tier individually for each invoice line item as created on the basis of a specific transaction criterion. Selecting the checkbox Ignore Criterion For Price Tier Quantity, however, forces JustOn to ignore the criterion when determining the price tier. Example: Price tiers for transactional item, split by criterion: Periodically Aggregating Transaction Details When using transactions and transaction details, JustOn by default adds the transaction details to the newest transaction that has not yet been billed. Certain business use cases, however, require transactions to aggregate the transaction details of a specific period. To this end, you can define an aggregation period on a subscription. - Open the subscription for which you want to set an aggregation period. - Click Edit. Set the aggregation period as necessary. The following options are available: Weekly: Aggregation period Monday - Sunday Bi-Weekly: Aggregation periods 1st of month - 15th of month and 16th of month - last of month Monthly: Aggregation period 1st of month - last of month Click Save. This enables the periodical aggregation of transaction details. 
JustOn creates a transaction for every period in which transaction details have occurred, adds transaction details until the period end, and creates a new transaction after a new period has started. Aggregating Additional Fields Certain business use cases require additional data fields on the source object (see Data Fields), which are then to be aggregated for the transaction. You can define the aggregation to take place using the field Transaction Aggregation Fields on the subscription item. Info Make sure that the fields to be aggregated are available with the same name on the source object, the transaction and the invoice line item. The field value aggregation is to be defined as a JSON expression using the following pattern: {"FIELDNAME1":"FUNCTION","FIELDNAME2":"FUNCTION"} The following aggregation functions are available: - Open the subscription item for which you want to define the additional data aggregation. - Click Edit. - In the field Transaction Aggregation Fields, define the fields to be aggregated and the function to use as necessary. Click Save. This enables the additional field aggregation during the transaction generation. Displaying Transaction Records on Invoices You can configure JustOn to print a table of transaction records on an invoice. To this end, provide a corresponding configuration (in JSON notation) in the template field Transaction Records. - Open the template to be edited. In the Additional Content section, double-click the Transaction Recordsfield and specify the transaction table configuration as required. Alternatively, you can click Edit in the detail view to edit the field. Click Save. Once set up as required, this triggers JustOn to create the transaction table and print it to the PDF when executing a batch finalization. For details about regenerating the transaction table, see How to create a new transaction table. Note Make sure that the checkbox fields ON_AddToTransactionTableon the source object (see Controlling Fields) and DisplayTransactionTableson the invoice are selected. 
Configuration Options The following configuration records are possible: Configuration Examples Show all records of the type Transaction__c that are related to the current invoice: [{ "active" : true, "title" : "Billed Transactions", "name" : "Transaction__c", "fields" : [ "Date__c", "ExternalId__c", "Description__c", "Quantity__c" ], "calculationFields" : [ "Quantity__c" ], "order" : "Type__c, Date__c" }] Show all records of the type Transaction__c that have set the type Foo, and hide records of the type Bar: [{ "active" : true, "title" : "Billed Transactions of type Foo", "name" : "Transaction__c", "fields" : [ "Date__c", "ExternalId__c", "Description__c", "Quantity__c" ], "order" : "Date__c", "conditions" : [ "Type__c = 'Foo'" ] }, { "active" : false, "title" : "Billed Transactions of type Bar", "name" : "Transaction__c", "fields" : [ "Date__c", "ExternalId__c", "Description__c", "Quantity__c" ], "order" : "Date__c", "conditions" : [ "Type__c = 'Bar'" ] }] Show all transaction detail records that refer to the invoice via the field ONB2__Transaction__r.ONB2__Invoice__c: [{ "active" : true, "title" : "Billed Transaction Details", "order" : "Date__c", "name" : "TransactionDetail__c", "fields" : [ "Date__c", "Quantity__c", "Price__c" ], "invoiceFieldName" : "Invoice__c" }] Display a subtotal after each change of Description__c and Title__c: [{ "active" : true, "title" : "Billed Transactions", "name" : "Transaction__c", "fields" : [ "Date__c", "ExternalId__c", "Description__c", "Quantity__c" ], "calculationFields" : [ "Quantity__c" ], "order" : "Type__c, Date__c", "subtotal" : { "Description__c" : ["Quantity__c", "Price__c"], "Title__c" : ["SingleIndividualCount__c"]} }] When invoicing objects in a parent-child relation, show the child records in the transaction table: This example assumes you invoice Case records (= source parent object), with Product Consumed records (= source child object) that represent the actual usage data, as outlined in Best Practice: JustOn for Field Service. Now you want the date, name and quantity of the products consumed to be displayed in the transaction table on the invoice PDF. Make sure to select ON_AddToTransactionTable on both the parent and the child records. [{ "active" : true, "title" : "Products Consumed", "name" : "ProductConsumed__c", "invoiceFieldName" : "ON_Invoice__c", "invoiceRelationshipFieldName" : "Case__r.ON_Invoice__r", "fields" : [ "ON_Date__c", "ON_Title__c", "ON_Quantity__c" ] }] Attaching Transaction CSV to Invoices You can configure JustOn to generate CSV files for transaction records to be attached to invoices. Use this functionality if the transaction tables as printed directly on invoices become too large or if you need the CSV files for a specific purpose. To generate transaction CSV files, provide a corresponding configuration (in JSON notation) in the template field Transaction CSV. - Open the template to be edited. In the Additional Content section, double-click the Transaction CSVfield and specify the transaction CSV file configuration as required. Alternatively, you can click Edit in the detail view to edit the field. Click Save. Once set up as required, this triggers JustOn to create the transaction CSV file and attach it to the invoice when executing a batch finalization. For details about regenerating the transaction CSV, see How to create a new transaction table. Info At the size of 262 kB, JustOn splits the CSV attachment into a new file. 
So if there are many transaction records, JustOn produces multiple CSV files, putting a numbering to the file name like 2019-01515_Orders_All_1.csv 2019-01515_Orders_All_2.csv Note Make sure that the checkbox field ON_AddToCsv on the source object (see Controlling Fields) is selected. Configuration Options The following configuration records are possible: Configuration Example [{ "active" : true, "name" : "Transaction__c", "order" : "Type__c, Date__c", "invoiceFieldName" : "Invoice__c", "conditions" : [ ], "config" : { "useASCII" : false, "rows" : { "headerRow" : [ "Date", "ExternalId", "Description", "Quantity" ], "recordRow" : [ "Date__c", "ExternalId__c", "Description__c", "Quantity__c" ] }, "rowOrder" : [ "recordRow" ], "options" : { "timeFormat" : "hh:mm a", "language" : "en", "groupingSeparator" : "", "decimalSeparator" : ".", "dateFormat" : "yyyy/MM/dd" }, "decimalPlaces" : { }, "columnSeparator" : "," } }] Enabling Transaction Table Rebuild Under certain circumstances, a user may need to rebuild the transaction tables for an invoice. JustOn provides the option to do so. However, the button to trigger this operation may not be visible in the UI by default. To enable the transaction table rebuild: - Navigate to the object management settings of the Invoice object. - Click Search Layouts. - Click Edit in the row of the Invoices List View. - In the Custom Buttons section, move Rebuild Transaction Tablesto the Selected Buttons column. - Click Save to save the modified page layout. For help about modifying page layouts, see Managing Pages. Assigning Itemized Data to Specific Subscriptions JustOn usually assigns the itemized usage data to items of the best matching subscription. Certain business use cases, however, require distributing the usage data to more than one subscription (using ON_ParentAccount, for example). You can explicitly define a subscription as the target for the usage data to prevent unwanted results in this case. Using the checkbox field ON_ConversionTarget on the subscription, you can control this behavior. - Navigate to the fields list of the Subscription object. Create the following new field. For help about creating fields, see Managing Object Fields. Info If the field ON_ConversionTarget is missing or deselected on all subscriptions, JustOn assigns the usage data to the first matching subscription. Enabling Usage Data Billing to Multiple Parties Generally, the transaction builder is set up to bill usage data to one recipient, usually via the account associated to the corresponding subscription. Certain business use cases, however, require billing transactions to multiple parties. To cover these needs, you apply the configuration explained in Enabling Multiple Party Billing, namely In addition, enabling usage data billing to multiple recipients involves the following tasks: - Creating individual subscriptions for multiple recipients - Configuring transaction record display for multiple recipients - Optionally, configuring multiple currency billing Creating Individual Subscriptions To support multiple billing, you must create recipient-specific subscriptions with individual items. Create two subscriptions as required - one for the merchant, and one for the buyer. For details, see Creating Subscriptions. Add the items to the two subscriptions as required. For details, see Adding Items. Note Make sure that each item's value of the OrderNo field matches one of the possible values of the custom object's ON_OrderNo field (see Configuring Custom Object). 
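To picture how the OrderNo matching just described routes usage data to two recipient-specific subscriptions, here is a small illustrative Python sketch (not JustOn code; the field values and the exact data shape are made up for the example):
# Hypothetical subscriptions for the merchant and the buyer; each item's OrderNo
# corresponds to one of the possible ON_OrderNo values on the usage data.
subscriptions = [
    {"account": "Merchant Inc.", "items": [{"OrderNo": "MERCHANT-FEE"}]},
    {"account": "Buyer Ltd.",    "items": [{"OrderNo": "BUYER-TOTAL"}]},
]
usage_records = [
    {"ON_OrderNo": "MERCHANT-FEE", "ON_Quantity": 1},
    {"ON_OrderNo": "BUYER-TOTAL",  "ON_Quantity": 1},
]
# Each subscription picks up the records whose ON_OrderNo matches one of its items,
# so the same batch of usage data ends up billed to both parties.
for sub in subscriptions:
    order_nos = {item["OrderNo"] for item in sub["items"]}
    picked = [r for r in usage_records if r["ON_OrderNo"] in order_nos]
    print(sub["account"], "picks up", len(picked), "record(s)")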
Configuring Transaction Record Display To support multiple billing, you must configure two recipient-specific invoice templates with individually configured Transaction Records fields. To find the correct records, you must use the invoiceFieldName variable in the transaction table configuration. Create and configure two recipient-specific templates - one for the merchant, and one for the buyer. For details, see Customizing Templates. For each template, specify the transaction table configurations as required. For details, see Displaying Transaction Records on Invoices. Merchant-specific transaction table configuration: [{ "active" : true, "title" : "Billed Orders", "name" : "Order", "fields" : [ "EffectiveDate", "TotalAmount" ], "invoiceFieldName" : [ "ON_InvoiceMerch__c" ] }] Buyer-specific transaction table configuration: [{ "active" : true, "title" : "Billed Orders", "name" : "Order", "fields" : [ "EffectiveDate", "TotalAmount" ], "invoiceFieldName" : [ "ON_InvoiceBuy__c" ] }] Building Transactions In addition to the continuous invoice run, which creates invoices and invoice line items directly out of objects that hold usage data, there are two ways to invoke the transaction builder. The two approaches generate "tangible" transaction records: - Selecting a transaction filter when executing an invoice run - Using a transaction build job Building Transactions Before Invoice Run When executing an invoice run, you are prompted to select a transaction filter. If you choose to do so, JustOn runs the transaction build process before executing the invoice run. Create a transaction filter as necessary. For details, see Creating Transaction Filter. Perform the invoice run. When prompted, select the intended transaction filter. For details, see Executing an Invoice Run. This generates both transaction records and draft invoices. Info You can also define a transaction filter as a parameter when setting up a scheduled parameterized invoice run. Building Transactions Using a Scheduled Job You can set up a job in order to have transactions built automatically on a regular basis. This involves the following subtasks: - Creating a transaction filter - Creating a batch parameters custom setting to include the transaction filter - Scheduling the transaction build job Creating Transaction Filter Create a transaction filter as necessary. Note that the filter name (for example, transaction1) is passed as a parameter to the batch parameters setting. For details, see Creating Transaction Filter. Creating Batch Parameters Setting To pass the transaction filter to the transaction build process, you create a specific batch parameters setting. This setting combines the batch chain to be executed ( TransactionBuilderChain) with the filter as a specific execution argument. In Setup, open Custom Settings. In Salesforce Lightning, navigate to Custom Code > Custom Settings. In Salesforce Classic, navigate to Develop > Custom Settings. Click Manage in the row of Batch Parameters. - Click New. Specify the details as required. - Name: The name for the batch parameters setting, must match the name of the transaction build job, for example buildtransactions - Batch Chain: TransactionBuilderChain Parameter 1: A parameter to be passed to the batch chain (pattern parameter = value). Assuming the transaction filter name is transaction1, set transactionfilter = transactions1 Click Save. 
Scheduling Transaction Build Job To schedule the transaction build process, you can use either JustOn's Scheduled Jobs page or Salesforce's Schedule Apex functionality. For details, see Scheduling a Job. Via JustOn's Scheduled Jobs page: Open the Scheduled Jobs page. Use the following URL, or, if you are already logged in, append apex/apex/ONB2__JobsSetupto your org's domain name. As of JustOn 2.52, you can access the Scheduled Jobs page via the JustOn configuration app (App Launcher > JustOn Configuration > Jobs Setup). From the Apex Jobdrop-down list, select Batch Chain Job. - In the Job Namefield, specify the name of the batch parameters setting, for example buildtransactions. - From the Start Timedrop-down list, select the preferred execution time. - Optionally, edit the displayed cron expression to adjust the execution time. Click Schedule. This sets up the transaction build process to be executed at the specified time. Info From the Scheduled Jobs page, you can also run the batch chain immediately. batch parameters setting, for example buildtransactions - Apex Class: ScheduledBatchChain - Frequency: Weeklyor Monthly(with an according weekday or day of month setting) - Start - End - Preferred Start Time Click Save. This sets up the transaction build process to be executed at the specified time. For more details about job scheduling, see Scheduling a Job in the JustOn documentation and Schedule Apex in the Salesforce Help. Return to JustOn Administration.
https://docs.juston.com/en/jo_admin_ref_transa_build_opt/
2020-01-17T21:36:34
CC-MAIN-2020-05
1579250591234.15
[]
docs.juston.com
- Coveo Classic Insight Panel Pro and Enterprise editions only A Coveo Classic Insight Panel automatically pulls information related to the Salesforce object that is currently selected, thus automatically providing contextually relevant information. This article describes how to add an Insight Panel in a Salesforce Support Console as a sub tab component in the right sidebar. It is the Classic equivalent of the Coveo Lightning Insight Panel (see Integrating a Coveo Lightning Insight Panel). If not already done, log in to Salesforce using an account that has the permissions to set up a Salesforce Console for Service. If you do not yet have a console, create one (see Setting Up a Salesforce Console for Service). From the Force.com App Menu, select the console in which you want to create the Insight Panel. Select a Case item, and then click the Edit Layout icon ( ). In the Case Layout editing box at the top of the page, on the right of the Case Layout bar, click the Custom Console Components link. In the Custom Console Components page, configure a Coveo Insight Panel in the same type of tab (Sub tab or a Primary Tab) as the one in which cases open in your console. You can see how your console is configured to open cases from your console configuration page (Setup > App Setup > Create > Apps, and clicking Edit on the console line) by looking at the Choose How Records Display section. Under Primary Tab Components or Sub tab Components > Right Sidebar or Right Sidebar (Knowledge Sidebar): In Style, select Stack, the recommended option. In Width (Pixel), enter a desired default sidebar width. A value of 400px is recommended. In Type, select Visualforce Page. In Component, type PanelForCases, and then click the look up icon ( ) to find and select the CoveoV2Insight Panel for Cases component. The Coveo for Salesforce V2 package currently only includes the PanelForCases Visualforce component. A developer can however easily duplicate this component and adapt it for other Salesforce object types, such as Account. In Label, you can leave the box empty when you want to save the panel real estate, or enter a title such as Coveo Insight Panelto clearly identify the purpose of the panel. In Height %, enter the recommended value, which is 100%. At the top of the page, click Save. In the Case Layout editing page, click Save. You might have to refresh your case to see the Insight Panel. In the new panel, when the Insight Panel is not yet configured, click Set Up a Search Page in the box that appears to initiate the search page creation process. In the Setup Search Page page that proposes search tabs for the detected index content: When you do not want all the proposed tabs (All Content, Chatter Content, Salesforce, and Salesforce Cases in the example), deselect the one(s) you do not want. When you want to immediately create more tabs, even for content types that are not yet indexed, click More Tabs, and then select the desired tabs. Click Create Page. The Insight Panel appears, showing content pulled from the index that is related to the currently selected case. Select a different case to see the Insight Panel content changes. What’s Next? You can customize the Insight Panel with the Interface Editor (see Interface Editor). You can also create a custom Insight Panel without using the Interface Editor (see Creating a Custom Classic Insight Panel). 
If you initially installed the application for administrators only, you need to give permissions to your Salesforce users once your Insight Panel is completed (see Granting Salesforce Users Access to the Coveo Classic Insight Panels).
https://docs.coveo.com/en/1087/
2020-01-17T22:34:47
CC-MAIN-2020-05
1579250591234.15
[]
docs.coveo.com
In order to benefit from native RDP automation support, a few steps need to be performed. When they are done, you can use Studio on the client machine to build processes with the help of UIAutomation activities and wizards. Setting up the Client - In Studio, go to the Tools page from the Studio Backstage View. Available extensions are displayed there. - Click the Windows Remote Desktop Extension button to install the UiPath Windows Remote Desktop Extension. A confirmation dialogue box is displayed. Click the OK button. The extension is now installed. The Windows Remote Desktop Extension can also be installed from the Command Prompt. You can find out more details on this page. Setting up the Remote Desktop Machine - Download the UiPathRemoteRuntime.msiinstaller. You can request it from here. The download link for the UiPath Remote Runtime is then sent to you via email. The UiPathRemoteRuntime.msiis also included in the UiPathPlatformInstaller.exe. - Install UiPathRemoteRuntime.msion the target Remote Desktop machine that you wish to automate. Please note that the UiPath Remote Runtime component can only be installed on RDP clients that use mstsc.exe. More details about the UiPath Remote Runtime component can be found here. Once installation finishes, you’re good to go on the Remote Desktop machine. Updated 6 months ago
https://docs.uipath.com/studio/docs/rdp-configuration-steps
2020-01-17T22:53:02
CC-MAIN-2020-05
1579250591234.15
[]
docs.uipath.com
Deleting a Kubernetes Cluster To delete your Kubernetes cluster, just click on the Delete Cluster menu option. The cluster will then shut down and all the cluster components will be removed. For Nirmata managed Host Groups, the cloud instances will automatically be deleted for you. For Direct Connect Host Groups, you can manually delete your VMs once the cluster is deleted. Note: It is not recommended to reuse the VMs/Hosts to deploy another cluster, to avoid any installation-related settings that remain on the VMs (e.g. data, NAT rules, etc.). You should shut down the VMs and deploy new VMs for your cluster. To manually clean up your hosts/VMs, run these commands on each host:
# Stop and remove any running containers
sudo docker stop $(sudo docker ps | grep "flannel" | gawk '{print $1}')
sudo docker stop $(sudo docker ps | grep "nirmata" | gawk '{print $1}')
sudo docker stop $(sudo docker ps | grep "kube" | gawk '{print $1}')
sudo docker rm $(sudo docker ps -a | grep "Exit" | gawk '{print $1}')
# Remove any CNI plugins
sudo rm -rf /etc/cni/*
sudo rm -rf /opt/cni/*
# Clear iptables rules
sudo iptables --flush
sudo iptables -t nat --flush
# Restart Docker
sudo systemctl stop docker
sudo systemctl start docker
sudo docker ps
# Delete the CNI interfaces
sudo ifconfig cni0 down
sudo brctl delbr cni0
sudo ifconfig flannel.1 down
sudo ip link delete cni0
sudo ip link delete flannel.1
# Remove the cluster database
sudo rm -rf /data
https://release-2-3-0.docs.nirmata.io/clusters/deleting_a_kubernetes_cluster/
2020-01-17T21:48:26
CC-MAIN-2020-05
1579250591234.15
[]
release-2-3-0.docs.nirmata.io
CeContentWiz: a great tool to create Components for the Windows Embedded CE catalog If you are a Windows Embedded CE developer, you might be familiar with powertoys such as CEFilewiz and CEDriverwiz that are available on Codeplex. David Jones has created an enhanced version of CEFilewiz with some interesting added features, like the placement of shortcuts in folders such as My Documents and Start Menu. It also allows merging registry files. For those who are not familiar with Windows Embedded CE and Windows Embedded Compact development, here is some more explanation on what this is about: Windows Embedded CE is a toolkit provided to Embedded OEMs that you could compare to a box of Legos. The developer will have a catalog of features (OS components, drivers, applications, …) that he will select to constitute his OS; he will then compile the OS, download it, debug it, … There is a specific set of files with specific file formats that describe a component of the catalog, indicating the toolkit where the files sit, what and how to compile the binaries, where to put the files in the OS image, what type of registry settings to add to the image if the component is selected, and what other components should be added by dependencies... It can take some time to manually create your own component (which is very useful if you want to redistribute a feature to someone else or to a customer), which is why such tools are very valuable. You can access the tool on Codeplex here. Thanks David!
https://docs.microsoft.com/en-us/archive/blogs/obloch/cecontentwiz-a-great-tool-to-create-components-for-the-windows-embedded-ce-catalog
2020-01-17T23:30:38
CC-MAIN-2020-05
1579250591234.15
[]
docs.microsoft.com
Forward compatibility is not supported: projects created with newer versions of Studio might not work with older Robots. For example, a project created in Studio v2018.3 might not work with a v2017.1 Robot. Technical Compatibility Matrix The following table lists which versions of Robot and Studio work with which versions of Orchestrator. Patches are implicitly supported in this matrix unless specifically mentioned here. [Compatibility matrix: rows cover Robot 2019.10.x, Robot 2019FT.x, Robot 2018.4.x, Robot 2018.3.x, Robot 2018.2.x, Robot 2018.1.x, Robot 2017.1.x, and Robot 2016.2.x; supported and unsupported Orchestrator combinations are shown with check-mark and no-right icons in the original table, with footnote 1 marked on the 2019.10.x, 2019FT.x, and 2018.4.x rows.] 1 - Please note that if the Scalability.SignalR.AuthenticationEnabled parameter is set to true, you can only use v2018.4.3 Robots or above. For more information, see this page. Updated 2 months ago
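As a rough, conservative illustration of the no-forward-compatibility rule stated above (this is not UiPath's official compatibility check and it ignores the Orchestrator dimension entirely; it only encodes "a project built with a newer Studio may not run on an older Robot"):
def project_may_run(studio_version: str, robot_version: str) -> bool:
    """Flag projects whose Studio version is newer than the target Robot."""
    def parse(v):  # e.g. "2018.3" -> (2018, 3)
        return tuple(int(p) for p in v.split(".")[:2])
    return parse(robot_version) >= parse(studio_version)
print(project_may_run("2018.3", "2017.1"))  # False -- the documented example above
print(project_may_run("2017.1", "2018.3"))  # True  -- an older project on a newer Robot is not flagged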
https://docs.uipath.com/robot/docs/about-backward-and-forward-compatibility
2020-01-17T23:02:05
CC-MAIN-2020-05
1579250591234.15
[array(['https://documentationpicturerepo.blob.core.windows.net/screenshots/screenshots/Orchestrator_2016.2/check_mark.png', 'check_mark'], dtype=object)
 array(['https://documentationpicturerepo.blob.core.windows.net/screenshots/screenshots/Orchestrator_2016.2/no_right.png', 'no_right'], dtype=object)]
docs.uipath.com
ZODB database connections are automatically joined to the transaction, as well as SQLAlchemy connections which are configured with the ZopeTransactionExtension extension from the zope.sqlalchemy package. Custom Transaction Managers By default pyramid_tm will use the default transaction manager, which uses thread locals. The current transaction manager being used for any particular request can always be accessed on the request as request.tm. Adding an Activation Hook It may not always be desirable to have transaction management active for every request. Retrying Explicit Tween Configuration
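As a rough sketch of the activation-hook idea named above (the setting key tm.activate_hook and the dotted path myapp.tm_activate_hook are assumptions to be checked against the installed pyramid_tm version):
from pyramid.config import Configurator

def tm_activate_hook(request):
    # Return False to leave transaction management inactive for this request.
    return not request.path.startswith('/static/')

def main(global_config, **settings):
    # Assumed setting name; point it at the dotted path of the hook above.
    settings.setdefault('tm.activate_hook', 'myapp.tm_activate_hook')
    config = Configurator(settings=settings)
    config.include('pyramid_tm')
    return config.make_wsgi_app()
Once pyramid_tm is included, the active manager for a request remains reachable as request.tm, as noted above.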
https://docs.pylonsproject.org/projects/pyramid_tm/en/latest/
2016-05-24T15:45:30
CC-MAIN-2016-22
1464049272349.32
[]
docs.pylonsproject.org
Compute bit-wise inversion, or bit-wise NOT, element-wise. When calculating the bit-wise NOT of an element x, each element is first converted to its binary representation (which works just like the decimal system, only now we're using 2 instead of 10): $x = \sum_{i=0}^{W-1} a_i \cdot 2^i$, where $W$ is the bit-width of the type (i.e., 8 for a byte or uint8), and each $a_i$ is either 0 or 1. For example, 13 is represented as 00001101, which translates to $2^3 + 2^2 + 2^0$. The bit-wise operator is the result of $\sum_{i=0}^{W-1} \overline{a_i} \cdot 2^i$, where $\overline{a_i}$ is the NOT operator, which yields 1 whenever $a_i$ is 0 and yields 0 whenever $a_i$ is 1. For signed integer inputs, the two's complement is returned. In a two's-complement system negative numbers are represented by the two's complement of the absolute value. This is the most common method of representing signed integers on computers [40]. An N-bit two's-complement system can represent every integer in the range $-2^{N-1}$ to $+2^{N-1} - 1$.
See also bitwise_and, bitwise_or, bitwise_xor, logical_not
Notes bitwise_not is an alias for invert:
>>> np.bitwise_not is np.invert
True
References
Examples We've seen that 13 is represented by 00001101. The invert or bit-wise NOT of 13 is then:
>>> np.invert(np.array([13], dtype=uint8))
array([242], dtype=uint8)
>>> np.binary_repr(x, width=8)
'00001101'
>>> np.binary_repr(242, width=8)
'11110010'
The result depends on the bit-width:
>>> np.invert(np.array([13], dtype=uint16))
array([65522], dtype=uint16)
>>> np.binary_repr(x, width=16)
'0000000000001101'
>>> np.binary_repr(65522, width=16)
'1111111111110010'
When using signed integer types the result is the two's complement of the result for the unsigned type:
>>> np.invert(np.array([13], dtype=int8))
array([-14], dtype=int8)
>>> np.binary_repr(-14, width=8)
'11110010'
Booleans are accepted as well:
>>> np.invert(array([True, False]))
array([False, True], dtype=bool)
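To tie the two's-complement discussion back to everyday usage, this short example (not part of the original reference page) shows that np.invert is the same operation as the ~ operator and that, for signed integers, it satisfies ~x == -x - 1:
>>> import numpy as np
>>> x = np.array([13, 0, -5], dtype=np.int8)
>>> np.array_equal(np.invert(x), ~x)        # invert and ~ are the same ufunc
True
>>> np.array_equal(~x, -x - 1)              # bit-wise NOT == negate and subtract one
True
>>> np.invert(np.array([13], dtype=np.uint8))   # unsigned types wrap within the bit-width
array([242], dtype=uint8)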
http://docs.scipy.org/doc/numpy-1.3.x/reference/generated/numpy.invert.html
2016-05-24T15:44:09
CC-MAIN-2016-22
1464049272349.32
[]
docs.scipy.org
CHOC Children’s strives to minimize anxiety about anesthetizing children before procedures, Dr. Vivian Tanaka, an anesthesiologist, tells “American Health Journal.” CHOC anesthesiologists work with child life specialists to help calm fears, and also encourage parents to ask questions, Dr. Tanaka says. Anesthesiologists might also use premedication to ease anxiety and smooth the injection of anesthesia, she adds. Learn more about children and anesthesia, including common side effects. Vivian Tanaka, M.D., attended medical school at Tufts University School of Medicine. She served a transitional internship at Newton-Wellesley Hospital in Newton, Mass., and her residency at Beth Israel Deaconess Medical Center in Boston. Get more information about referring patients to CHOC, including a referral information directory, services directory and referral guidelines.
https://docs.chocchildrens.org/dr-vivian-tanaka-covers-anesthesia-children/
2016-04-28T21:49:47
CC-MAIN-2016-18
1461860109830.69
[]
docs.chocchildrens.org
Extensions Module Manager Edit From Joomla! Documentation Revision as of 09:35. - Ordering: Up-Down Arrows. User-specified ordering; the default is the order of item creation. When active, drag-and-drop ordering works by 'click and hold' on the bars icon, then 'release' in the desired position. - Module Description: a summary of the Module type with a description. Create New: when creating a new Module, you will be presented with a modal pop-up window. Choose the module type by clicking on the module name to be taken to the 'edit' details screen. Toolbar: Edit Module - for existing Modules, edit functions; New Module - for creating a new Module, new functions. At the top left you will see the toolbar: [Image: Help30-Save-SaveClose-SaveNew-Cancel] - Cancel/Close: Closes the current screen and returns to the previous screen without saving any modifications you may have made. - Help: Opens this help screen.
https://docs.joomla.org/index.php?title=Help32:Extensions_Module_Manager_Edit&oldid=81539
2016-04-28T23:34:12
CC-MAIN-2016-18
1461860109830.69
[array(['/images/3/31/Help30-Extensions-Module-Manager-create-new-popup.png', 'Help30-Extensions-Module-Manager-create-new-popup.png'], dtype=object) ]
docs.joomla.org
Information for "Design appearance using Menus and Modules: Joomla! 1.5" Basic information: Display title: J1.5:Design appearance using Menus and Modules: Joomla! 1.5; Default sort key: Design appearance using Menus and Modules: Joomla! 1.5; Page length (in bytes): 9,861; Page ID: 11459; Date of page creation: 05:06, 22 January 2011; Latest editor: Wilsonge (Talk | contribs); Date of latest edit: 07:20, 6 June 2013; Total number of edits: 61; Total number of distinct authors: 4; Recent number of edits (within past 30 days): 0; Recent number of distinct authors: 0. Page properties: Transcluded templates (9). Templates used on this page: DesignAim (view source), Getting Started Page Index/1.5 (view source) (protected), Template:AmboxNew (view source) (protected), Template:Icon (view source), Template:JVer (view source) (semi-protected), Template:Navbox (view source), Template:Tip (view source), Template:Version/tutor (view source), J1.5:Getting Started Page Index/1.5 (view source) (protected). Retrieved from docs.joomla.org.
https://docs.joomla.org/index.php?title=J1.5:Design_appearance_using_Menus_and_Modules:_Joomla!_1.5&action=info
2016-04-28T22:35:09
CC-MAIN-2016-18
1461860109830.69
[]
docs.joomla.org
Developing an MVC Component/Adding language management From Joomla! Documentation J3.x:Developing an MVC ComponentRevision as of 15:53, 21 . Contents - 1 Introduction - 2 Adding language translation in the public site - 3 Adding language translation when managing the component - 4 Adding language translation when managing the menus in the backend - 5 Language File Location Options - 6 Adding translation when installing the component - 7 Packaging the component - 8 Contributors With your favorite file manager and editor, put a file site/language/en-GB/en-GB.com_helloworld.ini. This file will contain translation for the public part. For the moment, this file is empty site/language/en-GB/en-GB.com_helloworld.ini For the moment, there are no translations strings in this file. Adding language translation when managing the component With your favorite file manager and editor, put a file admin/language/en-GB/en-GB.com_helloworld.ini. This file will contain translation for the backend part. admin/language/en-GB/en-GB.com_helloworld.ini COM_HELLOWORLD_HELLOWORLD_FIELD_GREETING_DESC="This message will be displayed" COM_HELLOWORLD_HELLOWORLD_FIELD_GREETING_LABEL="Message" COM_HELLOWORLD_HELLOWORLD_HEADING_GREETING="Greeting" COM_HELLOWORLD_HELLOWORLD_HEADING_ID="Id" With your favorite file manager and editor, put a file admin/language/en-GB/en-GB.com_helloworld.sys.ini. This file will contain translation for the backend part. admin/language/en-GB/en-GB.com_helloworld.sys.ini COM_HELLOWORLD="Hello World!" COM_HELLOWORLD_DESCRIPTION="This is the Hello World description" COM_HELLOWORLD_HELLOWORLD_VIEW_DEFAULT_TITLE="Hello World" COM_HELLOWORLD_HELLOWORLD_VIEW_DEFAULT_DESC="This view displays a selected message" COM_HELLOWORLD_MENU="Hello World!" Language File Location Options Starting with version 1.7 there are 2 ways to install language files for an extension. One can use one or the other or a combination of both. The 1.5 way will install the files in the CORE language folders (ROOT/administrator/language/ and ROOT/language/ ). Since 1.6— includes the files in a "language" folder installed at the root of the extension. Therefore an extension can include a language folder with a .sys.ini different from the one installed in joomla core language folders (this last one not being included in that language folder but in root or any other folder not installed). This let's display 2 different descriptions: one from the sys.ini in the "language" folder is used as a message displayed when install has been done, the other (from the .ini) is used for "normal" operation, i.e. when the extension is edited in back-end. This can be extremely useful when installing also uses some scripts and requires a different value for the description. The sys.ini file is also used to translate the name of the extensions in some back-end Managers and to provide menu translation for components. Therefore, the xml would include since 1.6 <files> <[...] <folder>language</folder> // This folder HAS to include the right subfolders, i.e. language/en-GB/ ... language/fr-FR/ <filename>whatever</filename> [...] 
</files> and/or then (1.5 way): <languages folder="joomlacorelanguagefolders"> // if using another language folder for cleanliness (any folder name will fit) <language tag="en-GB">en-GB/en-GB.whatever.ini</language> // or <language tag="en-GB">en-GB.whatever.ini</language> if no tagged subfolder <language tag="en-GB">en-GB/en-GB.whatever.sys.ini</language> // or <language tag="en-GB">en-GB.whatever.sys.ini</language> if no tagged subfolder </languages> or simply in ROOT <languages> <language tag="en-GB">en-GB.whatever.ini</language> <language tag="en-GB">en-GB.whatever.sys.ini</language> </languages> Language file used by the install script during install of a component(the first install not upgrade) obeys specific rules described in the article, Specification of language files. During the first install, only the language file included in the component folder (/administrator/components/com_helloworld/language) is used when present. If this file is only provided in the CORE language folder (/administrator/language), no translation occurs. This also applies to KEYs used in the manifest file. When upgrading or uninstalling the extension (not installing), it is the sys.ini file present in the extension root language folder which will display the result of the install from the description key/value. Thereafter, if present, the sys.ini as well as the ini installed in CORE language folder will have priority over the files present in the root language folder of the extension. One advantage of installing the files in the extension "language" folder is that these are not touched when updating a language pack.The other advantage is that this folder can include multiple languages (en-GB always, fr-FR, it-IT, etc.) not requiring the user to install the corresponding language pack. This is handy as they are available if, later on, a user installs the corresponding pack. Adding translation when installing the component See: Language File Location Options With your favorite file manager and editor, put a file EXTENSIONROOT/language/en-GB/en-GB.myextension.sys.ini. This file will contain translation for the install. language/en-GB/en-GB.com_helloworld.sys.ini COM_HELLOWORLD="Hello World!" COM_HELLOWORLD_DESCRIPTION="This is the Hello World description" The COM_HELLOWORLD_DESCRIPTION can be used in the helloworld.xml file Packaging the component Content of your code directory - helloworld.xml - site/helloworld.php - site/index.html - - Create a compressed file of this directory or directly download the not_available_yet archive and install it using the extension manager of Joomla. You can add a menu item of this component using the menu manager in the backend. 
helloworld.xml <?xml version="1.0" encoding="utf-8"?> <extension type="component" version="3.2.0" method="upgrade"> <name>COM_HELLOWORLD</name> <!-- The following elements are optional and free of formatting constraints --> <creationDate>January 2014</creationDate> <author>John Doe</author> <authorEmail>[email protected]</authorEmail> <authorUrl></authorUrl> <copyright>Copyright Info</copyright> <license>License Info</license> <!-- The version string is recorded in the components table --> <version>0.0.8</version> <!-- The description is optional and defaults to the name --> <description>COM_HELLOWORLD_DESCRIPTION</description> <install> <!-- Runs on install --> <sql> <file driver="mysql" charset="utf8">sql/install.mysql.utf8.sql</file> </sql> </install> <uninstall> <!-- Runs on uninstall --> <sql> <file driver="mysql" charset="utf8">sql/uninstall.mysql.utf8.sql</file> </sql> </uninstall> > <folder>language</folder> </files> <administration> <!-- Administration Menu Section --> <menu>COM_HELLOWORLD_MENU<> <filename>controller.php</filename> <!-- SQL files section --> <folder>sql</folder> <!-- tables files section --> <folder>tables</folder> <!-- models files section --> <folder>models</folder> <!-- views files section --> <folder>views</folder> <!-- admin languages files section --> <folder>language</folder> </files> </administration> </extension> In this helloworld.xml file, languages are installed in: -
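Since the manifest above ties install-time behaviour to language keys (COM_HELLOWORLD, COM_HELLOWORLD_DESCRIPTION, COM_HELLOWORLD_MENU) that must be defined in en-GB.com_helloworld.sys.ini, a small stand-alone check can catch missing keys before packaging. The following Python sketch is a helper idea only, not part of the Joomla tutorial:
import re, sys

# Keys the helloworld.xml manifest above relies on.
REQUIRED_KEYS = {"COM_HELLOWORLD", "COM_HELLOWORLD_DESCRIPTION", "COM_HELLOWORLD_MENU"}

def ini_keys(path):
    """Collect KEY="value" entries from a Joomla language .ini file."""
    keys = set()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = re.match(r'\s*([A-Z0-9_]+)\s*=\s*"', line)
            if m:
                keys.add(m.group(1))
    return keys

if __name__ == "__main__":
    missing = REQUIRED_KEYS - ini_keys(sys.argv[1])
    if missing:
        print("Missing language keys:", ", ".join(sorted(missing)))
        sys.exit(1)
    print("All referenced keys are defined.")
Run it against admin/language/en-GB/en-GB.com_helloworld.sys.ini before zipping the package.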
https://docs.joomla.org/index.php?title=J3.2:Developing_a_MVC_Component/Adding_language_management&oldid=107172
2016-04-28T22:26:19
CC-MAIN-2016-18
1461860109830.69
[]
docs.joomla.org
Difference between revisions of "Image naming guidelines" From Joomla! Documentation Latest revision as of 20:03, 19 October. Suggested Formats For non-Joomla Images Common sense and a descriptive summary should be the guide for non-Joomla images. Again, use the underscore _ or hyphen - to separate the summary. For images of people, please use the person's name. For images of software or websites, please include them within reason. Non-Joomla Image Examples. Localisation of Image Names: localised image names carry a language suffix, for example [[File:Name of the image-en.png]].
https://docs.joomla.org/index.php?title=JDOC:Image_naming_guidelines&diff=cur&oldid=101644
2016-04-28T22:15:37
CC-MAIN-2016-18
1461860109830.69
[]
docs.joomla.org
Quick Start Contents Appcelerator Platform Services Native SDKs to get started with integrating Appcelerator Platform services with your application. Appcelerator Platform The Appcelerator Platform consists of several components: Appcelerator Studio and Appcelerator CLI: tools to create and develop your mobile and cloud applications. Alloy Framework and the Titanium SDK: an MVC framework and cross-platform SDK to help you rapidly develop mobile applications from a single code base. Appcelerator Arrow: an opinionated framework to build and deploy APIs to the cloud that can be consumed by multiple clients. Appcelerator Dashboard: a web portal used to monitor your application's health and usage. This dashboard is aimed at technical users. Appcelerator Platform Services: a set of features that includes free analytics services, and the performance management and automated testing services, which require an Enterprise subscription. Appcelerator Insights: an application used to monitor your application's metrics, which requires a Team subscription. This application is aimed at business users. To start using the Appcelerator Platform, you need an account to log in to the Appcelerator Dashboard and Appcelerator Studio, or Appcelerator Insights.
http://docs.appcelerator.com/titanium/latest/?_escaped_fragment_=/guide/Quick_Start-section-37538717_QuickStart-DownloadandInstallTitaniumStudio
2016-04-28T23:39:04
CC-MAIN-2016-18
1461860109830.69
[array(['images/download/attachments/37540095/PlatformConfigDialog.png', 'images/download/attachments/37540095/PlatformConfigDialog.png'], dtype=object) array(['images/download/attachments/43299610/analytics.png', 'images/download/attachments/43299610/analytics.png'], dtype=object) array(['images/download/attachments/43299610/funnelresults.png', 'images/download/attachments/43299610/funnelresults.png'], dtype=object) ]
docs.appcelerator.com
scipy.signal.welch¶ - scipy.signal.welch(x, fs=1.0, window='hanning', nperseg=256, noverlap=None, nfft=None, detrend='constant', return_onesided=True, scaling='density', axis=-1)[source]¶ Estimate power spectral density using Welch’s method. Welch’s method [R134] computes an estimate of the power spectral density by dividing the data into overlapping segments, computing a modified periodogram for each segment and averaging the periodograms. See also - periodogram - Simple, optionally modified periodogram - lombscargle - Lomb-Scargle periodogram for unevenly sampled data135]. New in version 0.12.0. References Examples >>> from scipy import signal >>> import matplotlib.pyplot as plt Generate a test signal, a 2 Vrms sine wave at 1234 Hz, corrupted by 0.001 V**2/Hz of white noise sampled at 10 kHz. >>> fs = 10e3 >>> N = 1e5 >>> amp = 2*np.sqrt(2) >>> freq = 1234.0 >>> noise_power = 0.001 * fs / 2 >>> time = np.arange(N) / fs >>> x = amp*np.sin(2*np.pi*freq*time) >>> x += np.random.normal(scale=np.sqrt(noise_power), size=time.shape) Compute and plot the power spectral density. >>> f, Pxx_den = signal.welch(x, fs, 1024) >>> plt.semilogy(f, Pxx_den) >>> plt.xlabel('frequency [Hz]') >>> plt.ylabel('PSD [V**2/Hz]') >>> plt.show() If we average the last half of the spectral density, to exclude the peak, we can recover the noise power on the signal. >>> np.mean(Pxx_den[256:]) 0.0009924865443739191 Now compute and plot the power spectrum. >>> f, Pxx_spec = signal.welch(x, fs, 'flattop', 1024, scaling='spectrum') >>> plt.figure() >>> plt.semilogy(f, np.sqrt(Pxx_spec)) >>> plt.xlabel('frequency [Hz]') >>> plt.ylabel('Linear spectrum [V RMS]') >>> plt.show() The peak height in the power spectrum is an estimate of the RMS amplitude. >>> np.sqrt(Pxx_spec.max()) 2.0077340678640727
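To make the segment-averaging idea behind Welch's method concrete, this short comparison (not part of the original docstring, reusing x and fs from the example above) contrasts a single periodogram with the Welch estimate; averaging the segments visibly reduces the variance of the estimated noise floor:
>>> f_p, Pxx_p = signal.periodogram(x, fs)
>>> f_w, Pxx_w = signal.welch(x, fs, nperseg=1024)
>>> # Spread of the estimated noise floor well above the 1234 Hz tone:
>>> np.std(Pxx_p[25000:]) > np.std(Pxx_w[256:])
True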
http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.signal.welch.html
2016-04-28T21:51:36
CC-MAIN-2016-18
1461860109830.69
[]
docs.scipy.org
Difference between revisions of "Updating from an existing version" From Joomla! Documentation Revision as of 16:36, 27 January 2012 Contents - 1 Overview - 2 Extension Manager: Update Method - 3 Extension Manager: Install Method - 4 Checking Your Site - 5 Troubleshooting Update Problems. - Note: Do not simply unzip the new programs in the same folder as the existing Joomla installation. This method does not update the database or delete unused programs. - Check that the update was successful, using the steps outlined in #Checking Your Site...
https://docs.joomla.org/index.php?title=Upgrading_from_an_existing_version&diff=64529&oldid=64468
2016-04-28T23:35:03
CC-MAIN-2016-18
1461860109830.69
[array(['/images/d/da/Compat_icon_1_6.png', 'Joomla 1.6'], dtype=object) array(['/images/8/87/Compat_icon_1_7.png', 'Joomla 1.7'], dtype=object)]
docs.joomla.org
Uninstall or repair Enterprise Client installation The process to uninstall or repair the Enterprise Client. Uninstall To uninstall, go to Control Panel → Programs and Features. Select the desired Enterprise Client file and click Uninstall. - Select Remove. - Click Next. Do not shut down the machine immediately after uninstalling the Enterprise client. The uninstall process takes up to a couple of minutes to uninstall the Automation Anywhere language pack after successfully uninstalling the Enterprise Client. When uninstalling the Enterprise Client from a Citrix or a terminal server environment, the Automation Anywhere language packs (User Document, Public Document, Linguify Language pack, and Linguify applications) might not be uninstalled by the uninstall process. You must uninstall them manually from the Control Panel. Repair Use the repair option to reinstall all the program features that were installed during the initial setup run. - Launch the Automation Anywhere Setup Wizard and select the Repair option. - Click Next.
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/install/remove-or-repair-installation.html
2022-09-24T19:03:27
CC-MAIN-2022-40
1664030333455.97
[]
docs.automationanywhere.com
Adds, edits, or deletes tags for a health check or a hosted zone. For information about using tags for cost allocation, see Using Cost Allocation Tags in the Billing and Cost Management User Guide. See also: AWS API Documentation change-tags-for-resource --resource-type <value> --resource-id <value> [--add-tags <value>] [--remove-tag-keys <value>] --resource-type (string) The type of the resource. - The resource type for health checks is healthcheck. - The resource type for hosted zones is hostedzone. Possible values: - healthcheck - hostedzone --resource-id (string) The ID of the resource for which you want to add, change, or delete tags. --add-tags (list) (structure) A complex type that contains information about a tag that you want to add or edit for the specified health check or hosted zone. Key -> (string) The value of Key depends on the operation that you want to perform. Value -> (string) The value of Value depends on the operation that you want to perform: - Add a tag to a health check or hosted zone : Value is the value that you want to give the new tag. - Edit a tag : Value is the new value that you want to assign the tag. Shorthand Syntax: Key=string,Value=string ... JSON Syntax: [ { "Key": "string", "Value": "string" } ... ] --remove-tag-keys (list) A complex type that contains a list of the tags that you want to delete from the specified health check or hosted zone. You can specify up to 10 keys. The following command adds a tag named owner to a healthcheck resource specified by ID: aws route53 change-tags-for-resource --resource-type healthcheck --resource-id 6233434j-18c1-34433-ba8e-3443434 --add-tags Key=owner,Value=myboss The following command removes a tag named owner from a hosted zone resource specified by ID: aws route53 change-tags-for-resource --resource-type hostedzone --resource-id Z1523434445 --remove-tag-keys owner
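The same tagging operation can also be performed programmatically. The sketch below uses boto3 (the AWS SDK for Python) and mirrors the two CLI examples above; the resource IDs are the same placeholders used in those examples, so substitute your own, and AWS credentials are assumed to be configured already.

import boto3

route53 = boto3.client("route53")

# Add (or edit) a tag named "owner" on a health check.
route53.change_tags_for_resource(
    ResourceType="healthcheck",
    ResourceId="6233434j-18c1-34433-ba8e-3443434",  # placeholder ID from the CLI example
    AddTags=[{"Key": "owner", "Value": "myboss"}],
)

# Remove the "owner" tag from a hosted zone.
route53.change_tags_for_resource(
    ResourceType="hostedzone",
    ResourceId="Z1523434445",  # placeholder ID from the CLI example
    RemoveTagKeys=["owner"],
)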
https://docs.aws.amazon.com/cli/latest/reference/route53/change-tags-for-resource.html
2022-09-24T20:04:31
CC-MAIN-2022-40
1664030333455.97
[]
docs.aws.amazon.com
Terminology A Meta Field consists of a Meta Key, which names the property you're defining, and a Meta Value, which is the value of that property. For example, the Meta Key "gs1_template" may have a Meta Value of "Default". Why would I use them? You could use Meta Values to store arbitrary data that Whiplash doesn't need in order to ship Orders, but that you need on the Packing Slip or some other resource. They can also be used in Search Filters, which can be applied to Rules, Bulk Actions, or reporting. Access We don't accept arbitrary Meta Keys. They must first be validated and created by an Admin. Right now, Admins can access Manage → Meta Keys to create and adjust them. Adding Meta Values to Orders Meta Values can be added via CSV Order upload or the API. How can I use them in Templates? How can I use them in Search and Rules? There's a Search Filter for Meta Fields which accepts a Key: Value pair like "embroidery: 1" or "oms-reference: ABC123". Regex is not currently accepted, only exact searches. Notes: - For best results, combine the Meta Fields with another search filter like Status. - True / False values accept 1 and 0, respectively.
https://docs.getwhiplash.com/pages/meta-fields
2022-09-24T18:52:31
CC-MAIN-2022-40
1664030333455.97
[array(['https://s3.amazonaws.com/wl-uploads-dev/whiplash-docs/1597326524413-Screen Shot 2020-08-13 at 9.48.39 AM.png', None], dtype=object) ]
docs.getwhiplash.com
Payment Addressing (Payer Validation) title: bsvalias Payment Addressing (Payer Validation) author: andy (nChain) version: 1 brfc: 6745385c3fc0 In this extension specification to the Basic Address Resolution, the receiver's paymail service, in response to a request from a sender, performs a Public Key Infrastructure lookup against the sender, to resolve their public key. The receiver's service then verifies the message signature (which is mandatory under this specification), verifies the timestamp of the receiver's request to limit the scope of message replay attacks, and signs the returned output script to prevent message tampering. Capability DiscoveryCapability Discovery The .well-known/bsvalias document is updated to include a declaration of sender validation enforcement: { "bsvalias": "1.0", "capabilities": { "6745385c3fc0": true } } The capabilities.6745385c3fc0 path is set to true to indicate that sender validation is in force. Any value other than true must be considered equivalent to false and indicates that Sender Validation is not in force. Changes to Basic Address Resolution:Changes to Basic Address Resolution: - Additional capability added to receiver's .well-known/bsvalias - Sender clients MUST include a digital signature in the payment destination request message. This changes the signaturefield from optional, under the Basic Address Resolution specification, to mandatory - Receiver services MUST perform a PKI lookup of the sender's paymail handle included in the request senderHandlefield. If no public key can be resolved, the request MUST fail with HTTP response code 401(Unauthorized) - Receiver services MUST verify that the signature over the payment destination request message is valid. If an invalid signature is present, or no signature is present at all, the request MUST fail with HTTP response code 401(Unauthorized) - Receiver services MUST verify that the declared date/time in the payment destination request message dtfield is within two minutes of the receiver service's own clock, in order to limit the scope of replay request attacks. If the value of the dtfield in the request exceeds the allowed time window, the request MUST fail with HTTP response code 401(Unauthorized)_r: GET PKI\n+cache hints activate svc_r svc_r -> svc_e: Public Key Sender RequestSender Request The body of the POST request is unchanged, however the signature is now a mandatory field: { "senderName": "FirstName LastName", "senderHandle": "<alias>@<domain.tld>", "dt": "<ISO-8601 timestamp>", "amount": 550, "purpose": "message to receiver", "signature": "<compact Bitcoin message signature>" } Receiver ResponseReceiver Response 200 OK200 OK { "output": "...", "signature": "<compact Bitcoin message signature>" } The output field is unchanged from Basic Address Resolution. The signature field is added and MUST contain a valid Bitcoin message signature over the UTF8 byte string content of the output field that senders MUST validate against the receiver's public key. The message digest process and signature encoding scheme is unchanged from that defined in Basic Address Resolution. 401 Unauthorised401 Unauthorised This response type is returned when any of the following conditions are true: - No signatureis included in the receiver request - The signature included in the receiver request does not validate for the public key returned from the receiver's paymail PKI service - The timestamp in the dtfield is more than two minutes away from the sender's paymail service view of the current time
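As an illustration of the replay-window rule above, the sketch below checks whether a request's dt value falls within two minutes of the receiver service's own clock. It is only a sketch of that single check, not a reference implementation of this specification: the PKI lookup and signature verification steps are omitted, and the helper name is our own.

from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=2)  # the two-minute window required by this specification

def dt_within_window(dt_field, now=None):
    # Return True if the ISO-8601 'dt' value is within +/- two minutes of now.
    now = now or datetime.now(timezone.utc)
    # fromisoformat() accepts "2020-01-01T12:00:00+00:00"; normalise a trailing "Z" first.
    request_time = datetime.fromisoformat(dt_field.replace("Z", "+00:00"))
    return abs(now - request_time) <= MAX_SKEW

# A receiver would reject the payment destination request with HTTP 401 when this returns False.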
https://docs.moneybutton.com/docs/paymail/paymail-04-02-sender-validation.html
2022-09-24T19:15:57
CC-MAIN-2022-40
1664030333455.97
[]
docs.moneybutton.com
How Yearn Boosts Yield This is an overview of how Yearn investment strategies takes advantage of CRV vote locking on Curve Finance in order to increase yield. CRV vote locking Vote locking, "Boosties", or "Vote boosting" is a Curve Finance feature where CRV is locked into the Curve DAO. Vote locking CRV rewards yields veCRV (voting escrow CRV tokens). The longer time period that CRV is locked for, the more veCRVs are received. The minimum locking period is 1 week and the maximum period is 4 years. veCRV enables its holders to: - vote in Curve governance - stake to earn trading fees and - boost rewards from liquidity provided Voting Once CRV holders vote-lock their CRV, changing it into veCRV, they can then vote on various DAO proposals and pool parameter changes which are proposed, or propose their own changes. It is worth noting that native veCRV cannot be transferred, and the only way to obtain it is by vote-locking CRV. You can stake CRV on Curve.fi and actively manage your boosts for Liquidity Pools yourself, or you can let Yearn take care of CRV staking for you with our dedicated vaults: yveCRV, and yvBOOST. Also our yVault tokens are tradeable and transferable unlike staking CRV at Curve.fi. Staking veCRV (staked CRV), receives a share of trading fees from the Curve protocol (50% of all trading fees generated, from Sept. 19, 2020 - onwards). Those fees are collected and used to buy 3CRV, the LP token for the TriPool (DAI+USDC+USDT), which are then distributed to veCRV holders. Earning Trading fees Based on Yearn's share of the total veCRV, 50% of trading fees will be claimed as CRV, out of which 10% will in turn be locked into the Curve DAO for more veCRV. Boosting Beyond staking, another major incentive for CRV is the ability to boost your rewards on provided liquidity. Vote locking CRV allows you to acquire voting power to participate in the DAO and direct CRV reward allocations towards selected pools, earning a boost of up to 2.5x on the liquidity you are providing. The actual boost multiplier is determined by a formula that depends on the current pool gauge liquidity as a fraction out of the total amount of liquidity provided in Curve pools by Yearn, and the fraction of voting power that the veCRV constitutes out of the total (i.e. its share of the current total of veCRV issued). A "Yearn boost" tool displaying Yearn's current active and potential boost is available here. See the Curve Guide for more details on the formula and its calculation. The yveCRV yVault Earn CRV with a better boost When a user deposits CRV into the vault, that CRV is locked on the Curve.fi platform as veCRV and the user is returned a tokenized version of veCRV, yveCRV. This vault earns you a continuous share of Curve’s trading fees. Every week, these rewards can be claimed as 3Crv (Curve’s 3pool LP token). You could do this yourself directly on the Curve.fi, but there is a very good reason one would prefer to use the yveCRV yVault: more rewards! How much more? Your rewards through this vault can be more than double! Yearn achieves this because it periodically donates 10% of the farmed CRV it earns through all Curve.fi based strategies to this yveCRV vault and allows yveCRV vault depositors to claim Yearn’s share of Curve protocol fees. This means we give all of Yearn’s rewards, which we could have claimed for the protocol, to yveCRV depositors, boosting their weekly rewards. Locking your CRV tokens into the vault means that you delegate your Curve.fi voting power to yearn. 
Yearn constantly runs simulations to optimize its voting allocations which maximizes yield across all vaults, benefiting your deposits in other vaults! You can now claim your rewards and spend that money on mojitos while you enjoy retirement. Though, another option you might want instead is to add your rewards back into the vault to compound your gains and you can even find a “Restake” button to do just that. You could very well do this manually, but Yearn has you covered here with … The yvBoost yVault Earn boosted CRV with compounding The yvBOOST yVault is a fully automated and compounding version of the yveCRV yVault explained above. To put it simply, this vault claims your weekly 3CRV rewards automatically and uses them to acquire more yveCRV (either via market-buy or mint, depending on which is most efficient at time of harvest). Once deposited, just as in the yveCRV yVault, your CRV tokens’s voting power is handled and optimized by Yearn. You do not need to worry about claiming Curve.fi’s weekly protocol fees, the vault does this for you! This is a “set-and-forget” vault where your CRV tokens grow exponentially, harnessing the power of compound interest! Now you might be wondering how one would extract any gains made from your CRV tokens in the vault, when as mentioned earlier, any CRV deposited into either the yveCRV or the yvBOOST are locked. While you cannot withdraw from the yveCRV vault, you can actually swap both of these vault tokens on Sushiswap. This is because Yearn and its users provide liquidity on Sushiswap to allow swapping of your yveCRV and yvBOOST tokens for ETH (or anything, really). A little alpha. Yearn buys yvBOOST from the market, unwraps it into yveCRV, and donates that yveCRV into the yvBOOST vault, increasing the underlying value of yvBOOST. Locking CRV for veCRV 10% of all CRV earned by the strategies are locked up for 4 years in the Curve DAO in order to get the maximum reward ratio of 1:1 CRV:veCRV. Actual veCRV distribution has not yet begun, with a date for this still to be announced by Curve Finance. CRV Vote Locking in Yearn Staking your CRV directly on the Curve.fi platform means locking your CRV token in exchange for a non-transferrable veCRV token that allows you to manually claim a share of the protocol’s fee (3CRV). You can use this veCRV token to manually rebalance your votes to obtain a boost on your provided liquidity to the Curve.fi platform. Yearn deploys a single CRV vote locking strategy that is shared across its general Curve strategies: - StrategyCurveYBUSDVoterProxy - StrategyCurveBTCVoterProxy - StrategyCurveYVoterProxy - StrategyCurve3CrvVoterProxy Enter Yearn’s yveCRV and yveBOOST vaults Both of these Yearn vaults reward CRV stakers with a share of the CRV locked by Yearn, making it an ideal destination for those who wish to stake CRV whilst remaining liquid: - Earn a share of trading fees from the Curve.fi protocol (3Crv), automatically reinvested (for the yvBOOST vault). - Earn a share of Yearn’s claim of Curve.fi protocol fees, on top of your own rewards (more 3CRV!), automatically reinvested (for the yvBOOST vault). - The collective voting power of the veCRV tokens is optimized and rebalanced automatically to maximize rewards in all of Yearn’s Curve Pool vaults. 
- Receive yveCRV or yvBOOST tokens for your deposited CRV, allowing you to easily extract profit or exit your staked CRV position Yearn’s work to automate the yield generation and rebalancing of your crypto assets is especially true in the case of your CRV holdings, and Yearn’s yveCRV or yvBOOST offers a powerful, compounding, “set-and-forget” place to stake your CRV! More information - curve.fi webpage - Curve Guide for staking CRV - Curve Guide for vote locking - Curve FAQ - deFinn Infographic on CRV Voting Boost and formula - Boost calculator - Yearn CurveDAO proxy strategy diagram
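As a rough illustration of the boost formula referenced above (see the Curve Guide and boost calculator links for the authoritative definition), the sketch below uses the commonly published Curve gauge formulation, working balance = min(0.4*b + 0.6*S*(w/W), b). Treat it as an approximation for building intuition, not as Yearn's exact accounting.

def curve_boost(user_liquidity, total_gauge_liquidity, user_vecrv, total_vecrv):
    # Share of total veCRV voting power held by the depositor (or by Yearn's voter proxy).
    vote_share = user_vecrv / total_vecrv if total_vecrv else 0.0
    working_balance = min(
        0.4 * user_liquidity + 0.6 * total_gauge_liquidity * vote_share,
        user_liquidity,
    )
    # With zero veCRV the working balance is 0.4*b (a 1.0x boost); the cap is 2.5x.
    return working_balance / (0.4 * user_liquidity)

# Example: a small LP position backed by a large veCRV share reaches the 2.5x cap.
print(curve_boost(user_liquidity=10_000, total_gauge_liquidity=1_000_000, user_vecrv=50_000, total_vecrv=1_000_000))  # 2.5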
https://docs.yearn.finance/getting-started/guides/how-boost-works
2022-09-24T18:57:42
CC-MAIN-2022-40
1664030333455.97
[array(['https://miro.medium.com/max/115/0*OsdD6266-e0jWcVH.png', None], dtype=object) array(['https://miro.medium.com/max/115/0*Xr6RMWyDc6gmZnKw.png', None], dtype=object) ]
docs.yearn.finance
role Metamodel::PrivateMethodContainer Metaobject that supports private methods In Perl 6, classes, roles and grammars can have private methods, that is, methods that are only callable from within the class, and are not inherited by types derived by inheritance. For the purposes of dispatching and scoping, private methods are closer to subroutines than to methods. However, they share access to self and attributes with methods. Methods method add_private_method method add_private_method(Metamodel::PrivateMethodContainer: $obj, $name, $code) Adds a private method $code with name $name. method private_method_table method private_method_table(Metamodel::PrivateMethodContainer: $obj) Returns a hash of name => &method_object Type Graph Metamodel::PrivateMethodContainer
https://docs-stage.perl6.org/type/Metamodel::PrivateMethodContainer
2022-09-24T18:59:10
CC-MAIN-2022-40
1664030333455.97
[]
docs-stage.perl6.org
Acconeer documentation pages# New to Acconeer radar technology? The Getting started page helps you get up and running to explore the next sense! Already up and running and want to learn more? The Handbook provides in-depth information on a wide range of topics. The Exploration Tool page covers information related to that project, such as API and file format reference. Looking for software downloads and additional resources? Head over to the developer site.
https://docs.acconeer.com/en/latest/index.html
2022-09-24T20:45:30
CC-MAIN-2022-40
1664030333455.97
[]
docs.acconeer.com
operation is the destination AWS Region for the DB snapshot copy. This command doesn't apply to RDS Custom. For more information about copying snapshots, see Copying a DB Snapshot in the Amazon RDS User Guide. Request Parameters For information about the parameters that are common to all actions, see Common Parameters. - CopyTags A value that indicates whether to copy all tags from the source DB snapshot to the target DB snapshot. By default, tags are not copied. Type: Boolean Required: No - KmsKeyId The AWS KMS key identifier for an encrypted DB snapshot. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. If you copy an encrypted DB snapshot from your AWS account, you can specify a value for this parameter to encrypt the copy with a new KMS key. If you don't specify a value for this parameter, then the copy of the DB snapshot is encrypted with the same AWS an AWS KMS key identifier for the destination AWS Region. KMS keys are specific to the AWS Region that they are created in, and you can't use KMS When you are copying a snapshot from one AWS GovCloud (US) Region to another, the URL that contains a Signature Version 4 signed request for the CopyDBSnapshotAPI operation in the source AWS Region that contains the source DB snapshot to copy. This setting applies only to AWS GovCloud (US) Regions. It's ignored in other AWS Regions. You must specify this parameter when you copy an encrypted DB snapshot from another AWS Region by using the Amazon RDS API. Don't specify PreSignedUrlwhen you are copying an encrypted DB snapshot in the same AWS Region. The presigned URL must be a valid request for the CopyDBClusterSnapshotAPI operation that can run in the source AWS Region that contains the encrypted DB cluster snapshot to copy. The presigned URL request must contain the following parameter values: DestinationRegion- The AWS Region that the encrypted DB snapshot is copied to. This AWS Region is the same one where the CopyDBSnapshotoperation is called that contains this presigned URL. For example, if you copy an encrypted DB snapshot from the us-west-2 AWS Region to the us-east-1 AWS Region, then you call the CopyDBSnapshotoperation in the us-east-1 AWS Region and provide a presigned URL that contains a call to the CopyDBSnapshotoperation in the us-west-2 AWS Region. For this example, the DestinationRegionin the presigned URL must be set to the us-east-1 AWS Region. KmsKeyId- The AWS KMS key identifier for the KMS key to use to encrypt the copy of the DB snapshot in the destination AWS Region. This is the same identifier for both the CopyDBSnapshotoperation that is called in the destination AWS Region, and the operation run in the source AWS Region.CustomAvailabilityZone The external custom Availability Zone (CAZ) identifier for the target CAZ. Example: rds-caz-aiqhTgQv. Type: Stringaction. Type: DBSnapshot object Errors For information about the errors that are common to all actions, see Common Errors. - CustomAvailabilityZoneNotFound CustomAvailabilityZoneIddoesn't refer to an existing custom Availability Zone identifier. HTTP Status Code: 404 - DBSnapshotAlreadyExists DBSnapshotIdentifieris already used by an existing snapshot. HTTP Status Code: 400 - DBSnapshotNotFound DBSnapshotIdentifierdoesn Examples Example This example illustrates one usage of CopyDBSnapshot. 
Sample Request ?Action=CopyDBSnapshot &SignatureMethod=HmacSHA256 &SignatureVersion=4 &SourceDBSnapshotIdentifier=arn%3Aaws%3Ards%3Aus-east-1%3A123456789012%3Asnapshot%3Ards%3Amysqldb-2021-04-27-08-16 &TargetDBSnapshotIdentifier=mysqldb-copy &Version=2014-10-31 &X-Amz-Algorithm=AWS4-HMAC-SHA256 &X-Amz-Credential=AKIADQKE4SARGYLE/20140429/us-east-1/rds/aws4_request &X-Amz-Date=20210629.44</EngineVersion> <DBInstanceIdentifier>mysqldb</DBInstanceIdentifier> <DBSnapshotIdentifier>mysqldb-copy</DBSnapshotIdentifier> <SnapshotCreateTime>2021-05-11T06:02:03.422Z</SnapshotCreateTime> <OriginalSnapshotCreateTime>2021-04-27T08:16:05.356Z</OriginalSnapshotCreateTime> <AvailabilityZone>us-east-1a</AvailabilityZone> <InstanceCreateTime>2021-04-21T22:24:26.573Z</InstanceCreateTime> <PercentProgress>100</PercentProgress> <AllocatedStorage>100</AllocatedStorage> <MasterUsername>admin<:
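For reference, the same copy can be made with the AWS SDK for Python. The sketch below mirrors the sample request above (a same-Region copy of a manual snapshot) using the placeholder identifiers from that sample, and assumes AWS credentials are already configured.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

response = rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:rds:mysqldb-2021-04-27-08-16",
    TargetDBSnapshotIdentifier="mysqldb-copy",
    CopyTags=True,  # tags are not copied unless this is set
)
print(response["DBSnapshot"]["Status"])  # e.g. "creating" while the copy is in progress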
https://docs.aws.amazon.com/ja_jp/AmazonRDS/latest/APIReference/API_CopyDBSnapshot.html
2022-09-24T21:00:07
CC-MAIN-2022-40
1664030333455.97
[]
docs.aws.amazon.com
Action Panel Services The Action Panel calls services with names based on the Element’s panel name and category. Your plugin implements them to add panel elements to the panels. Builder services To build the action panel, the Action Panel plugin will call a service named after the panel’s name, std:action_panel:[panel], where [panel] is the name from the Element definition. It’s called with display and builder arguments, for example: P.implementService("std:action_panel:person", function(display, builder) { // implementation here } ); builder is a PanelBuilder object for building the user interface. display is an object with the following properties: key object If the Action Panel Element is being rendered on an object page, the StoreObject. key path The path of the page being requested. key options The options from the Element definition. While not entirely recommended, you can include additional options there which are relevant to your plugins, but do use your own namespace. Multiple panels For clarity of user interface, remember to split the UI into logical groupings. Use the panel() function on the PanelBuilder. Category service In addition, if the Element has a category defined, a service named after the category, std:action_panel:category:[category] will be called with the same arguments. Global service There is also a global service, std:action_panel:*, called for every action panel. This should only be needed when the implementing plugin doesn’t know the name or category of the panel it should add to. Be careful when using this service to ensure you only affect the panels you intend to. It should only be necessary very rarely. Examples With the definitions from the action panel elements examples, to add a link to the home page for Administrator users: P.implementService("std:action_panel:home_links", function(display, builder) { // Use normal API for checking current user permissions if(O.currentUser.isMemberOf(Group.Administrators)) { // Use PanelBuilder functions to add UI builder.link("default", "/do/example/admin", "Example Admin"); } } ); To add a link to a per-person ‘overview’ page, shown to every user: P.implementService("std:action_panel:person", function(display, builder) { builder.link("default", "/do/example/person-overview"+display.object.ref, "Overview") } } ); Other services The Action Panel plugin provides some utility services which may be needed occasionally. std:action_panel_priorities The PanelBuilder uses numeric and named priorities for ordering. You can use the std:action_panel_priorities service to add additional named priorities. P.implementService("std:action_panel_priorities", function(priorities) { priorities["example:special"] = 150; priorities["example:boring"] = 5000; } ); Use this sparingly. It’s often just as easy to use numbers. Your names should be namespaced to your organisation or plugin, as they’re shared between all plugins in the application. std_action_panel:build_panel The std_action_panel:build_panel service will create a PanelBuilder, call the various services to populate it, and return it. It takes two arguments, the panel name and a display object. The display object will be filled out with display.options.panel if you don’t specify it. For example, to build a PanelBuilder for a person object: var personRef = /* obtain ref to object somehow */ var person = personRef.load(); var builder = O.service("std_action_panel:build_panel", "person", { object: person }); // builder is a populated PanelBuilder for that person
https://docs.haplo.org/standard/action-panel/services
2022-09-24T19:45:59
CC-MAIN-2022-40
1664030333455.97
[]
docs.haplo.org
For more information about configuring KPI severity levels, see Configure KPI thresholds in ITSI in the Service Insights manual. See also Overview of deep dives in ITSI in this manual, and the Investigate a service with poor health topic for investigating the service or KPI that is selected.
https://docs.splunk.com/Documentation/ITSI/4.14.0/SI/SITile
2022-09-24T20:05:06
CC-MAIN-2022-40
1664030333455.97
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Goal TrackingGoal Tracking Conversions goals help you measure the performance of aspects of your site. With Altis Native Analytics you can use the client-side API to track any events you wish, however, some goal tracking is integrated into Altis Optimization Framework features like A/B Testing and Experience Blocks. Conversion goals allow us to determine whether an A/B test variant has been successful or not and whether some personalised content is out performing others. The goal conversion rate is generally calculated as the number of conversions / number of impressions. Registering GoalsRegistering Goals Registered goals are available to select in the admin when configuring experience blocks. A few are provided out of the box however additional goals can be added either in PHP, JavaScript, or both. PHPPHP Altis\Analytics\Experiments\register_goal( string $name, array $args = [] ) $nameis a string ID you can use to reference the goal later. $argsis an array of options: string $args['label']: A human readable label for the goal. Defaults to $name. string $args['event']: The JS event to trigger on. Defaults to $name. string $args['selector']: An optional CSS selector to scope the event to. string $args['closest']: An optional CSS selector for finding the closest matching parent to bind the event to. string $args['validation_message']An optional message to show if the variant is missing the selector. JavaScriptJavaScript Goals registered client side behave a little differently. They can be completely custom goals that are not registered in PHP but they can also be used to enhance server-side registered goals by defining the callback used to bind the event listener. To define a own goal handlers in JavaScript use the following function: Altis.Analytics.Experiments.registerGoalHandler( name <string>, callback <function>, closest <array> ) If name matches a server-side registered goal the default callback and optionally closest value for binding the event listener are overridden. The callback receives the following parameters: element <HTMLElement>: Target node for the event. record <function>: Receives the target element and a callback to log the conversion. Altis.Analytics.onReady( function () { Altis.Analytics.Experiments.registerGoalHandler( 'scrollIntoView', function ( element, record ) { var listener = function ( event ) { // Check element has come into view or not. if ( element.getBoundingClientRect().top > window.innerHeight ) { return; } // Remove event listener immediately. window.removeEventListener( 'scroll', listener ); // Record event. record( event ); }; // Start listening to scroll events. window.addEventListener( 'scroll', listener ); } ); } ); Note: This JavaScript should be enqueued in the <head> via the wp_enqueue_scripts action. While it is possible to record an event directly in the goal handler it's primary purpose is to handle binding the event listener and calling the record() function that is passed to it. This is because the place where the record() function is defined may have access to useful data that should be tracked with the event. Scoped Event HandlingScoped Event Handling When goals are processed they are typically bound to an element in the current scope, for instance with Experience Blocks it would be bound to the block DOM node itself. To gain more control over the elements that the event is bound to you can specify the selector and closest parameters depending on your use-case. 
For example, the built in goal for "Click on any link" is registered like so with the selector a: namespace Altis\Analytics\Experiments; register_goal( 'click_any_link', [ 'label' => __( 'Click any link' ), 'event' => 'click', 'selector' => 'a', ] ); For the built titles A/B test feature the selector is empty but closest is set to a because it is replacing only the post title text and not the markup around it.
https://docs.altis-dxp.com/v12/analytics/optimization-framework/goal-tracking/
2022-09-24T19:22:52
CC-MAIN-2022-40
1664030333455.97
[]
docs.altis-dxp.com
Altis ConsentAltis Consent The Altis Consent API provides a website administrator (you) with a centralized and consistent way of obtaining data privacy consent from your website's visitors. Data privacy consent refers to how user data is tracked, collected, or stored on a website. Web CookiesWeb Cookies A website uses cookies to track user behaviour on the website. Cookies can be thought of as belonging to one of two broad types: first-party cookies, set by the server or code on the domain itself, and third-party cookies served by scripts included from other domains or services (such as Google Analytics). Depending on their function, these cookies can further be categorized into functional or operational cookies, marketing or personalization cookies, or cookies used for statistical data. While functional cookies are necessary for a smooth user experience, other cookies, such as marketing cookies that track user preferences, are not essential for a website's performance. General Data Protection Regulation (GDPR) governs user data privacy and protection and stipulates that businesses require appropriate consent from website visitors before they are served any such cookies. We created this framework to help you readily obtain consent from your website users for various cookies that will be served on your website. With the features available in the API, a website's visitor will be able to control the types of data or cookies that can be stored and shared if you choose to offer that level of control. Why should I use it?Why should I use it? To keep up with the GDPR regulations and your company's privacy and cookie policies. GDPR regulations mandate that businesses need user consent before they can collect or track any visitor data. This tool helps you create a cookie consent banner out of the box. With the banner options, a user can choose to select the cookies (data) that can be collected by the website. The framework lets you manage first party and third party cookies. How does it work?How does it work? Altis Consent is an API that is designed to help you manage when to load scripts that set cookies and other types of data stored locally on a user's computer. It allows you to subscribe to changes in a user's consent and use those events to trigger Google Tag Manager tags, or to lazily load other JavaScript that sets cookies. What does it do?What does it do? Out of the box, the Consent API supports Altis Analytics and Google Tag Manager. Using the controls provided on the Privacy page in the WP admin you can link your website's Privacy Policy and Cookie storage Policy page. There are options to control whether to grant the user a choice to select the types of cookies they want to consent to, or an option to allow all cookies or only functional cookies. You may also easily add a cookie consent banner message in the admin settings. In addition, a robust templating system and dozens of filters allow development teams to fully customize the options or the display of the banner, or only customize certain specific elements, based on a site's needs. What about third-party cookies?What about third-party cookies? The API allows management of third party cookies if it is configured so. See the JavaScript Reference to see how this can be done. Can I manage user data tracked through Altis' personalization features?Can I manage user data tracked through Altis' personalization features? Altis' personalization features track user data using first-party cookies and are supported by default. 
Does the API help categorize the type of cookies?Does the API help categorize the type of cookies? Altis does not do any automatic categorisation of your cookies. It provides the means for users to control their consent and for you to respond to changes in that consent. Determining which scripts fall into which category needs to be manually determined and can vary by geographical location. Out of the box, five categories are provided: functional, marketing, statistics, statistics-anonymous and preferences. More categories can be added via a filter. Note: functional and statistics-anonymous categories are always allowed by default. This can be modified using the altis.consent.always_allow_categories filter. ConfigurationConfiguration Altis Consent is active by default. However, if your project is already using an alternative consent system and you would like to disable Altis Consent, this can be done in the project's composer.json file: { "extra": { "altis": { "modules": { "privacy": { "consent": false } } } } } Alternatively, you can use the altis.consent.should_display_banner filter or the admin "Display Cookie Consent Banner" setting to disable the banner. Note that neither of these two options will completely disable the Consent API itself, and all associated JavaScript will still load on the page.
https://docs.altis-dxp.com/v12/privacy/consent/
2022-09-24T20:27:41
CC-MAIN-2022-40
1664030333455.97
[]
docs.altis-dxp.com
Working with Amazon S3 objects by using the AWS Toolkit for JetBrains Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. Topics Viewing an object in an Amazon S3 bucket This procedure opens the S3 Bucket Viewer. You can use it to view, upload, download, and delete objects grouped by folders in an Amazon S3 bucket. Open AWS Explorer, if it isn't already open. To view a bucket's objects, do one of the following: Double-click the name of the bucket. Right-click the name of the bucket, and then choose View Bucket. The S3 Bucket Viewer displays information about the bucket's name, Amazon Resource Name (ARN), and creation date. The objects and folders in the bucket are available in the pane beneath. Opening an object in the IDE If the object in an Amazon S3 bucket is a file type recognized by the IDE, you can download a read-only copy and open it in the IDE. To find an object to download, open the S3 Bucket Viewer (see Viewing an object in an Amazon S3 bucket). Double-click the name of the object. The file opens in the default IDE window for that file type. Uploading an object To find the folder you want to upload objects to, open the S3 Bucket Viewer (see Viewing an object in an Amazon S3 bucket). Right-click the folder, and then choose Upload. In the dialog box, select the files to upload. Note You can upload multiple files at once. You can't upload directories. Choose OK. Downloading an object To find a folder to download objects from, open the S3 Bucket Viewer (see Viewing an object in an Amazon S3 bucket). Choose a folder to display its objects. Right-click an object, and then choose Download. In the dialog box, select the download location. Note If you're downloading multiple files, ensure you select the path name instead of the folder. You can't download directories. Choose OK. Note If a file already exists in the download location, you can overwrite it or leave it in place by skipping the download. Deleting an object To find the object to delete, open the S3 Bucket Viewer (see Viewing an object in an Amazon S3 bucket). After you select the object, delete it by doing one of the following: Press Delete. Right-click, and then choose Delete. Note You can select and delete multiple objects at once. To confirm the deletion, choose Delete.
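The Toolkit drives these object operations through the IDE; if you later need to script the same upload, download, and delete steps outside the IDE, a minimal boto3 sketch looks like the following. The bucket and key names are placeholders, and AWS credentials are assumed to be configured.

import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"            # placeholder bucket name
key = "reports/2022/summary.csv"     # placeholder object key

s3.upload_file("summary.csv", bucket, key)           # upload a local file as an object
s3.download_file(bucket, key, "summary-copy.csv")    # download the object to a local file
s3.delete_object(Bucket=bucket, Key=key)             # delete the object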
https://docs.aws.amazon.com/toolkit-for-jetbrains/latest/userguide/work-with-S3-objects.html
2022-09-24T19:54:35
CC-MAIN-2022-40
1664030333455.97
[]
docs.aws.amazon.com
Import data into a database You can import, export, or backup files of a specific Redis Enterprise Software database to restore data. You can either import from a single file or from multiple files, such as when you want to import from a backup of a clustered database. Import data into a database To import data into a database: - Sign in to the admin console and then select Databases from the main menu. - Select the target database from the list. - From the Configuration tab, locate and then select the Import button. - Acknowledge the warning to continue the operation. - From the Import dialog, enter the details for the import data. These vary according to the storage location. - To receive email notifications, place a checkmark in the Receive email notification on success/failure option. - Select Import. Supported storage services You can import data from a variety of services, ranging from local servers to cloud services. Earlier versions of Redis Enterprise Software supported OpenStack Swift as a storage location; however, that support ended 30 November 2020. As a result, that option is no longer available. HTTP server To import RDB files from an HTTP server, enter the path to the files. You must enter each path on a separate line. FTP server Before you specify to import from an FTP server, make sure that: - The Redis Enterprise cluster has network connectivity to the FTP server. - The user that you specify in the FTP server location has read privileges. To import an RDB file from an FTP server, enter the FTP server location in the format: ftp://[username]:[password]@[host]<:custom_port>/path/filename.rdb For example: ftp://[username]:[password]@[host]/home/backups/<filename>.rdb SFTP server Before you specify to import from an SFTP server, make sure that: - The Redis Enterprise cluster has network connectivity to the SFTP server. - The user that you specify in the SFTP server location has read privileges. To import an RDB file from an SFTP server, enter the SFTP server location in the format: sftp://[username]:[password]@[host]<:custom_port>/path/filename.rdb For example: sftp://[username]:[password]@[host]/home/backups/<filename>.rdb AWS S3 Before you import from Amazon S3, make sure that you have: - Path in the format: s3://bucketname/path/<filename>.rdb - Access key ID - Secret access key You can also connect to a storage service that uses the S3 protocol but is not hosted by Amazon AWS. The storage service must have a valid SSL certificate. To connect to an S3-compatible storage location, run: rladmin cluster config s3_url <url> Local mount point Before you specify to import from a local mount point, make sure that: - The node has network connectivity to the destination server of the mount point. - The redislabs:redislabs user has read privileges. To import from a local mount point, enter the path in the format: /path/<filename>.rdb Azure Blob Storage Before you choose to import from Azure Blob Storage, make sure that you have: - Storage location path in the format: /container_name/[path/]/<filename>.rdb - Account name - An authentication token, either an account key or an Azure shared access signature (SAS). Azure SAS support requires Redis Software version 6.0.20. To learn more about Azure SAS, see Grant limited access to Azure Storage resources using shared access signatures.
Google Cloud Storage Before you choose to import from Google Cloud Storage, make sure that you have: - Storage location path in the format: /bucket_name/[path/]/<filename>.rdb - Client ID - Client email - Private key ID - Private key Importing into an Active-Active database When importing data into an Active-Active database, there are two options: - Use flushallto delete all data from the Active-Active database, then import the data into the database. - Import data but merge it into the existing data or add new data from the import file. Because Active-Active databases have a numeric counter data type, when you merge the imported data into the existing data RS increments counters by the value that is in the imported data. The import through the Redis Enterprise admin console handles these data types for you. You can import data into an Active-Active database from the admin console. When you import data into an Active-Active database, there is a special prompt.
https://docs.redis.com/latest/rs/databases/import-export/import-data/
2022-09-24T20:00:59
CC-MAIN-2022-40
1664030333455.97
[array(['../../../../images/rs/import-to-active-active-warning.png', 'Import into an Active-Active database'], dtype=object) ]
docs.redis.com
pyiron pyiron - an integrated development environment (IDE) for computational materials science. It combines several tools in a common platform: Atomic structure objects – compatible to the Atomic Simulation Environment (ASE). Atomistic simulation codes – like LAMMPS and VASP. Feedback Loops – to construct dynamic simulation life cycles. Hierarchical data management – interfacing with storage resources like SQL and HDF5. Integrated visualization – based on NGLview. Interactive simulation protocols - based on Jupyter notebooks. Object oriented job management – for scaling complex simulation protocols from single jobs to high-throughput simulations. pyiron (called pyron) is developed in the Computational Materials Design department of Joerg Neugebauer at the Max Planck Institut für Eisenforschung (Max Planck Institute for iron research). While its original focus was to provide a framework to develop and run complex simulation protocols as needed for ab initio thermodynamics it quickly evolved into a versatile tool to manage a wide variety of simulation tasks. In 2016 the Interdisciplinary Centre for Advanced Materials Simulation (ICAMS) joined the development of the framework with a specific focus on high throughput applications. In 2018 pyiron was released as open-source project. Note pyiron_atomistics: This is the documentation page for the basic infrastructure moduls of pyiron. If you’re new to pyiron and want to get an overview head over to pyiron. If you’re looking for the API docs of pyiron_base check pyiron_base. Explore pyiron We provide various options to install, explore and run pyiron: Workstation Installation (recommeded): for Windows, Linux or Mac OS X workstations (interface for local VASP executable, support for the latest jupyterlab based GUI) Mybinder.org (beta): test pyiron directly in your browser (no VASP license, no visualization, only temporary data storage) Docker (for demonstration): requires Docker installation (no VASP license, only temporary data storage) Join the development Please contact us if you are interested in using pyiron: to interface your simulation code or method implementing high-throughput approaches based on atomistic codes to learn more about method development and Big Data in material science. Please also check out the pyiron contributing guidelines Citing If you use pyiron in your research, please consider citing the following work: @article{pyiron-paper, title = {pyiron: An integrated development environment for computational materials science}, journal = {Computational Materials Science}, volume = {163}, pages = {24 - 36}, year = {2019}, issn = {0927-0256}, doi = {}, url = {}, author = {Jan Janssen and Sudarsan Surendralal and Yury Lysogorskiy and Mira Todorova and Tilmann Hickel and Ralf Drautz and Jörg Neugebauer}, keywords = {Modelling workflow, Integrated development environment, Complex simulation protocols}, } Read more about citing individual modules/ plugins of pyiron and the implemented simulation codes.
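For a feel of the object-oriented job management mentioned above, the sketch below follows the style of the examples in the pyiron paper: create a project, create a LAMMPS job, assign a structure, and run it. The exact API varies between pyiron versions and a working LAMMPS installation is assumed, so treat this as illustrative rather than canonical.

from pyiron import Project

pr = Project(path="demo")
job = pr.create_job(job_type=pr.job_type.Lammps, job_name="Al_bulk")
job.structure = pr.create_ase_bulk("Al", cubic=True)  # ASE-compatible bulk aluminium cell
job.run()  # runs the LAMMPS calculation and stores the results in HDF5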
https://pyiron-atomistics.readthedocs.io/en/latest/
2022-09-24T18:43:25
CC-MAIN-2022-40
1664030333455.97
[array(['_images/screenshots.png', 'Screenshot of pyiron running inside jupyterlab.'], dtype=object)]
pyiron-atomistics.readthedocs.io
Using CSV Transfer¶ This document provides basic instructions on installing and using the CSV-Transfer Utility to import data files into BMON. Note that there is a newer utility that also can be used to transfer data from files of any format into BMON: file-to-bmon. Examine the features of the file-to-bmon utility before choosing the csv-tranfser utility. The instructions provided are written based on having performed a local BMON installation but should work similarly if you have installed BMON on a Webfaction server based on these instructions. The skills needed for installation are primarily: - Linux command line skills - it is assumed you are familiar so explanations of basic linux command line operations are not included in this documentation. Installing Dropbox¶ Note The steps in this section are patterned after the general instructions from Digital Ocean’s How To Install Dropbox Client as a Service on Ubuntu 14.04 uname -a x86_64 means 64 bit, x86 means 32 bit, this is important when installing dropbox, the wrong version won’t work curl -Lo dropbox-linux-x86_64.tar.gz curl -Lo dropbox-linux-x86.tar.gz sudo mkdir -p /opt/dropbox sudo tar xzfv dropbox-linux-x86_64.tar.gz --strip 1 -C /opt/dropbox /opt/dropbox/dropboxd Wait for the message This computer is now linked to Dropbox. Welcome USERNAME to appear on your bmon server Kill the process by pressing Ctrl-C sudo curl -o /etc/init.d/dropbox sudo chmod +x /etc/init.d/dropbox sudo nano /etc/default/dropbox example file DROPBOX_USERS=”USERNAME” where USERNAME is your linux username (not to be confused with your Dropbox username) sudo systemctl daemon-reload sudo systemctl start dropbox sudo update-rc.d dropbox defaults Installing Required Packages¶ cd $home Install the Required Packages Note If you followed the instructions from How to Install BMON on a Local Web Server you may have already installed these packages. However, they were installed in the virtual environment which is encapsulated in its own entity. You will need to install these packages again to the non-virtual environment so the csv transfer tool can access them. sudo pip install pyyaml sudo pip install requests sudo pip install pytz Installing CSV Transfer¶ cd $home sudo git clone cd csv-transfer sudo cp sample_config.yaml config.yaml The README file provided with csv-transfer includes more thorough documentation on explaining all of the parameters in the config file, see it for more information before proceeding either via sudo nano README or by visiting the csv-transfer github repository online. Some basic tips are provided below to aid in properly configuring and running csv transfer successfully. 
You will need to edit the config.yaml file to instruct it where and how to read your files, do this by running sudo nano config.yaml An example from a test config.yaml file: file_glob:indicates the path where your files are stored in your Dropbox folder, wild-cards (*.csv) are accepted if all of your files are in the same directory and will upload all files meeting that criteria header_rows:the number of rows in the beginning of your file to be considered header or which do not contain data you wish to upload (see csv example below) name_row:indication of which row (within the header count) contains the column names of your data, a 2 here means that of the 4 header rows, the second row contains column names (see csv example below) field_map:is optional, in the example above field_map: “lambda nm: ‘_’.join(nm.split(‘_’)[:2])” strips the final two underscores of the column name ex. SOLAR_TundertankONEFOOT_F_Avg would become SOLAR_TundertankONEFOOT, remove this line if you do not wish to have your column names altered ts_tz:enter the appropriate timezone for your area and/or the area the data is being generated exclude_fields:if you have arbitrary fields, like record numbers, you can enter them here to have them omitted from the import poster_id:enter a unique id bmon_store_url:is the full URL to the storage function of the BMON server, this will include IP OR URL/readingdb/reading/store, the only information to be changed is the portion immediately following http:// bmon_store_key:each BMON server has a unique and secret storage key string; providing this string is required for storing data on the BMON server, copy this from your bmon settings.py file sudo ./csv-transfer.py config.yaml Incorporating Your Imported Data Into BMON¶ Follow the Adding Sensors instructions to add sensors to BMON if you haven’t done so already. The data structure within the SQLite database that BMON runs on is simple. The data from each sensor occupies its own table. The name of the table is the Sensor ID in our case it’s the column name from our csv file. An example .csv file Troubleshooting¶ If you run the csv transfer tool and receive InsecurePlatformWarning or InsecureRequestWarning messages, do the following: sudo nano /csv-transfer/consumers/httpPoster2.py comment out the following lines by adding a # character at the beginning of each line from requests.packages.urllib3.exceptions import InsecureRequestWarning, InsecurePlatformWarning requests.packages.urllib3.disable_warnings(InsecureRequestWarning) requests.packages.urllib3.disable_warnings(InsecurePlatformWarning) to #from requests.packages.urllib3.exceptions import InsecureRequestWarning, InsecurePlatformWarning #requests.packages.urllib3.disable_warnings(InsecureRequestWarning) #requests.packages.urllib3.disable_warnings(InsecurePlatformWarning)
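Since each sensor ends up as its own table in BMON's SQLite database (named after the Sensor ID, i.e. the CSV column name), a quick way to confirm that csv-transfer delivered your data is to list those tables. The database path below is a placeholder; point it at wherever your BMON installation keeps its reading database.

import sqlite3

DB_PATH = "/path/to/bmon/reading_db.sqlite"  # placeholder; use your BMON reading database path

with sqlite3.connect(DB_PATH) as conn:
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]

print("\n".join(tables))  # each imported sensor/CSV column should appear as a table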
https://bmon-documentation.readthedocs.io/en/latest/using-csv-transfer.html
2022-09-24T19:05:20
CC-MAIN-2022-40
1664030333455.97
[]
bmon-documentation.readthedocs.io
Enterprise 11: Enable bots to run on other computers Bot developers must consider how bots function with the Bot Runner to allow bots to run on other computers. During the development of new bots, developers often point to local copies of files and attachments. This works great if the bot is only run on the developers' local computers. A local path looks something like this: C:\Users\UserName\Documents… Bot Runners are unattended computers (physical or virtual computers) whose job is to run the tasks presented to them. Because these Bot Runners have their own account login credentials, localized paths do not work. The system variable $AAApplicationPath$ resolves this problem. Local path during development: C:\Users\UserName\Documents\Automation Anywhere Files\Automation Anywhere\My Docs\accounts.xlsx Relative path that works in Bot Runner: $AAApplicationPath$\Automation Anywhere\My Docs\accounts.xlsx Not only does this make the path shorter, but it also makes the bot portable. When preparing a TaskBot to work with Bot Runner or a co-worker's computer, use $AAApplicationPath$ anywhere that points to a local file. Tip: Create variables, for example, "vPath," then use the Variable Operation with $AAApplicationPath$ to make the paths short and manageable.
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v11.3/page/enterprise/topics/aae-developer/aae-enable-bots-to-run-on-other-computers_2.html
2022-09-24T19:32:44
CC-MAIN-2022-40
1664030333455.97
[]
docs.automationanywhere.com
OEE and the KPI Model Automatic KPI Integration Site objects in the ISA-95 Model can be configured to automatically make their subtree accessible via the KPI Model. To prepare for this, you need to create a Root Node in the KPI Model for the Site subtree. Right-click the empty area in the KPI Model Panel and select the option to create a new Enterprise object. Give the new object a suitable name, then click Create. The KPI Model will now show the newly created Enterprise object. Next, go to the ISA-95 Model, open the Properties Panel for the Site object in the Common section and click the … button for the KPI Root Node property. In the Edit KPI Root Node dialog, set Scope to 'KPI Model'. Then select the object which you created previously in the KPI Model as the Root Node. Click OK to confirm your selection and close the Edit KPI Root Node dialog. Then click Apply for the properties. Now, as long as the path to the KPI Root Node is set, OEE indices are made available via the KPI Model. When applying a new path as KPI Root Node for the first time, a new tree structure is created in the KPI Model, representing the Site subtree from the ISA-95 Model. The OEE Equipment Monitor object isn’t created since it is neither a KPI object nor a structural element. KPI in VisualKPI In Visual KPI the inmation KPI Model is displayed in the KPIs section. When the Automatic KPI Integration is used, the OEE indices will also be displayed in Visual KPI. To learn more about VisualKPI and how it is integrated with system:inmation, see the Visual KPI Jumpstart.
https://docs.inmation.com/jumpstarts/1.84/using-oee/oee-kpi-model.html
2022-09-24T20:32:58
CC-MAIN-2022-40
1664030333455.97
[array(['_images/oee-kpi-new-root-node.png', 'Creating a New Enterprise as Root Node in the KPI Model'], dtype=object) array(['_images/oee-kpi-new-root-node-wizard.png', 'Create Enterprise Wizard'], dtype=object) array(['_images/oee-kpi-new-root-node-in-model.png', 'The New Root Node in the KPI Model'], dtype=object) array(['_images/oee-kpi-site-properties.png', "Opening the 'Edit KPI Root Node' Dialog"], dtype=object) array(['_images/oee-kpi-new-root-node-picker-for-kpi.png', 'Setting the KPI Models as Scope'], dtype=object) array(['_images/oee-kpi-new-root-node-picker.png', 'Selecting the KPI Root Node'], dtype=object) array(['_images/oee-kpi-and-isa95.png', 'KPI Model and ISA-95 Model Side by Side'], dtype=object) array(['_images/oee-in-visualkpi.png', 'OEE in VisualKPI'], dtype=object)]
docs.inmation.com
JPath::isOwner Description Method to determine if the script owns the path. public static function isOwner ($path) - Returns boolean True if the PHP script owns the path passed - Defined on line 188 of libraries/joomla/filesystem/path.php - Since See also JPath::isOwner source code on BitBucket Class JPath Subpackage Filesystem - Other versions of JPath::isOwner User contributed notes Code Examples
https://docs.joomla.org/API17:JPath::isOwner
2022-09-24T20:07:39
CC-MAIN-2022-40
1664030333455.97
[]
docs.joomla.org
Multisig How it works The Multisig is implemented by a 6-of-9 multi-signature wallet. The members of the multi-signature wallet were voted in by YFI holders and are subject to change from future governance votes. Specific powers are delegated to the governance multisig, as defined by Governance 2.0. More information about Yearn governance and how it interacts with the multisig can be found on and on the FAQ. The multisig is implemented as a Gnosis Safe. The multisig's assets, transactions, and signers can be viewed using Gnosis's Web UI. If there is a need to trustlessly audit Yearn's multisig (without trusting the Gnosis site), the Gnosis Safe web app source code can be found on Github here. Multisig membership can be validated from the Gnosis UI here. Cryptographic membership attestations can be validated against the PGP keys in the yearn-security repository. 0x0Cec743b8CE4Ef8802cAc0e5df18a180ed8402A7 - Milkyklim (Yearn Finance) Membership attestation - Etherscan 0x48f2bd7513da5Bb9F7BfD54Ea37c41650Fd5f3a3 - Devops199fan (Saddle Finance, eGirl Capital, Venture DAO) Membership attestation - Etherscan 0x6E83d6f57012D74e0F131753f8B5Ab557824507D - Vasily Shapovalov (p2p.org) Membership attestation - Etherscan 0x6F2A8Ee9452ba7d336b3fba03caC27f7818AeAD6 - Mariano Conti (nanexcool.com, prev. MakerDAO) Membership attestation - Etherscan 0x7321ED86B0Eb914b789D6A4CcBDd3bB10f367153 - Leo Cheng (C.R.E.A.M. Finance) Membership Attestation - Etherscan 0x74630370197b4c4795bFEeF6645ee14F8cf8997D - cp287 (cp0x.com) Membership attestation - Etherscan 0x757280Bd46fC5B1C8b85628E800c443525Afc09b - Ryan Watkins (Messari) Etherscan 0x7A1057E6e9093DA9C1D4C1D049609B6889fC4c67 - Banteg (Yearn Finance) Membership attestation - Etherscan 0x99BC02c239025E431D5741cC1DbA8CE77fc51CE3 - Daryl Lau (Not3Lau Capital) Membership attestation - Etherscan History - May 2021 - YIP-62: Change Two Multisig Signers - April 2021 - YIP-61: Governance 2.0 - August 2020 - YIP-41: Temporarily Empower Multisig - August 2020 - YIP-40: Replace inactive multisig signers
https://docs.yearn.finance/security/multisig
2022-09-24T20:32:55
CC-MAIN-2022-40
1664030333455.97
[]
docs.yearn.finance
User manual Click on any of the links below to learn more about the different sections that make up Reconmap. This is a collaborative user manual, if you spot any typo, or just want to help expand the notes head out to the Github repo where this is hosted and either send a pull request or create an issue so that we can fix it. Thanks! Table of contents - Browser requirements - CLI requirements - Clients - Command line interface - Commands - Dashboard - Documents - Keyboard shortcuts - Notes - Organisation - Projects - Reports - Search - System - Tasks - Users - Vulnerabilities
https://docs.reconmap.com/user-manual/
2022-09-24T19:09:29
CC-MAIN-2022-40
1664030333455.97
[]
docs.reconmap.com
Datasets
Features
Create and manage better, more robust datasets in a standardised manner.
- manage your:
  - datasets
  - dataset versions
  - dataset splits
  - labels
  - annotations
  - dataset items (images, text)
- import/upload your existing datasets, while preserving:
  - splits
  - labels
  - annotations
  - items
- simply upload images for specific labels in the browser
- browse your data on the web and mobile
- capture (un)labelled images on mobile, even offline, to capture data faster
- annotate your dataset items for:
  - image classification
  - object detection
  - text classification
  - named entity recognition
- download/export a specific dataset version
https://docs.seeme.ai/datasets/
2022-09-24T18:30:15
CC-MAIN-2022-40
1664030333455.97
[]
docs.seeme.ai
Control search concurrency on search head clusters
Controlling search concurrency on search head clusters requires an understanding of two aspects of job allocation:
- How the captain allocates saved search jobs to cluster members
- How the cluster enforces quotas for concurrent searches
How the captain allocates saved searches
For details of your cluster's scheduler delegation process, view the Search Head Clustering: Scheduler Delegation dashboard in the monitoring console. See Use the monitoring console to view search head cluster status.
How the cluster handles concurrent search limits
For background on search concurrency quotas, see the Reporting Manual. Note that a search running on a member will also fail if srchJobsQuota or srchDiskQuota is exceeded for the user on that member.
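Role-based quotas such as the two named above are normally set in configuration. A minimal sketch of where they live (the stanza and values here are illustrative, not taken from this page; check the Splunk documentation for your version before applying):

# authorize.conf on the search head cluster members
[role_power]
srchJobsQuota = 10      # max concurrent historical searches per user holding this role
srchDiskQuota = 500     # max disk usage in MB for a user's search artifacts

Deploy role configuration consistently to all cluster members (for example, with the deployer) so that every member enforces the same limits.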
https://docs.splunk.com/Documentation/Splunk/8.1.2/DistSearch/SHCjobscheduling
2022-09-24T19:09:20
CC-MAIN-2022-40
1664030333455.97
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Setting up your Vincere integration In order to complete the configuration of your Vincere integration with idibu, you will need to have 'Administrator' status in both Vincere and idibu. There are two sides to the integration set up, and we'll walk through both in this article. Note *The CRM Handoff is the idibu tool we use to forward your candidates into Vincere. Setting up your idibu tab inside Vincere In order to connect idibu and Vincere, you will firstly need to generate your unique ID and Secret tokens inside your idibu account. This is actually very simple and we explain each step in full here. 1. Inside Vincere, go to 'Settings' and click on 'Marketplace'. 2. Click on the idibu panel and the idibu app settings will open. 3. Click on the 'enable idibu' button, and then paste in the idibu ID and secret tokens mentioned above. 4. Click 'Save' at the bottom of the page. 5. When you or your users next open one of your jobs inside Vincere, you will now see the 'idibu' tab. What do I do if I can't see the idibu tab? Your Vincere job needs to be public in order for the idibu tab to appear. To do this: 1. Inside your job, click the 'Actions' button in the top right of the page: 2. Click 'post' in the drop down list, and select the option to make your vacancy public. The idibu tab will then appear. Setting up the idibu CRM handoff Our CRM handoff is the tool that allows you to forward your applicants into Vincere. It also allows you to define the triggers and parameters that you use to forward your applicants. In order to configure our CRM handoff for Vincere, you will need some information from your Vincere account, so we'd advise logging into both Vincere and idibu on separate tabs in your browser before you start. 1. Inside idibu click on 'Settings' and select 'CRM handoff' from the drop down list. 2. Inside the CRM handoff area, select Vincere from the drop down list at the top of the page. 3. Every Vincere account has a unique reference in the URL, and the first step is to input this reference into the field inside idibu. Your unique reference is the text that appears in this space In the example below, the unique reference for our account is 'idibudemo'. When you have identified your unique reference, type it into the 'Vincere domain' field in idibu, as per the second screenshot. 4. Inside Vincere, go to 'Settings' and click on 'Marketplace'. 5. Click on the API tab at the top of the page where this information is held. This is where you will access the information required for idibu. 6. In the 'API Key' area, highlight and copy your 'API token' from Vincere. 7. Now paste this into the corresponding API field in idibu. 8. Do the same with your Client ID. 9. Now we need to work the other way round, taking some data from idibu and pasting it into Vincere. Inside the idibu CRM handoff page, highlight and copy the value in the grey 'Callback URL' field. (Make sure you scroll right the way across to highlight all the characters before copying.) 10. Paste this into same field in the Vincere API page. 11. Back in your idibu CRM handoff page, click on 'log into Vincere'. 12. You'll be taken to a new page where you can log in with your admin credentials. Note You will need to type in your password manually (autofill services are not recognised here). 13. You can now complete the rest of the CRM handoff settings to define the triggers for sending your applicants from idibu into Vincere. 14. When you have completed this process, click 'Save Settings'.
https://v3-docs.idibu.com/article/731-setting-up-your-vincere-integration
2022-09-24T19:55:47
CC-MAIN-2022-40
1664030333455.97
[array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/2893b210-edaf-43e7-aa32-e553d3222fbe/2018-11-02_11-16-34.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/6c801e16-fe36-4de0-83eb-baba54b494b0/2018-11-02_11-29-38.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/b2eb3a2d-d508-4295-8d98-a2904c22745c/2018-11-02_11-36-03.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/35391083-59f3-4f80-b0cb-24f5e0893455/2018-11-02_11-40-34.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/5097ed43-8657-447e-bfa1-137d44532b68/2018-11-02_11-42-29.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/3edc2105-e1c1-4ce1-8cc1-5329caeb9939/2018-11-02_11-46-32.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/990a47de-2980-467a-9f67-52cb0c38bbe5/2018-11-02_11-51-28.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/d292e04e-f6ba-49e8-9693-ccbafc3d9558/2018-11-02_11-54-02.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/f3b3df06-6dfe-4049-8b0e-9938e6c4a8d8/2018-11-02_11-56-39.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/2b6dc724-a4ae-4e27-a142-15f9ec57112b/2018-11-02_11-54-02.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/2893b210-edaf-43e7-aa32-e553d3222fbe/2018-11-02_11-16-34.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/18698067-1c09-4b77-9ca5-cdff9ffd7fdd/2018-11-02_12-21-58.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/19503220-2d2f-4944-a2bc-4adeccbb3164/2018-11-02_12-23-46.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/92769cb2-1b9a-4af0-89dc-126ffd5aa906/2018-11-02_12-26-28.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/db9dc02f-a46f-4ccd-8f5f-9db6934b2f7e/2018-11-02_12-29-01.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/1cc084f5-24ca-45dc-9d84-4669d88dfd1f/2018-11-02_12-30-11.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/b51af091-0e86-402d-83c7-893aa382693c/2018-11-02_12-36-46.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/4d24d7a6-1d3e-4cde-95b8-c4c3bacb2cc0/2018-11-02_12-29-01.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/7f7b7200-f84e-441e-b2d5-fe749cdc8b6b/2018-11-02_12-47-59.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/ed7ae118-020d-4c43-9219-a3ffdc4a52df/Screen%20Shot%202018-10-30%20at%205.57.23%20pm.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/9a8597c2-7c7b-4b0b-9259-b8b31ba6969b/2018-11-02_12-45-15.png', None], dtype=object) array(['https://content.screencast.com/users/timidibu/folders/Snagit/media/57338070-c8b4-4570-9a73-c45763b9f7d0/2018-11-02_12-49-37.png', None], dtype=object) ]
v3-docs.idibu.com
Advanced MetaBot summary and best practices
In this task, 2 DLLs were integrated into a MetaBot and Logic was developed to perform operations with an external library and a REST API.
Why use MetaBots?
- Create independent, highly reusable automation blueprints of applications, DLLs, and commands facilitated by automation Logic.
- Leverage the MetaBot library to rapidly standardize org-wide automation.
- Ensure systematic, accelerated automation return on investment (ROI).
- Eliminate common navigational errors in complex automation tasks.
- Automate without requiring access to a live application.
- Calibrate newer versions of applications used in a MetaBot to ensure compatibility.
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v11.3/page/enterprise/topics/aae-developer/aae-meta-bot-two-summary-and-best-practices.html
2022-09-24T19:18:36
CC-MAIN-2022-40
1664030333455.97
[]
docs.automationanywhere.com
Welcome to Chai
Welcome to the Chai.xyz Documentation
Welcome to Chai! Focused on user experience, security, and innovation, our mission is to build a comprehensive DeFi Hub for the Aurora ecosystem. Chai is backed by Stealth Capital, the NEAR Foundation, and over 15 institutional and private investors. Our team consists of 15 experienced professionals led by co-founder and CEO Marin Zvo, co-founding Chairman Dean Thomas, and co-founding Advisor Alex Ng, who collectively bring more than 20 years of experience in the tech and blockchain spaces. Chai provides users an affordable, secure and fully EVM-compatible medium to effortlessly swap, lend, borrow, and earn yield on their digital assets. Powered by NEAR protocol's Aurora, Chai allows for ~10X higher throughput compared to traditional L1s, lightning-fast transactions, affordable gas fees at an average execution cost of $0.01, and much more!
Core Values — Security and Integrity
At a point in time rampant with hacks and uncertainty resulting in the loss of millions of dollars for DeFi users, Chai values the security of investor funds above all else. Our experienced team of developers strives to provide users with a safe and easy-to-use platform. Our protocol has been rigorously audited by the blockchain security companies Peckshield and DEKALabs, and by several independent experts. While trustless decentralization is the goal, we believe that acting with integrity gives our users peace of mind. This is why our team is public; we personally stand behind Chai and the product we create. The team at Chai pledges to act in the best interest of users and investors. We're not here to sell you a product, we're here to build a community.
Product Suite
Chai Money Market
Users are able to lend out their cryptocurrencies by depositing them into Chai pools, and receive chTokens in return; these chTokens can be redeemed for the underlying coin at any time for more than initially deposited, which is how interest is paid. Users can similarly take overcollateralized loans to borrow against their cryptoassets.
Chai DEX
One-stop decentralized trading, liquidity providing and farming on Aurora. Chai DEX is an automated market maker (AMM), and together with Chai Money Market, is focused on becoming a cornerstone DeFi product to drive platform growth.
Chai StableSwap
Our StableSwap is an AMM (like Chai DEX) but specializes in swapping between stablecoins and pegged assets at the best swap rates with virtually zero slippage. Unlike usual swap fees of 0.2% to 0.3%, our StableSwap offers the lowest swap fees on Aurora. At launch, the Chai StableSwap will support the most liquid stablecoins such as USDC, USDT and DAI. Similar to our DEX, LPs can earn $CHAI tokens by supplying stablecoins to the underlying pools.
Roadmap
https://docs.chai.xyz/
2022-09-24T18:29:29
CC-MAIN-2022-40
1664030333455.97
[]
docs.chai.xyz
Joystick API
Description
Joystick API.
Introduction
The joystick driver is a platform board-level software module that provides the functionality to initialize and read the joystick position through the joystick hardware present on the Wireless Pro Kit (BRD4002A).
Configuration
The joystick driver allows configuring the rate of signal acquisition. This can be configured from the joystick component; available options are:
- 100 samples/second
- 1000 samples/second
- 5000 samples/second
- 10000 samples/second
- 25000 samples/second
The joystick driver also allows configuring the voltage values which correspond to a particular joystick position.
Usage
Once the joystick handle of type sl_joystick_t is defined, joystick functions can be called being passed the defined handle. The functions include the following: sl_joystick_init must be called, followed by sl_joystick_start, before attempting to read the position of the joystick. The sl_joystick_get_position is used to update the position, and needs to be called from a tick function or similar by the user. Basic example code showing usage of the joystick driver is sketched at the end of this section.
Function Documentation
◆ sl_joystick_init()
Joystick init. This function should be called before calling any other Joystick function. Sets up the ADC.
- Parameters
- Returns Status Code:
  - SL_STATUS_OK Success
  - SL_STATUS_ALLOCATION_FAILED Bus Allocation error
◆ sl_joystick_get_position()
Get joystick position.
- Parameters
- Returns Status Code:
  - SL_STATUS_OK Success
  - SL_STATUS_NOT_READY Joystick acquisition not started error
◆ sl_joystick_start()
Start Analog Joystick data acquisition.
- Parameters
◆ sl_joystick_stop()
Stop Analog Joystick data acquisition.
- Parameters
Macro Definition Documentation
◆ ENABLE_SECONDARY_DIRECTIONS
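A minimal usage sketch in C, assembled from the calls documented above (the handle initialization, the position type name, and the exact parameter lists are assumptions based on the descriptions here, not copied from the SDK):

#include "sl_joystick.h"

// Handle for the joystick instance.
static sl_joystick_t joystick_handle;

void joystick_app_init(void)
{
  sl_joystick_init(&joystick_handle);   // sets up the ADC
  sl_joystick_start(&joystick_handle);  // starts data acquisition
}

// Call periodically, e.g. from a tick function.
void joystick_app_process_action(void)
{
  sl_joystick_position_t position;      // assumed position type name

  if (sl_joystick_get_position(&joystick_handle, &position) == SL_STATUS_OK) {
    // React to the reported position here.
  }
  // SL_STATUS_NOT_READY means acquisition has not been started yet.
}

Call sl_joystick_stop(&joystick_handle) when readings are no longer needed.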
https://docs.silabs.com/gecko-platform/4.1/hardware-driver/api/group-joystick
2022-09-24T20:29:10
CC-MAIN-2022-40
1664030333455.97
[]
docs.silabs.com
Setting your profile picture avatar Avatars (or "Profile pictures") in Tuple are set using a service called Gravatar. If you're not familiar, configuring your Gravatar (for free) lets any service retrieve that avatar knowing only your email address. Setting an avatar makes it easier for your colleagues to find you, so we recommend taking the time to do it. As a bonus, services you sign up for in the future can pull in your avatar automatically, saving you steps later. Why not do it right now? Note: You will need to create the Gravatar account with the same email address as your Tuple account in order for Tuple to be able to sync the image.
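For context on why the email addresses must match: Gravatar serves images from a URL derived from a hash of the email address, so any service that knows your email can fetch the same picture. A small illustrative sketch of that public lookup scheme (not from the Tuple docs; the size parameter is just an example):

import hashlib

def gravatar_url(email: str, size: int = 200) -> str:
    # Gravatar hashes the trimmed, lower-cased address.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}"

print(gravatar_url("you@example.com"))

If the address you use in Tuple differs from the one registered with Gravatar, the hashes differ and no avatar is found.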
https://docs.tuple.app/article/7-avatars
2022-09-24T18:45:45
CC-MAIN-2022-40
1664030333455.97
[]
docs.tuple.app
SchedulerDataStorage.AppointmentDeleting Event Allows you to cancel the deletion of an appointment. Namespace: DevExpress.XtraScheduler Assembly: DevExpress.XtraScheduler.v20.1.dll Declaration public event PersistentObjectCancelEventHandler AppointmentDeleting Public Event AppointmentDeleting As PersistentObjectCancelEventHandler Event Data The AppointmentDeleting event's data class is PersistentObjectCancelEventArgs. The following properties provide information specific to this event: Remarks The AppointmentDeleting event is raised before an appointment is deleted and allows you to cancel the deletion of an appointment. The event parameter's PersistentObjectEventArgs.Object property allows the processed appointment to be identified. To prevent it from being deleted, set the PersistentObjectCancelEventArgs.Cancel property to true. IMPORTANT Do not modify a persistent object for which the event is raised in the event handler's code.
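A short C# sketch of a handler wired to this event (the storage variable and the cancellation rule are illustrative assumptions, not part of the reference above):

using DevExpress.XtraScheduler;

// storage is an existing SchedulerDataStorage instance in your application.
storage.AppointmentDeleting += (sender, e) =>
{
    // PersistentObjectCancelEventArgs.Object holds the appointment being deleted.
    Appointment apt = (Appointment)e.Object;

    // Example rule: block deletion of appointments flagged in the subject.
    if (apt.Subject != null && apt.Subject.StartsWith("[LOCKED]"))
    {
        e.Cancel = true;   // keep the appointment
    }
};

As the remark above says, only inspect the appointment here; do not modify it inside the handler.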
https://docs.devexpress.com/WindowsForms/DevExpress.XtraScheduler.SchedulerDataStorage.AppointmentDeleting
2020-09-18T11:14:36
CC-MAIN-2020-40
1600400187390.18
[]
docs.devexpress.com
How to Create a Facebook App In order to connect a Facebook profile or page to SkyePress you will first need to have a Facebook App for your website. To create a new App navigate to Facebook’s Developer website and log in. If you are not registered as a developer click on the “Register” button found in the top right corner of the page. You will be prompted with a message to accept Facebook’s policies. Toggle the switch to “Yes” in order to accept the policies and then click the “Next” button. If your account is not verified you will be prompted to verify it. Once you’ve entered the Confirmation Code, click on the “Register” button. Once registered as a developer you can start creating Facebook Apps. Click the “Create App” button to start the process. You will be prompted with a dialog box where you will be able to specify a name for your app and also a contact email. After completing the fields click the “Create App ID” button. Your Facebook App is now created and ready to use. Navigate to your app’s Dashboard page. Here you will find your App ID and Secret key which you can copy to SkyePress in order to connect a Facebook profile or page.
https://docs.devpups.com/skyepress/how-to-create-a-facebook-app/
2020-09-18T10:05:03
CC-MAIN-2020-40
1600400187390.18
[array(['https://docs.devpups.com/wp-content/uploads/2016/12/create-facebook-app-1-1024x584.png', 'create-facebook-app-1'], dtype=object) array(['https://docs.devpups.com/wp-content/uploads/2016/12/create-facebook-app-2-1024x583.png', None], dtype=object) array(['https://docs.devpups.com/wp-content/uploads/2016/12/create-facebook-app-3-1024x583.png', None], dtype=object) array(['https://docs.devpups.com/wp-content/uploads/2016/12/create-facebook-app-4-1024x583.png', None], dtype=object) array(['https://docs.devpups.com/wp-content/uploads/2016/12/create-facebook-app-5-1024x584.png', 'create-facebook-app-5'], dtype=object) array(['https://docs.devpups.com/wp-content/uploads/2016/12/create-facebook-app-6-1024x584.png', 'create-facebook-app-6'], dtype=object) array(['https://docs.devpups.com/wp-content/uploads/2016/12/create-facebook-app-7-1024x584.png', 'create-facebook-app-7'], dtype=object) ]
docs.devpups.com
Catalog Menu The Catalog Menu provides easy access to product creation, category, and inventory management tools, as well as shared catalogs for custom pricing in B2B stores. Catalog Menu On the Admin sidebar, click Catalog. Products Create new products of every type and manage your inventory. See Products Grid. Categories Create the category structure that is the foundation of your store’s navigation. See Categories. Shared Catalogs Shared catalogs give you the ability to make custom pricing available to different companies.
https://docs.magento.com/user-guide/catalog/catalog-menu.html
2020-09-18T11:50:53
CC-MAIN-2020-40
1600400187390.18
[]
docs.magento.com
Firewalls
Yes. They will be there in the near future; this part is not our current focus. The important thing is that everything is blocked when the Pi fires up in production mode, and, somehow, only some base services will be allowed to connect to the DMZ address.
https://docs.tornevall.net/pages/diffpagesbyversion.action?pageId=62423132&selectedPageVersions=11&selectedPageVersions=10
2020-09-18T10:40:36
CC-MAIN-2020-40
1600400187390.18
[]
docs.tornevall.net
What is an API?
The term API stands for Application Programming Interface. It's a communication protocol between a client and a server intended to simplify the exchange of each other's data. The communication will be in a specified format or initiate a defined action, so as to fit perfectly into each other's software. An API Key is the unique identifier used to authenticate a user or a program that is calling upon the API to share information with it.
What does the API mean for users of our Platform?
It means you can use the API to easily send your own data to be combined with the data already on our platform. This could be data derived from your ERP or CRM system, or from your sensors and measurement systems.
How to use our API and API key
- Generate an API Key
Since the API is in a public beta, the API key has to be requested via mail at [email protected]. You can contact us by sending an email with the subject "Calculus API Key Request". Let us know in the mail for which company you need the API key and what you will be using it for. Please note! If the API key were to get compromised, you can send us an email on the same address to revoke the key.
- Use the API
The API endpoint to use when posting data to our platform is "". Calls are done using the HTTP action "POST".
2.1 Query parameters
A typical call would be a POST call to ""
2.2 Headers
In the headers we send the API Key to the cloud to authenticate your device.
2.3 JSON body
The body of the request is formatted in JSON. 'check-deltas' and 'reset-deltas' are used when values can overflow and reset to 0. If you want the correct cumulative values, this can be used to calculate these in our backend. Contact us for more information on how to implement this at [email protected].
The following values can be set (see the example body below):
Example JSON body:
{
  "device":"prefix_TestDevice",
  "check-deltas":true,
  "reset-deltas":true,
  "timestamp":1573206752,
  "measurements":[
    { "path": "measurement_1", "value": 1 },
    { "path": "measurement-2", "stringValue":"OFF" },
    { "path": "temp|measurement.3", "value": 9.4, "stringValue":"BROKEN" },
    { "path": "temp|measurement.4", "value": 11.4, "stringValue":"OVERLOAD" }
  ]
}
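A small client-side sketch of such a POST call in Python (the endpoint URL is left blank in the page above, so the URL below is a placeholder, and the exact header name for the API key is an assumption; confirm both with Calculus):

import time
import requests

API_KEY = "your-api-key"
ENDPOINT = "https://<calculus-endpoint>"   # placeholder; use the real endpoint you were given

payload = {
    "device": "prefix_TestDevice",
    "check-deltas": True,
    "reset-deltas": True,
    "timestamp": int(time.time()),
    "measurements": [
        {"path": "measurement_1", "value": 1},
        {"path": "measurement-2", "stringValue": "OFF"},
    ],
}

# The API key travels in a request header (header name assumed here).
response = requests.post(ENDPOINT, json=payload, headers={"x-api-key": API_KEY})
response.raise_for_status()
print(response.status_code)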
http://docs.calculus.group/knowledge-base/how-to-use-our-api/
2020-09-18T11:48:19
CC-MAIN-2020-40
1600400187390.18
[]
docs.calculus.group
Publishing and unpublishing forms and documents
AEM Forms lets you create, publish, and unpublish forms easily. For more information about AEM Forms, see Introduction to managing forms. The AEM Forms server provides two instances: Author and Publish. The Author instance is for creating and managing form assets and resources. The Publish instance is for keeping assets and related resources that are available to end users. You can import XDP and PDF forms in the Author mode. For more information, see Getting XDP and PDF documents in AEM Forms.
Supported assets
AEM Forms supports the following types of assets:
- Adaptive forms
- Adaptive documents
- Adaptive form fragments
- Themes
- Form templates (XFA forms)
- PDF forms
- Document (flat PDF documents)
- Form Sets
- Resource (images, schemas, and stylesheets)
Initially, all the assets are available only in the Author instance. An administrator or form author can publish all the assets except resources. When you select a form and publish it, its related assets and resources are also published. However, dependent assets are not published. In this context, related assets and resources are assets that a published asset uses or refers to. Dependent assets are assets that refer to a published asset. Your related assets can include the following:
- Custom layouts
- Custom appearances
- CSS File - taken as input in the Adaptive Form container properties dialog
- Client Library Category - taken as input in the Adaptive Form container properties dialog
- Any other client library which may have been included as part of the Adaptive Form template.
- Design Paths
Asset states
For Forms Manager, if the user does not have permission to publish the listed assets, the Publish action is disabled. An asset that requires extra permissions is shown in red. After an asset is published, the metadata properties of the asset are copied to the Publish instance and the status of the asset is changed to Published. The status of dependent assets that are published is also changed to Published. After publishing an asset, you can use the Forms Portal to display all the assets on a web page. For more information, see Introduction to publishing forms on a portal.
Schedule publishing and unpublishing for Forms & Documents
AEM Forms lets you schedule asset publishing and unpublishing for Forms & Documents. You can specify the schedule in the Metadata Editor. For more information about managing form metadata, see Managing form metadata. Follow these steps to schedule the date and time of publishing and unpublishing Forms & Documents assets:
- Do one of the following and then tap unpublish:
- If you are in the card view, tap Enter Selection, and tap the asset. The asset is selected.
- If you are in the list view, hover over an asset and tap . The asset is selected.
- Tap an asset to display its details.
- Display an asset's properties by tapping View Properties.
- When the Unpublish process starts, a confirmation dialog appears. Tap Unpublish. Only the selected asset is unpublished, and its child and referenced assets, if any, are not unpublished.
Revert an asset or letter to the previously published version
Every time you publish an asset or letter after editing it, a version of the asset or letter is created. You can revert an asset or letter to a previously published version. You may need to do so if something goes wrong with the current version of the asset or document. Only the selected asset is deleted, and the dependent assets are not deleted.
To check references of an asset, tap and then select an asset. If the asset you are attempting to delete is a child asset of another asset, it is not deleted. To delete such an asset, remove references to this asset from other assets and then retry.
Protected adaptive forms
You can enable authentication for forms you want selected users to access. When you enable authentication for your forms, users see a login screen before accessing them. Only users with credentials that are authorized can access the forms. To enable authentication for your forms:
- In your browser, open configMgr in the publish instance. URL: https://<hostname>:<PublishPort>/system/console/configMgr
- In the Adobe Experience Manager Web Console Configuration, click Apache Sling Authentication Service to configure it.
- In the Apache Sling Authentication Service dialog that appears, use the + button to add paths. When you add a path, the authentication service is enabled for forms in that path.
https://docs.adobe.com/content/help/en/experience-manager-65/forms/publish-process-aem-forms/publishing-unpublishing-forms.html
2020-09-18T11:55:51
CC-MAIN-2020-40
1600400187390.18
[]
docs.adobe.com
Introduction to Managing and Running Tests with Team System
by Eric Lee
In this video we see how you can manage all of your various test cases using the test management capabilities of Visual Studio Team System. We will also see how tests are run and configured.
▶ Watch video (8 minutes)
https://docs.microsoft.com/en-us/aspnet/web-forms/videos/vs-2005/introduction-to-managing-and-running-tests-with-team-system
2020-09-18T11:57:12
CC-MAIN-2020-40
1600400187390.18
[]
docs.microsoft.com
ModelNode
from panda3d.core import ModelNode
- class ModelNode
This node is placed at key points within the scene graph to indicate the roots of "models": subtrees that are conceptually to be treated as a single unit, like a car or a room, for instance. It doesn't affect rendering or any other operations; it's primarily useful as a high-level model indication. ModelNodes are created in response to a <Model> { 1 } flag within an egg file.
Inheritance diagram
- enum PreserveTransform
- static getClassType() → TypeHandle
getPreserveAttributes() → int
Returns the current setting of the preserve_attributes flag. See setPreserveAttributes().
getPreserveTransform() → PreserveTransform
Returns the current setting of the preserve_transform flag. See setPreserveTransform().
setPreserveAttributes(attrib_mask: int) → None
Sets the preserve_attributes flag. This restricts the ability of a flatten operation to affect the render attributes stored on this node. The value should be the union of bits from SceneGraphReducer.AttribTypes that represent the attributes that should not be changed.
setPreserveTransform(preserve_transform: PreserveTransform) → None
Sets the preserve_transform flag. This restricts the ability of a flatten operation to affect the transform stored on this node, and/or the node itself. In the order from weakest to strongest restrictions, the possible flags are:
PT_drop_node - This node should be removed at the next flatten call.
PT_none - The transform may be adjusted at will. The node itself will not be removed. This is the default.
PT_net - Preserve the net transform from the root, but it's acceptable to modify the local transform stored on this particular node if necessary, so long as the net transform is not changed. This eliminates the need to drop an extra transform on the node above.
PT_local - The local (and net) transform should not be changed in any way. If necessary, an extra transform will be left on the node above to guarantee this. This is a stronger restriction than PT_net.
PT_no_touch - The local transform will not be changed, the node will not be removed, and furthermore any flatten operation will not continue below this node – this node and all descendents are protected from the effects of flatten.
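A brief usage sketch in Python (the node name, the flatten call, and the enum spelling ModelNode.PTLocal are assumptions for illustration; check them against your Panda3D version):

from panda3d.core import ModelNode, NodePath

# Mark a subtree as a model root whose local transform must survive flattening.
model_root = ModelNode("car")
model_root.setPreserveTransform(ModelNode.PTLocal)

car = NodePath(model_root)
car.setPos(10, 0, 0)
# Geometry parented under `car` can now be flattened (e.g. with flattenStrong())
# without this transform being folded away.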
https://docs.panda3d.org/1.10/python/reference/panda3d.core.ModelNode
2020-09-18T11:40:41
CC-MAIN-2020-40
1600400187390.18
[]
docs.panda3d.org
You can either follow the video or the below steps to configure this section in the Vilva Pro theme. Note: Make sure you have installed and activated the BlossomThemes Toolkit Plugin for this section to work as desired. Please follow the below steps to configure the Featured Area Section. - Go to Appearance > Customize > General Settings > Featured Area Section. - Click on “Add a Widget” and choose “Blossom: Image Text”. - Add Title for the featured section and click “Add Image Text”. - Upload the Image, Link Text and Featured Link. The Link Text won’t appear unless you add the Featured Link. - Click Apply. The featured area won’t appear unless you click Apply. - Click “Add Image Text” to add other images in a similar way and click “Add a Widget” to add other featured areas in a similar way. - Click Publish.
https://docs.blossomthemes.com/docs/vilva-pro/homepage-settings/how-to-configure-featured-area-section/
2020-09-18T10:36:00
CC-MAIN-2020-40
1600400187390.18
[array(['https://docs.blossomthemes.com/wp-content/uploads/2019/09/feature-pro.jpg', None], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/09/featured-area-section-widget.png', 'featured area section widget'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/09/add-image-text-featured-area-section.png', 'add image text - featured area section'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/09/featured-area-apply.png', 'featured area- apply'], dtype=object) ]
docs.blossomthemes.com
There are two possible ways of integrating Exponea with Facebook Ads. This article will guide you through both options.
A. Using Exponea's built-in integration to create a custom audience list in Facebook based on matching customers' email addresses or phone numbers tracked in Exponea
B. Placing the Facebook pixel on the website via Exponea Tag Manager and enriching the cookie data with Exponea segmentations or recommended products
Using built-in Facebook integration
We recommend using the built-in Facebook integration for:
- Creating custom or sophisticated audiences for lookalike targeting
- Retargeting customers who might not have visited your website for a while
Step 1: Set up Facebook integration in Exponea
You need a Facebook Business Manager account to enable this integration.
a. In Exponea, go to Data & Assets > Integrations
b. Click on Add new integration
c. Choose Facebook Ads
d. Click on Connect your Facebook account with Exponea
e. Authenticate with your Facebook Business Manager account
Once you authenticate your Facebook account, the integration will require permission to access any Facebook Advertising Account linked to your profile. We recommend creating and using a dedicated Facebook user with permissions restricted to just one advertising account in order to prevent uploading audience data to a wrong advertising account.
f. Save the integration
Step 2: Set up Facebook retargeting nodes in Exponea scenarios
a. Create business logic for assigning customers to various Facebook audiences using the Exponea scenario editor
b. To assign a customer to a specific Facebook audience, add the retargeting node, double-click on it and select Facebook Ads
c. Select the integration that you've created in Step 1
d. Select an audience from the list to add to / remove from, or create a new one. When you create an audience in the retargeting node, it will be available in Facebook Manager immediately, before the scenario has started for the first time.
Furthermore, you can enrich the information Facebook receives about your audience with the Value Based Audiences function, which allows Facebook to calculate lookalike audiences (those that have similar characteristics) from a specific metric (value) provided for each customer sent into retargeting nodes. This value is usually connected to Customer Lifetime Value (CLV) or total revenue for a given customer. In other words, it has an impact on the conversion rate and on Facebook return on investment (how efficiently the ad spend converts).
e. Click on "Advanced settings: Customer matching" to configure which Exponea attributes will be used to match the customers. You can use the customer's Email, Phone, Facebook ID, Mobile advertiser ID, and External ID (Beta feature). You can match customers in retargeting nodes based on the Exponea cookie (this allows you to create retargeting audiences even for customers who are not yet identified with an email or phone number).
Retargeting nodes track events automatically. This enables simple evaluations of retargeting scenarios that contribute to the single customer view of a customer.
External ID Matching (beta function)
External ID matching requires its own custom setup. You need to track this ID to Facebook beforehand, by setting up the special tracking FB matching pixel that you can find in the Tag Manager pre-set. External ID matching is executed inside of the Facebook platform and currently it is not working as expected; created audiences are too low to be included. The matching pixel snippet looks as follows:
fbq('init', '[[FacebookPixelID]]', {
extern_id: '{{ customer_ids.cookie if customer_ids.cookie is string else customer_ids.cookie | last }}',
});
fbq('track', 'PageView');
Remove customers from a FB audience
In step 2 d) above, select the action to perform - "remove from the audience". In case the customer asks for the removal of all his data, there is an option in the Facebook retargeting node to remove a customer from ALL Facebook Ads audiences.
Consent policy
Consent policy settings are part of the retargeting node, which simplifies the design of the retargeting scenarios and ensures that only people with proper consent will be pushed to Google/Facebook Ads audiences.
Step 3: Check the received audience in your Facebook account
When the matched number of customers is high, it might take a few hours before you see the segment being populated on Facebook. You might see a message "Low match, populating". This is OK, it will populate eventually.
Using the FB pixel via Exponea Tag Manager
We recommend using the Facebook Pixel with Exponea Tag Manager for:
- Creating custom or sophisticated audiences for retargeting
Step 1: Set up the Main Facebook Pixel in Exponea Tag Manager
a. In Exponea, go to Data & Assets > Tag Manager
b. Create a new Custom HTML tag
c. Copy and paste the Facebook Pixel code from your Facebook Ads account. The Pixel ID needs to be replaced in two places (in the <script> part and in the <noscript> part).
Step 2: Track an event for a custom segment via Tag Manager
a. Add the following snippet as a separate tag
<script> {% set score_segment = segmentations['YOUR-SEGMENTATION-ID-HERE'] | string %} fbq('trackCustom', 'addToSegment', { segment_name: '{{ score_segment | string }}' }); </script>
b. Make sure that the custom tag you set up has a lower priority than the Main Facebook Pixel
c. Check your new event on your Facebook Ads account
The Facebook Ads tool uses a different attribution model from Exponea's, and thus your Facebook advertising data may not match up with Exponea data. Exponea carries out the hashing when sending the data to Facebook using the SHA256 hash.
External ID matching is executed inside of Facebook platform and currently it's not working as expected, created audiences are too low to be included in the', '[[FacebookPixelID]]', { extern_id: '{{ customer_ids.cookie if customer_ids.cookie is string else customer_ids.cookie | last }}', }); fbq('track', 'PageView'); Remove customers from a FB audience In step 2 d) above, select the action to perform - "remove from the audience". In case the customer asks for the removal of all his data, there is an option in Facebook retargeting node to remove a customer from ALL Facebook Ads audiences. Consent policy Consent policy settings is part of the retargeting node, which simplifies the design of the retargeting scenarios and ensures that only people with proper consent will be pushed to Google/Facebook Ads audiences. Step 3: Check the received audience in your Facebook account When the matched number of customers is high, it might take a few hours before you see the segment being populated on Facebook. You might see a message “Low match, populating”. This is OK, it will populate eventually. Using the FB pixel via Exponea Tag Manager We recommend using Facebook Pixel with Exponea Tag Manager for: - Creating custom or sophisticated audiences for retargeting Step 1: Set up Main Facebook Pixel in Exponea Tag Manager a. In Exponea, go to Data & Assets > Tag Manager b. Create a new Custom HTML tag c. Copy and paste the Facebook Pixel code from your Facebook Ads account. The Pixel ID needs to be replaced in two places (in the <script> part and in the --> Step 2: Track an event for a custom segment via Tag Manager a. Add the following snippet as a separate tag <script> {% set score_segment = segmentations['YOUR-SEGMENTATION-ID-HERE'] | string %} fbq('trackCustom', 'addToSegment', { segment_name: '{{ score_segment | string }}' }); </script> b. Make sure that the custom tag you set up has lower priority than the Main Facebook Pixel c. Check your new event on your Facebook Ads account Facebook Ads tool uses different attribution model from Exponea's, and thus your Facebook advertising data may not match up with Exponea data. Exponea carries out the hashing when sending the data to Facebook according to SHA256 hash. Updated about a month ago
https://docs.exponea.com/docs/facebook-ads-integration
2020-09-18T09:50:07
CC-MAIN-2020-40
1600400187390.18
[array(['https://files.readme.io/6a33c96-image1.png', 'image1.png'], dtype=object) array(['https://files.readme.io/6a33c96-image1.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/7701d27-Facebook_scenario.png', 'Facebook scenario.png'], dtype=object) array(['https://files.readme.io/7701d27-Facebook_scenario.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/d209a4a-Lookalike_audience.png', 'Lookalike audience.png'], dtype=object) array(['https://files.readme.io/d209a4a-Lookalike_audience.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/42a3959-Customer_matching.png', 'Customer matching.png'], dtype=object) array(['https://files.readme.io/42a3959-Customer_matching.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/22f4c08-Remove_facebook.png', 'Remove facebook.png'], dtype=object) array(['https://files.readme.io/22f4c08-Remove_facebook.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/2dbad27-Consent_facebook.png', '°Consent facebook.png'], dtype=object) array(['https://files.readme.io/2dbad27-Consent_facebook.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/4aac99c-Facebook_pixel.png', 'Facebook pixel.png'], dtype=object) array(['https://files.readme.io/4aac99c-Facebook_pixel.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/7198877-Facebook_Ads_1.png', 'Facebook Ads 1.png'], dtype=object) array(['https://files.readme.io/7198877-Facebook_Ads_1.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/900b7cc-Facebook_Ads_2.png', 'Facebook Ads 2.png'], dtype=object) array(['https://files.readme.io/900b7cc-Facebook_Ads_2.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3ee84db-Facebook_Ads_3.png', 'Facebook Ads 3.png'], dtype=object) array(['https://files.readme.io/3ee84db-Facebook_Ads_3.png', 'Click to close...'], dtype=object) ]
docs.exponea.com
The Transport States table specifies how long each member of the Transports Group spends in one of the Transport States, specified in the People Settings object. For more general information on People Statistics Tables, see the People Statistics Tables topic. At model time zero, one row is added for each member of the Transports Group. The Transport column does not change after time zero. All other columns update continuously as the model runs. At the warmup time, the state data on each Transport object is reset to zero. Consequently, this table will show zeros in all state-based columns at the warmup time.
https://docs.flexsim.com/en/20.0/Reference/PeopleObjects/PeopleTables/TransportStates/TransportStates.html
2020-09-18T10:11:44
CC-MAIN-2020-40
1600400187390.18
[]
docs.flexsim.com
Article sections
We regularly release updates for Easy Social Share Buttons for WordPress. These updates add or improve features, keep the plugin working correctly as social networks change, or fix issues with the plugin. In this article we will cover two update methods for Easy Social Share Buttons for WordPress.
Automatic Updates
Once you have confirmed that your license is activated, go to Dashboard » Updates and check if there is an update available for Easy Social Share Buttons for WordPress. If there is an update, make sure to run it. That's all! Simple as that.
Manual Updates
You can also update Easy Social Share Buttons for WordPress by downloading it manually. You can always download the up-to-date version from your Envato account. The steps you need to complete are the following:
- Navigate to the WordPress plugin list and locate Easy Social Share Buttons for WordPress
- Deactivate and then remove the current version of the plugin
- Install the newly downloaded version. All your settings will remain the same as they were before removing the old version.
Theme Bundled Versions Update
If you received the plugin bundled inside a theme, you need to request the latest version from the theme author or receive it as an update with a theme installation. The updated plugin version can be installed manually as described above, or using the theme's automatic update mechanism for plugins.
https://docs.socialsharingplugin.com/knowledgebase/how-to-update-easy-social-share-buttons/
2020-09-18T09:47:15
CC-MAIN-2020-40
1600400187390.18
[]
docs.socialsharingplugin.com
Creating Your Factory
The Community Factory is provided by Foundries.io, and we define and maintain the content of the securable bootloader, Linux microPlatform image, and the Docker App Store. It allows you to download the public images and artifacts for a variety of common target boards, and will automatically update your device with updates to any of the components. However, if you would like to change any of these components, you will need to create a factory of your own. A factory will allow you to tailor the software to a specific device, use-case, or product. Factories are privately hosted, so only members of your factory can access the source code, Docker registry, and CI system. Below is a list of features a factory provides you.
Define your own Docker App Store
Create your own containers and Docker Apps. A build system is provided to build and publish your images for multiple architectures, and allows you to deliver them incrementally to any devices registered with your factory.
Customize the Linux microPlatform
You are in control of the Linux microPlatform in your factory. Choose a different Linux kernel version, add drivers, packages, or even create a new target. Any builds produced from your factory will be presented to registered devices, allowing you to manage these deployments using OTA Device Tags at scale.
Hardware Testing (Enterprise Edition Only)
Test images on your hardware before deploying them to a production environment.
https://docs.foundries.io/latest/community-factory/create-factory.html
2020-02-17T00:16:48
CC-MAIN-2020-10
1581875141460.64
[]
docs.foundries.io
Exam policies and FAQs Around the world, partners and customers look to Microsoft to deliver the highest quality exams and certifications. The Microsoft Certification exam policies have been developed to support the goals of the certification program, including: Security and retake policies Microsoft has specific policies in place that address the areas of security pertinent to Microsoft Certified Professional (MCP) exams. Candidate bans: - Falsifying score reports, by modifying and/or altering the original results/score reports for any exam record - Cheating during the exam (such as looking at the monitors of other exam takers, using an unauthorized device, or looking at notes) - - Misconduct as determined by statistical analysis - - Disclosing Microsoft intellectual property (IP) - Disseminating actual exam content - Using the exam content in any manner that violates applicable law - Violating the current exam retake policy - Violating the Microsoft non-disclosure agreement (NDA) in any way - Violating the agreement with the exam delivery provider in any way Candidate appeal process Candidates may appeal their ban by submitting an appeal to [email protected]. A candidate may appeal Certified Professional (MCP) exam retake policy - If a candidate does not achieve a passing score on an exam the first time, the candidate must wait at least 24 hours before retaking the exam. - If a candidate does not achieve a passing score the second time, the candidate must wait at least 14 days before retaking the exam a third time. - A 14-day waiting period is also imposed for the fourth and fifth subsequent exam retakes. A candidate may not take a given exam any more than five times per year (12 months). This 12-month period starts the day of the fifth unsuccessful exam retake. - If a candidate achieves a passing score on an exam, the candidate cannot take the exam again. Microsoft reserves the right to make some exams available for retake For a complete list of exams that can be retaken annually, click here. Microsoft Technology Associate (MTA) and Microsoft Certified Educator (MCE) exam retake policy - If a candidate does not achieve a passing score on an exam the first time, the candidate must wait 24 hours before retaking the exam. - If a candidate does not achieve a passing score the second time, the candidate must wait seven days before retaking the exam a third time. - A seven-day waiting period is imposed for each subsequent exam retake. - A candidate may not take a given exam any more than five times per year (12 months). This 12-month period starts the day of the fifth unsuccessful exam retake. The candidate is then eligible to retake the exam 12 months from that date. - If a candidate achieves a passing score on an MTA exam, the candidate cannot take it again. Microsoft Office Specialist (MOS) exam retake policy - Office exam, the candidate may take it again. MCP. Exam-specific retake policy exceptions Testing center closures due to security reasons - Microsoft may suspend testing at any location where we deem there is a security or integrity problem. - Microsoft may suspend testing at sites that are related to test sites that pose security risks. Testing center appeal process Testing center owners can appeal a site closure by submitting an appeal to [email protected]. Data forensics Microsoft will use data forensics as a basis for an enforcement action against a candidate. 
Statistical evidence may demonstrate diminished exam integrity and/or be used to corroborate evidence of improper activity. Exams and scores may be canceled, candidates may be banned, and testing centers may be closed, based on statistical evidence. Out-of-country testing To sit for a Microsoft Certification exam at a Pearson VUE testing center in India, China, or Pakistan, you must be a legitimate resident of that country. If you are a legitimate resident of that country, note the following: Testing centers in these three countries are required to confirm and record that each Microsoft Certification candidate has shown documented proof that he or she is a legitimate resident of that specific country. To verify country of residence status, the candidate is required to present two forms of original (no photocopies), valid (unexpired) IDs—one form as a primary ID (government issued with name, photo, and signature) and one form as a secondary ID (with name and signature). Important: If you are not a legitimate resident of India, China, or Pakistan, you will not be allowed to sit for a Microsoft Certification exam within that country. Candidate retesting at request of Microsoft - Microsoft reserves the right to ask any candidate to retest for any suspected fraudulent activity or anomalous testing patterns at any time. - Retesting will take place at a facility that is selected by Microsoft at a time agreed upon by Microsoft and the candidate. Revoking certifications. Right of exclusion Based on security and integrity concerns, Microsoft reserves the right to exclude specific regions, countries, and testing centers from the Microsoft Certification Program altogether. Beta exams Approximately 400 people can take the beta exam at a reduced rate. To take advantage of this reduced rate, you need a beta code that must be entered as part of your payment during registration. To obtain this code, you can: - Join our SME Profile database. Members whose skills align to the exam content area will receive an email containing this code; this code is unique for this group. Pearson VUE approximately 6 weeks after the exam goes live. These vouchers are provided by VUE and are sent to the email address that you used when you registered for the exam. If you do not receive your 25% discount voucher within 6 weeks of the exam live date, please send an email to [email protected]. Academic pricing on exams Academic pricing on Microsoft Certified Professional (MCP) exams is available in most countries (except India and China). You must verify your student status before scheduling your exam in order to be eligible for academic pricing. Applying student status through your account profile - Select Profile settings from the Account menu at the top of the page. - In the Job function menu, select, or ensure that you have selected, “Student.” - Look for the academic pricing notice that appears next to the Job function menu. If your student status has not yet been validated, click “Get verified” to verify your status. Applying student status when registering for an exam - On the exam for which you want to register, click Schedule exam. - On the Confirm your exam registration details page, ensure that the Job function field displays “Student – Verified.” If it does not, click “Get verified” to validate your status, or click Edit to change your status. Verifying your academic status Select the method you wish to use to verify your status. 
The methods include: - School-issued email account - School network credentials - International Student Identity Card (ISIC) - Verification code from a Microsoft representative or your institution’s administrator - Documentation Non-Disclosure Agreement. Microsoft Certification Program Agreement This Microsoft Certification Program Agreement ("Agreement") is a legal document between you ("you" or "your") and Microsoft ("Microsoft") regarding your participation in the Microsoft Certification Program (“Program”). The terms of this Agreement apply to (a) any Microsoft Certifications you have attained, and (b) your participation in the Program, including your access to and use of any Microsoft Certification Program benefit,. The current list of Microsoft Certifications is located at Microsoft Certifications. - "Microsoft Certification Credential" or "Credential" means the full or abbreviated title of a specific Microsoft Certification that is used to signify an individual’s compliance with the requirements for a specific Microsoft Certification. - "Microsoft Certification Exam" or "Exam" means a Microsoft certification exam designed to help validate an individual’s skills for a particular Microsoft technology, that is the subject of the Exam. MICROSOFT CERTIFICATION To Obtain and Maintain a Microsoft Certification: To obtain and maintain a Microsoft Certification, you must: - Pass all the required Exams and satisfied all certification and recertification requirements for the applicable Microsoft Certification, - Accept the terms and conditions in this Agreement, - Comply with the terms and conditions in the current version training and certification website or. Violation of Exam Agreement All Microsoft Certification Exams, including Exam questions and answers, constitute Microsoft confidential information and are protected by trade secret law and by the Non-Disclosure Agreement and General Terms of Use for Microsoft Certification Exams (“Exam Agreement”) and may not be disclosed to or discussed with others or posted or published in any forum or through any medium. If Microsoft believes you violated the Exam Agreement, or engaged in any fraudulent behavior or misconduct that could diminish or compromise the security or integrity of the Program in any way, you may be decertified and terminated from the Program and permanently ineligible to participate in the Program. PROGRAM BENEFITS - Program Benefits Provided by Third Parties: Some Program benefits may be provided by third parties. You understand and agree that your relationship with respect to those benefits is directly with the third-party and not with Microsoft. Microsoft is not responsible for any Program benefit provided by a third-party and Microsoft does not sponsor or endorse the third-party vendors or its services or products. - Additional Terms: Program benefits may have additional terms, conditions, and licenses. You must accept and comply with any additional terms associated with a Program benefit before you can use that benefit. You may not use a Program benefit if you do not comply with any applicable additional terms, conditions, and licenses. 
USE OF CERTIFICATION CREDENTIALS Grant of Rights: Subject to, and expressly conditioned upon, (a) your compliance with the terms and conditions of this Agreement and the Guidelines, (b) your successful completion of all current requirements for the Microsoft Certification, (c) your continued compliance with all current and applicable certification and recertification requirements, and (d) your acceptance of the Guidelines, Microsoft hereby grants you the right to use the Credential(s). MC ID Number: Microsoft will assign a unique Microsoft Certification ID (MC ID) number that will be used to identify you as a current Program member. This unique MC ID number belongs to Microsoft, and you may only use the MC ID number assigned to you if you are a current Microsoft Certification program member. PRIVACY Personal Information: You acknowledge and agree that Microsoft collects certain information about you to run the Program and that the Microsoft Certifications you have earned, and your Program activities, may be tracked and associated with your personal information. See the Microsoft Online Privacy Statement for more information on how we may collect and use your personal information. Use of Personal Information: You agree that Microsoft may occasionally contact you to invite you to participate in surveys and research. Disclosure of Personal Information: You grant Microsoft the right to share your name, contact information (including email address), employer’s company name, the Credentials you have earned and your status in the Program with (i) other Microsoft programs to verify your and your employer’s compliance with other Microsoft program requirements, and (ii) with Microsoft Affiliates and with the third-party exam delivery providers and testing centers that deliver Microsoft Certification Exams in connection with your participation in the Program. YOUR RESPONSIBILITIES Business Practices: You agree that you will (i) refrain from conduct that could harm the reputation of Microsoft; (ii) avoid deceptive, misleading, or unethical practices; (iii) not make any representations, warranties, or guarantees to customers on behalf of Microsoft; (iv) comply with all applicable U.S. export regulations and other applicable governmental laws and regulations; and (v) comply with copyright and other intellectual property and proprietary rights protections. You may not advertise, promote, imply or suggest in any manner that you are employed by, affiliated with, endorsed or sponsored by Microsoft except to state that you have successfully completed all requirements for the particular Credential(s) you have earned. During the term of this Agreement, you will insert the following language in each contract under which you provide services involving Microsoft technologies: "Microsoft is not a party to this agreement and Microsoft will have no liability whatsoever with respect to the services that are the subject of this contract. The Microsoft Certification Credential indicates that I have successfully completed the requirements for the corresponding Microsoft Certification Credential. The services I provide are not endorsed or sponsored by Microsoft." Some states and countries regulate the use of the term "engineer," and you should comply as applicable with any such laws in the event you attained any Credential related to Microsoft Certified Systems Engineer. 
Transcripts: You are responsible for reviewing your Microsoft Certification transcript to ensure it accurately reflects the Credentials you currently hold. If you believe your transcript is inaccurate, you have up to one (1) year from the date you passed the last Exam necessary to earn or maintain the Credential in question to submit a request to Microsoft to evaluate the fulfillment of any Credential you believed you currently hold that does not appear on your transcript. CHANGES Microsoft reserves the right to (a) update and change the Agreement and Guidelines, (b) change the Program or any aspect of it at any time, including the right to retire Credentials, change certification requirements, and change Program requirements and benefits, and (c) discontinue the Program. Microsoft will post changes on the Microsoft Certification website. You are responsible for checking Microsoft Certified Professional Websites regularly for changes. Changes are effective on the date the changes are posted. Changes do not apply retroactively. NO WARRANTIES MICROSOFT DOES NOT GUARANTEE YOUR SATISFACTION WITH THE PROGRAM OR YOUR RESULTS. MICROSOFT AND ITS AFFILIATES MAKE NO WARRANTIES REGARDING THE PROGRAM, CREDENTIALS, AND HEREBY DISCLAIMS ALL WARRANTIES THAT MIGHT OTHERWISE BE IMPLIED BY LAW. LIMITATION OF LIABILITY TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL MICROSOFT OR ITS AFFILIATES BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, SPECIAL, OR EXEMPLARY DAMAGES ARISING OUT OF OR RELATED TO THE PROGRAM (WHETHER FOR PROGRAM BENEFITS, TERMINATION, OR OTHERWISE), YOUR MICROSOFT CERTIFICATION, FAILURE TO ACHIEVE A MICROSOFT CERTIFICATION, OR THE USE OF OR INABILITY TO USE THE CREDENTIALS. THIS EXCLUSION WILL APPLY REGARDLESS OF THE LEGAL THEORY UPON WHICH ANY CLAIM FOR SUCH DAMAGES IS BASED, EVEN IF THE PARTIES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. INDEMNIFICATION You agree to defend, indemnify, and hold Microsoft and its Affiliates harmless from and against any and all third-party claims, demands, costs, liabilities, judgments, losses, expenses, and damages (“Claim”) (including attorneys' costs and fees) arising out of, in connection with, or related to (a) your participation in the Program; (b) your use of any Credential in a manner which is in any way inconsistent with the terms of this Agreement; (c) the performance, promotion, sale, or distribution of your services; or (d) the termination of this Agreement by Microsoft pursuant to the terms in this Agreement. In the event Microsoft seeks indemnification from you under this provision, Microsoft will promptly notify you in writing of the Claim(s) brought against Microsoft for which it seeks indemnification and, at Microsoft’s discretion, permit you, through counsel acceptable to Microsoft to answer and defend such Claim. You may not settle any Claim on Microsoft’s behalf without first obtaining Microsoft’s written permission, which will not be unreasonably withheld, and you will not publicize the settlement without Microsoft’s prior written permission. Microsoft reserves the right, at its option, to assume full control of the defense of such Claim with legal counsel of its choice. If it so undertakes, any settlement of such Claim requiring payment from you will be subject to your prior written approval. 
You will reimburse Microsoft upon demand for any expenses reasonably incurred by Microsoft in defending such a Claim, including, without limitation, attorneys’ fees and costs, as well as any judgment on or settlement of the Claim in respect to which the foregoing relates. TERMINATION Termination Without Cause: Either party may terminate this Agreement at any time, without cause, on thirty (30) days’ prior written notice to the other party. Termination for Cause: Microsoft may immediately terminate this Agreement upon written notice on any of the following events: - You fail to comply with any applicable Certification or recertification requirements, - You fail to comply with any of the terms of the Agreement or the Guidelines, - You misrepresent your Credential(s), - You engage in misappropriation or unauthorized disclosure of any trade secret or confidential information of Microsoft, - You engage in activities prohibited by law, -, or if Microsoft cancels the Program. Effects of Termination: In all events of termination of this Agreement, all rights granted to you under the Program are immediately terminated. You will immediately: - Cease all activity relating to the Program, - Stop identifying yourself as a participant in the Program, - Cease all use of any Credential, and Program benefit, and - Destroy any associated materials that you have received as part of the Program. Survival: Sections 1 and all other definitions in this Agreement, 2.3, 4.3, 4.4, 8, 9, 10, 11.4, 11.5, and 12 will survive termination of this Agreement. You agree that Microsoft and its Affiliates and subsidiaries will not be liable to you or any third party for costs or damages of any sort resulting from (a) the termination of this Agreement in accordance with its terms, and (b) your suspension from or cancellation of the Program. MISCELLANEOUS Notices: Notices may be provided either by electronic or physical mail. All notices to Microsoft in connection with this Agreement will be sent to the Microsoft contracting entity identified in Section 12.4 below, to the attention of the Microsoft Certification Program. All notices to you in connection with this Agreement will be sent to you at the most recent email address provided by you. It is your responsibility to keep your contact address (email) information with Microsoft updated. No exclusivity: Your participation in this Program is voluntary. Nothing in this Agreement restricts you from supporting, promoting, distributing, or using non-Microsoft technology. Relationship: The parties are independent contractors. This Agreement does not create an employer-employee relationship, partnership, joint venture, or agency relationship, and does not create a franchise. You may not make any representation, warranty, or promise on Microsoft’s behalf. Microsoft Contracting Entity: The Microsoft contracting entity for this Agreement is determined by the country or region where you are located. 
See details below: The Microsoft entity for the following countries or regions is indicated below: Anguilla, Antigua and Barbuda, Argentina, Aruba, Bahamas, Barbados, Belize, Bermuda, Bolivia, Brazil, Canada, Cayman Islands, Chile, Colombia, Costa Rica, Curacao, Dominica, Dominican Republic, Ecuador, El Salvador, French Guiana, Grenada, Guam, Guatemala, Guyana, Haiti, Honduras, Jamaica, Martinique, Mexico, Montserrat, former, United States, Uruguay, Venezuela, Virgin Islands (British) and Virgin Islands (U.S.): Microsoft Corporation One Microsoft Way Redmond, WA 98052 USA The Microsoft entity for the following countries or regions is indicated below:, Democratic Republic of the Congo, and Zimbabwe: Microsoft Ireland Operations Limited The Atrium, Block B, Carmenhall Road Sandyford Industrial Estate Dublin, 18, Ireland The Microsoft entity for the following countries or regions is indicated below: Australia and its external territories, Bangladesh, Bhutan, Brunei Darussalam, Cambodia, Cook Islands, Fiji, French Polynesia, French Southern Territories, Hong Kong, India, Indonesia, Kiribati, Lao People's Democratic Republic, Macao,: Microsoft Regional Sales Corporation 438B Alexandra Road #04-09/12 Block B, Alexandra Technopark Singapore 119968 The Microsoft entity for Japan is: Microsoft Japan Company, Limited Shinagawa Grand Central Tower 2-16-3, 2 Konan, Minato-ku, Tokyo 108-0075 Japan The Microsoft entity for Taiwan is: Microsoft Taiwan Corporation 8F, No 7, Sungren Rd. Shinyi Chiu, Taipei Taiwan 110 The Microsoft entity for the People’s Republic of China is: Microsoft (China) Company Limited6F Sigma Center No. 49 Zhichun Road Haidian District Beijing 100080, P.R.C The Microsoft entity for the Republic of Korea is: Microsoft Korea, Inc. 5th Floor, West Wing POSCO Center 892 Daechi-Dong Gangnam-Gu Seoul, 135-777, Korea Applicable law. Applicable law, jurisdiction and venue for this Agreement are identified below.. - Generally: Except as provided in Section 12.5(b), the laws of the State of Washington govern this Agreement. If federal jurisdiction exists, the parties' consent to exclusive jurisdiction and venue in the federal courts in King County, Washington. If not, the parties’ consent to exclusive jurisdiction and venue in the Superior Court of King County, Washington. - Other terms: If your principal place of business is in one of the countries or regions listed below, the corresponding provision applies and supersedes Section 12.5(a) to the extent that it is inconsistent: If your principal place of business is in Australia and its external territories, Bangladesh, Bhutan, Brunei Darussalam, Cambodia, Cook Islands, Fiji, French Polynesia, French Southern Territories, Hong Kong SAR, India, Indonesia, Kiribati, Lao People's Democratic Republic, Macao SAR,, this Agreement is construed and controlled by the laws of Singapore. - If your principal place of business is in Australia or its external territories, Brunei, Malaysia, New Zealand, or Singapore, you consent to the non-exclusive jurisdiction of the Singapore courts. 
- If your principal place of business is in Bangladesh, Bhutan, Cambodia, Cook Islands, Fiji, French Polynesia, French Southern Territories, Hong Kong SAR, India, Indonesia, Kiribati, Lao People's Democratic Republic, Macao SAR, Maldives, Marshall Islands, Mayotte, Micronesia, Nauru, Nepal, Niue, Northern Mariana Islands, Palau, Papua New Guinea, Philippines; Pitcairn, Samoa, Solomon Islands, Sri Lanka, Thailand, Timor Leste, Tokelau, Tonga, Tuvalu, Wallis and Futuna Islands, Vanuatu and Vietnam, any dispute related to this Agreement, including any question regarding its existence, validity or termination, will be referred to and finally resolved by arbitration in Singapore according to the Arbitration Rules of the Singapore International Arbitration Centre (“SIAC”). The SIAC Arbitration Rules are incorporated by this reference into the Agreement. The Tribunal will consist of one arbitrator appointed by the Chairman of SIAC. The language of the arbitration will be English. The arbitrator’s decision will be final, binding and incontestable and may be used as a basis for judgment thereon in Bangladesh, India, Indonesia, Philippines, Sri Lanka, Thailand or Vietnam (as appropriate), or elsewhere. If your principal place of business is in Japan, the following applies: The Agreement will be construed and controlled by the laws of Japan. You consent to exclusive original jurisdiction and venue in the Tokyo District Court. The prevailing party in any action related to this Agreement may recover its reasonable attorneys' fees, costs, and other expenses. If your principal place of business is in, Zimbabwe, the following applies: The Agreement is governed by and construed according to the laws of Ireland. You consent to the jurisdiction of and venue in the Irish courts in all disputes relating to this Agreement. If your principal place of business is in the People’s Republic of China, the following applies. For purpose of this Agreement, the People’s Republic of China does not include Hong Kong SAR, Macao SAR, or Taiwan: The Agreement will be construed and controlled by the laws of the People’s Republic of China. You consent to submit any dispute relating to the Agreement and any addendum to binding arbitration. The arbitration will be at the China International Economic and Trade Arbitration Commission in Beijing (“CIETAC”) according to its then current rules. If your principal place of business is in Colombia or Uruguay, the following applies: All disputes, claims, or proceedings between the parties relating to the validity, construction or performance of this Agreement will be settled by arbitration. The arbitration will be according to the UNCITRAL Arbitration Rules as presently in force. The appointing authority will be the International Chamber of Commerce (“ICC”) acting according to the rules adopted by the ICC for this purpose. The place of arbitration will be Seattle, Washington, U.S.A. There will only be one arbitrator. The award will be in law and not in equity and will be final and binding on the parties. The parties hereto irrevocably agree to submit all matters and disputes arising in connection with this Agreement to arbitration in Seattle, Washington, U.S.A. If your principal place of business is in Republic of Korea, the following applies: The Agreement will be construed and controlled by the laws of Republic of Korea. You consent to the exclusive original jurisdiction and venue in the Seoul Central District Court. 
The prevailing party in any action to enforce a right or remedy under this Agreement or to interpret a provision of this Agreement will be entitled to recover its reasonable attorneys' fees, costs and other expenses. If your principal place of business is in Taiwan, the following applies: The terms of this Agreement will be governed by and construed according to the laws of Taiwan. The parties hereby designate the Taipei District Court as the court of first instance having jurisdiction over any disputes arising out of or in connection with this Agreement. Attorneys’ fees: If either party employs attorneys to enforce any rights arising out of or relating to this Agreement, the prevailing party will be entitled to recover its reasonable attorney’s fees, costs, and other expenses, including the costs and fees incurred on appeal or in a bankruptcy or similar action. Severability: If any court of competent jurisdiction determines that any provision of this Agreement is illegal, invalid, or unenforceable, the remaining provisions will remain in full force and effect. No Waiver: Any delay or failure by Microsoft to exercise a right or remedy will not result in a waiver of that, or any other, right or remedy. Assignment: You will not assign, transfer, or sublicense this Agreement, or any right granted under this Agreement, in any manner and any attempted assignment, transfer, or sublicense, by operation of law or otherwise, will be null and void. Updated: December 2013 and Professional Programs:: Microsoft Azure DevOps Solutions Microsoft Professional Programs Microsoft Professional Program: Certificate in Artificial Intelligence Microsoft Professional Program: Certificate in Big Data Microsoft Professional Program: Certificate in Data Science Microsoft Professional Program: Certificate in DevOps Microsoft Professional Program: Certificate in Entry Level Software Development Microsoft Professional Program: Certificate in IT Support Microsoft Professional Program: Certificate in Data Analysis Microsoft Professional Program: Certificate in IoT Microsoft Professional Program: Certificate in Cybersecurity. Verification of MPP track completion certificate: Step 2: Sign in on your academy.microsoft.com dashboard Step 3: Copy and paste the MPP Track certificate URL to share a link to your official certificate Step 4: Paste the URL in an email and send to [email protected] along with two forms of identification (name and address or name and date of birth). See MPP FAQs if you have more questions about the process for MPP. For more information, visit: NCCRS credit for Microsoft certifications. Download: Transcript Service Application Challenging a Microsoft Certification exam item Each exam question is carefully reviewed by a panel of technical and job-function experts who review each question for technical accuracy, clarity, and relevance. If you believe that a specific question you saw on a Microsoft Certification exam is invalid, you may request an evaluation of the question by following the steps described below. You must submit an Exam Item Challenge form within 30 days of taking the exam. Note: Please do not use the Exam Item Challenge process to provide feedback about Beta exam questions. Beta exam questions are often modified in some way based on the feedback received during this process and might be removed from the question pool. If you have additional feedback about a Beta exam that you were unable to provide during the comment period, please send your feedback to [email protected]. 
Steps for challenging exam items
- Post your exam item challenge to the Microsoft Certification Support Forum.
- Within 1-2 business days, the forum agent will supply an Exam Item Challenge form for you to complete. The form includes specific information on how to submit your challenge.
- Exam Item Challenges are only accepted if you submit the completed Exam Item Challenge form to the forum agent no later than 30 days after the exam has been taken.
- Exam Item Challenges are not accepted for exams that will retire within 6 months.
- Within 4-6 weeks, the forum agent will provide the results of your Exam Item Challenge.
The challenge process exists to help identify and correct problematic questions. In most cases, however, exams are not rescored because Microsoft must ensure that candidates who pass exams and earn our certifications have demonstrated the required proficiency level(s) across the skill domain(s). Even if a question is flawed in some way, we cannot assume that you would have answered it correctly if it had not been flawed. In these cases, we provide candidates with the opportunity to retake the exam free of charge. We value and rely on your feedback to make Microsoft Certification exams as valid and relevant as possible.
Reschedule and cancellation policy:
- There is no charge if you reschedule or cancel an exam appointment at least 6 business days prior to your appointment.
- If you cancel or reschedule your exam within 5 business days of your registered exam time, a fee will be applied.
- If you fail to show up for your exam appointment or you don’t reschedule or cancel your appointment at least 24 hours prior to your scheduled appointment, you forfeit your entire exam fee.
Business days are Monday-Friday, not including Pearson VUE global holidays.
How much time will I have to complete the exam? Exam time varies based on the type of exam you take.
How does Microsoft decide how many questions on a particular subject to include on the exam? The skills measured on an exam are identified by subject-matter experts external to Microsoft (in other words, they are not Microsoft employees). This list of skills, called the “objective domain,” is the basis for exam development. The number of questions that measure each skill area is determined through the blueprinting process; sections of the exam measuring critical and/or more frequently performed skills will contain more questions than those that assess less important or less frequently performed skills.
Will the exam cover material that is not covered in the Microsoft training or Microsoft Press book I am using to prepare?
- Print the Skills Measured section of the page.
- Review the entire list. Think about each topic. If you are very knowledgeable on a specific topic, highlight the topic or cross it out.
- Look at what is left. Now, start some targeted research. For each topic that you did not highlight, search the web for specific articles.
- Use authoritative sources such as docs.microsoft.com, msdn.microsoft.com, technet.microsoft.com, and the Office 365 support center.
You might also want to ask others how they perform those tasks, read white papers, MSDN, or TechNet to get additional information about the tasks that are included on the exam, and/or explore the resources provided in the “Preparation materials” section on the Exam Details page, which will link to any available online courses, microlearning options, and a portal where you can find instructor-led training options in your area. 
Additionally, Microsoft Official Practice Tests are available for some of our certification exams. These may provide more information about your specific strengths and weaknesses. However, passing a practice test is not a guarantee that you will pass the certification exam. If you have taken the exam and not passed, prioritize the skills that you should practice by focusing on the content areas where your exam performance was the weakest and the content areas that have the highest percentage of questions. How do I register for a Microsoft Certification exam? Visit the exam registration page to find complete instructions. Am I required to take an exam in English? Microsoft Certification exams are available in several languages. However, candidates who must take the exam in English rather than in their native language can request an accommodation for additional time. Approval for extra time is provided on a case-by-case basis. Request test accommodations from Pearson VUE or Certiport. What disability accommodations are available? Microsoft is dedicated to ensuring our exams are accessible to everyone, including people with disabilities. For a list of available accommodations, please visit the Accommodation page. course, complete the feedback form that appears when you finish the course. Microsoft does not review study materials developed by third parties and is not responsible for their content. If you have questions or comments about exam preparation materials developed by third parties, please contact the publishers directly. How can I submit feedback about an exam question or exam experience? If you have a concern about the technical accuracy of a particular item, please follow the Exam Item Challenge?. Why is the case study exam format used? The case study exam format uses complex scenarios that more accurately simulate what professionals do on the job. Scenario-based questions included in the case studies are designed to test your ability to identify the critical information needed to solve a problem and then analyze and synthesize it to make decisions. You can refer to scenario details as often as you’d like while you are working on questions in a case study. After I complete a case study, will I be able to review the questions? You may review the questions in a case study until you move to the next case or section of the exam. Once you leave a case study, however, you will not be able to review the questions associated with that case. When you complete a case study and its associated questions, a review screen will appear. This screen lets you review your answers and make changes before you move to the next case study.. Microsoft reserves the right to update content for any reason at any time to maintain the validity and relevance of our certifications. This includes, but is not limited to, incorporating functionality and features related to technology changes, changing skills needed for success within a job role, etc. How will I know if exam has been updated when a new feature or function is added or when something has changed in the associated technology? We update the exam details page to notify candidates if/when this occurs. We also include information about such updates in our newsletters, blogs, and through other appropriate communication channels, and we encourage you to sign up for such communications if you would like to know about these types of changes to the exam content. 
Because our primary communication with candidates about exam content is through the exam details page, we will update it as soon as we know that exam content will be updated. Microsoft reserves the right to update content for any reason at any time to maintain the validity and relevance of our certifications. This includes, but is not limited to, incorporating functionality and features related to technology changes, changing skills needed for success within a job role, etc.
How can candidates obtain these skills? The best way to prepare for an exam is to practice the skills listed in the “Skills measured” section of the exam details page. Hands-on experience with the technology is required to successfully pass Microsoft Certification exams. Microsoft does not review study materials developed by third parties and is not responsible for their content or for ensuring that they are updated to reflect product updates. If you have questions or comments about exam preparation materials developed by third parties, please contact the publishers directly.
Why did an exam include material that was not covered in the Microsoft training or Microsoft Press book? Microsoft works hard to ensure that some form of training material exists for all skills that will be measured on an exam. A list of these resources can be found on the exam details page.
What is a short answer question? In the short answer question type, you solve a situation by writing several lines of code in the available text-entry area. You can choose from key word options which are provided for you to use in the code you write. Note that this is a general list and not specific to the commands required to solve the problem presented in the question. When you're done entering your code, you can check your syntax. The syntax checker validates your code entry for syntax errors but does not validate that your entry is correct.
How are the short answer questions scored? We use exact string matches. Although we try to include all variations of formatting and code usage that could be considered correct, it’s possible that some are not included. This is why we include several ways to check your syntax. We provide a syntax checker that you can use to validate the syntax of your code (e.g., SQL commands) and values (e.g., table names and variable names) used in your solution. Any syntax errors appear in the window next to the Check Syntax button. You may change your code and re-validate the syntax as many times as you want. Note that the syntax checker does not validate whether you have answered the question correctly; it simply validates the accuracy of your syntax. We include a list of common command key words so that you can easily check your spelling. This is a general list provided for reference and is not limited to commands used in the question. We designed the questions so that you are not penalized for answering incorrectly; you simply don’t earn some or all of the possible points. For single-point items, you need to answer completely and correctly to be awarded the point. For multi-point items, you earn points for the parts of your response that are correct, and you don’t earn points for the parts of your response that are incorrect. We don’t deduct points from your score for incorrect answers.
Am I able to review all of my answers before leaving a section or completing the exam? Before leaving a section or completing the exam, you can review your answers to most questions. 
However, there is a series of yes/no questions that describe a problem and a possible solution and then ask whether the solution solves the problem. Given the nature of these questions, you are not able to review your answers. In addition, after you move to the next question, you will not be able to return to the previous one.
When will I receive my exam results? You will receive notification of your pass or fail status within a few minutes of completing your exam. In addition, you will receive a printed report that provides your exam score and feedback on your performance on the skill areas measured. The exam delivery provider will forward your score to Microsoft within five business days.
Beta exam results: Results for beta exams should be visible on your Microsoft transcript (if you’ve received a passing score) and on the exam delivery provider’s site within two weeks after the live exam is published. If you pass the beta exam, you earn credit for that exam and any resulting certification. You do not need to retake the exam in its live version if you pass the beta version.
Who should I contact if I have questions concerning beta exam results? If you do not see your score report online or receive your score report within two weeks after the date when the final exam is published, contact the exam delivery provider for more information on when your results will be processed. If you have questions about your transcript, contact your Microsoft Regional Service Center.
Who should I contact if I do not receive my 25% voucher for taking a beta exam? These vouchers are provided by VUE and are sent to the email address that you used when you registered for the exam. They are sent approximately four to six weeks after the exam goes live. If you do not receive your 25% discount voucher within that time frame, please send an email to [email protected].
How do I interpret the bar chart on my score report? The bar chart shows your performance on each section, or skill area, assessed on the exam. On the left of the graph, each section of the exam is listed along with the percentage of the exam that was devoted to it. The lengths of the bars provide information regarding your section-level performance and map to the percentage of items that you answered correctly in that content area. Bars that are further to the left reflect weaker performance, and bars that are further to the right reflect stronger performance. Because each section may contain a different number of questions, as represented by the percentages provided after the section name, the length of the bars cannot be used to calculate the number of questions answered correctly in that section or on the exam, nor can the bars be combined to determine the percent of questions that you answered correctly on the exam. This information is intended to help you understand areas of strength and weakness in the skill domain measured by the exam and to prioritize those skills that need improvement.
Can I find out whether I answered a specific question correctly or whether the way I answered a specific question affected my pass/fail status? No. Microsoft Certification exams are designed to measure candidates’ skills and abilities related to a given job role, not their ability to study or memorize specific questions that were on the exam. Qualified candidates will be able to pass this exam regardless of the questions asked. As a result, to protect the integrity of the certification process, Microsoft does not share information about the specific questions that were missed.
Does the score report show a numerical score for each section? 
The score report shows that you need a score of 700 to pass the exam; however, this is a scaled score. The actual percentage of questions that you must answer correctly varies from exam to exam and may be more or less than 70 percent, depending on the input provided by the subject-matter experts who helped us set the cut score. If a question is worth more than one point, this information will be stated in the question.
- There is no penalty for guessing. If you choose an incorrect answer, you simply do not earn the point(s) for that item. No points are deducted for incorrect answers.
- Some questions on the exam may not be included in the calculation of your score. To gather data to update and improve the quality of each exam, we present new content to candidates without counting the results toward their score. However, as soon as we have the necessary data to evaluate the quality of the question, items that meet Microsoft’s psychometric standards will be scored. Microsoft will not inform candidates which questions are unscored; as a result, you should answer every question as if it will be scored.
Note that this scoring system is subject to change as Microsoft continues to introduce new and innovative question types. Microsoft will indicate if a question is scored differently in the text of the question.
How are exam scores calculated? After you complete your exam, the points you earned on each question are totaled and compared with the cut score to determine whether the result is a pass or a fail. The passing score is based on the knowledge and skills needed to demonstrate competence in the skill domain and the difficulty of the questions that are delivered to a candidate, and it is set through a process like that used to set the cut score for Microsoft’s technical exams.
Why does Microsoft report scaled scores? Microsoft reports scaled scores so that candidates who have to retake a certification exam can determine if their performance is improving. The actual cut score (the number of items you need to answer correctly) is based on input from a group of subject-matter experts who review the difficulty of the questions in relation to the expected skills of the target audience. As a result, the number of items that you have to answer correctly varies depending on the difficulty of the questions delivered when you take the exam; this ensures that regardless of the difficulty of items you see, the evaluation of skills is fair—if you see a more difficult set of questions during one administration, the number of correct answers needed to pass is less than if you see an easier set of questions. As a result, providing a simple percent correct wouldn't provide useful information to someone who had to take the exam multiple times and saw different combinations of questions with different levels of difficulty.
Does a higher score mean a higher level of competence? If you pass an exam, it simply means that you have demonstrated competence in the skill domain. In addition, scores of candidates who pass cannot be compared to determine if one candidate is more competent than another; higher passing scores do not mean higher levels of competence. The same is true of failing scores; lower failing scores do not mean lower levels of competence. If you pass the exam, you have demonstrated competence regardless of your score; if you fail, you have not demonstrated competence. Microsoft exams are designed so that the total test score can be used to make a pass/fail decision (in other words, to show whether the candidate has demonstrated competence in the skill domain measured by the exam). 
Our exams are not designed with the intent to provide diagnostic feedback about your skills, and steps are not taken during the exam development process to support that level of reporting.
If I receive the same score every time I retake the same exam, does this imply an error in the results computation? No. Receiving the same score on multiple attempts does not indicate that the program computing the results is in error. It is not uncommon for candidates to obtain similar or identical scores on multiple attempts of an exam. This consistent result demonstrates the reliability of the exam in evaluating skills in this content domain. If this happens on multiple attempts, you may want to reconsider how you’re preparing for the exam and seek other opportunities to learn and practice the skills measured by the exam.
I scored zero in one of the sections. How is this possible? The number of questions that measure each skill area is determined through the blueprinting process; sections of the exam measuring critical and/or more frequently performed skills will contain more questions than those that assess less important or less frequently performed skills. Given that the number of questions varies based on the criticality of skills measured, it is entirely possible for a candidate to answer all questions in a section with fewer questions incorrectly.
I experienced significant delays between some of the questions. Was my response recorded? Was it scored correctly? A delay between questions does not impact the responses, scoring, or the time remaining to finish the exam. You should expect to experience a delay of up to a minute while your next question loads. The exam delivery provider’s software is designed to accommodate this. Your answers are recorded, and the exam will be scored correctly regardless of delays between questions.
Do the responses that I provide to the survey at the beginning of the exam impact the questions that I see during the exam or how my exam is scored? No. The survey that you take at the beginning of the exam has no impact on the exam content or scoring. This is purely an evaluation tool that our exam psychometricians use to ensure the quality, validity, and rigor of our exams.
If I retake an exam, do I have to pay again? Yes. Note that you must pay for each exam you retake and follow Microsoft’s retake policy.
Where can I find additional information about my areas of weakness? Although Microsoft Certification exams provide feedback about the areas where examinees need to develop their skills further, the exams are not designed to provide detailed or diagnostic feedback. We encourage you to review the “Skills measured” section of the exam details page, and honestly evaluate your skills against what is being assessed on the exam. The best way to do this is to actually perform the tasks listed. You may also find it helpful to use the community resources and Study Groups, which can be found at the bottom of the individual exam details page. Finally, Microsoft Official Practice Tests are available for some of our certification exams. These may provide more information about your specific strengths and weaknesses. However, passing a practice test is not a guarantee that you will pass the certification exam.
What is the exam retake policy? Please refer to the Security and retake policies to view the exam retake policy.
Can I request a re-evaluation of my score? A re-evaluation of your score is unlikely to change your pass/fail status. 
Because Microsoft must ensure that candidates who pass exams and earn our certifications have demonstrated the required proficiency level(s) across the skill domain(s), the final result of an exam is rarely changed based on a re-evaluation of your exam results. Even if a question is flawed in some way, we cannot assume that you would have answered it correctly if it had not been. In these cases, we provide candidates with the opportunity to retake the exam free of charge. If you have a concern about the technical accuracy of a particular item, please submit an online request. An Item Challenge form will be sent to you. How can I challenge an exam question? If you believe a question on a Microsoft Certification exam is inaccurate, you can request an evaluation of that question using the Exam Item Challenge process within 30 calendar days of taking the exam. The evaluation process helps us identify and correct problematic questions and to update exams accordingly. Will a Microsoft employee review and evaluate the exam question I am challenging? A Microsoft employee will conduct an initial evaluation. If additional evaluation is needed, an independent subject-matter expert (technical and job-function expert) will also review and evaluate the challenge. What if I don't agree with the evaluator's decision? Can I appeal it? No. Microsoft applies a rigorous exam development process to ensure the technical accuracy, clarity, relevance, and objectivity of our exams. Furthermore, given the credentials of the independent subject-matter experts and the respect they garner from the IT community, we consider their evaluation final. Can I learn about the rationale for the decision? To help protect exam security, we keep the rationale for challenge decisions confidential although we will provide a general overview of the result. The evaluation remains a part of Microsoft records until the exam is retired. All feedback is compiled and carefully considered as Microsoft makes decisions on how to improve the overall quality of the exam. If I do not pass an exam, can I have a refund?. The NDA legally requires candidates to keep information related to exam content confidential. Requiring the acceptance of the NDA helps protect the security of Microsoft Certification exams and the integrity of the Microsoft Certification Program by legally discouraging piracy and/or unauthorized use of exam content. What is cheating? Cheating is any fraudulent activity that enables an unqualified candidate to pass an exam. This type of egregious misconduct negatively affects the integrity of the Microsoft Certification Program. What are falsified score reports? Falsified score reports are reports that Microsoft deems to be unauthentic or that deceive or defraud the Microsoft Certified Professional program in any way. What happens if a candidate falsifies a score report? If Microsoft determines that a candidate has falsified a score report, the candidate will be ineligible to take any future Microsoft exams and his or her certifications may be revoked. What is proxy testing? Proxy testing occurs when someone takes an exam for another candidate. In other words, the candidate has passed an exam without actually taking it. Engaging in proxy testing as either the test taker or the person who hired the test taker is a form of misconduct and fraud. How can I identify a proxy testing website or organization? 
The primary warning sign of a proxy testing website or organization is any guarantee that you will pass the exam without having to take it. Proxy testing sites indicate that they will provide a full credential if you send them your credit card information.
Why should I avoid proxy testing?
What are "brain dumps," and are they legal? A "brain dump," as it relates to the certification exams, is a source, such as a website, that lists actual exam questions and answers; using or distributing this content violates Microsoft intellectual property rights and nondisclosure agreements.
Why should I be concerned about "brain dump" sites and material? If a candidate knowingly or unknowingly memorizes unauthorized content found in “brain dumps” in order to pass an exam, clearly, he or she will not have the requisite skills to effectively use and manage Microsoft software or systems. Eventually, his or her manager will identify this lack of technical knowledge and skill and take appropriate action. Many "brain dump" providers are fairly blatant in their messaging, descriptions of their products, and the intended uses, while others are much more subtle in their messaging and practices. As a result, candidates should be cautious about using exam preparation material that seems too good to be true. If you think you have discovered a “brain dump” site with Microsoft content, please inform Microsoft by sending an email message to [email protected].
What kind of security should be at a testing center? The testing centers are provided with security policies that must be enforced in order to acquire and maintain testing center status. Ongoing inspections ensure that each testing center maintains the security outlined by Microsoft and the exam delivery provider. In addition, proctors at testing centers are authorized to immediately take appropriate measures against candidates who violate testing rules. For specific information about the expectations for candidates, please contact the exam delivery provider. If you have a concern about the security of your exam experience, please send an email message to [email protected].
What kinds of impropriety can occur on the part of the testing center? Testing center administrators act inappropriately when they fail to follow any security policies of Microsoft or the exam delivery provider.
What happens to a testing center that participates in fraudulent behavior? If Microsoft determines that a testing center has acted improperly or fraudulently, Microsoft has the right to cease delivery of all exams at that center.
Whom should I contact if I find a website that discloses Microsoft Certification exam information, or if I discover an individual who cheats on or sells exam questions and answers? Send an email message to [email protected]. Reports like these have led to the termination of several such websites. Due to the volume of email we receive, you may not always receive a personal response. Don Field, our former senior director of Certification and Training, describes how a recent $13.5 million judgment resulted from efforts to protect the value of Microsoft Certifications. Read his post, and get ongoing updates on Microsoft certification and training on our Born to Learn blog.
If I report a security concern, can I remain anonymous? Yes. All correspondence to [email protected] can remain anonymous and we will not share your contact information with anyone. If you wish to include your contact information so that we can follow up for more information, we will ensure it remains secure.
the MCP Support forum. However, you may take these exams through online proctoring; see more details at.. 
Academic pricing on exams Can I get a refund if I initially registered and paid for an exam without verifying an academic discount, but I’m verified now? No. If you decide to register and pay for an exam before you have verified your student status, you must pay commercial pricing and will not receive a refund. You must ensure that you are verified before completing your registration and payment. Does my academic verification expire? Yes. Your academic verification is only valid for 12 months. After 12 months, you need to repeat the verification process. I’m an educator. Can I still receive an academic discount (in applicable countries) on my Microsoft Certification exam delivered with Pearson VUE? Yes. You need to follow the same process as a student. In other words, you must select “Student” as your job function in your Microsoft account profile, and you must verify your status through the same process as a student. Who do I contact for questions regarding the academic verification process for Microsoft Certification exams? If you have general questions related to exam registration, contact your Microsoft Regional Service Center. For questions specific to a new or pending verification request, you may contact.
https://docs.microsoft.com/en-us/learn/certifications/certification-exam-policies?cid=kerryherger
Privatix Network uses micro-payments: instead of paying upfront for a whole month, the Client pays only for a small portion of service, such as 10 megabytes or 10 minutes. In a decentralized, trustless environment this approach eliminates the need for a third party that would somehow guarantee a chargeback if the service was not provided. To ensure that the Client has enough funds to pay for the service, and to prevent double spending, the smart contract creates a temporary account for each order/deal made between Client and Agent. These temporary accounts are also known as state channels. Making a transaction on the blockchain is costly. If we created a blockchain transaction for every 10 megabytes of traffic, it would not work, for two reasons:
- each transaction may cost much more than the payment for a small portion of service
- billing would be too slow if we had to wait for each transaction to be registered in the blockchain
For these reasons, cryptographic cheques are sent directly by the Client to the Agent. The Agent can use such a cheque at any time in a blockchain transaction and withdraw the owed amount from the Client's blockchain account.
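To make the cheque flow more concrete, here is a minimal sketch of how a Client could issue cumulative, signed cheques off-chain and how an Agent could verify them before accepting each new one. This is an illustration only: the message format, field names, and helper functions are assumptions made for the example, not the actual Privatix wire format, and Ed25519 via PyNaCl stands in for the blockchain-account signature scheme that the real smart contract would verify.

```python
# Illustrative sketch of off-chain "cheques" for a state channel.
# NOTE: hypothetical format; the real system signs with the Client's
# blockchain account key so the smart contract can verify it on-chain.
# Requires: pip install pynacl
from dataclasses import dataclass
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError


@dataclass(frozen=True)
class Cheque:
    channel_id: str        # identifies the deal / temporary account (state channel)
    total_owed_units: int  # cumulative units consumed (e.g. tens of MB or minutes)
    signature: bytes       # Client's signature over (channel_id, total_owed_units)


def cheque_message(channel_id: str, total_owed_units: int) -> bytes:
    # Canonical byte encoding of what the Client promises to pay so far.
    return f"{channel_id}:{total_owed_units}".encode()


def client_issue_cheque(client_key: SigningKey, channel_id: str, total: int) -> Cheque:
    sig = client_key.sign(cheque_message(channel_id, total)).signature
    return Cheque(channel_id, total, sig)


def agent_accept_cheque(client_pub: VerifyKey, latest: Cheque, new: Cheque) -> Cheque:
    # The Agent keeps only the newest valid cheque; the cumulative amount must
    # never decrease, which prevents replaying an older, cheaper cheque.
    if new.total_owed_units < latest.total_owed_units:
        raise ValueError("cheque amount went backwards")
    try:
        client_pub.verify(cheque_message(new.channel_id, new.total_owed_units), new.signature)
    except BadSignatureError:
        raise ValueError("invalid client signature")
    return new


# Example: the Client pays per 10 MB without any on-chain transactions.
client_key = SigningKey.generate()
latest = client_issue_cheque(client_key, "deal-42", 0)
for consumed in (10, 20, 30):  # MB delivered so far
    latest = agent_accept_cheque(client_key.verify_key, latest,
                                 client_issue_cheque(client_key, "deal-42", consumed))
# Only `latest` ever needs to be submitted on-chain to withdraw the owed amount.
```

Because each cheque carries the cumulative total rather than a delta, only the most recent cheque matters, which is what lets the Agent settle once on-chain instead of once per 10 megabytes.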
https://docs.privatix.network/privatix-core/core/payments
Process management The process list is used to indicate process steps, machines, and operators. To use it, first go to Process List within an entity. Then add one line per resource (e.g. Oven). If the resource doesn't exist, you can still create a new resource with the "Create a product" menu. When a resource has parameters (e.g. cooking time), they appear in the parameters list of the product and can be filled in (e.g. 20 min). On the resource itself, it is also possible to add or delete parameters and to enter the cost of the resource. Each process line can have instructions. The parameters and instructions are carried over to the production sheet. Process steps and parameters are created in the administration (beCPG menu --> Administration beCPG --> Characteristics).
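As a rough illustration of how these pieces relate, the sketch below models a process list in Python: each line ties a process step to a resource, the resource carries its own parameters and cost, and the line adds instructions that would appear on the production sheet. The class and field names are hypothetical and do not reflect beCPG's actual data model or API; the point is only the relationship between steps, resources, parameters, and cost.

```python
# Hypothetical, simplified model of a process list (not the beCPG data model).
from dataclasses import dataclass, field


@dataclass
class Resource:
    name: str
    cost_per_hour: float                             # cost entered on the resource
    parameters: dict = field(default_factory=dict)   # e.g. {"cooking time": "20 min"}


@dataclass
class ProcessLine:
    step: str              # process step, e.g. "Baking"
    resource: Resource
    duration_hours: float
    instructions: str = "" # carried over to the production sheet


def process_cost(lines: list[ProcessLine]) -> float:
    """Sum the resource cost of every line in the process list."""
    return sum(line.resource.cost_per_hour * line.duration_hours for line in lines)


oven = Resource("Oven", cost_per_hour=12.0, parameters={"cooking time": "20 min"})
operator = Resource("Operator", cost_per_hour=25.0)
lines = [
    ProcessLine("Baking", oven, duration_hours=0.33, instructions="Preheat to 180 C"),
    ProcessLine("Packing", operator, duration_hours=0.25),
]
print(f"Process cost: {process_cost(lines):.2f}")
```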
http://docs.becpg.fr/en/utilization/process-management.html