gdaAppend

Examples

// Generate random x matrix
x = rndn(100, 50);

/*
** Create a GDA named `myfile`
** and overwrite existing `myfile`
*/
retcode1 = gdaCreate("myfile.gda", 1);

/*
** Write x matrix to `myfile` GDA
** and name it x1
*/
retcode2 = gdaWrite("myfile.gda", x, "x1");

// Generate random y matrix
y = rndn(25, 50);

/*
** Append y to existing x1 variable
** in `myfile.gda`
*/
retcode3 = gdaAppend("myfile.gda", y, "x1");

// Check orders of x1
gdaGetOrders("myfile.gda", "x1");

This prints:

125.00000
50.000000

Appending the y matrix to x1 adds \(25*50 = 1250\) elements to x1, making it a 125x50 matrix.

Remarks

This command appends the data contained in x to the variable varname in filename. Both x and the variable referenced by varname must be the same data type, and they must both contain the same number of columns.

Because gdaAppend() increases the size of the variable, it moves the variable to just after the last variable in the data file to make room for the added data, leaving empty bytes in the variable's old location. It also moves the variable descriptor table, so it is not overwritten by the variable data. This does not change the index of the variable, because variable indices are determined not by the order of the variable data in a GDA, but by the order of the variable descriptors. Call gdaPack() to pack the data in a GDA, so it contains no empty bytes.

See also

Functions gdaWriteSome(), gdaUpdate(), gdaWrite()
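As a small follow-up sketch (not from the original page, and assuming the usual GDA return-code convention of 0 meaning success), you could check the append result and then call gdaPack() to reclaim the empty bytes mentioned in the Remarks:

// Hypothetical follow-up: check the result of gdaAppend()
if retcode3;
    // A nonzero return code indicates the append failed
    print "gdaAppend failed, return code:";
    print retcode3;
else;
    // Pack the GDA so it contains no empty bytes
    retcode4 = gdaPack("myfile.gda");
endif;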
https://docs.aptech.com/gauss/gdaappend.html
What does the word "circom" mean?

Circom is a language and a circuit compiler for zero-knowledge proofs, and the name is an acronym that stands for the words circuit and compiler.

Why did you create circom?

The idea of designing a developer-friendly circuit language came up when applying zero-knowledge technology to specific projects. Although there were already tools that permitted the generation and validation of proofs, we missed a tool that gave flexibility and full control of the constraints to the programmer and at the same time abstracted the complexities of the zero-knowledge protocols. We thought it was a good idea to decouple the design of circuits from the specific zero-knowledge protocol implementation and decided to develop a circuit language that was somehow more electronic-circuit based. As a result, circuits in circom are built by combining components (templates) and wires (signals). You can listen to the whole story told by Jordi Baylina in the ZeroKnowledge podcast.

Who is behind circom?

Circom was first conceived by Jordi Baylina and later developed in Iden3, a project working on a self-sovereign identity system that uses this tool in the core protocol. The implementation of the language, the compiler and the library of circom templates has been carried out with the help of the Ethereum Foundation Ecosystem Support and the collaboration of a group of researchers from the Complutense University of Madrid, Pompeu Fabra University and the Polytechnic University of Catalonia.

Can I do any operation in a circom circuit?

Typically, in an arithmetic circuit you can only connect signals to addition and multiplication gates. However, in circom you can do any operation as long as you are able to reflect that operation as a quadratic constraint. We recommend looking at this example.

Can I design my own circuits?

Yes, you can design and compile your own circuits, but you are also welcome to use the templates from circomlib, a library of circom circuits that have been reviewed and tested by the community.

What happens if I do not write the constraints of a circuit properly?

One of the fundamentals of the circom language is that you must add your own constraints. If you write a constraint that does not have the right form (i.e. it is not quadratic), then you will get a compilation error, but if you missed writing a constraint or wrote the wrong one, the compiler will not change anything for you. We recommend reviewing the Constraints generation section to learn more about how to write constraints in circuits.

What is the difference between the simple and the double arrow?

Double arrows (==> and <==) gather two actions together: they assign a value to a signal and also generate a constraint at the same time. Simple arrows (--> and <--) only do the first action: they assign values to signals. Check out the section Assignment to signals to find out more details about the different circom operators. A short illustration follows below.
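The following minimal sketch (not part of the original FAQ; the template name Square is just an illustration) shows the difference in practice:

pragma circom 2.0.0;

template Square() {
    signal input in;
    signal output out;

    // Double arrow: assigns out AND adds the constraint out = in * in
    out <== in * in;

    // The same thing written with a simple arrow plus an explicit
    // constraint; <-- only assigns, === only constrains:
    //   out <-- in * in;
    //   out === in * in;
}

component main = Square();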
What is the difference between a signal and a variable?

A signal can be understood as a wire of the circuit, and as such, it can be either an input signal, an intermediate signal or an output signal. The connection of signals is done through addition and multiplication gates, and constraints are used to capture these connections. The intermediate and output signals are calculated from the values assigned to the inputs, and once values are assigned to all signals, they cannot be changed later on. This is why signals are considered immutable. On the contrary, variables are mutable elements of the code. Typically, variables are used in conditional and loop statements and may also be used as parameters of templates, allowing the creation of generic templates that depend on the values of certain variables (which would not be possible using signals); see the sketch after this answer. You can find more information about signals and variables here.
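As a hedged illustration (not from the original FAQ; the template name Power and the exponent used are arbitrary), here is a variable used both as a template parameter and as a loop counter, driving how many signals and constraints get generated:

pragma circom 2.0.0;

template Power(n) {              // n is a variable (a template parameter)
    signal input in;             // signals are the wires of the circuit
    signal output out;
    signal acc[n + 1];           // n decides how many signals exist

    acc[0] <== 1;
    for (var i = 0; i < n; i++) {
        // each step is a quadratic constraint: acc[i+1] = acc[i] * in
        acc[i + 1] <== acc[i] * in;
    }
    out <== acc[n];
}

component main = Power(3);       // out = in^3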
Circom seems to depend on the order of the BN254 curve, can it operate on any other curve?

ZK-SNARK protocols make use of pairing-friendly elliptic curves to generate and validate proofs. In the case of Ethereum, this curve is alt_bn128 (also known as BN254). As a result, zero-knowledge on Ethereum can only be applied to arithmetic circuits that work modulo the order of this curve, which, in this case, is the prime:

p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

By default, arithmetic circuits built with circom work modulo this prime p, but it is possible to change it, without affecting the rest of the language, using the parameter GLOBAL_FIELD_P. As a result, circom is a generic circuit language that can work with other curves and even on other blockchains.

What is the difference between Jubjub and Baby Jubjub curves?

To generate and verify zk-SNARK proofs, it is necessary to use a pairing-friendly elliptic curve. In Ethereum, this curve is alt_bn128 (also referred to as BN254) and in Zcash, this curve is BLS12-381. In order to implement in a circuit cryptographic primitives that make use of elliptic curves, such as the Pedersen hash or the Edwards Digital Signature Algorithm (EdDSA), it is necessary to construct an embedded curve. In the case of Zcash, the embedded curve is called Jubjub, and in Ethereum, the curve is Baby Jubjub. This means that the field modulus of Jubjub is the same as the group order of BLS12-381, and the field modulus of Baby Jubjub is the same as the group order of BN254.

Is circom compatible with other software like libsnark, ZoKrates, etc.?

We decoupled the process of generating zero-knowledge proofs into two different processes: on the one side, the definition and representation of arithmetic circuits, and on the other side, the computation of circuit witnesses and the generation and validation of zero-knowledge proofs about those circuits. Circom serves the first purpose, and snarkJS the second. Hence, tools that integrate both steps in a single piece of software are not compatible with our programs, but it is possible to replace circom with another circuit compiler and snarkJS with a zk-SNARK implementation that draws from an R1CS circuit representation. Currently, there may be format compatibility issues, but there are many efforts from the zero-knowledge community put into standardizing formats, which will greatly improve interoperability.

Is it possible to use other zero-knowledge systems, like Bulletproofs or STARKs, with circom?

Right now, circom provides an R1CS representation of circuits. So, any zero-knowledge system that draws from this representation, which is the case with most zk-SNARK protocols, is compatible with circom. Other protocols use different representations of circuits, but in many cases the conversion from R1CS to these formats is not difficult. We have not yet worked on this, but it is on our to-do list! :)

Is circom already being used in production?

Yes, a variety of projects such as Tornado Cash, Semaphore or Dark Forest are using circom.

Can I add templates to circomlib?

Yes, by submitting a pull request to the circomlib repository on GitHub. The addition of new templates may take some time, as we need to review the security considerations of each specific circuit. So, if you have already submitted a pull request, we kindly ask you to have some patience :)

If I spot a bug, who should I report it to?

The best way to report a bug or to contribute code, tests or documentation is to open an issue or submit a pull request in the specific repository on GitHub: circom, circomlib, snarkjs. If you prefer, you can also let us know about your finding through our Telegram group. In any case, we thank you for your help!

My question does not appear in this list, what should I do?

If you still have questions, remember you can contact us via our Telegram, Twitter and GitHub channels.
https://docs.circom.io/faq
Use the Database Connection Properties dialog box to create or customize a database connection in dbForge Schema Compare for Oracle.

Direct Connection

Direct mode does not require an Oracle Client to be installed on your workstation.

Connection via TNS

The TNS connection type is an appropriate option in any of the following circumstances:

A TNS connection uses an alias entry from a tnsnames.ora file (a sample alias entry is shown below). dbForge Schema Compare for Oracle uses only one tnsnames.ora file. You may have more than one on your local machine, or you may want to use the tnsnames.ora file on a remote machine, so note that dbForge Schema Compare for Oracle looks sequentially for the tnsnames.ora file in the following locations:

You need to create the TNS_ADMIN environment variable in the case where the tnsnames.ora file exists but dbForge Schema Compare for Oracle doesn't use it.

Note: dbForge Schema Compare for Oracle uses connection via TNS by default.
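For reference, a tnsnames.ora alias entry typically looks like the following sketch; the alias, host, port and service name here are placeholders, not values from this documentation:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )

Selecting the TNS connection type and the MYDB alias would then connect using whatever address and service the alias resolves to.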
https://docs.devart.com/schema-compare-for-oracle/getting-started/connecting-to-db.html
Copyright 2020 OpenStack Foundation

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Central Authentication Service

Now that OpenDev is entirely distinct from the OpenStack project, it's a great time to revisit the single sign-on and central authentication topic in a new light. We've rehashed and debated this so very many times in the past 5+ years, but never really officially documented what we want, our operating constraints, and what options we seem to have.

Problem Description

Our Web-based services (currently at least Gerrit and StoryBoard, but probably also MediaWiki, Zanata, maybe Askbot, RefStack, LimeSurvey, and perhaps soon the Zuul dashboard, Gitea, Mailman3/Hyperkitty or others) need logins, and we need to be able to associate accounts across different services with the same individual. We have traditionally used Launchpad/Ubuntu OpenID for this. With the addition of non-OpenStack projects, requiring people to have UbuntuOne and OpenStackID accounts to use OpenDev services is less than ideal. However, it's also important for OpenStack that some individuals can be connected to a corresponding OSF profile for affiliation and CCLA tracking.

- We want a central single sign-on system for OpenDev services
- We may want distinct realms for different Zuul tenants
- We do not want it to directly handle authentication credentials
- We want the SSO infrastructure operated within OpenDev
- We want OpenStackID to be one of the available federated IDPs
- We need to be able to migrate current Gerrit and StoryBoard IDs

Once we're at the finish line in the future, the system would look like:

- User wants to log into an OpenDev service
- Service login redirects them to something like opendevid.org
- The opendevid.org interface presents them a choice of identity providers
- Selecting one of these options bounces them through the corresponding IDP to authenticate
- Once authenticated, opendevid.org redirects the user back to the original service
- A user can always go to a URL such as opendevid.org/account and associate additional identities with their account, so that they're not limited to just a single external IDP
- Optionally, OIDC tokens could be used for role/group claims to avoid ACL management within some services (this could come in handy for managing things like StoryBoard teams)

It could be argued that an authentication service supporting a local account database offers additional flexibility, so that users still have an avenue to log in if their external IDP(s) of choice are suddenly defunct, but we've gotten by for nearly a decade relying on external IDPs for our services (a mix of Launchpad/UbuntuOne SSO and OpenStackID), and would prefer not to incur the operational and legal overhead of securing a database full of user credentials. We expect to recommend to our users that they affiliate more than one external IDP with their OpenDev identity, to ensure continuity. Failing that, we can work with them individually on a best-effort basis to reestablish access for their accounts when necessary. We also get pushback from users already to the effect, "Why do I need yet another account for your services? I should just be able to authenticate with my existing provider of choice."

Proposed Change

Infra will run a central authentication service, which we'll call OpenDevID. It will serve as the basis of SSO for all OpenDev services. OpenDevID will provide both OpenID and OAuth, acting as a single source of authentication for other OpenDev services.
The term OpenDevID and the associated opendevid.org domains are used here as placeholders. If possible, we should attempt to come up with a catchier name, one which is ideally also less likely to cause confusion with OpenStackID.

OpenDevID will be a pure federated system and not a primary source of identity management itself. That means that a user will log into OpenDevID using another credential of their choice: raw OpenID, GitHub, Google, Twitter, Ubuntu, OpenStackID, whatever. One will also be able to associate as many other systems as one desires. This will allow a user to log in to the OpenDevID service using an account, such as Launchpad/UbuntuOne, then associate accounts from other systems, such as OpenStackID, as desired.

Over the years we've evaluated a number of options, some promising at the time but later left fallow or discovered to be harboring previously unforeseen impediments for our use case. The greatest challenge across these has been support for Launchpad/UbuntuOne, due to its reliance on OpenID v1, which has support in basically none of the options still available to us. This specification covers deploying Keycloak for our OpenDevID, with SimpleSAMLphp as a shim for handling legacy Launchpad/UbuntuOne integration (at least temporarily, for purposes of transition) unless Keycloak gains upstream support for OpenID v1.

Alternatives

Some of the options we've explored over the years are enumerated here.

Keystone

Keystone is working on becoming a suitable standalone identity broker, but this is still in progress and likely won't support OpenID v1.

Extend an Existing Solution

Add/write a Launchpad/UbuntuOne-specific OpenID driver for the broker of our choice, perhaps Dex. Preseed its database with the OpenID mappings we already have, and generate the appropriate reverse mapping to put into Gerrit. Then configure the broker to allow people to add other IDP identities to their account, including OpenStackID, and tell people they have six months to associate another IDP with their OpenDevID account before we shut off Launchpad/UbuntuOne as a viable IDP, so that we're not stuck maintaining it forever. This has the benefit of being all within our control, but also involves us writing a chunk of code which may or may not be in a language we find interesting. It also requires updating Gerrit et al to authenticate with the broker over something other than OpenID v1, but this is something we should be doing regardless once we're off of Launchpad/UbuntuOne. OpenID v1 is pretty dead these days. For example, if we go with Dex, there is a Dex-specific plugin for Gerrit already; the OAuth2 plugin for Gerrit also has extensive support for many identity providers. A challenge with the CoreOS/golang broker was that adding new providers was not too bad, but there weren't generic providers other than the openid-connect broker. If we're adding OpenID v1 support along with a provider, the amount of effort is higher. That said, the fact that there are also provider-specific drivers means we could just take the easy route and implement a Launchpad/UbuntuOne driver and hardcode things. OpenID itself is pretty simple, so maybe it wouldn't be that terrible. So far the most promising suggestion for the general OpenID v1 problem has been to use SimpleSAMLphp to make an OpenID v1 shim (we have contributors who have already done basically that in the past).

OpenStackID as a Proxy

Work with the OpenStackID maintainers to add the ability to associate Launchpad/UbuntuOne IDs with OpenStackIDs.
Have OpenStackID require that a Launchpad/UbuntuOne ID be associated with an OpenStackID account if the referrer is opendevid.org. Then write a tiny service that takes an OpenStackID+Launchpad/UbuntuOne ID pair as input and writes that info to the Gerrit and StoryBoard databases. Add code to OpenStackID which hits that service when someone logs in from opendevid.org, so that any time someone identifies we collect the mapping and update the user account. After some time, allow other IDPs than OpenStackID. We won't need to switch off OpenID v1 in Gerrit if we go this route, but we may still want to. This is super hacky, and puts OpenStackID in the critical path for a period of time, but has the benefit of the development work being shared with the OpenStackID maintainers (other than the tiny little DB update service). The mapping work for OpenStackID to Launchpad/UbuntuOne may already be something the OpenStackID maintainers were looking at, as one of the things they want out of this is to be able to make that association, and they initially thought OpenStackID would do it. If this solution seems appealing, we should ask them for more info just to be sure this isn't already accounted for.

Map Identities via ETL

Do a behind-the-scenes OpenID mapping exchange with OpenStackID based on user E-mail. Pre-generate a set of OpenStackID identities for each account based on E-mail address. Put those accounts into the backend DB of opendevid.org. Go ahead and add other IDPs. If someone logs in from one of the other IDPs and it comes back with a known E-mail address we have associated with an OpenStackID, make them log into OpenStackID too, proving they are that person, which will then just add the association. If they log in with not-OpenStackID, it'll just create a new account. In this case we would map Launchpad/UbuntuOne to OpenStackID at a point in time, then rely on the OpenStackID E-mail matching any other accounts from that point forward. This would work with existing brokers because we don't need OpenID v1, and it allows us to just do a hard cutover using, for example, the generic openid-connect support in Dex. The biggest question here is: do we believe that E-mail address matching for the initial mapping will be good enough that follow-up support issues will be reasonable to handle? This has the benefit of not needing to write any Launchpad/UbuntuOne OpenID support code, but has the drawback that the initial database mapping is potentially incomplete/inexact, so there might be rectifications that need to be done.

Help Improve Launchpad/UbuntuOne

Development seems to have stalled years ago, so we would likely be left carrying a fork, best case. Also, while technically open source, running our own rebranded version of this would be next to impossible. On top of that, it's a source of truth for identity, and likely unsuitable as an IDP broker, so this still sticks us with maintaining a database of user credentials and doesn't allow users to bring their own external identities either.

Reuse or Rebrand OpenStackID

We're really looking for a service which delegates to other identity providers, but OpenStackID is itself a source of truth. Not relying solely on OpenStackID actually puts us in a good position to get OpenStackID adoption faster than we would otherwise, since there will be a legitimate path from Launchpad/UbuntuOne to OpenStackID. There is no intent to prevent people from pushing changes for review without first associating an OpenStackID.
The purpose of this is to better support non-OpenStack projects, and requiring an "OpenStack ID" sounds pretty antithetical to that.

Independent IDPs Across Services

We can't just have a place for people to write down all their accounts that may be used across services and then let services use arbitrary OpenID providers. Not all of our services support the same protocols, nor do all IDPs, so the intersection of these may be empty. We'd rather not require users to have more than one identity and have to remember which one to use for which service. Note: this is essentially the status quo; it's the situation we're already in today, with some services using Launchpad/UbuntuOne and others using OpenStackID.

Implementation

Bootstrapping

We currently have a mapping of Launchpad/UbuntuOne OpenID accounts with E-mails used for Gerrit. We can use this to pre-populate the OpenDevID database so that all of our existing users will pre-exist in the system. We can then ask people to log in and associate their OpenStackID account.

Options

Five options for implementation have been evaluated:

- Write our own
- Ipsilon
- Dex
- Hydra
- Keycloak (current consensus choice)

Write our own

We probably shouldn't write our own from scratch, unless no other options are remotely viable. We have enough bespoke software we're already maintaining, and this is also how OpenStackID came into the picture.

Ipsilon

Ipsilon is awesome, but seems to be more focused on having a user database (be that FreeIPA, OpenLDAP, et cetera), so it can't serve as a mere aggregation broker for external IDPs. Otherwise it fits the majority of our criteria. We tried our own proof-of-concept deployment around five years ago, but it has come a long way since then.

Dex

Dex, from CoreOS, seems like a decent fit, in that it's designed to provide OAuth2 and can be set up to delegate auth to another configured provider. The downside to Dex is that it's a bit more complex and is focused on being run in Kubernetes, so we might have to tease apart some things to run it outside Kubernetes. There is also already Gerrit integration written. It doesn't seem to support OpenID v1 (at least not after a quick skim of the docs), so we would have to add that (and hope upstream takes it) in order to continue supporting Launchpad/UbuntuOne.

Hydra

Hydra seems simpler and not Kubernetes-focused, but might involve us needing to write some Java code for Gerrit integration. This too doesn't seem to support OpenID v1, so we'd have to implement it and hope upstream is receptive to those changes as part of supporting Launchpad/UbuntuOne. Gluu is another option we looked into, though it would need OpenID support added.

Keycloak

Keycloak looks promising, and directly supports identity brokering. It does not support OpenID for authentication or brokering, so we would potentially need to add this. It's also a large Java application, but then again we have a fair amount of experience running these lately anyway (Gerrit, Zanata, Zookeeper…). A solution discussed at the virtual PTG in April 2020 was to use SimpleSAMLphp as a go-between identity broker to provide a SAML authentication proxy for Launchpad/UbuntuOne. Based on the above evaluation, Keycloak is the consensus front-runner.

Gerrit Integration

We currently use OpenID 1.0 in SSO mode for Gerrit. There is an OAuth2 provider plugin for Gerrit. One of the options it already supports is CoreOS Dex, so if we go that route we should be able to integrate with Gerrit. If we go with Keycloak, Gluu, Ipsilon or Hydra, we might need to work with the gerrit-oauth-provider maintainers to make a more generic OAuth2 driver or something, though this probably works already or would at worst be straightforward to add. Gerrit's OAuth plugin supports Keycloak starting with version 2.14.
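As a purely hypothetical sketch of what that integration could look like on the Gerrit side (option names should be verified against the gerrit-oauth-provider documentation, and the hostname, realm and secret below are placeholders, not values from this spec):

# gerrit.config
[auth]
    type = OAUTH

# plugin section for the Keycloak provider (client-secret normally in secure.config)
[plugin "gerrit-oauth-provider-keycloak-oauth"]
    root-url = https://keycloak.opendev.example
    realm = opendev
    client-id = gerrit
    client-secret = <generated-in-keycloak>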
It's worth noting that StoryBoard already contains OIDC support (and has been tested with the OIDC implementation in OpenStackID), so this is less of a concern for it.

Gerrit Topic

Use Gerrit topic "central-auth" for all patches related to this spec.

git-review -t central-auth

Work Items

- Stand up a Keycloak proof-of-concept deployment
- Work out a UbuntuOne auth proxy for Keycloak using SimpleSAMLphp
- Test the Gerrit OAuth driver's Keycloak support
- Test StoryBoard's OIDC support with the Keycloak PoC
- Write up a migration workflow plan for evaluation
- TODO: additional steps once the above has been knocked out and we know more

Servers

At least the ID broker service will need to run somewhere and, due to its sensitivity, almost certainly on its own server isolated from everything else. Basically any of our Web-based services which have public authentication enabled will also be affected as we switch them to use this as their single source of identity.

DNS Entries

Based on prior discussions, we should give the identity broker service a distinct domain rather than putting it in a subdomain of opendev.org, to minimize risk from cross-site request forgery and similar sorts of attacks which are easier to pull off between sites in the same parent domain.

Documentation

At a minimum, the new account creation documentation in the OpenDev Manual will need updating for our new workflow.

Security

This is essentially the core of our security for user accounts on public Web services we operate, so pretty much any security consideration you can think of applies here.
https://docs.opendev.org/opendev/infra-specs/latest/specs/central-auth.html
Conditional styles give you the freedom to change a pivot table's appearance to make the data more readable. For example, you can highlight values of one field that are greater than 100 in red, and values of another field that meet some other condition in blue; the difference between data in the pivot table will then be clear at a glance. Besides setting a background color for some cells, you can select an image as a background, change a font and its color, specify a condition that applies only to some values in a required field, and so on.

Let's examine the benefits of conditional styles with the following example: there is a list of films and their rental payments grouped by month and year. It is required to quickly identify which films bring the expected revenue (the sum of their rental payments falls within the planned monthly range of 20-40) and which films are in less demand (the sum of their rental payments is less than 10).

Conditional styles in use:

In the Appearance section, click the BackColor field and select LightBlue. Now the first condition is ready. Click the Add Condition icon to create another condition for the films whose sum of payment amounts is less than 10.
https://docs.devart.com/studio-for-mysql/data-analysis/applying-conditional-styles.html
The SQL History form provides an enhanced display of the history information. As you execute SQL statements from the SQL Editor or run functions from the Data Source Explorer, an entry is added to the SQL History list. The history displays extensive information about each SQL execution. It includes execution elapsed time, DBMS time, fetch time, database server, row count, parameter display for macros and stored procedures, SQL statement, and SQL statement type.

The SQL History form displays the data in a grid format that lets you select rows or cells. It provides options to copy cells or rows, sort columns, delete rows, edit the note or SQL for a history entry, filter columns, search for result history data, format the display, and re-execute SQL statements. The SQL History uses an embedded Derby database to manage the SQL History entries. If you close the form and need to re-open it, go to.

You can add notes manually by clicking on the Note column value. You can also choose to be prompted to add notes when you execute the SQL. Set the Prompt for notes option using the SQL Handling preferences.

- Columns are movable and re-sizable. Many of the columns are grouped together under a collapsible header.
- Rows can be selected by clicking in the numbered row header.
- Column order, collapsed columns, filter, sort, and formatting are preserved from session to session.
- If a filter is in effect, the result of a newly executed SQL statement is inserted as the top row, regardless of whether it meets the filter criterion. If a sort is in effect, the new entry is inserted in the sorted order.
- The Result column contains a summary of the executed SQL operation. To read the entire contents in a column cell, hover the mouse pointer over the cell. Error results show as red text in the resulting tool tip display. The following examples are samples of the Result summary.
- Example 1: In this example a single statement was executed successfully.
  Executed as Single statement. Elapsed time = 00:00:00.108
  STATEMENT 1: Select Statement completed. 26 rows returned.
- Example 2: In this example the executed statement failed. This statement also has this icon associated with it.
  Executed as Single statement. Failed [3807 : 42S02] Object 'bogus' does not exist. Elapsed time = 00:00:00.145
  STATEMENT 1: Select Statement failed.
- Example 3: In this example the executed statement was canceled. This statement also has this icon associated with it.
  Executed as Single statement. Canceled. Elapsed time = 00:00:00.000
  STATEMENT 1: Select Statement canceled.
https://docs.teradata.com/r/vqSvZtr8m~hpTpFE6qebdQ/CDiL5jRNmEKN3NV0jRKG8g
Pencil Tool Properties

The Pencil tool creates central lines as you draw on vector layers, adding each stroke on top of the previous ones. When you select the Pencil tool, the Tool Properties view displays the different Pencil modes that control how the pencil line is drawn. For tasks related to this tool, see About the Pencil Tool and About Pencil Modes.

- In the Stage view, select a layer.
- In the Tools toolbar, click the Pencil button.

The tool's properties are displayed in the Tool Properties view.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/reference/tool-properties/pencil-tool-properties.html
Displaying Your Events Calendar

[eo_fullcalendar att1="val1" att2="val2"]

The optional attributes ('att1', 'att2') allow you to customise the events calendar, for instance to add buttons or filters, customise the date/time format or the 'view' of the calendar. See the section below for the supported attributes and their values.

Available Options

Below is a list of the options (called 'attributes') available to you when displaying your calendar. All are optional and you can display a calendar without specifying any of these attributes.

Default View

- defaultView – Selects the default view of the calendar when it is loaded. You can add buttons to allow the user to change the view in the 'header' attributes below. (Default: 'month'.) One of 'month', 'agendaWeek', 'agendaDay', 'basicWeek', 'basicDay', 'listDay', 'listWeek', 'listMonth'.
- year – Specify the year (2014, 2015, etc.) the calendar should open on. By default, it is today's year.
- month – Specify the month by number (1 for January, …, 12 for December) the calendar should open on. By default, it is today's month.
- date – Specify the date (1-31) the calendar should open on. By default, it is today's date.

Calendar Header

- headerLeft, headerCenter, headerRight – These attributes determine what appears at the top of the calendar, on the left, centre and right, respectively: including navigation buttons, buttons to switch the calendar view, or drop-down filters. Any collection of:
  - 'title' – displays the current month/week/day
  - the views ('month', 'agendaWeek', 'agendaDay', 'basicWeek', 'basicDay') – buttons to toggle between the views
  - 'category' – adds a dropdown to filter by category
  - 'venue' – adds a dropdown to filter by venue
  - 'country' – adds a dropdown to filter by country (Pro only)
  - 'state' – adds a dropdown to filter by state (Pro only)
  - 'city' – adds a dropdown to filter by city (Pro only)
  - 'next' and 'prev' – buttons to navigate the calendar
  - 'today' – button to jump to today's date. In month/week view this will jump to the appropriate month/week.
  - 'goto' – button to open a datepicker to jump to a particular date. In month/week view this will jump to the appropriate month/week.

  Separate these by either commas or spaces (to place a gap in between the buttons). Use an empty string ('') to leave that part of the header empty. Defaults: headerLeft: 'title', headerCenter: '', headerRight: 'prev,next today'.

Calendar Filters

These attributes allow you to restrict events on the calendar to specified categories, venues or events booked by the current user.

- category – A comma-delimited list of category slugs to show on the calendar. Leave blank for all.
- venue – A comma-delimited list of venue slugs to show on the calendar. Leave blank for all.
- users_events – ('true' or 'false'). If true, display only events which the current user is attending.

Calendar Appearance

These attributes allow you to alter the appearance of the calendar:

- theme – Whether to use the jQuery UI theme. Setting this to false makes styling the calendar easier. Defaults to true.
- tooltip – Whether to display a tooltip. True/false. Defaults to true. Content is filtered by eventorganiser_event_tooltip.
- weekends (default: true) – Whether to include weekends in the calendar.
- mintime (default: '') – Determines the first hour/time that will be displayed, even when the scrollbars have been scrolled all the way up.
- maxtime (default: 24) – Determines the last hour/time that will be displayed, even when the scrollbars have been scrolled all the way down.
- alldayslot (default: true) – Determines if the "all-day" slot is displayed at the top of the calendar.
- alldaytext (default: __('All Day','eventorganiser')) – Sets the text for the all-day slot.

Date & Time Format

Tip: The format argument supports a few special operators:

- {...} switches to formatting the 2nd date.
- ((...)) only displays the enclosed format if the current date is different from the alternate date in the same regards.

For example, the default value of titleformatweek is M j(( Y)){ '—'(( M)) j Y} and produces the following dates: Dec 30 2013 — Jan 5 2014, Jan 6 — 12 2014.

These attributes allow you to change the date and title formatting of the calendar:

- timeformat – The format in which the time appears on the events. Specify in PHP format. Default: H:i.
- axisformat (default: ga) – Time format displayed on the vertical axis of the agenda views.
- titleformatday (string) – Date format (PHP) for the title in day view. Default: l, M j, Y.
- titleformatweek (string) – Date format (PHP) for the title in week view. Default: M j(( Y)){ '—'(( M)) j Y}.
- titleformatmonth (string) – Date format (PHP) for the title in month view. Default: F Y.
- columnformatmonth (default: D) – Determines the format of the date in the column headers in month view.
- columnformatweek (default: D n/j) – Determines the format of the date in the column headers in week view.
- columnformatday (default: l n/j) – Determines the format of the date in the column headers in day view.

Examples

To display a calendar with the following options:

- Title in the centre
- Previous/Next navigation buttons, and a 'today' button next to them, but with a gap between
- Buttons to switch the calendar view between month and agendaWeek

use the shortcode as follows:

[eo_fullcalendar headerLeft='prev,next today' headerCenter='title' headerRight='month,agendaWeek']
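As a further sketch (the category slug 'music' here is just a hypothetical example; the attributes used are the ones documented above), a week-view calendar restricted to one category with tooltips disabled might look like:

[eo_fullcalendar defaultView='agendaWeek' category='music' tooltip='false']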
http://docs.wp-event-organiser.com/shortcodes/calendar/
Generates a table with exactly one column, with an empty header, followed by the specified number of rows, where the first row has the value 1 and each row after is assigned an incrementing value.

TableSequence([Row Count])

Where:

Row Count is the number of data rows to be created in the table.

DriveWorks will return the table as an array.
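As a worked illustration based on the definition above (the row count of 4 is an arbitrary example), a rule of the form TableSequence(4) would produce a single-column table equivalent to:

(empty header)
1
2
3
4

That is, one header row with a blank column name, followed by four data rows numbered 1 through 4.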
https://docs.driveworkspro.com/Topic/TableSequence
Spooler Commands

The list of supported commands has been mainly derived from the Pick/jBase/Reality platforms, but these commands can appear to some extent on other platforms. The following is a summary of the commands available from the command line (and also embedded into applications). Most (but not all) of the commands take the format

USER:SP-XXXXXX [ ARG1 [ ARG2 ... ]]

and if the arguments are not entered on the command line, they will be prompted for. The following example shows an invocation of SP-DEVICE with all the arguments being specified on the command line:

USER:SP-DEVICE STANDARD "|PRN|HPLJ80"

The following example shows two invocations of SP-DEVICE issuing prompts for unspecified arguments:

USER:SP-DEVICE
FORM-QUEUE DEVICE: STANDARD HPLJ80
USER:SP-DEVICE
FORM-QUEUE DEVICE: STANDARD
DEVICE: HPLJ80
USER:

The available spooler commands and their implementation status follow.

SETPTR

The SETPTR command lists and sets the current printer settings.

SETPTR [chan,width,depth,topmargin,botmargin,mode,option[,option]]

SETPTR with no arguments lists the current printer settings. To change one or more printer settings, specify the desired positional argument(s) with the appropriate leading commas. After specifying these settings, SETPTR prompts you to confirm them with a Y or N, unless you specified the BRIEF option. If you specify an invalid option, SETPTR informs you with a warning, then sets all of the valid specified arguments.

In Caché MultiValue, unspecified values for width, depth, topmargin, botmargin, or mode default to the systemwide defaults. This behavior for unspecified values is emulation-dependent. For example, SETPTR 0,,,,,3 reverts these four settings to the defaults in Caché MultiValue. In other MultiValue emulations, these four settings retain their previously set values. This behavior can be overridden for any emulation by specifying the DEFAULT or NODEFAULT option keyword.

If the &HOLD& file does not exist, SETPTR with mode=3 creates &HOLD& as a directory-type file in the current working directory (identified in the @PATH variable), for example, Mgr/namespace. Normally &HOLD& should be a directory-type file, but you can pre-create it as an ANODE-type file if that is preferred. To do this, you can use CREATE-FILE &HOLD& ANODE to create &HOLD& as a MultiValue global. In this case, a subsequent SETPTR with mode=3 writes to this existing &HOLD& ANODE global file. You must specify ANODE; by default CREATE-FILE creates an INODE file. An INODE file cannot be used by SETPTR.

BRIEF means that changes to the current printer settings will be made without displaying the changes and prompting you to confirm them. Thus, SETPTR ,,64 prompts you for confirmation before it sets the page depth to 64 lines; SETPTR ,,64,,,,BRIEF sets the page depth to 64 lines without prompting for confirmation.

BANNER name: the item ID is always name. Each subsequent job overwrites the previous version of name.

BANNER NEXT: the item ID is an incremented number, P#0000_nnnn, where nnnn is incremented for each entry to &HOLD&. nnnn increments from 0001 through 9999.

BANNER NEXT name: the item ID is an incremented number, name_nnnn, where nnnn is incremented for each entry to &HOLD&. nnnn increments from 0001 through 9999.

BANNER UNIQUE: the item ID is an incremented number, P#0000_nnnn, where nnnn is incremented each time the SETPTR command is executed. nnnn increments from 0001 through 9999. In UniData emulation, BANNER UNIQUE is a synonym for BANNER NEXT.
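As a hedged usage sketch combining the pieces documented above (the queue settings and the banner name REPORT are arbitrary example values, not defaults):

SETPTR 0,132,60,3,3,3,BANNER NEXT REPORT,BRIEF

This would set print channel 0 to a 132-column width and 60-line depth with 3-line top and bottom margins, direct output to the &HOLD& file (mode 3) with item IDs of the form REPORT_nnnn, and skip the confirmation prompt (BRIEF).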
You can establish the SETPTR settings for print channel 0 as the systemwide default settings for print channel 0 by issuing the SETPTR.DEFAULT command.

SETPTR.DEFAULT

SETPTR.DEFAULT [ (D) ]

The SETPTR.DEFAULT command takes the current print channel 0 settings and establishes them as the print channel 0 default settings. SETPTR.DEFAULT must be run from the SYSPROG account. Before issuing SETPTR.DEFAULT you define the print channel 0 settings using SETPTR. SETPTR.DEFAULT makes these settings the print channel 0 defaults for all future SETPTR commands systemwide. SETPTR.DEFAULT has no effect on print channels other than print channel 0. These SETPTR.DEFAULT settings remain in effect across system reboots until you issue a SETPTR.DEFAULT (D) command. The (D) option reverts all settings to the initial printer default settings.

SP-ASSIGN

SP-ASSIGN ?
SP-ASSIGN {formspecs} {options}

This command assigns a printer form queue and printer options to a printer channel. It can also be used to clear existing printer channel assignments. SP-ASSIGN is one of the commands that does not fit the usual command format. The ? argument displays the current spooler options.

The formspecs argument specifies the assignment (or deassignment) of a form queue. This form queue is created using SP-CREATE. In Caché MultiValue, and several MultiValue emulations, the specified form queue must already exist. In D3, MVBase, R83, POWER95, and Ultimate emulations, SP-ASSIGN automatically creates the named form queue if it doesn't exist, and then assigns it. This behavior can be changed using SP-CONDUCT bit position 8192.

The formspecs are as follows. Where specified, the leading equal sign is mandatory; a space after the equal sign is optional:

=formidentifier – Specifies a form queue identifier for print channel 0. formidentifier can be either a name or a number, as displayed by the LISTPTR command. By default, print channel 0 is named STANDARD.

nn=formidentifier – Specifies a form queue identifier for print channel nn. formidentifier can be either a name or a number, as displayed by the LISTPTR command.

nn= – Clears the existing form queue identifier from print channel nn.

Fformname – Specifies a form queue name for print channel 0. By default, print channel 0 is named STANDARD.

Fformnum – Specifies a form queue number for print channel 0. By default, print channel 0 is assigned form queue 0.

Qformname – Specifies a form queue name for print channel 0. By default, print channel 0 is named STANDARD.

printnum – An integer that specifies the number of copies to print. The default is 1. When no other formspecs item is specified, printnum changes the number of copies for print channel 0. When specified with nn=formname, printnum changes the number of copies for the specified print channel. The print channel assignment and the number of copies must be separated by a blank space.

In D3 and related emulations, you can use SP-ASSIGN to create a new form queue with a user-chosen number. For example, SP-ASSIGN F3 will create a queue numbered 3 and named F3 if queue #3 doesn't already exist. If you do that, you still need to either use SP-DEVICE to set the queue's despooler, or use other commands to move jobs from the queue to other queues for printing.

You can specify one or more options values in any order (for example, (AMU)). The following options values are supported:

A – Auxiliary printing
F – Create form queue
H – Hold job after being printed
K – Kill (clear) the form queue assignments for all print channels. Parenthesis required.
M – Suppress the display of the "Entry #" message when a hold job is created
O – Keeps the print job open over multiple programs
Q – Create form queue
S – Suppress automatic printing when job created
U – Unprotect the spool job

For example:

USER:SP-ASSIGN =MYFORMQUEUE 2 (HS

assigns print channel 0 to the form queue MYFORMQUEUE, with 2 copies printed per print job. The print job will be held (H) and printing suppressed (S). Assigning options values deletes any prior options values. The M option is initially provided by default. When M is not specified, the form queue informs by default. To view the options you have assigned, use SP.LOOK or SP.ASSIGN ?. These two commands display other additional information and display the copies and options values in different formats.

SP-AUX

SP-AUX JobNumber[-JobNumber] [JobNumber[-JobNumber]] [(S)]…

This command takes a number of print jobs and sends them to the auxiliary printer attached to a user terminal. The terminal must have the codes defined to control an auxiliary printer. Assuming these codes exist, the SP-AUX command turns on auxiliary printing attached to the terminal, transmits the specified jobs, and then turns off the auxiliary printer. The jobs to be transmitted can be any number of jobs, a range of jobs such as 25-28, or any combination (97 100 104-109). The (S) option is the "silent" option. When this is set, no notification will be sent to the terminal that auxiliary jobs are being printed.

SP-CLEAR

SP-CLEAR [form-queue]

This command clears all the jobs from the specified form queue.

SP-CLOSE

SP-CLOSE {(Rnnn)}

SP-CLOSE closes print jobs that are currently open due to the KEEP option specified in SETPTR. Without a job number, this command will close ALL print jobs that are currently open for the current user. The use of the (Rnnn) option means only print job nnn will be closed.

SP-CONDUCT

SP-CONDUCT [(V)]
SP-CONDUCT ?
SP-CONDUCT nnn [nnn [...]] [(V)]

The SP-CONDUCT command allows you to control the conduct of the spooler to accurately reflect your own application's needs. SP-CONDUCT settings will override default settings established for the current emulation. Users who wish this to happen in every session should add SP-CONDUCT to the login command. SP-CONDUCT has three syntactical forms:

SP-CONDUCT with no arguments (except the optional (V) verbose option) returns the current settings as the integer total of the bit positions. The verbose option lists the component bit positions that make up this integer total.

SP-CONDUCT ? lists all of the available bit integers and their symbolic names.

SP-CONDUCT nnn allows you to set bit positions for various spooler behaviors. You can specify a single nnn value or several nnn values separated by blank spaces. There are six ways to specify spooler behavior settings:

nnn: an integer that sets the specified bit positions. Any prior bit settings are eliminated. For example, 129 sets bits 1 and 128.

+nnn: a signed integer that sets a single specified bit position. All other bit positions remain unchanged. For example, +256 sets the 256 bit; it has no effect on other bit settings.

-nnn: a signed integer that resets (clears) a single specified bit position. All other bit positions remain unchanged. For example, –256 clears the 256 bit, setting it to 0; it has no effect on other bit settings.

+name: a keyword that sets a single bit position by symbolic name. All other bit positions remain unchanged. For example, +NO_INHERIT sets the 32 bit (see table below); it has no effect on other bit settings.
-name: a keyword that resets (clears) a single bit position by symbolic name. All other bit positions remain unchanged. For example, –NO_INHERIT clears the 32 bit, setting it to 0 (see table below); it has no effect on other bit settings.

DEFAULT: a keyword that resets all bit positions to the default settings for the current emulation.

The following are the available bit positions and their symbolic names:

These bit position values are additive. The default setting for each emulation is:

SP-CONTROL

SP-CONTROL form-queue [ff,nl[,lff]]

This command is used to define what control characters are used by the despool process when outputting a print job to a printer. By default, a new page is defined by an ASCII 12 form feed character, and a new line is defined by a two-character sequence: an ASCII 13 (carriage return) and an ASCII 10 (line feed). These defaults are not appropriate for all printers. A new page on some printers requires both a form feed and a carriage return character (ASCII 12 and 13). A new line on some printers requires just a line feed character (ASCII 10).

Define the new page sequence using the ff parameter. Define the new line sequence using the nl parameter. The available values are: LF = line feed (ASCII 10); FF = form feed (ASCII 12); CR = carriage return (ASCII 13); nnn = a single character, specified by the corresponding ASCII decimal integer value. You can specify multiple characters by using the underscore symbol, for example, CR_LF. To leave the existing value unchanged, omit the parameter, leaving the placeholder comma when necessary. For example, SP-CONTROL HP7210 ,CR_LF. To revert a value to the system default, use the value DE. For example, SP-CONTROL HP7210 DE,DE.

On UNIX® platforms, SP-CONTROL converts a CR+LF sequence into a LF sequence. This is intended to aid developers programming applications that write to both Windows systems and UNIX systems: Windows systems require a CR+LF sequence for a new line and UNIX systems require simply a LF sequence. Therefore, if you use a CR+LF sequence in a SP-CONTROL command for a form queue that outputs to a UNIX file, Caché converts this sequence to LF only.

The optional lff parameter specifies the form feed sequence for any leading form feeds at the start of a line. If lff is defined, the ff parameter is ignored, and only form feeds at the start of a line are translated. A form feed found in the middle of a line is not translated. This parameter is used to support binary data in a print job. Once lff is set, you can print binary data as follows:

PRINT CHAR(255):"BINARY":
PRINT binarydata
PRINT "Hello World"

This program translates the form feed in the first line. The binarydata value is then output without translation. This is followed by the "Hello World" character string without any intervening new line sequences.

SP-COPY

Copies one or more spooler jobs. There are two syntactical forms: the first copies a spooler job to another spooler job; the second copies a spooler job to a MultiValue file defined in the VOC.

SP-COPY jn1 {jn2 {jn3 ...}} {(DOV)}
TO: {jna {jnb {jnc ...}}}

With this command format, one or more spooler jobs (jn_) are copied to new spooler jobs on the spooler. Multiple job numbers are separated by spaces. If an alternate job number is specified in the TO: prompt, then the job is copied to this print job number. If no alternate job number is specified, the next available job number is automatically used.
SP-COPY jn1 {jn2 {jn3 ...}} {(DOV)}
TO: ({DICT} fna) {itemidA {itemidB ...}}

In this form of the command, where the output file name is given by (fn_) or (DICT fn_), the jobs are copied to the MultiValue file. If an alternate item ID is specified in the TO: prompt, then the job is copied to this item ID. If no alternate item ID is specified, the output item ID becomes the job number. The MultiValue file can be either a Caché global or a directory as defined by your computer's file system. Note that a Caché global (INODE file) has a maximum size limit of roughly 3.5 million characters; output beyond that limit is truncated and a warning message is issued. To copy spooler jobs larger than that, you can use CREATE-FILE to create a target file of type ANODE; however, an ANODE file larger than 3.5 million characters encounters the same maximum size limit when being accessed or edited.

The optional DOV letter codes perform the following operations: the (D) option deletes the original job once the copy has completed successfully; the (O) option overwrites any existing target output (without this option, if the target already exists an error is reported); the (V) option reports in Verbose mode, giving additional information about what is copied and what is deleted.

The following example copies spooler jobs number 4 and 5 to other spooler jobs. Because no target job numbers are supplied, they are automatically allocated:

USER:SP-COPY 4 5
TO:
Job 4 copied to job 21
Job 5 copied to job 22
USER:

The following example copies spooler jobs numbered 4 and 5 to specified spooler jobs numbered 1001 and 1011. The D option means that once the copy executes successfully, jobs 4 and 5 are deleted:

USER:SP-COPY 4 5 (DV
TO:1001 1011
Job 4 copied to job 1001
Job number 4 deleted.
Job 5 copied to job 1011
Job number 5 deleted.
USER:

The following example copies spooler jobs numbered 4 and 5 to the Windows directory C:\TMP. Because the program specifies no item IDs, the file names default to the job numbers:

USER:CREATE-FILE C_TMP DIR C:\TMP
[421] DICT for file 'C_TMP' created. Type = INODE
[429] Default Data Section of 'C_TMP' set to use directory 'C:\TMP'
[437] Added default record '@ID' to 'DICT C_TMP'.
[417] CreateFile Completed.
USER:SP.COPY 4 5 (VO
TO:(C_TMP
Job 4 copied to OS file C:\TMP/4
Job 5 copied to OS file C:\TMP/5
USER:

SP-COPIES

SP-COPIES [job [copies] ]

This command changes the number of copies to be printed for the specified job.

SP-CREATE

SP-CREATE [form-queue [device-type [device-name] ] ]

SP-CREATE creates a form queue and assigns a device to it. (SP-DEVICE is used to assign a device to an existing form queue.) All three arguments are optional. If you omit an argument, SP-CREATE prompts you for an argument value. The supported device-type values are: CACHE, DEBUG, GROUP, LPTR, NULL, PORT, PROG, TAPE, and UNIX®. A device-type of DEBUG directs output to the user terminal (device 0), with a 0.2 second delay between lines. A device-type of GROUP creates a form queue group named form-queue. This form queue group contains those existing form queues that you specify as a series of device-name values (separated by blank spaces). You can specify a device-name of "", then later assign existing form queues to the form queue group using SP-DEVICE. A device-name must exist and be known to the system. When device-name is specified as a command argument, it must be specified as a quoted string. If you specify no device values at the prompts and press return, SP-CREATE creates an empty form queue.
SP-CREATE does not start printing on that form queue. SP-START is used for this purpose.

SP-DELETE

SP-DELETE [joblist]

This command allows you to delete one or more print jobs. The optional joblist argument accepts a list of jobs to delete separated by spaces, as shown in the following example: SP-DELETE 66 68 71. You can also delete a range of print jobs, as shown in the following example: SP-DELETE 66-70. If you omit the joblist argument, SP-DELETE prompts you for a print job list.

SP-DEVICE

SP-DEVICE [form-queue [device-type [device-name] ] ]

This command allows you to:

- Change the device for an existing form queue. SP-DEVICE assigns the specified printer device to a spooler form queue. It assigns the form queue a form number (FQ) in the global ^%MV.SPOOL.
- Change the form queue list for an existing form queue group by specifying device-type GROUP. SP-DEVICE can assign multiple form queues to a form queue group. By default, SP-DEVICE overwrites any existing form queues in the form queue group. The (A letter code option causes it to append the specified form queues to the form queue group, rather than replacing them. The (R letter code option causes it to remove the specified form queues from the form queue group.

(SP-CREATE is used to create a form queue and assign a device to it.) All three arguments are optional. If you omit an argument, SP-DEVICE prompts you for the argument value. The form-queue default is STANDARD. The supported device-type values are: CACHE, DEBUG, GROUP, LPTR, NULL, PORT, PROG, TAPE, and UNIX®. A device-type of DEBUG directs output to the user terminal (device 0), with a 0.2 second delay between lines. The device-name must exist and be known to the system. The device-name of a printer must be prefaced by |PRN|. When device-name is specified as a command argument, it must be specified as a quoted string. Any change in device name assignment takes effect when the despool process starts writing a new print job; it will not affect a print job midway through.

The following example shows several invocations of SP-DEVICE:

USER:SP-DEVICE STANDARD "|PRN|HPLJ80"
USER:SP-DEVICE CANON CACHE "|PRN|Canon MP530:(/WRITE:/APPEND:/DATATYPE="TEXT"):0"
USER:SP-DEVICE HP20 CACHE "|PRN|\\MACHINE\HP32:wan:0"
USER:SP-DEVICE FILEOUT CACHE E:\DATA\FILES:wa

SP-DISPLAY

SP-DISPLAY job1/formqueue [job1/formqueue [...jobn/formqueue]]

This command provides detailed information on the specified jobs or form queues. SP-DISPLAY is a synonym for SP-VERBOSE.

SP-EDIT, SP.EDIT

SP-EDIT [job[-job]]
SP.EDIT [job[-job]] {L} {MD} {MS}

This command allows an administrator to manipulate spooler jobs. The SP-EDIT form is similar to the jBase, Pick, and Reality implementations. The SP.EDIT form is similar to UniVerse.

The SP-EDIT command allows you to edit pending print jobs. It is called from the command line as SP-EDIT [JOB[-JOB]], which allows editing of the characteristics of a single job or a range of jobs. Once invoked, the administrator enters a series of commands which are applied to the identified jobs. The available commands are:

The SP.EDIT command allows you to perform simple administration on pending print jobs. It is called from the command line as SP.EDIT [JOB[-JOB]] {L} {MD} {MS}, which allows editing of the characteristics of a single job or a range of jobs. If no jobs are specified, the commands apply to ALL jobs.
The meaning of the command options are: If none of the options is specified on the command line, the characteristics of each selected job will be displayed in a form similar to ------- Details of Print Job # 45 in ^MV.SPOOL("45") ------- Form queue number : 00000 Form queue name : STANDARD Job status : CLOSED Time of last status change : Dec 08 2006 16:22:29 Number of lines in job : 6 Number of pages in job : 1 Time job created : Dec 08 2006 16:22:29 Time job closed : Dec 08 2006 16:22:29 Number copies to print : 1 Namespace of job creator : %SYS Account name of job creator : SYSPROG Username of job creator : Greg Port number of job creator : 5700 Despool page position : 0,0 Options : HOLD, INFORM, SKIP The administrator is then prompted for a series of commands drawn from the following list: SP-EJECT SP-EJECT [pages] This command creates a print job that begins with the specified number of blank pages. It spools the specified number of form feeds (page ejects). The optional pages argument must be an integer from 0 through 10. The default is 1. SP-FORM SP-FORM [old-form-queue [new-form-queue] ] Change the name of a form queue. The form queue number remains the same. SP-FQDELETE SP-FQDELETE [form-queue] Deletes a form queue and all the jobs on the queue. If there are any print jobs currently being printed, it will leave that print job as-is and not delete the form queue. If you use SP-FQDELETE to delete a form queue which was a member of a form queue group, the form queue is deleted from the form queue group. Therefore, if you re-create the same form queue, you will need to add it again to the form queue group. SP-GLOBAL SP-GLOBAL [global-name] [(S] SP-GLOBAL without an operand displays the name of the spooler table global for the current account. SP-GLOBAL with an operand changes the name of the spooler table global for the current account. By default, the name of the global (and therefore the spooler table) is ^%MV.SPOOL, which is a system-wide global. Thus by default, all users share the same spooler table. SP-GLOBAL changes the name of the global where output will be collected for the current account. This allows each account to maintain a separate global, or for multiple accounts to share a global. The global-name argument must be a syntactically valid Caché global variable name. Caché global variable names begin with the ^ character. SP-GLOBAL rejects a global name that fails global naming conventions with an appropriate error message. For further details on naming conventions for globals, refer to the “Variables” chapter of Using Caché ObjectScript. The global-name should be either a nonexistent global or an existing MultiValue spooler global. If global-name refers to an existing data global or a MultiValue file, SP-GLOBAL displays an appropriate error message, then prompts you before proceeding, as shown in the following examples: Existing global: USER:SP-GLOBAL ^MYGLOBAL Warning, the global ^MYGLOBAL already contains non-spool data. If you continue, this data will be lost. Do you wish to continue (Y/N) ? Existing MultiValue file: USER:CREATE-FILE MYSPOOLER USER:SP-GLOBAL ^MYSPOOLER Warning, the global ^MYSPOOLER is already allocated to the MV file MYSPOOLER. If you continue, this file will be deleted. Do you wish to continue (Y/N) ? You can use the (S letter code option to suppress this prompt and assign the specified global as the spooler table. The following example sets the Caché global ^SPOOLER in the namespace ADMIN as the spooler table for the current namespace GREG. 
Because Caché namespace names map to MultiValue account names, this means that all future users of account GREG will use SPOOLER in account ADMIN:

GREG:SP-GLOBAL ^|"ADMIN"|SPOOLER
Setting spooler global name to '^|"ADMIN"|SPOOLER'

The name of the global is part of the account metadata. It persists once set, so it remains set for all future logins for that user until explicitly changed.

SP-JOBS

SP-JOBS

The SP-JOBS command shows the status of print jobs queued to the spooler, and prompts you to enter a numeric action code to control a specified print job. It lists jobs in the sequence in which they were created, with the most recently created job shown first. SP-JOBS lists the job number, the queue name, the line number, the account name, the date and time created, the status, and the number of pages printed. Print jobs are assigned sequential integer numbers. Once a day Caché MultiValue resets the assignment sequence to 1, so that print job numbers can be reused. However, numbers already assigned to pending print jobs are skipped over. For example, if the job number sequence is reset when there are pending print jobs numbered 1, 2, and 4, new print jobs will be assigned job numbers 3, 5, and so forth. The following action codes allow manipulation of print job options and status.

SP-KILL

SP-KILL [form-queue]

This command kills printing of the current job on the form queue and then stops the despool process. The difference between this command and SP-STOP is that SP-STOP waits for the current job to finish printing; SP-KILL does not. You can specify * as the form queue name or number, in which case all running print despool processes are killed.

SP-LOOK

SP-LOOK

Displays the assigned print channels with their form queue numbers (Q#), form queue names, specifications, number of copies to print (P#), and their assignment options. SP-ASSIGN assigns form queues and their options to a print channel. SETPTR assigns the Width, Lines, Top, and Bot specifications to a print channel. The SP-LOOK option keywords correspond to the SP-ASSIGN letter codes as follows:

SP-MOVEQ

SP-MOVEQ [from-form-queue [to-form-queue] ]

This command moves all the jobs from one form queue to another. Any job that is currently being printed is not moved.

SP-NEWTAB

SP-NEWTAB

This command completely re-initializes all form queues and adds a single default form queue called STANDARD. All print jobs and form queues are lost. SP-NEWTAB kills the current spooler global, as assigned by SP-GLOBAL. By default, the name of the MultiValue spooler global is ^%MV.SPOOL, which is a system-wide global. Thus by default, all users share the same spooler global. If you use SP-GLOBAL to change the spooler global to another value (for example, the Caché general-purpose ^SPOOL spooler global) then you should be careful using SP-NEWTAB to avoid deleting non-MultiValue spool jobs. SP-NEWTAB is performed independently of transaction status.

SP-OPEN

SP-OPEN

This command causes any new print job to remain open until either the user logs off or the SP-CLOSE command is executed. It has the same effect as using the O option in the SP-ASSIGN command or the OPEN or KEEP option in the SETPTR command.

SP-OPTS

SP-OPTS [job [options] ]

This allows you to change the options on a job. The options are those items that can appear in position 7 of the SETPTR command: HOLD, SKIP, BANNER and so on.

SP-PAGESIZE

SP-PAGESIZE form-queue [width [ depth [ topmargin [ bottommargin]]]]

The SP-PAGESIZE command defines the page size for an MV spooler form queue.
The values that can be set are page width, page depth, top margin, and bottom margin. When a form queue is assigned using either SETPTR or SP-ASSIGN, these values will be used as necessary. You can delete these values by executing SP-PAGESIZE with all the values set to 0. The SETPTR command also allows defining a page width, depth, top and bottom margin. If these values are omitted in SETPTR, and the form queue has values set by SP-PAGESIZE, then these are the values that will be used.

SP-POSTAMBLE

SP-POSTAMBLE form-queue subroutine

Sets the name of the subroutine to be called after a job finishes printing on the named form queue.

SP-PREAMBLE

SP-PREAMBLE form-queue subroutine

Sets the name of the subroutine to be called before a job begins printing on the named form queue. See Form Queue Control for a discussion of the preamble.

SP-PURGEQ

SP-PURGEQ [form-queue [job-list]]

This command removes jobs in the list from the specified form queue. The job-list is made up of a space-separated list of job numbers, or job number ranges, for example, "6–11". Use "*" to remove all jobs from the queue.

SP-RESUME

SP-RESUME [form-queue]

If a despool process has been suspended with the SP-SUSPEND command, then this will resume the printing.

SP-SHOW

SP-SHOW job1/formqueue [job1/formqueue [...jobn/formqueue]]

This command provides detailed information on the specified jobs or form queues. SP-SHOW is a synonym for SP-VERBOSE.

SP-SKIP

SP-SKIP [form-queue [[lead-]trail] ]

Defines how many pages to skip after and (optionally) before printing a job. You can specify an integer number of lead leading form feeds before a print job and trail trailing form feeds after a print job. For example, SP-SKIP myqueue 1-3. If you specify a single integer, it defines the number of trailing form feeds, with a default of zero leading form feeds.

SP-START

SP-START [printer-name | *] [(B | F | I)] [(L)]

The SP-START command starts a despool process. This despool process monitors the spooler form queue and sends jobs from the spooler to the printer. If you specify no argument, you are prompted to specify a form queue name. An * argument defaults to the STANDARD form queue. If you supply *, Caché will try to start all the defined printers. By default, SP-START runs despool as a background process. To run despool as an interactive process, specify (I). To run despool as a background process, specify (B) or specify no letter option. To start despool as a foreground process which occupies a terminal, specify (F). The (F) option is useful for debugging, as described in "Debugging the Despool Program". The (L) option logs some of the activities of the despool process to the MultiValue log file, located at Cache/mgr/mv.log. You also can use the %SYS.System.WriteToMVLog() method to write to the mv.log file. Refer to the TRAP-EXCEPTIONS command for further details on mv.log.

To start printer spoolers as part of Caché start-up, do the following:

Write a paragraph in the VOC of someaccount that starts the printers:

0001 PA
0002 SP-START PRINTER1 CACHE |PRN|PRINTER1
0003 SP-START PRINTER2 CACHE |PRN|PRINTER2

Have Caché %ZSTART schedule a background job to run this using the MV command:

ZN "someaccount"
MV "PHANTOM START.PRINTERS"

%ZSTART is an ObjectScript routine you create and save (%ZSTART.mac) in the %SYS namespace. For further details, refer to "Customizing Start and Stop Behavior with ^%ZSTART and ^%ZSTOP Routines".
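For reference, a minimal sketch of such a ^%ZSTART routine is shown below. The SYSTEM label is the entry point Caché calls at instance start-up; the account name someaccount, the paragraph name START.PRINTERS, and the StartPrinters helper label are the illustrative names used above, so adapt the routine to your own start-up conventions rather than treating this as the documented implementation.

%ZSTART ; user start-up routine; save as %ZSTART.mac in the %SYS namespace
 QUIT
SYSTEM ; entry point called once when the Caché instance starts
 ; run the printer start-up in a background job so instance start-up is not delayed
 JOB StartPrinters^%ZSTART
 QUIT
StartPrinters ; helper label (hypothetical): switch to the MultiValue account and run the paragraph
 ZN "someaccount"
 MV "PHANTOM START.PRINTERS"
 QUIT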
SP-STATUS

SP-STATUS

The SP-STATUS command shows the status of the currently defined spooler form queues. It then prompts you to enter one of the following numeric action codes to control a queue or printer assignment.

SP-STOP

SP-STOP [printer-name | * ]

The SP-STOP command stops printing on the printer at the end of the current job and then stops the despool process. Stopping a despool process causes jobs to wait on the spooler form queue and not be sent from the spooler to the printer. If you specify no argument, you are prompted to specify a form queue name. An * argument defaults to the STANDARD form queue. Specifying * will stop all executing print spool jobs.

SP-SUSPEND

SP-SUSPEND [form-queue]

This command will suspend printing of a job. The despool process simply loops waiting for an SP-STOP, SP-RESUME, or SP-KILL command to be issued. This command is useful so the operator can, for example, adjust the printer on a long print job. If the operator notices a problem (ink running out, paper becoming skewed), the operator can suspend the printing, correct the problem, reposition the print job to where the problem first began, and then use SP-RESUME to continue printing.

Repositioning the print job follows these steps:
- Suspend the despool process, if necessary, with the SP-SUSPEND command.
- Edit the print job with the SP-EDIT command (note: for legacy reasons the SP-EDIT and SP.EDIT commands differ; always use SP-EDIT for this task).
- Once in the editor, position to the line at which you want printing to resume.
- In the editor, execute the SP command.
- Restart the printer with the SP-RESUME command.

For example, the following transcript shows repositioning print job 2, currently printing on the STANDARD form queue, to resume at line 402:

USER:SP-SUSPEND STANDARD
SUSPEND command initiated on form queue STANDARD running on job 2508
USER:SP-EDIT 2
PRINT JOB # 2
WARNING: Job status is PRINTING
TOI
.401
000401 XXXX YYYY ZZZZZZ aaaaa bbbbb cccc
.SP
Print position set to 402,0
.EX
USER:SP-RESUME STANDARD
RESUME command initiated on form queue STANDARD running on job 2508
USER:

SP-SWITCH

SP-SWITCH [new-form-queue [job] ...

The SP-SWITCH command allows the administrator to switch one or more print jobs to the specified spooler form queue. If you specify no arguments, you are prompted to specify a form queue name and one or more print jobs.

SP-TESTPAGE

SP-TESTPAGE [device | form-queue]

This command generates a standard test page and sends it to the designated destination. SP-TESTPAGE sends the test page to the standard print device, "|PRN|:(/WRITE:/APPEND:/DATATYPE="TEXT"):0". SP-TESTPAGE device sends the test page to the specified Caché print device. This can be in the form "devicename", or optionally with an added open mode and timeout, for example, "|PRN|:rwn:2". SP-TESTPAGE form-queue sends the test page to the print device associated with the given form-queue. If the form queue is of type DEBUG, or the device name is NULL or DEBUG, then Caché will generate an error message. The standard test page consists of the block letters "Cache Test Page", followed by the Caché version string, the date and time that the test page was written, the output device name (for example |PRN|), and the Cache I/O name (for example |TRM|:|5740). SP-TESTPAGE performs this test page operation using the following steps: Creates a temporary form queue. Adds a test page as a job to that form queue. Initiates a despool process using SP-START. Waits for the job to finish printing (or for timeout).
Stops the despool process using SP-STOP. Deletes the temporary form queue. Accordingly, the output of a SP-TESTPAGE command looks something like the following: USER:SP-TESTPAGE File Form queue STANDARD created as form number FQ00000000 in global ^%MV.SPOOL Temporary form queue 'SPTESTPAGE_4616' created Test print job number 1 created, starting despool process Spooler STARTED in BATCH mode on form queue SPTESTPAGE_4616 at job 10012 The test page appears to have printed successfully. STOP command initiated on form queue SPTESTPAGE_4616 running on job 10012 Temporary form queue 'SPTESTPAGE_4616' deleted Full log written to file c:\intersystems\cache\mgr\mv.log USER: SP-VERBOSE SP-VERBOSE {job|formqueue} [{job2|formqueue2} [...]] This command provides detailed information on the specified print jobs or form queues. It allows you to supply one or more print job numbers or form queue identifiers (form queue names or form queue numbers as displayed by the LISTPTR command) for verbose display. If specifying multiple print jobs or form queues, separate them with blank spaces. If you specify no arguments, you are prompted to specify a print job number or form queue name (or number). The following is a print job display: USER:SP-VERBOSE 5 ------- Details of Print Job # 5 in ^%MV.SPOOL("5") ------- Form queue number : FQ00000000 Form queue name : STANDARD Job status : CLOSED Time of last status change : Mar 08 2011 13:53:41 Number of lines in job : 10 Number of pages in job : 1 Time job created : Mar 08 2011 13:53:41 Time job closed : Mar 08 2011 13:53:41 Number copies to print : 1 Namespace of job creator : USER MV Account name of job creator : USER OS Username of job creator : glenn Cache Username of job creator : UnknownUser Port number of job creator : 20 Despool page position : 0,0 Unique Job Identifier : 12 System name : DELLHOME IP Address : 127.0.0.1 Cache Instance : DELLHOME:CACHE Options : The following is a form queue display: USER:SP-VERBOSE STANDARD ------- Details of form queue STANDARD in ^%MV.SPOOL("FQ00000000") ------- Form queue number : FQ00000000 Form queue name : STANDARD Time formq created : Mar 08 2011 10:36:26 Status of formq : INACTIVE Namespace of formq creator : USER Account name of formq creator : USER Username of formq creator : glenn Number of jobs on formq : 0 Job number being despooled : --none-- Job ID of despool process : --none-- Device name for despool : |PRN|DOC2 Leading and trailing page skips :1 Notional device type : CACHE Despool job pre-amble routine : --undefined-- Despool job post-amble routine : --undefined-- Page Size : --undefined-- Despool control chars : FF,CR_LF USER:
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GVSP_SPOOLER_COMMANDS
2021-02-24T23:31:52
CC-MAIN-2021-10
1614178349708.2
[]
docs.intersystems.com
StorageGRID provides two REST APIs for performing installation tasks: the StorageGRID Installation API and the StorageGRID Appliance Installer API. Both APIs use the Swagger open source API platform to provide the API documentation. Swagger allows both developers and non-developers to interact with the API in a user interface that illustrates how the API responds to parameters and options. This documentation assumes that you are familiar with standard web technologies and the JSON (JavaScript Object Notation) data format. Each REST API command includes the API's URL, an HTTP action, any required or optional URL parameters, and an expected API response. The StorageGRID Installation API is only available when you are initially configuring your StorageGRID system, and in the event that you need to perform a primary Admin Node recovery. The Installation API can be accessed over HTTPS from the Grid Manager. To access the API documentation, go to the installation web page on the primary Admin Node and select:
https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-install-vmw/GUID-EEEE7EAF-33BB-485F-99DA-7688EFFDD1AE.html
2021-02-25T00:56:46
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
Abstract

This NetApp HCI for Red Hat OpenShift on Red Hat Virtualization (RHV) deployment guide is for the fully automated installation of Red Hat OpenShift through the Installer Provisioned Infrastructure (IPI) method onto the verified enterprise architecture of NetApp HCI for Red Hat Virtualization described in NVA-1148: NetApp HCI with Red Hat Virtualization. This reference document provides deployment validation of the Red Hat OpenShift solution, integration of the NetApp Trident storage orchestrator, and a solution verification consisting of an example application deployment.
https://docs.netapp.com/us-en/netapp-solutions/containers/rh-os-rhv-redhat_openshift_abstract.html
2021-02-25T00:25:32
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
Export to EDL/AAF/XML Window The Export to EDL/AAF/XML window lets you export a storyboard project directly to Apple Final Cut Pro using the EDL or XML formats or to Adobe Premiere, Avid Xpress, or Sony Vegas using the AAF format. The timing, motions, and sounds edited with Storyboard Pro are preserved. - Select File > Export > EDL/AAF/XML. The Export to EDL/AAF/XML window opens.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/reference/windows/export-edl-aaf-xml-window.html
2021-02-24T23:40:34
CC-MAIN-2021-10
1614178349708.2
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/SBP/Export/sbp4_export_edlaffxml_dialog.png', None], dtype=object) ]
docs.toonboom.com
To set up your deployment so that end users can connect removable devices, such as USB flash drives, cameras, and headsets, you must install certain components on both the remote desktop or RDS host and the client device, and you must verify that the global setting for USB devices is enabled in Horizon Administrator. This checklist includes both required and optional tasks for setting up USB redirection in your enterprise. The USB redirection feature is available only on some types of clients. To find out whether this feature is supported on a particular type of client, see the feature support matrix included in the installation and setup document for the specific type of client device.
- When you run the Horizon Agent installation wizard on the remote desktop source or RDS host, be sure to include the USB Redirection component. This component is deselected by default. You must select the component to install it.
- When you run the VMware Horizon Client installation wizard on the client system, include the USB Redirection component. This component is included by default.
- Verify that access to USB devices from a remote desktop or application is enabled in Horizon Administrator. In Horizon Administrator, verify that USB access is set to Allow.
- (Optional) Configure Horizon Agent group policies to specify which types of devices are allowed to be redirected. See Using Policies to Control USB Redirection.
- (Optional) Configure similar settings on the client device. You can also configure whether devices are automatically connected when Horizon Client connects to the remote desktop or application, or when the end user plugs in a USB device. The method of configuring USB settings on the client device depends on the type of device. For example, for Windows clients, you can configure group policies. For Mac clients, you use a command-line command. For more information, see the installation and setup document for the specific type of client device.
- Have end users connect to a remote desktop or application and plug their USB devices into the local client system. If the driver for the USB device is not already installed in the remote desktop or RDS host, the guest operating system detects the USB device and searches for a suitable driver, just as it would on a physical Windows computer.
https://docs.vmware.com/en/VMware-Horizon-7/7.9/horizon-remote-desktop-features/GUID-C64A6CE8-F02A-4A05-A4CB-FB127A7465DE.html
2021-02-24T23:02:36
CC-MAIN-2021-10
1614178349708.2
[]
docs.vmware.com
By default, Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure are included in vRealize Operations Manager. You can also add your own cloud provider by using a standard vRealize Operations Manager template. You can configure the new cloud provider as per the standard vRealize Operations Manager template and perform a migration scenario. The vRealize Operations Manager template contains data points for vCPU, CPU, RAM, OS, region, plan term, location, and built-in instance storage; you must provide these values when you add cloud providers. The result of the migration scenario helps you assess the cost savings achieved using your cloud provider against the default cloud providers. You can edit the rate card for new cloud providers and default cloud providers. However, you cannot delete the default cloud providers.
https://docs.vmware.com/en/vRealize-Operations-Manager/8.3/com.vmware.vcom.core.doc/GUID-6BA186E2-415F-4198-8103-1C336EFC0F8B.html
2021-02-25T00:20:34
CC-MAIN-2021-10
1614178349708.2
[]
docs.vmware.com
MapGL API Release notes

There are two ways to choose a MapGL version:
- (recommended) major version:
- fixed version:

The major version guarantees a stable API (breaking changes will land in the next major version). It is updated regularly with new features, bug fixes, and performance improvements. A fixed version does not receive any updates. That's why it has a short support period of 3 months after a new version is released. We guarantee that the version will keep working during the support period. The only reason to choose a fixed version is when you need Subresource Integrity (SRI). Because of the short support period, you have to update the version regularly.

v1.7.1
URL: SRI: sha512-e2LUonKGcxMuQ+eVDrBflqsMJKeIMXkS9iecrwTbPqtP0iBB9PvoJ7MPvs+YOWOpd4zLMW4HMhwQwovR6hx5og==
Release Date: 02.02.2021
- Fixed a bug in the fitBounds method when the rotation has a negative value or the rotation was not taken into account.

v1.7.0
URL: SRI: sha512-aCuhmji0murRQCFMNpbyxI9atKUj1LcamTjMS1oMi+d+gWewmuytCTsdKfDhCXnPXL/e9s3uKJbhYmDaUqcEIg==
Release Date: 29.01.2021 Support End Date: 02.05.2021
- Added a new method fitBounds to pan and zoom the map to contain its visible area within the specified geographical bounds.
- Added a new method isSupported which checks whether the current browser supports MapGL.
- Added a new method notSupportedReason which checks whether the current browser supports MapGL and returns the reason in a string.

v1.6.2
URL: SRI: sha512-Qd8FJ4dTV0r7QlO5EZ/bH3eJHryP5F2JlGgSLpm16hKxd3v+YAx2fALVs2qdO8hKWD1FR6vpvuuqaioWbA08mQ==
Release Date: 24.12.2020 Support End Date: 29.04.2021
- Fixed a bug where the inertia of the map did not work if a styleZoom was set.

v1.6.1
URL: SRI: sha512-bvhJrH7/tuHmfLODpVydlwpKOhLXPvWLoiQ0ekhz6UGrzLnRkV30W7imPBstY/CFBB/nDwUVeoaY8JBCSq2TsA==
Release Date: 22.12.2020 Support End Date: 24.03.2021
- Fixed the map center behavior when different zoom types are set. If a styleZoom option was set, the map style zoom does not change while the map is dragged. Conversely, if a zoom option was set, the map zoom does not change, but the style zoom does.

v1.6.0
URL: SRI: sha512-HDATpIofgFh3SujGnHfWkSTs8jOHMHMvS4cYBr9S3vqsbCVkh9o+UEpaBI9N7OeG4IiyYA0sN+cgvM3MoRXs1A==
Release Date: 18.12.2020 Support End Date: 22.03.2021
- Added a new map option styleZoom. This option allows you to set the same zoom that is used in the style settings. The styleZoom and zoom options set the same map scale but in different projections.
- Added a new method setLanguage to change the desired map language. The map will try to set the language if possible.
- Added a new method getLanguage, which returns the desired map language.
- Fixed a bug with touch events that led to an exception with undefined.

v1.5.0
URL: SRI: sha512-/Hw2egQaYHHPB1n0gd5bjYW+nJVHPWL4jgtlxe9XPpmPTmHlM8+MuoW2zDkujS4T/Ws1R1+DNfSYpxedHy/peA==
Release Date: 10.12.2020 Support End Date: 18.03.2021
- Added a new method setStyleById to change the map style by its ID. 🎉
- Added a new map option defaultBackgroundColor. This option allows you to set the default background color while the style is loading.
- Added a new map option style that takes a style ID. The map will be initialized with the specified style.
v1.4.3
URL: SRI: sha512-Uxyq5h4XINWzyyVRQJryWKJKTDPSyufCgMicxI0WWBxOcck8LdwOh0/Rq77iOdHuSip09XrnHU5cTleVQxEL8A==
Release Date: 1.12.2020 Support End Date: 10.03.2021
- Internal changes that do not affect the operation of the map engine.

v1.4.2
URL: SRI: sha512-OOipC0yturhRcjT5OE2y+xYc8w3qAiE4iR9kr6tYw1sTQD3Xm8iXED18EwDYA/kJVZtdQVOcrYRWn+jWpOGlew==
Release Date: 09.11.2020 Support End Date: 1.03.2021
- Internal changes that do not affect the operation of the map engine.

v1.4.0
URL: SRI: sha512-osMewflim+spxdlIWkAjWLTMf+xBLIOjQhpqYokSVctjLJh5gNW6hYl3OPNSAS2lcmPd5Xz/Xp5o+NTmXafoXQ==
Release Date: 22.10.2020 Support End Date: 09.02.2021
- Added a new method setStyleZoom. It sets the map style zoom.
- Added a new map option maxBounds. It restricts the map to the specified bounding box.

v1.3.2
URL: SRI: sha512-tz6H4FpSamGsGRnoMOejtLQscum0RGJcVZlBtTaQBZuemD71Jz1kXvFsXhzggRXB+EprMhiFPPvBh7qDdJrzPA==
Release Date: 07.10.2020 Support End Date: 22.01.2021
- Added a stable fix for missing rerender (iOS 14 and others).

v1.3.1
URL: SRI: sha512-Zqr8S1rWWTKcC4f+kqllqpahA0l7XCdPj1Hs3h/KJBUx+6bQWI4+yPuaUrmNNayFMaXG0AzjJY77Cn/873Ssgg==
Release Date: 22.09.2020 Support End Date: 07.01.2021
- Added a patch for missing rerender on iOS 14.

v1.3.0
Release Date: 18.09.2020 Support End Date: 22.12.2020
- Added disableRotationByUserInteraction and disablePitchByUserInteraction to the MapOptions. disableRotationByUserInteraction blocks rotation by any user interaction (from keyboard, mouse, etc.). disablePitchByUserInteraction prevents users from pitching the map by any interaction.
- Added a new method getStyleZoom. It returns the current map style zoom.
- Added a new method getWebGLContext which returns the current WebGLRenderingContext of the map canvas.
- Added a new method getCanvas which returns the HTMLCanvasElement of the map.

v1.2.0
Release Date: 26.08.2020 Support End Date: 18.12.2020
- Added the new relativeAnchor and offset options to the LabelOptions. Use these options for Label placement instead of the anchor option, which is now deprecated.
- Supported the new label placement options in the Marker object.

v1.1.0
Release Date: 04.08.2020 Support End Date: 26.11.2020
- Added the new map object Label for independent text labels on the map.
- Added the ability to use a stretchable image as background for the marker label.
- Added the option autoHideOSMCopyright to the MapOptions. If true, the OSM copyright will be hidden 5 seconds after the map initialization.

v1.0.0
Release Date: 21.07.2020 Support End Date: 04.11.2020
There are no release notes. This is the first release with a fixed version.

Old releases

05.06.2020
- A new option disableZoomOnScroll was added to the Map options. This option disables zoom on scroll when the mouse cursor is over the map container.
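As a quick orientation for the methods mentioned in these release notes, here is a minimal usage sketch. It assumes that the script exposes a global mapgl namespace with a Map constructor taking a container element id and options (center, zoom, key), and that isSupported/notSupportedReason live on that namespace; the container id, coordinates, API key, and bounds below are placeholders, so check the MapGL reference for the exact signatures before relying on this.

// Sketch only: the constructor options, namespace functions, and all values are assumptions.
if (!mapgl.isSupported()) {
  console.log(mapgl.notSupportedReason()); // string explaining why MapGL cannot run here
} else {
  const map = new mapgl.Map('map-container', {
    center: [55.31878, 25.23584], // placeholder coordinates
    zoom: 13,
    key: 'YOUR_API_KEY',          // placeholder API key
  });

  map.setLanguage('en');          // ask the map to use English labels where possible (v1.6.0)

  // Pan and zoom so the given bounds become fully visible (fitBounds, added in v1.7.0)
  map.fitBounds({
    southWest: [55.27, 25.20],
    northEast: [55.36, 25.26],
  });
}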
https://docs.2gis.com/en/mapgl/releases
2021-02-24T23:43:09
CC-MAIN-2021-10
1614178349708.2
[]
docs.2gis.com
Defining an Owner To operate a thing from a mobile application, you need to specify a Kii Cloud user as the thing owner. See here for the overview. You can create a user with the User Management Feature of the Kii Cloud SDK. After you create a user, you will pass the user ID and access token of this user to the initialization API. See Logging in and Using an Access Token to learn more about the access token. There are several ways for creating a user. This guide uses the pseudo user which is also used in the development guides for Android and iOS. By using the pseudo user, you will create a user without any name and keep using their access token thereafter. Since it does not require any username or password, you can leverage the user without any explicit login. This method is the easiest if your scenario is simple (e.g., just sending commands and browsing state). However, especially when you use a web browser, be careful that you will not be able to access the pseudo user if you lose the saved access token. Should you want to secure a login method after the access token is cleared or develop more complex apps by leveraging the Kii Cloud SDK features, you can create a user or group and set them as the thing owner. See Implementation tips at the bottom of this page for more discussion. Creating a pseudo user The following sample code shows an example of creating a pseudo user. In this example, a new pseudo user is created and their user ID and access token are stored in the device storage (please note that the implementation of the data storing method is omitted in the sample code). The access token and user ID are used when initializing the Thing-IF SDK. If you lose the access token, you will no longer be able to access to this user because there will be no way to distinguish the user. // Set predefined and custom fields. const userFields = { }; ... // Register a pseudo user. kii.KiiUser.registerAsPseudoUser(null, userFields).then((authedUser)=> { // Get the access token of the preudo user. const accessToken: string = authedUser.getAccessToken(); // Get the user ID. const userID: string = authedUser.getID(); // Store the access token and the user ID with your own function. storeUser(accessToken, userID) }).catch((error) => { // Handle the error. const theUser = error.target; const errorString = error.message; }); // Set predefined and custom fields. var userFields = { }; ... // Register a pseudo user. KiiUser.registerAsPseudoUser(null, userFields).then( function(authedUser) { // Get the access token of the preudo user. var accessToken = authedUser.getAccessToken(); // Get the user ID. var userID = authedUser.getID(); // Store the access token and the user ID with your own function. storeUser(accessToken, userID) } ).catch( function(error) { // Handle the error. var theUser = error.target; var errorString = error.message; } ); The basic steps are as follows. - Execute the registerAsPseudoUsermethod to create and register a pseudo user. - After the user creation and login is executed, execute the getAccessTokenand getIDmethods to get the access token and the user ID of this pseudo user, respectively. Then, save them in the device. Next, execute the following code to specify the user who will be the thing owner. Finally, pass owner and accessToken obtained in the above code to the constructor of ThingIFAPI when you initialize the Thing-IF SDK. // Create a typed ID from an ID type and the user ID. 
const owner = new ThingIF.TypedID(ThingIF.Types.User, userID); // Create a typed ID from an ID type and the user ID. var owner = new ThingIF.TypedID(ThingIF.Types.User, userID); This code creates a TypedID with the user ID and Types.User that specifies the type of the ID. If you are adding a group as an owner, pass the group ID with Types.Group. By using this "pseudo user as a thing owner" implementation on multiple devices, you will be able to share a thing among users who own the devices. This is done by binding the thing to the multiple users (i.e., pseudo users). This approach is OK as the Thing-IF SDK supports sharing a thing by multiple owners. If you are planning to expand your application with the Kii Cloud SDK features (e.g., use the data management to store extra data), this simple relationship between the thing and users might not be sufficient (e.g., the user scope data are created per device, and they cannot be shared among users). Please refer to the next section "Implementation tips". If you simply insert the above codes in the initialization process, a new user will be created every time the application is launched. Please implement the appropriate logic to reuse the access token and user ID stored with storeUser when the application is launched for the second time. Implementation tips User type of the owner We've presented a method to add a pseudo user as the thing owner. Alternatively, you can use the following approaches. All of them leverage the Kii Cloud SDK features. - Create a user with the username (or email address/phone number) and password, and let the user logs in with them (ref. Managing Users) - Create a user with the external service account like Facebook and Twitter (ref. Authenticating with an External Service Account) - Create a group and set the group as the thing owner to share the thing among multiple group members (ref. Managing Groups) Getting a command result To get a command result, you need to receive its command ID as a push notification. However, such a push notification is sent only to the devices associated with the command sender. Design your mobile app with this point in mind. If the owner is a pseudo user The command ID is notified only to the device on which the command is executed because a pseudo user is unique to each device. You do not receive notifications for commands issued on other devices which share the thing. If the owner is a normal user The command ID is notified to all the devices associated with the owner by installing devices.
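The registration sample earlier on this page calls a storeUser function whose implementation is intentionally omitted. As one possible sketch (assuming a browser environment where localStorage is available, and using hypothetical storage key names), the pair of helpers could look like this:

// Hypothetical helpers for persisting the pseudo user's credentials in a browser.
// The storage keys are placeholders; on a mobile platform you would use the
// platform's own (preferably secure) storage instead of localStorage.
function storeUser(accessToken, userID) {
  localStorage.setItem("pseudoUserAccessToken", accessToken);
  localStorage.setItem("pseudoUserID", userID);
}

function loadUser() {
  var accessToken = localStorage.getItem("pseudoUserAccessToken");
  var userID = localStorage.getItem("pseudoUserID");
  if (accessToken === null || userID === null) {
    return null; // nothing stored yet; register a new pseudo user
  }
  return { accessToken: accessToken, userID: userID };
}

At application start-up you would call loadUser() first and only register a new pseudo user when it returns null; this is the reuse logic mentioned above. Keep in mind that anything placed in localStorage is readable by scripts running on the page, and that losing these values means losing access to the pseudo user.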
https://docs.kii.com/en/guides/thingifsdk/non_trait/javascript/initialize/owner/
2021-02-24T23:50:55
CC-MAIN-2021-10
1614178349708.2
[]
docs.kii.com
Starbucks Interactive Cup Brewer (iCup) @ Microsoft The real Microsoft news today is not that Microsoft (and Google) want to dominate the web, nor is it *real* news that Microsoft today announced (along with the likes of BEA, Cisco, IBM and others) a draft of a new specification that describes the modeling IT resources and services: the Service Modeling Language (SML). No, no. The real news is that the Microsoft is having the all new Starbucks Interactive Cup Brewer (iCup) coffee dispensers installed across campus. ! . You read that? Interactive, no less. Pictures are emerging as I write and yes, there is even video evidence to prove it.
https://docs.microsoft.com/en-us/archive/blogs/alexbarn/starbucks-interactive-cup-brewer-icup-microsoft
2021-02-25T00:48:27
CC-MAIN-2021-10
1614178349708.2
[]
docs.microsoft.com
Azure Stack Hub services, plans, offers, subscriptions overview Microsoft Azure Stack Hub is a hybrid cloud platform that lets you deliver services from your datacenter. Services include virtual machines (VMs), SQL Server databases, SharePoint, Exchange, and even Azure Marketplace items. As a service provider, you can offer services to your tenants. Within a business or government agency, you can offer on-premises services to your employees. Overview As an Azure Stack Hub operator, you configure and deliver services by using offers, plans, and subscriptions. Offers contain one or more plans, and each plan includes one or more services, each configured with quotas. By creating plans and combining them into different offers, users can subscribe to your offers and deploy resources. This structure lets you manage: - Which services and resources your users can access. - The amount of resources that users can consume. - Which regions have access to the resources. To deliver a service, follow these high-level steps: Plan your service offering, using: - Foundational services, like compute, storage, networking, or Key Vault. - Value-add services, like Event Hubs, App Service, SQL Server, or MySQL Server. Create a plan that consists of one or more services. When creating a plan, select or create quotas that define the resource limits of each service in the plan. Create an offer that has one or more plans. The offer can include base plans and optional add-on plans. After you've created the offer, your users can subscribe to it to access the services and deploy resources. Users can subscribe to as many offers as they want. The following figure shows a simple example of a user who has subscribed to two offers. Each offer has a plan or two, and each plan gives them access to specific services. Services You can offer Infrastructure as a Service (IaaS) services that enable your users to build an on-demand computing infrastructure, provisioned and managed from the Azure Stack Hub user portal. You can also deploy Platform as a Service (PaaS) services for Azure Stack Hub from Microsoft and third-party providers. The PaaS services that you can deploy include, but aren't limited to: You can also combine services to integrate and build complex solutions for different users. Quotas To help manage your cloud capacity, you can use pre-configured quotas, or create a new quota for each service in a plan. Quotas define the upper resource limits that a user subscription can provision or consume. For example, a quota might allow a user to create up to five VMs. Important It can take up to two hours for new quotas to be available in the user portal or before a changed quota is enforced. You can set up quotas by region. For example, a plan that provides compute services for Region A could have a quota of two VMs. Note In the Azure Stack Development Kit (ASDK), only one region (named local) is available. Learn more about quota types in Azure Stack Hub. Plans Plans are groupings of one or more services. As an Azure Stack Hub operator, you create plans to offer to your users. In turn, your users subscribe to your offers to use the plans and services they include. When creating plans, make sure to set your quotas, define your base plans, and consider including optional add-on plans. Base plan When creating an offer, the service administrator can include a base plan. These base plans are included by default when a user subscribes to that offer. 
When a user subscribes, they have access to all the resource providers specified in those base plans (with the corresponding quotas). Note If an offer has multiple base plans, the combined storage capacity of the plans cannot exceed the storage quota. Add-on plans Add-on plans are optional plans you add to an offer. Add-on plans aren't included by default in the subscription. Add-on plans are additional plans (with quotas) available in an offer that a subscriber can add to their subscriptions. For example, you can offer a base plan with limited resources for a trial, and an add-on plan with more substantial resources to customers who decide to adopt the service. Offers Offers are groups of one or more plans that you create so that users can subscribe to them. For example: Offer Alpha can contain Plan A, which provides a set of compute services, and Plan B, which provides a set of storage and network services. When you create an offer, you must include at least one base plan, but you can also create add-on plans that users can add to their subscription. When you're planning your offers, keep the following points in mind: Trial offers: You use trial offers to attract new users, who can then upgrade to additional services. To create a trial offer, create a small base plan with an optional larger add-on plan. Alternatively, you can create a trial offer consisting of a small base plan, and a separate offer with a larger "pay as you go" plan. Capacity planning: You might be concerned about users who grab large amounts of resources and clog. Subscriptions Subscriptions let users access your offers. If you're an Azure Stack Hub operator for a service provider, your users (tenants) buy your services by subscribing to your offers. If you're an Azure Stack Hub operator at an organization, your users (employees) can subscribe to the services you offer without paying. Users create new subscriptions and get access to existing subscriptions by signing in to Azure Stack Hub. Each subscription represents an association with a single offer. The offer (and its plans and quotas) assigned to one subscription can't be shared with other subscriptions. Each resource that a user creates is associated with one subscription. As an Azure Stack Hub operator, you can see information about tenant subscriptions, but you can't access the contents of those subscriptions unless you are explicitly added through RBAC by a tenant administrator of that subscription. This allows tenants to enforce separation of power and responsibilities between Azure Stack Hub operator and tenant spaces. The exception to this case is a situation in which the subscription owner is unable to provide the operator with access to the subscription, which requires the administrator to take ownership of the subscription as discussed in Change the billing owner for an Azure Stack Hub user subscription. If your Azure Stack Hub instance is disconnected and you have two different domains where users in domain 1 create subscriptions that users in domain 2 consume, some subscriptions may appear in the admin portal but not appear in the user portal. To fix this, have the users in domain 1 set the correct RBAC for the subscriptions in domain 2. Default provider subscription The default provider subscription is automatically created when you deploy the ASDK. This subscription can be used to manage Azure Stack Hub, deploy additional resource providers, and create plans and offers for users. 
For security and licensing reasons, it shouldn't be used to run customer workloads and apps. The quota of the default provider subscription can't be changed. Next steps To learn more about creating plans, offers, and subscriptions, start with Create a plan.
https://docs.microsoft.com/en-us/azure-stack/operator/service-plan-offer-subscription-overview?view=azs-2008
2021-02-25T00:20:23
CC-MAIN-2021-10
1614178349708.2
[array(['media/azure-stack-key-features/image4.png?view=azs-2008', 'Tenant subscription with offers and plans'], dtype=object)]
docs.microsoft.com
BlackList Overview The BlackList API is used to get specific users from the blacklist from among those who have installed the corresponding application. BlackList Object Fields API Request HTTP Methods Proxy model - GET - Gets blacklist entry - Gets blacklist collection Trusted model - GET - Gets blacklist entry - Gets blacklist collection Endpoint URL - Sandbox environment{guid}/{selector}{-prefix|/|pid} - Production environment{guid}/{selector}{-prefix|/|pid} guid Specify one of the following values for the guid parameter. Before a valid Mobage user ID can be specified with the guid parameter, the user must have installed the associated application. selector Specify the following value for the selector parameter. pid When selector is @all, you can specify the pid parameter. If you specify the pid parameter when requesting the entry resource and 200 OK is returned, it means that the designated user is in the blacklist. However, if 404 Not Found is returned, it means that the user is not in the blacklist. Query Parameters The following query parameters can be specified. format The response format is as shown below. fields Here, you can specify attributes that you wish to obtain. For the fields that can be specified, please see the description of the BlackList object field. To specify multiple fields, separate them with commas without any spaces in between. If the fields parameter is omitted, all of the attribute information listed below is obtained. In the interest of performance, you should only get the attributes that you need. count The count field enables you to specify the maximum number of entries in a collection resource that can be obtained in a response. count can be a maximum of 1000, with a default value of 50. startIndex For startindex, specify the starting value of the collection resource. If startindex is omitted, 1 (the minimum value) is assumed. API Response Response Codes The following is a list of the API response codes. Notes - If a user other than @me is specified for guid, the user must have installed the associated application. - When using the Trusted model (Consumer Request), you cannot specify @me for guid. Doing so will result in a 400 Bad Request. - guid must be a number, otherwise 400 will be returned for the response code. - If the user has not installed the corresponding application, 403 will be returned for the response code. Examples Getting the BlackList Collection Registered by the Viewer Request Format GET /api/restful/v1/blacklist/@me/@all?format=json&fields=id,targetId&count=10&startIndex=1 Response Format 200 OK { "entry" : [ { "targetId" : "sb.sp.app.mbga.jp:10032", "id" : "sb.mbga.jp:10028" }, { "targetId" : "sb.sp.app.mbga.jp:10033", "id" : "sb.mbga.jp:10028" }, { "targetId" : "sb.sp.app.mbga.jp:10039", "id" : "sb.mbga.jp:10028" } ], "startIndex" : 1, "itemsPerPage" : 10, "totalResults" : 3 } Checking the Blacklist Between Two People Request Format GET /api/restful/v1/blacklist/10028/@all/10045?format=json Response Format Since user 10028 has not registered user 10045 in the blacklist, 404 Not Found is returned. If user 10045 were registered in the blacklist, 200 OK would be returned. 404 Not Found Reference Links XML Schema Part 2: Datatypes Second Edition URI Template draft-gregorio-uritemplate-03 Revision History - 03/01/2013 - Initial release
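To show how the endpoint template, query parameters, and response fields above fit together, here is a hedged JavaScript sketch. The base URL is a placeholder for the sandbox or production endpoints listed above (their hosts are not reproduced here), and the sketch omits the OAuth signing that Mobage requests require in the Proxy and Trusted models, so treat it as an illustration of URL construction and response handling only.

// Illustration only: BASE_URL is a placeholder and real requests must be OAuth-signed.
const BASE_URL = "https://<mobage-endpoint>/api/restful/v1/blacklist";

function blacklistUrl(guid, selector, pid, params) {
  let url = BASE_URL + "/" + guid + "/" + selector;
  if (pid) {
    url += "/" + pid; // entry form: checks whether pid is in guid's blacklist
  }
  const query = new URLSearchParams(params); // e.g. { format: "json", fields: "id,targetId" }
  return url + "?" + query.toString();
}

// First page of the viewer's blacklist, 10 entries per page, as in the example above.
const url = blacklistUrl("@me", "@all", null, {
  format: "json",
  fields: "id,targetId",
  count: "10",
  startIndex: "1",
});

// A 200 response carries the collection object shown above; a 404 on the entry form
// means the designated user is not in the blacklist.
// fetch(url, { headers: { Authorization: "<OAuth header>" } })
//   .then(function (res) { return res.json(); })
//   .then(function (body) {
//     body.entry.forEach(function (e) { console.log(e.id, e.targetId); });
//   });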
https://docs.mobage.com/display/JPSPBP/BlackList
2021-02-24T23:31:47
CC-MAIN-2021-10
1614178349708.2
[]
docs.mobage.com
Workflows¶

The process of using Spack involves building packages, running binaries from those packages, and developing software that depends on those packages. For example, one might use Spack to build the netcdf package, use spack load to run the ncdump binary, and finally, write a small C program to read/write a particular NetCDF file. Spack supports a variety of workflows to suit a variety of situations and user preferences; there is no single way to do all these things. This chapter demonstrates different workflows that have been developed, pointing out their pros and cons.

Definitions¶

First some basic definitions.

Package, Concrete Spec, Installed Package¶

In Spack, a package is an abstract recipe to build one piece of software. Spack packages may be used to build, in principle, any version of that software with any set of variants. Examples of packages include curl and zlib. A package may be instantiated to produce a concrete spec: one possible realization of a particular package, out of combinatorially many other realizations. For example, here is a concrete spec instantiated from curl:

$ spack spec curl
Input spec
--------------------------------
curl
Concretized
--------------------------------
[email protected]%[email protected]~darwinssl~gssapi~libssh~libssh2~nghttp2 arch=linux-ubuntu18.04-skylake_avx512
^[email protected]%[email protected] arch=linux-ubuntu18.04-skylake_avx512
^[email protected]%[email protected] arch=linux-ubuntu18.04-skylake_avx512
^[email protected]%[email protected]
^[email protected]%[email protected] arch=linux-ubuntu18.04-skylake_avx512
^[email protected]%[email protected]+optimize+pic+shared arch=linux-ubuntu18.04-skylake_avx512

Spack's core concretization algorithm generates concrete specs by instantiating packages from its repo, based on a set of "hints", including user input and the packages.yaml file. This algorithm may be accessed at any time with the spack spec command. Every time Spack installs a package, that installation corresponds to a concrete spec. Only a vanishingly small fraction of possible concrete specs will be installed at any one Spack site.

Consistent Sets¶

A set of Spack specs is said to be consistent if each package is only instantiated one way within it — that is, if two specs in the set have the same package, then they must also have the same version, variant, compiler, etc. For example, the following set is consistent:

curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64

The following set is not consistent:

curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
zlib@1.2.7%gcc@5.3.0 arch=linux-SuSE11-x86_64

The compatibility of a set of installed packages determines what may be done with it. It is always possible to spack load any set of installed packages, whether or not they are consistent, and run their binaries from the command line. However, a set of installed packages can only be linked together in one binary if it is consistent. If the user produces a series of spack spec or spack load commands, in general there is no guarantee of consistency between them. Spack's concretization procedure guarantees that the results of any single spack spec call will be consistent. Therefore, the best way to ensure a consistent set of specs is to create a Spack package with dependencies, and then instantiate that package. We will use this technique below.

Building Packages¶

Suppose you are tasked with installing a set of software packages on a system in order to support one application – both a core application program and software to prepare input and analyze output.
The required software might be summed up as a series of spack install commands placed in a script. If needed, this script can always be run again in the future. For example: #!/bin/sh spack install modele-utils spack install emacs spack install ncview spack install nco spack install modele-control spack install py-numpy In most cases, this script will not correctly install software according to your specific needs: choices need to be made for variants, versions and virtual dependency choices may be needed. It is possible to specify these choices by extending specs on the command line; however, the same choices must be specified repeatedly. For example, if you wish to use openmpi to satisfy the mpi dependency, then ^openmpi will have to appear on every spack install line that uses MPI. It can get repetitive fast. Customizing Spack installation options is easier to do in the ~/.spack/packages.yaml file. In this file, you can specify preferred versions and variants to use for packages. For example: packages: python: version: [3.5.1] modele-utils: version: [cmake] everytrace: version: [develop] eigen: variants: ~suitesparse netcdf: variants: +mpi all: compiler: [[email protected]] providers: mpi: [openmpi] blas: [openblas] lapack: [openblas] This approach will work as long as you are building packages for just one application. Multiple Applications¶ Suppose instead you’re building multiple inconsistent applications. For example, users want package A to be built with openmpi and package B with mpich — but still share many other lower-level dependencies. In this case, a single packages.yaml file will not work. Plans are to implement per-project packages.yaml files. In the meantime, one could write shell scripts to switch packages.yaml between multiple versions as needed, using symlinks. Combinatorial Sets of Installs¶ Suppose that you are now tasked with systematically building many incompatible versions of packages. For example, you need to build petsc 9 times for 3 different MPI implementations on 3 different compilers, in order to support user needs. In this case, you will need to either create 9 different packages.yaml files; or more likely, create 9 different spack install command lines with the correct options in the spec. Here is a real-life example of this kind of usage: #!/bin/bash compilers=( %gcc %intel %pgi ) mpis=( openmpi+psm~verbs openmpi~psm+verbs mvapich2+psm~mrail mvapich2~psm+mrail mpich+verbs ) for compiler in "${compilers[@]}" do # Serial installs spack install szip $compiler spack install hdf $compiler spack install hdf5 $compiler spack install netcdf $compiler spack install netcdf-fortran $compiler spack install ncview $compiler # Parallel installs for mpi in "${mpis[@]}" do spack install $mpi $compiler spack install hdf5~cxx+mpi $compiler ^$mpi spack install parallel-netcdf $compiler ^$mpi done done Running Binaries from Packages¶ Once Spack packages have been built, the next step is to use them. As with building packages, there are many ways to use them, depending on the use case. Find and Run¶ The simplest way to run a Spack binary is to find it and run it! In many cases, nothing more is needed because Spack builds binaries with RPATHs. Spack installation directories may be found with spack location --install-dir commands. For example: $ spack location --install-dir cmake ~/spack/opt/spack/linux-SuSE11-x86_64/gcc-5.3.0/cmake-3.6.0-7cxrynb6esss6jognj23ak55fgxkwtx7 This gives the root of the Spack package; relevant binaries may be found within it. 
For example:

$ CMAKE=`spack location --install-dir cmake`/bin/cmake

Standard UNIX tools can find binaries as well. For example:

$ find ~/spack/opt -name cmake | grep bin
~/spack/opt/spack/linux-SuSE11-x86_64/gcc-5.3.0/cmake-3.6.0-7cxrynb6esss6jognj23ak55fgxkwtx7/bin/cmake

These methods are suitable, for example, for setting up build processes or GUIs that need to know the location of particular tools. However, other more powerful methods are generally preferred for user environments.

Using spack load to Manage the User Environment¶

Suppose that Spack has been used to install a set of command-line programs, which users now wish to use. One can in principle put a number of spack load commands into .bashrc, for example, to load a set of Spack packages:

spack load modele-utils
spack load emacs
spack load ncview
spack load nco
spack load modele-control

Although simple load scripts like this are useful in many cases, they have some drawbacks:

- The set of packages loaded by them will in general not be consistent. They are a decent way to load commands to be called from command shells. See below for better ways to assemble a consistent set of packages for building application programs.
- The spack spec and spack install commands use a sophisticated concretization algorithm that chooses the "best" among several options, taking into account the packages.yaml file. The spack load and spack module tcl loads commands, on the other hand, are not very smart: if the user-supplied spec matches more than one installed package, then spack module tcl loads will fail. This default behavior may change in the future. For now, the workaround is to either be more specific on any failing spack load commands or to use spack load --first to allow spack to load the first matching spec.

Generated Load Scripts¶

Another problem with using spack load is that it can be slow; a typical user environment could take several seconds to load, and would not be appropriate to put into .bashrc directly. This is because it requires the full start-up overhead of python/Spack for each command. In some circumstances it is preferable to use a series of spack module tcl loads (or spack module lmod loads) commands to pre-compute which modules to load. This will generate the module names to load the packages using environment modules, rather than Spack's built-in support for environment modifications. These can be put in a script that is run whenever installed Spack packages change.
For example: #!/bin/sh # # Generate module load commands in ~/env/spackenv cat <<EOF | /bin/sh >$HOME/env/spackenv FIND='spack module tcl loads --prefix linux-SuSE11-x86_64/' \$FIND modele-utils \$FIND emacs \$FIND ncview \$FIND nco \$FIND modele-control EOF The output of this file is written in ~/env/spackenv: # binutils@2.25%gcc@5.3.0+gold~krellpatch~libiberty arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/binutils-2.25-gcc-5.3.0-6w5d2t4 # python@2.7.12%gcc@5.3.0~tk~ucs4 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/python-2.7.12-gcc-5.3.0-2azoju2 # ncview@2.1.7%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/ncview-2.1.7-gcc-5.3.0-uw3knq2 # nco@4.5.5%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/nco-4.5.5-gcc-5.3.0-7aqmimu # modele-control@develop%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/modele-control-develop-gcc-5.3.0-7rddsij # zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/zlib-1.2.8-gcc-5.3.0-fe5onbi # curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/curl-7.50.1-gcc-5.3.0-4vlev55 # hdf5@1.10.0-patch1%gcc@5.3.0+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/hdf5-1.10.0-patch1-gcc-5.3.0-pwnsr4w # netcdf@4.4.1%gcc@5.3.0~hdf4+mpi arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/netcdf-4.4.1-gcc-5.3.0-rl5canv # netcdf-fortran@4.4.4%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/netcdf-fortran-4.4.4-gcc-5.3.0-stdk2xq # modele-utils@cmake%gcc@5.3.0+aux+diags+ic arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/modele-utils-cmake-gcc-5.3.0-idyjul5 # everytrace@develop%gcc@5.3.0+fortran+mpi arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/everytrace-develop-gcc-5.3.0-p5wmb25 Users may now put source ~/env/spackenv into .bashrc. Note Some module systems put a prefix on the names of modules created by Spack. For example, that prefix is linux-SuSE11-x86_64/ in the above case. If a prefix is not needed, you may omit the --prefix flag from spack module tcl loads. Transitive Dependencies¶ In the script above, each spack module tcl loads command generates a single module load line. Transitive dependencies do not usually need to be loaded, only modules the user needs in $PATH. This is because Spack builds binaries with RPATH. Spack's RPATH policy has some nice features: - Modules for multiple inconsistent applications may be loaded simultaneously. In the above example (Multiple Applications), package A and package B can coexist together in the user's $PATH, even though they use different MPIs. - RPATH eliminates a whole class of strange errors that can happen in non-RPATH binaries when the wrong LD_LIBRARY_PATH is loaded. - Recursive module systems such as LMod are not necessary. - Modules are not needed at all to execute binaries. If a path to a binary is known, it may be executed. For example, the path for a Spack-built compiler can be given to an IDE without requiring the IDE to load that compiler's module. Unfortunately, Spack's RPATH support does not work in all cases. For example: - Software comes in many forms — not just compiled ELF binaries, but also as interpreted code in Python, R, JVM bytecode, etc. Those systems almost universally use an environment variable analogous to LD_LIBRARY_PATH to dynamically load libraries. 
- Although Spack generally builds binaries with RPATH, it does not currently do so for compiled Python extensions (for example, py-numpy). Any libraries that these extensions depend on (blas in this case, for example) must be specified in LD_LIBRARY_PATH. - In some cases, Spack-generated binaries end up without a functional RPATH for no discernible reason. In cases where RPATH support doesn't make things "just work," it can be necessary to load a module's dependencies as well as the module itself. This is done by adding the --dependencies flag to the spack module tcl loads command. For example, the following line, added to the script above, would be used to load SciPy, along with Numpy, core Python, BLAS/LAPACK and anything else needed: spack module tcl loads --dependencies py-scipy Dummy Packages¶ As an alternative to a series of module load commands, one might consider dummy packages as a way to create a consistent set of packages that may be loaded as one unit. The idea here is pretty simple: - Create a package (say, mydummy) with no URL and no install() method, just dependencies. - Run spack install mydummy to install. An advantage of this method is the set of packages produced will be consistent. This means that you can reliably build software against it. A disadvantage is the set of packages will be consistent; this means you cannot load up two applications this way if they are not consistent with each other. Filesystem Views¶ Filesystem views offer an alternative to environment modules, another way to assemble packages in a useful way and load them into a user's environment. A single-prefix filesystem view is a single directory tree that is the union of the directory hierarchies of a number of installed packages; it is similar to the directory hierarchy that might exist under /usr/local. The files of the view's installed packages are brought into the view by symbolic or hard links, referencing the original Spack installation. A combinatorial filesystem view can contain more software than a single-prefix view. Combinatorial filesystem views are created by defining a projection for each spec or set of specs. The syntax for this is discussed below under Controlling View Projections. The projection for a spec or set of specs specifies the naming scheme for the directory structure under the root of the view into which the package will be linked. For example, the spec zlib@1.2.8 built with gcc could be projected to MYVIEW/zlib-1.2.8-gcc. When software is built and installed, absolute paths are frequently "baked into" the software, making it non-relocatable. This happens not just in RPATHs, but also in shebangs, configuration files, and assorted other locations. Therefore, programs run out of a Spack view will typically still look in the original Spack-installed location for shared libraries and other resources. This behavior is not easily changed; in general, there is no way to know where absolute paths might be written into an installed package, and how to relocate it. Therefore, the original Spack tree must be kept in place for a filesystem view to work, even if the view is built with hardlinks. spack view¶ A filesystem view is created, and packages are linked in, by the spack view command's symlink and hardlink sub-commands. The spack view remove command can be used to unlink some or all of the filesystem view. 
The following example creates a filesystem view based on an installed cmake package and then removes from the view the files in the cmake package while retaining its dependencies. $ spack view --verbose symlink myview cmake ==> Linking package: "ncurses" ==> Linking package: "zlib" ==> Linking package: "openssl" ==> Linking package: "cmake" $ ls myview/ bin doc etc include lib share $ ls myview/bin/ captoinfo clear cpack ctest infotocap openssl tabs toe tset ccmake cmake c_rehash infocmp ncurses6-config reset tic tput $ spack view --verbose --dependencies false rm myview cmake ==> Removing package: "cmake" $ ls myview/bin/ captoinfo c_rehash infotocap openssl tabs toe tset clear infocmp ncurses6-config reset tic tput Note If the set of packages being included in a view is inconsistent, then it is possible that two packages will provide the same file. Any conflicts of this type are handled on a first-come-first-served basis, and a warning is printed. Note When packages are removed from a view, empty directories are purged. Controlling View Projections¶ The default projection into a view is to link every package into the root of the view. This can be changed by adding a projections.yaml configuration file to the view. The projection configuration file for a view located at /my/view is stored in /my/view/.spack/projections.yaml. When creating a view, the projection configuration file can also be specified from the command line using the --projection-file option to the spack view command. The projections configuration file is a mapping of partial specs to spec format strings, as shown in the example below. projections: zlib: {name}-{version} ^mpi: {name}-{version}/{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version} all: {name}-{version}/{compiler.name}-{compiler.version} The entries in the projections configuration file must all be either specs or the keyword all. For each spec, the projection used will be the first non- all entry that the spec satisfies, or all if there is an entry for all and no other entry is satisfied by the spec. Where the keyword all appears in the file does not matter. Given the example above, any spec satisfying zlib@1.2.8 will be linked into /my/view/zlib-1.2.8/, any spec satisfying hdf5@1.8.10+mpi %gcc@4.9.3 ^mvapich2@2.2 will be linked into /my/view/hdf5-1.8.10/mvapich2-2.2-gcc-4.9.3, and any spec satisfying hdf5@1.8.10~mpi %gcc@4.9.3 will be linked into /my/view/hdf5-1.8.10/gcc-4.9.3. If the keyword all does not appear in the projections configuration file, any spec that does not satisfy any entry in the file will be linked into the root of the view as in a single-prefix view. Any entries that appear below the keyword all in the projections configuration file will not be used, as all specs will use the projection under all before reaching those entries. Fine-Grain Control¶ The --exclude and --dependencies option flags allow for fine-grained control over which packages and dependencies do or do not get included in a view. For example, suppose you are developing the appsy package. You wish to build against a view of all appsy dependencies, but not appsy itself: $ spack view --dependencies yes --exclude appsy symlink /path/to/MYVIEW/ appsy Alternately, you wish to create a view whose purpose is to provide binary executables to end users. You only need to include applications they might want, and not those applications' dependencies. 
In this case, you might use: $ spack view --dependencies no symlink /path/to/MYVIEW/ cmake Hybrid Filesystem Views¶ Although filesystem views are usually created by Spack, users are free to add to them by other means. For example, imagine a filesystem view, created by Spack, that looks something like: /path/to/MYVIEW/bin/programA -> /path/to/spack/.../bin/programA /path/to/MYVIEW/lib/libA.so -> /path/to/spack/.../lib/libA.so Now, the user may add to this view by non-Spack means; for example, by running a classic install script. For example: $ tar -xf B.tar.gz $ cd B/ $ ./configure --prefix=/path/to/MYVIEW \ --with-A=/path/to/MYVIEW $ make && make install The result is a hybrid view: /path/to/MYVIEW/bin/programA -> /path/to/spack/.../bin/programA /path/to/MYVIEW/bin/programB /path/to/MYVIEW/lib/libA.so -> /path/to/spack/.../lib/libA.so /path/to/MYVIEW/lib/libB.so In this case, real files coexist, interleaved with the "view" symlinks. At any time one can delete /path/to/MYVIEW or use spack view to manage it surgically. None of this will affect the real Spack install area. Global Activations¶ spack activate may be used as an alternative to loading Python (and similar systems) packages directly or creating a view. If extensions are globally activated, then spack load python will also load all the extensions activated for the given python. This reduces the need for users to load a large number of packages. However, Spack global activations have two potential drawbacks: - Activated packages that involve compiled C extensions may still need their dependencies to be loaded manually. For example, spack load openblas might be required to make py-numpy work. - Global activations "break" a core feature of Spack, which is that multiple versions of a package can co-exist side-by-side. For example, suppose you wish to run a Python package in two different environments but the same basic Python — one with one version of an extension such as py-numpy and one with another version. Spack extensions will not support this potential debugging use case. Discussion: Running Binaries¶ Modules, extension packages and filesystem views are all ways to assemble sets of Spack packages into a useful environment. They are all semantically similar, in that conflicting installed packages cannot simultaneously be loaded, activated or included in a view. With all of these approaches, there is no guarantee that the environment created will be consistent. It is possible, for example, to simultaneously load application A that uses OpenMPI and application B that uses MPICH. Both applications will run just fine in this inconsistent environment because they rely on RPATHs, not the environment, to find their dependencies. In general, environments set up using modules vs. views will work similarly. Both can be used to set up ephemeral or long-lived testing/development environments. Operational differences between the two approaches can make one or the other preferable in certain environments: - Filesystem views do not require environment module infrastructure. Although Spack can install environment-modules, users might be hostile to its use. Filesystem views offer a good solution for sysadmins serving users who just "want all the stuff I need in one place" and don't want to hear about Spack. - Although modern build systems will find dependencies wherever they might be, some applications with hand-built make files expect their dependencies to be in one place. One common problem is makefiles that assume that netcdf and netcdf-fortran are installed in the same tree. 
Or, one might use an IDE that requires tedious configuration of dependency paths; and it’s easier to automate that administration in a view-building script than in the IDE itself. For all these cases, a view will be preferable to other ways to assemble an environment. - On systems with I-node quotas, modules might be preferable to views and extension packages. - Views and activated extensions maintain state that is semantically equivalent to the information in a spack module tcl loadsscript. Administrators might find things easier to maintain without the added “heavyweight” state of a view. Using Spack to Replace Homebrew/Conda¶ Spack is an incredibly powerful package manager, designed for supercomputers where users have diverse installation needs. But Spack can also be used to handle simple single-user installations on your laptop. Most macOS users are already familiar with package managers like Homebrew and Conda, where all installed packages are symlinked to a single central location like /usr/local. In this section, we will show you how to emulate the behavior of Homebrew/Conda using Environments! Setup¶ First, let’s create a new environment. We’ll assume that Spack is already set up correctly, and that you’ve already sourced the setup script for your shell. To create a new environment, simply run: $ spack env create myenv ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view ==> Created environment 'myenv' in /Users/me/spack/var/spack/environments/myenv $ spack env activate myenv Here, myenv can be anything you want to name your environment. Next, we can add a list of packages we would like to install into our environment. Let’s say we want a newer version of Bash than the one that comes with macOS, and we want a few Python libraries. We can run: $ spack add bash ==> Adding bash to environment myenv ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view $ spack add python@3: ==> Adding python@3: to environment myenv ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view $ spack add py-numpy py-scipy py-matplotlib ==> Adding py-numpy to environment myenv ==> Adding py-scipy to environment myenv ==> Adding py-matplotlib to environment myenv ==> Updating view at /Users/me/spack/var/spack/environments/myenv/.spack-env/view Each package can be listed on a separate line, or combined into a single line. Notice that we’re explicitly asking for Python 3 here. You can use any spec you would normally use on the command line with other Spack commands. Next, we want to manually configure a couple of things. In the myenv directory, we can find the spack.yaml that actually defines our environment. $ vim ~/spack/var/spack/environments/myenv/spack.yaml # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: [bash, 'python@3:', py-numpy, py-scipy, py-matplotlib] view: default: root: /Users/me/spack/var/spack/environments/myenv/.spack-env/view projections: {} config: {} mirrors: {} modules: enable: [] packages: {} repos: [] upstreams: {} definitions: [] concretization: separately You can see the packages we added earlier in the specs: section. If you ever want to add more packages, you can either use spack add or manually edit this file. We also need to change the concretization: option. By default, Spack concretizes each spec separately, allowing multiple versions of the same package to coexist. 
Since we want a single consistent environment, we want to concretize all of the specs together. Here is what your spack.yaml looks like with these new settings, and with some of the sections we don’t plan on using removed: spack: - specs: [bash, 'python@3:', py-numpy, py-scipy, py-matplotlib] + specs: + - bash + - 'python@3:' + - py-numpy + - py-scipy + - py-matplotlib - view: - default: - root: /Users/me/spack/var/spack/environments/myenv/.spack-env/view - projections: {} + view: /Users/me/spack/var/spack/environments/myenv/.spack-env/view - config: {} - mirrors: {} - modules: - enable: [] - packages: {} - repos: [] - upstreams: {} - definitions: [] + concretization: together - concretization: separately Symlink location¶ In the spack.yaml file above, you’ll notice that by default, Spack symlinks all installations to /Users/me/spack/var/spack/environments/myenv/.spack-env/view. You can actually change this to any directory you want. For example, Homebrew uses /usr/local, while Conda uses /Users/me/anaconda. In order to access files in these locations, you need to update PATH and other environment variables to point to them. Activating the Spack environment does this automatically, but you can also manually set them in your .bashrc. Warning There are several reasons why you shouldn’t use /usr/local: - If you are on macOS 10.11+ (El Capitan and newer), Apple makes it hard for you. You may notice permissions issues on /usr/localdue to their System Integrity Protection. By default, users don’t have permissions to install anything in /usr/local, and you can’t even change this using sudo chownor sudo chmod. - Other package managers like Homebrew will try to install things to the same directory. If you plan on using Homebrew in conjunction with Spack, don’t symlink things to /usr/local. - If you are on a shared workstation, or don’t have sudo privileges, you can’t do this. If you still want to do this anyway, there are several ways around SIP. You could disable SIP by booting into recovery mode and running csrutil disable, but this is not recommended, as it can open up your OS to security vulnerabilities. Another technique is to run spack concretize and spack install using sudo. This is also not recommended. The safest way I’ve found is to create your installation directories using sudo, then change ownership back to the user like so: for directory in .spack bin contrib include lib man share do sudo mkdir -p /usr/local/$directory sudo chown $(id -un):$(id -gn) /usr/local/$directory done Depending on the packages you install in your environment, the exact list of directories you need to create may vary. You may also find some packages like Java libraries that install a single file to the installation prefix instead of in a subdirectory. In this case, the action is the same, just replace mkdir -p with touch in the for-loop above. But again, it’s safer just to use the default symlink location. Installation¶ To actually concretize the environment, run: $ spack concretize This will tell you which if any packages are already installed, and alert you to any conflicting specs. To actually install these packages and symlink them to your view: directory, simply run: $ spack install Now, when you type which python3, it should find the one you just installed. In order to change the default shell to our newer Bash installation, we first need to add it to this list of acceptable shells. Run: $ sudo vim /etc/shells and add the absolute path to your bash executable. 
Then run: $ chsh -s /path/to/bash Now, when you log out and log back in, echo $SHELL should point to the newer version of Bash. Updating Installed Packages¶ Let’s say you upgraded to a new version of macOS, or a new version of Python was released, and you want to rebuild your entire software stack. To do this, simply run the following commands: $ spack env activate myenv $ spack concretize --force $ spack install The --force flag tells Spack to overwrite its previous concretization decisions, allowing you to choose a new version of Python. If any of the new packages like Bash are already installed, spack install won’t re-install them, it will keep the symlinks in place. Using Spack on Travis-CI¶ Spack can be deployed as a provider for userland software in Travis-CI. A starting-point for a .travis.yml file can look as follows. It uses caching for already built environments, so make sure to clean the Travis cache if you run into problems. The main points that are implemented below: - Travis is detected as having up to 34 cores available, but only 2 are actually allocated for the user. We limit the parallelism of the spack builds in the config. (The Travis yaml parser is a bit buggy on the echo command.) - Without control for the user, Travis jobs will run on various x86_64microarchitectures. If you plan to cache build results, e.g. to accelerate dependency builds, consider building for the generic x86_64target only. Limiting the microarchitecture will also find more packages when working with the E4S Spack build cache. - Builds over 10 minutes need to be prefixed with travis_wait. Alternatively, generate output once with spack install -v. - Travis builds are non-interactive. This prevents using bash aliases and functions for modules. We fix that by sourcing /etc/profilefirst (or running everything in a subshell with bash -l -c '...'). language: cpp sudo: false dist: trusty cache: apt: true directories: - $HOME/.cache addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-4.9 - environment-modules env: global: - SPACK_ROOT: $HOME/.cache/spack - PATH: $PATH:$HOME/.cache/spack/bin before_install: - export CXX=g++-4.9 - export CC=gcc-4.9 - export FC=gfortran-4.9 - export $SPACK_ROOT/etc/spack/config.yaml && printf "packages:\n all:\n target: ['x86_64']\n" \ > $SPACK_ROOT/etc/spack/packages.yaml; fi - travis_wait spack install [email protected]~openssl~ncurses - travis_wait spack install [email protected]~graph~iostream~locale~log~wave - spack clean -a - source /etc/profile && source $SPACK_ROOT/share/spack/setup-env.sh - spack load cmake - spack load boost script: - mkdir -p $HOME/build - cd $HOME/build - cmake $TRAVIS_BUILD_DIR - make -j 2 - make test Upstream Bug Fixes¶ It is not uncommon to discover a bug in an upstream project while trying to build with Spack. Typically, the bug is in a package that serves a dependency to something else. This section describes procedure to work around and ultimately resolve these bugs, while not delaying the Spack user’s main goal. Buggy New Version¶ Sometimes, the old version of a package works fine, but a new version is buggy. For example, it was once found that Adios did not build with [email protected]. If the old version of hdf5 will work with adios, the suggested procedure is: Revert adiosto the old version of hdf5. Put in its adios/package.py: # Adios does not build with HDF5 1.10 # See: depends_on('hdf5@:1.9') Determine whether the problem is with hdf5or adios, and report the problem to the appropriate upstream project. 
In this case, the problem was with adios. Once a new version of adioscomes out with the bugfix, modify adios/package.pyto reflect it: # Adios up to v1.10.0 does not build with HDF5 1.10 # See: depends_on('hdf5@:1.9', when='@:1.10.0') depends_on('hdf5', when='@1.10.1:') No Version Works¶ Sometimes, no existing versions of a dependency work for a build. This typically happens when developing a new project: only then does the developer notice that existing versions of a dependency are all buggy, or the non-buggy versions are all missing a critical feature. In the long run, the upstream project will hopefully fix the bug and release a new version. But that could take a while, even if a bugfix has already been pushed to the project’s repository. In the meantime, the Spack user needs things to work. The solution is to create an unofficial Spack release of the project, as soon as the bug is fixed in some repository. A study of the Git history of py-proj/package.py is instructive here: On April 1, an initial bugfix was identified for the PyProj project and a pull request submitted to PyProj. Because the upstream authors had not yet fixed the bug, the py-projSpack package downloads from a forked repository, set up by the package’s author. A non-numeric version number is used to make it easy to upgrade the package without recomputing checksums; however, this is an untrusted download method and should not be distributed. The package author has now become, temporarily, a maintainer of the upstream project: # We need the benefits of this PR # version('citibeth-latlong2', git='', branch='latlong2') By May 14, the upstream project had accepted a pull request with the required bugfix. At this point, the forked repository was deleted. However, the upstream project still had not released a new version with a bugfix. Therefore, a Spack-only release was created by specifying the desired hash in the main project repository. The version number @1.9.5.1.1was chosen for this “release” because it’s a descendent of the officially released version @1.9.5.1. This is a trusted download method, and can be released to the Spack community: # This is not a tagged release of pyproj. # The changes in this "version" fix some bugs, especially with Python3 use. version('1.9.5.1.1', 'd035e4bc704d136db79b43ab371b27d2', url='') Note It would have been simpler to use Spack’s Git download method, which is also a trusted download in this case: # This is not a tagged release of pyproj. # The changes in this "version" fix some bugs, especially with Python3 use. version('1.9.5.1.1', git='', commit='0be612cc9f972e38b50a90c946a9b353e2ab140f') Note In this case, the upstream project fixed the bug in its repository in a relatively timely manner. If that had not been the case, the numbered version in this step could have been released from the forked repository. The author of the Spack package has now become an unofficial release engineer for the upstream project. Depending on the situation, it may be advisable to put preferred=Trueon the latest officially released version. As of August 31, the upstream project still had not made a new release with the bugfix. In the meantime, Spack-built py-projprovides the bugfix needed by packages depending on it. As long as this works, there is no particular need for the upstream project to make a new official release. If the upstream project releases a new official version with the bugfix, then the unofficial version()line should be removed from the Spack package. 
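To make the pattern above concrete, here is a minimal sketch of what such a package.py might look like while an unofficial, Spack-only release is in play. The repository URLs, checksum, and base-class details are placeholders for illustration, not the real py-proj values; only the version() directives mirror the workflow described above.
from spack import *
class PyProj(PythonPackage):
    """Hypothetical sketch: a package carrying an unofficial, Spack-only release."""
    homepage = "https://example.org/pyproj"                  # placeholder
    url = "https://example.org/pyproj-1.9.5.1.tar.gz"        # placeholder
    # Latest official release; marked preferred so users get it by default.
    version('1.9.5.1', '0123456789abcdef0123456789abcdef', preferred=True)  # placeholder checksum
    # Unofficial "release": the official version plus an upstream bugfix,
    # pinned to a specific commit in the upstream repository (trusted download).
    version('1.9.5.1.1',
            git='https://example.org/upstream/pyproj.git',   # placeholder URL
            commit='0be612cc9f972e38b50a90c946a9b353e2ab140f')
Once the upstream project publishes an official release containing the fix, the unofficial version() line and the preferred=True marker can simply be dropped again.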
Patches¶ Spack’s source patching mechanism provides another way to fix bugs in upstream projects. This has advantages and disadvantages compared to the procedures above. Advantages: - It can fix bugs in existing released versions, and (probably) future releases as well. - It is lightweight, does not require a new fork to be set up. Disadvantages: - It is harder to develop and debug a patch, vs. a branch in a repository. The user loses the automation provided by version control systems. - Although patches of a few lines work OK, large patch files can be hard to create and maintain.
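Despite those trade-offs, a patch is often the quickest fix for a small upstream bug. As a rough, hypothetical sketch (the package name, patch file name, and version range below are invented for illustration), the patch file sits next to package.py in the Spack repository and is applied after the source is unpacked:
class Mylib(AutotoolsPackage):      # hypothetical example package
    # ...
    # Apply a small local fix, but only to the affected versions.
    patch('fix-hdf5-build.patch', when='@:1.10.0')
Because the patch travels with the package recipe, it also applies to already-released tarballs, which is exactly the case the fork-based approach above cannot cover.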
https://spack.readthedocs.io/en/latest/workflows.html
2021-02-24T22:34:10
CC-MAIN-2021-10
1614178349708.2
[]
spack.readthedocs.io
ctalyst® – Using your own Registration Menu This article discusses how to implement the ctalyst® registration system with a registration menu of your own. This article assumes that the ctalyst® Unity SDK is already installed in your project. Prerequisites: Download the ctalyst® Unity Plugin and set it up in your project. For more details on setting up the ctalyst® SDK, read Unity 3D – Getting Started with ctalyst Register a User to ctalyst®: Initializing Plugin: The following code initializes the Registration-Login Component of the ctalyst® Unity Plugin. It also adds the method RunOnRegistrationSuccessful to be executed when registration is successful and the method RunOnRegistrationNotSuccessful to be executed when registration is unsuccessful. You can define your own code in these methods. private ctalyst.ctalyst_Game_Registration ctalystRegisterLogin; void Awake() { ctalystRegisterLogin = gameObject.GetComponent<ctalyst.ctalyst_Game_Registration>(); // Add custom method to execute when Registration is Successful ctalyst.ctalyst_Game_Registration.AddUserRegisterSuccessfulEventListener(RunOnRegistrationSuccessful); // Add custom method to execute when Registration is Not Successful ctalyst.ctalyst_Game_Registration.AddRegistrationErrorEventListener(RunOnRegistrationNotSuccessful); } public void RunOnRegistrationSuccessful() { // This method will execute if registration is successful // You can put custom code here ... } public void RunOnRegistrationNotSuccessful() { // This method will execute if registration is not successful // You can put custom code here ... } Now the plugin is configured to register a user to ctalyst®. The following method makes the API call to register the user to ctalyst®. ctalystRegisterLogin.Register( user_email, // string user_password, // string user_confirm_password, // string user_year_of_birth, // positive int user_approx_income, // positive int user_gender // "Male", "Female" or "Unknown" ); "user_gender" has only 3 specific values, as mentioned in the comment in the above code: "Male", "Female" or "Unknown". "Unknown" is used if a user does not want to disclose their gender.
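As a hedged illustration only — the UI field names below are invented and not part of the ctalyst® SDK — a registration menu's submit button could gather its input fields and call Register() like this:
using UnityEngine;
using UnityEngine.UI;

public class MyRegistrationMenu : MonoBehaviour
{
    // Hypothetical UI references, wired up in the Inspector.
    public InputField emailField;
    public InputField passwordField;
    public InputField confirmPasswordField;

    private ctalyst.ctalyst_Game_Registration ctalystRegisterLogin;

    void Awake()
    {
        ctalystRegisterLogin = gameObject.GetComponent<ctalyst.ctalyst_Game_Registration>();
    }

    // Hook this up to the submit button's OnClick event.
    public void OnSubmitClicked()
    {
        ctalystRegisterLogin.Register(
            emailField.text,
            passwordField.text,
            confirmPasswordField.text,
            1990,        // example year of birth
            50000,       // example approximate income
            "Unknown");  // gender not disclosed
    }
}
The parameter order and types follow the Register() signature shown above; validation of the inputs before calling Register() is left to your own menu logic.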
http://docs.ctalyst.com/article/unity-3d-registration-menu/
2021-02-24T22:33:09
CC-MAIN-2021-10
1614178349708.2
[]
docs.ctalyst.com
Implementing the Initialization and Reception Processes In order to use APNs, implement the APIs of the notification framework of iOS for initializing the push notification feature and preparing the handlers in your mobile app. The basic implementation method is based on that for iOS push notification. For more information, refer to the iOS documentation and general technical information on the Internet. To receive push notifications, you need to implement the following process and handlers. - Initialization process: Initialize APNs and install the device to Kii Cloud. If you have taken the tutorial, you have already implemented the push notification feature. See the linked topics if you need to use actionable notifications or want to know more about device installation. To support different versions of iOS, two versions of sample code are provided for initializing APNs: the UNUserNotificationCenter class is used for iOS 10 or later, while the UIUserNotificationSettings class is used for iOS 8 and iOS 9. Some of the handler calls explained below change depending on which initialization API is used. - Reception handler: This handler is called when the mobile app directly receives a push notification while it is running in the foreground or background. Prepare this handler if you want to receive silent notifications. - User action handler: This handler receives notification actions. With iOS 10 or later, this handler also receives push notifications that are acted on through the Notification Center, in addition to notification actions. The APIs to implement and the iOS versions they support differ by handler. This guide provides sample code for each version so that you can run your mobile app on iOS 8 or later. These handlers are called by different triggers depending on the situation. Different payload content and user actions can cause the methods to be called twice or not to be called at all. Kii has tested different combinations of payload keys and user actions with real apps. For the test result, see Combinations of Reception Methods as a reference for implementation and testing. You can implement your mobile app in an efficient way, for example, by letting the reception handler and the user action handler call a common method that contains necessary processes. Depending on the payload keys in use, you might need a mechanism to eliminate duplication in processes.
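As a minimal sketch of the standard iOS side of the initialization described above (the Kii-specific device installation call is deliberately omitted here, since its exact API belongs to the Kii SDK and is covered in the linked topics), the two code paths could look like this:
import UIKit
import UserNotifications

// Request notification permission and register with APNs.
// iOS 10+ uses UNUserNotificationCenter; iOS 8/9 use UIUserNotificationSettings.
func initializeAPNs(application: UIApplication) {
    if #available(iOS 10.0, *) {
        let center = UNUserNotificationCenter.current()
        center.requestAuthorization(options: [.alert, .badge, .sound]) { granted, _ in
            // Register with APNs once the user has responded to the prompt.
            DispatchQueue.main.async {
                application.registerForRemoteNotifications()
            }
        }
    } else {
        // iOS 8/9 path.
        let settings = UIUserNotificationSettings(types: [.alert, .badge, .sound], categories: nil)
        application.registerUserNotificationSettings(settings)
        application.registerForRemoteNotifications()
    }
}
The device token delivered to the app delegate afterwards is what gets passed to the Kii Cloud device installation step.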
https://docs.kii.com/en/guides/cloudsdk/ios/managing-push-notification/implementation/
2021-02-24T23:23:16
CC-MAIN-2021-10
1614178349708.2
[]
docs.kii.com
Initialization The thing program starts with the main() function which initializes the program and onboards the thing. Processes at Start of the Program First, the function gets the program arguments and initializes the mutex used for the exclusive processing of the state handler. static pthread_mutex_t m_mutex; int main(int argc, char** argv) { ...... char *vendorThingID, *thingPassword; if (argc != 3) { printf("hellothingif {vendor thing id} {thing password}\n"); exit(1); } vendorThingID = argv[1]; thingPassword = argv[2]; if (pthread_mutex_init(&m_mutex, NULL) != 0) { printf("Failed to get mutex.\n"); exit(1); } ...... } As seen in Running Sample Apps, the thing program starts with command line parameters as below. $ ./hellothingif 1111 DEF456 Specify the command name, vendorThingID, and the thing password. argc and argv count the command name. Therefore, argv[1] stores vendorThingID and argv[2] stores the thing password. Next, the mutex of Pthreads is initialized for exclusive processing. pthread_mutex_init(), which is one of the Pthreads APIs, initializes the static variable m_mutex. Initializing the Thing-IF SDK Next, the Thing-IF SDK is initialized. The Thing-IF SDK does not dynamically reserve memory with malloc or anything. Therefore, the user program needs to prepare buffers for communication processing. Also, as seen in Source Code Structure, it needs to pass the action and state handlers to the Thing-IF SDK. Processes for the above tasks are implemented with the code below. int main(int argc, char** argv) { kii_bool_t result; kii_thing_if_command_handler_resource_t command_handler_resource; kii_thing_if_state_updater_resource_t state_updater_resource; char command_handler_buff[EX_COMMAND_HANDLER_BUFF_SIZE]; char state_updater_buff[EX_STATE_UPDATER_BUFF_SIZE]; char mqtt_buff[EX_MQTT_BUFF_SIZE]; kii_thing_if_t kii_thing_if; ...... /* prepare for the command handler */ memset(&command_handler_resource, 0x00, sizeof(command_handler_resource)); command_handler_resource.buffer = command_handler_buff; command_handler_resource.buffer_size = sizeof(command_handler_buff) / sizeof(command_handler_buff[0]); command_handler_resource.mqtt_buffer = mqtt_buff; command_handler_resource.mqtt_buffer_size = sizeof(mqtt_buff) / sizeof(mqtt_buff[0]); command_handler_resource.action_handler = action_handler; command_handler_resource.state_handler = state_handler; /* prepare for the state updater */ memset(&state_updater_resource, 0x00, sizeof(state_updater_resource)); state_updater_resource.buffer = state_updater_buff; state_updater_resource.buffer_size = sizeof(state_updater_buff) / sizeof(state_updater_buff[0]); state_updater_resource.period = EX_STATE_UPDATE_PERIOD; state_updater_resource.state_handler = state_handler; /* initialize */ result = init_kii_thing_if(&kii_thing_if, EX_APP_ID, EX_APP_KEY, EX_APP_SITE, &command_handler_resource, &state_updater_resource, NULL); if (result == KII_FALSE) { printf("Failed to initialize the SDK.\n"); exit(1); } ...... } The above code sets the buffers and handlers in command_handler_resource and state_updater_resource and calls init_kii_thing_if(). The code might look complicated, but you can see it is quite simple from the below figure. Also, it sets the interval to update state information in seconds in state_updater_resource.period. EX_STATE_UPDATE_PERIOD is set to 60 (one minute) in the sample program. init_kii_thing_if() identifies the target application on Kii Cloud with the AppID, AppKey and server location. 
As seen in Application Class Implementation, the mobile app and the thing program work together by accessing the same application. As a result of initialization, the kii_thing_if_t structure, the first argument of init_kii_thing_if(), is initialized and the main() function gets control. From now, subsequent APIs are called with this kii_thing_if_t structure. You need to reserve buffers of sufficient size in order to process received actions and register state information. If the buffers are smaller than the data to send or receive, such processes will fail. Onboarding the Thing Next, the thing gets onboarded. In this tutorial, onboarding from the Android mobile app aims to associate the owner with the thing, but onboarding from the thing program just registers the thing to Kii Cloud. You can start onboarding from either the mobile app or the thing program; the execution sequence does not matter. int main(int argc, char** argv) { ...... result = onboard_with_vendor_thing_id(&kii_thing_if, vendorThingID, thingPassword, THING_TYPE, THING_PROPERTIES); if (result == KII_FALSE) { printf("Failed to onboard the thing."); exit(1); } ...... } The onboard_with_vendor_thing_id() of the Thing-IF SDK onboards the thing. The arguments include the initialized kii_thing_if_t structure, vendorThingID, and the thing password. The remaining arguments, THING_TYPE and THING_PROPERTIES, are declared as below. They must match the values specified when onboarding from the mobile app. const char THING_TYPE[] = "HelloThingIF-SmartLED"; const char THING_PROPERTIES[] = "{}"; You can get the detailed root cause of errors in onboarding, if any. By changing the code as below, you can get the HTTP response code and body returned by the REST API of Thing Interaction Framework for debugging purposes. printf("Failed to onboard the thing. %d, %s\n", kii_thing_if.command_handler.kii_core.response_code, kii_thing_if.command_handler.kii_core.response_body); Waiting The last part of main() keeps the program waiting for commands by entering the loop with the while statement (a lower-CPU variant of this loop is sketched at the end of this page). int main(int argc, char** argv) { ...... printf("Waiting for commands\n"); while(1){}; /* run forever. */ } The initialization of the Thing-IF SDK has internally created threads to process actions and state information. These threads call the action and state handlers to control the thing. What's Next? Let us walk through the process of receiving commands. Go to Receiving Commands. If you want to learn more... - See Initializing and Onboarding for more information about initialization parameters for the Thing-IF SDK.
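Returning to the waiting loop above: while(1){} busy-waits and keeps one CPU core spinning even though all the real work happens in the SDK's internal threads. A minimal sketch of a cheaper idle loop (the SDK threads keep handling commands and state uploads unchanged; the sleep interval is arbitrary):
#include <unistd.h>

/* Instead of while(1){};, idle the main thread cheaply. */
printf("Waiting for commands\n");
while (1) {
    sleep(60);  /* wake up once a minute; any long interval works */
}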
https://docs.kii.com/en/samples/hello-thingif/thing/initialize/
2021-02-24T23:54:32
CC-MAIN-2021-10
1614178349708.2
[array(['01.png', None], dtype=object)]
docs.kii.com
Select the Overwrite Existing Sound Clips option so that the resulting audio clip will be positioned in its entire length, overwriting any existing clip positioned in its way. Change Frame When Clicking on Audio Tracks Opens the Vectorize Options dialog box when importing bitmap images. Lock All Audio Tracks Locks all audio tracks. If an audio track is already locked, the Lock All Audio Tracks command changes to Unlock All Audio Tracks.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/reference/menus/main/sound-menu.html
2021-02-24T23:30:16
CC-MAIN-2021-10
1614178349708.2
[]
docs.toonboom.com
Workspace Toolbar. To display the Workspace toolbar, select Windows > Toolbars > Workspace.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/reference/toolbars/workspace-toolbar.html
2021-02-24T23:35:49
CC-MAIN-2021-10
1614178349708.2
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/SBP/Reference/workspace-toolbar.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Deleting data sources You delete a data source when you want to remove the information contained in the data source from your Amazon Kendra index. For example, delete a data source when: A data source is incorrectly configured. Delete the data source, wait for the data source to finish deleting, and then recreate it. You migrated documents from one data source to another. Delete the original data source and recreate it in the new location. You have reached the limit of data sources for an index. Delete one of the existing data sources and add a new one. For more information about the number of data sources that you can create, see Quotas. To delete a data source, use the console, the AWS Command Line Interface (AWS CLI), or the DeleteDataSource operation. Deleting a data source removes all of the information about the data source from the index. If you only want to stop synching the data source, change the synchronization schedule for the data source to "run on demand". To delete a data source (console) Sign in to the AWS Management Console and open the Amazon Kendra console at . In the navigation pane, choose Indexes, and then choose the index that contains the data source to delete. In the navigation pane, choose Data sources. Choose the data source to remove. Choose Delete to delete the data source. To delete a data source (CLI) In the AWS Command Line Interface, use the following command. The command is formatted for Linux and macOS. If you are using Windows, replace the Unix line continuation character (\) with a caret (^). aws kendra delete-data-source \ --id data-source-id\ --index-id index-id When you delete a data source, Amazon Kendra removes all of the stored information about the data source. Amazon Kendra removes all of the document data stored in the index, and all run histories and metrics associated with the data source. Deleting a data source does not remove the original documents from your storage. Deleting a data source is an asynchronous operation. When you start deleting a data source, the data source status changes to DELETING. It remains in the DELETING state until the information related to the data source is removed. After the data source is deleted, , it no longer appears in the results of a call to the ListDataSources operation. If you call the DescribeDataSource operation with the deleted data source's identifier, you receive a ResourceNotFound exception. Amazon Kendra releases the resources for a data source as soon as you call the DeleteDataSource operation or choose to delete the data source in the console. If you are deleting the data source to reduce the number of data sources below your limit, you can create a new data source right away. If you are deleting a data source and then creating another data source to the document data, wait for the first data source to be deleted before you sync the new data source. You can delete a data source that is in the process of syncing with Amazon Kendra. The sync is stopped and the data source is removed. If you attempt to start a sync when the data source is being deleted, you get a ConflictException exception. You can't delete a data source if the associated index is in the DELETING state. Deleting an index deletes all of the data sources for the index. You can start deleting an index while a data source for that index is in the DELETING state. 
If you have two data sources pointing to the same documents, such as two data sources pointing to the same S3 bucket, documents in the index might be inconsistent when one of the data sources is deleted. When two data sources reference the same documents, only one copy of the document data is stored in the index. Removing one data source removes the index data for the documents. The other data source is not aware that the documents have been removed, so Amazon Kendra won't correctly re-index the documents the next time it syncs. When you have two data sources pointing to the same document location, you should delete both data sources and then recreate one.
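Because deletion is asynchronous, scripts that delete and then recreate a data source sometimes need to wait for the first deletion to finish. A rough sketch of one way to do that, using the behavior described above (describe-data-source failing once the data source is gone); the identifiers are placeholders and the polling interval is arbitrary:
# Wait until the data source is fully deleted before recreating it.
# data-source-id and index-id below are placeholders.
while aws kendra describe-data-source \
        --id data-source-id \
        --index-id index-id >/dev/null 2>&1; do
    echo "Data source still deleting..."
    sleep 30
done
echo "Data source deleted."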
https://docs.aws.amazon.com/kendra/latest/dg/delete-data-source.html
2020-11-24T01:33:33
CC-MAIN-2020-50
1606141169606.2
[]
docs.aws.amazon.com
Tips to sort and distribute data plots in Power BI reports This article targets you as a report author designing Power BI reports, when using data plot visuals. Watch the video demonstrating the top nine tips to sort and distribute data plots in Power BI reports. Tips In summary, the top nine tips to sort and distribute data plots in Power BI reports include: - Sort nominal categories - Sort interval categories - Sort ordinal categories - Visualize distant plots - Visualize crowded categorical plots - Visualize crowded time plots - Distribute multiple dimensions - Categorize data plots - Differentiate value plots Next steps For more information related to this article, check out the following resources: - Tips for creating stunning reports - biDezine video: Top 9 Tips to Sort and Distribute data plots in Power BI - Questions? Try asking the Power BI Community - Suggestions? Contribute ideas to improve Power BI
https://docs.microsoft.com/en-us/power-bi/guidance/report-tips-sort-distribute-data-plots
2020-11-24T02:36:32
CC-MAIN-2020-50
1606141169606.2
[]
docs.microsoft.com
Work Orders (KPI) Total WOs worked on: This is a count of every work order that has a posting that is within the date range of the report. Miscellaneous WO's are not included. Total WO cost: This is a total of work order postings (Part, Labor, Tire, Misc.) by date. This cost includes markup and overhead calculations. Tax and Shop Supply charges are not included in this cost. Miscellaneous WOs also are not included. Average WO cost: This divides the above Total WO cost by the above count of WOs worked on. Highest WO cost: This totals the Part, Labor, Tire, and Misc. postings for each work order and displays the WO number and the total of the postings that occurred within the date range of the report for the highest-cost work order. Oldest open WO: This is the Work Order number that still has the Open status on it and the oldest created date. If 2 WO's have the same date, the lowest WO number is displayed. Total deferred jobs: This is the total number of Work Order lines that are currently deferred. Total open driver reports: This is a count of the number of Driver reports open. (A single report can have multiple complaints on it but it is still counted as a single report.) Total open service bulletins: This is a count of the number of Open WO's that are being tracked in the service bulletin listing. This is a combination of Service Bulletins, Campaigns, & Recalls. Most common repairs: For the date range of the report, this is a tally of the number of times and the total $$ posted to the respective VMRS code. This is based on the transaction date, open or closed WO's. There are 5 VMRS codes displayed with the following layout: VMRS Code / Description – Total Count – Total Cost Total open WOs: This is a count of work orders created in the facility. This will include any WO's for vehicles that belong to other facilities as long as the WO is in this facility. Total open WO lines: This is a count of open and partial lines on open WOs. Average days open for completed WOs: If a WO transaction exists in the date range, and the WO that transaction belongs to is closed, the "days open" from that WO are added up and then divided by the number of WO's found. Total inside repairs: This counts the number of WO lines with Inside part, labor, tire, warranty, or miscellaneous postings for the date range of the report. (If there is an inside posting to an outside line, that counts too.) Total outside repairs: This counts the number of WO lines that had Outside part or labor postings for the date range of the report. (If there is an inside posting to an outside line, that counts too.) Ratio of inside to outside repairs: This ratio is calculated from the previous 2 lines. The total WO count is divided by the total count of outside WO's, e.g. 528 / 28 ≈ 18, giving a ratio of 18:1. Highest repair cost vehicles: This is the total of transactions for each vehicle for the date range of the report, and the top 5 vehicles are listed in descending order. This also includes overhead but not tax.
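As a worked example of the calculations above, using made-up numbers: if 40 work orders had postings in the report's date range at a Total WO cost of $12,000, the Average WO cost would be 12,000 / 40 = $300. Likewise, with 528 total WOs and 28 outside WOs counted, the inside-to-outside calculation is 528 / 28 ≈ 18, reported as a ratio of 18:1.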
https://docs.rtafleet.com/rta-manual/key-performance-indicators/work-orders-(kpi)/
2020-11-24T01:36:11
CC-MAIN-2020-50
1606141169606.2
[]
docs.rtafleet.com
. Note This is the latest version of AWS WAF , named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide . Updates the specified RegexPatternSet . See also: AWS API Documentation See 'aws help' for descriptions of global parameters. update-regex-pattern-set --name <value> --scope <value> --id <value> [--description <value>] --regular-expression-list <value> --lock-token <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --name (string) The name of the set. You cannot change the name after you create the set. --scope (string) Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB), an. --description (string) A description of the set that helps with identification. You cannot change the description of a set after you create it. --regular-expression-list . A single regular expression. This is used in a RegexPatternSet . RegexString -> (string)The string representing the regular expression. Shorthand Syntax: RegexString=string ... JSON Syntax: [ { "RegexString": "string" } ... ] --lock-token (string) settings for an existing regex pattern set The following update-regex-pattern-set updates the settings for the specified regex pattern set. This call requires an ID, which you can obtain from the call, list-regex-pattern-sets, and a lock token which you can obtain from the calls, list-regex-pattern-sets and get-regex-pattern-set. This call also returns a lock token that you can use for a subsequent update. aws wafv2 update-regex-pattern-set \ --name ExampleRegex \ --scope REGIONAL \ --id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \ --regular-expression-list RegexString="^.+$" \ --lock-token ed207e9c-82e9-4a77-aadd-81e6173ab7eb Output: { "NextLockToken": "12ebc73e-fa68-417d-a9b8-2bdd761a4fa5" } For more information, see IP Sets and Regex Pattern Sets in the AWS WAF, AWS Firewall Manager, and AWS Shield Advanced Developer Guide.
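In practice the lock token is usually fetched immediately before the update, since it changes on every modification of the set. A hedged sketch of that flow, reusing the example identifiers above (the exact output query path is an assumption; check your CLI version's get-regex-pattern-set response shape):
# Fetch the current lock token for the set, then use it in the update.
LOCK_TOKEN=$(aws wafv2 get-regex-pattern-set \
    --name ExampleRegex \
    --scope REGIONAL \
    --id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --query 'LockToken' --output text)

aws wafv2 update-regex-pattern-set \
    --name ExampleRegex \
    --scope REGIONAL \
    --id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --regular-expression-list RegexString="^.+$" \
    --lock-token "$LOCK_TOKEN"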
https://docs.aws.amazon.com/cli/latest/reference/wafv2/update-regex-pattern-set.html
2020-11-24T01:08:51
CC-MAIN-2020-50
1606141169606.2
[]
docs.aws.amazon.com
Document Type Booklet Abstract The booklet illustrates the history of the Pokanoket nation, the original inhabitants of the Bristol and greater East Bay area, and their ancestral land which they called Sowams. Recommended Citation Students of Roger Williams University, "Pokanoket: the First People of the East Bay, Bristol, Rhode Island" (2020). Arts and Sciences Course Related Student Projects. 1. Included in American Studies Commons, History Commons This informational booklet is the result of a collaboration between the Pokanoket Tribe of the Pokanoket Nation and the "Decolonizing the Land" seminar at Roger Williams University, taught by Dr. Jeremy M. Campbell in Spring 2020. Students conducted archival research and virtual interviews with Pokanoket leaders and elders.
https://docs.rwu.edu/fcas_studentprojects/1/
2020-11-24T01:35:15
CC-MAIN-2020-50
1606141169606.2
[]
docs.rwu.edu
Importing and exporting resource profile packages To share resource profiles among tenants, you can export a resource profile to a resource profile package file. This allows for another tenant to then import that resource profile package. Resource profiles may be imported and exported as resource packages. The package is a zip file containing the following: - manifest.json – manifest file with meta-data. - script.js. - icon.png (optional). - password-profile.json – password profile (optional). There are no directory structures in the zip. The text files in resource profile packages (scripts, manifest, and password profile) are in UTF-8 encoding. Line endings may be either CRLF or LF. Note: Other file names are permitted but must start with the names given above. For example: manifest-PAN311.json. The documentation below provides steps to perform the following: - Importing a resource profile package - Exporting a resource profile package. - Updating an existing resource profile from a resource profile package - Manually creating and modifying a resource profile package - Using sets with resource profiles. Importing a resource profile package You can import previously exported resource profile packages. Things to keep in mind before you import a profile package: - If the resource profile package has a password profile, you are given the option to ignore it and specify an existing password profile. You can create a new profile based on the information in the package or manually create a password profile. - The information in the package is used to initialize a form for creating the new resource profile. You can edit the resource profile before saving it. To import a resource profile package - In the Admin Portal, navigate to Settings > Resources > Resource Profiles > Import Profile. A warning message appears to ensure the package is from a trusted source. - Click Continue and proceed to import profile package. Click Browse to add your package and assign Password Complexity Profile to custom or package settings: - The details of the imported package will appear. Confirm all the fields to the package are correct or amend as needed. Click Save. Exporting a resource profile package You can export a resource profile package from an existing resource profile. The <identifier>.zip contains the following: - manifest-<identifier>.json. - script-<identifier>.js. - icon-<identifier>.png. - password-profile-<identifier>.json. Note: There is no directory structure. When exporting, you can export the optional icon and password profile components of the package. To export a resource profile package - In the Admin Portal, navigate to Settings > Resources > Resource Profiles. Choose a profile and navigate to Actions and choose Export. - An Export Profile window appears. Name the package and check off if you want to include the Password Profile and Logo: - Click Export and you will see the downloaded package in your Downloads folder. Updating an existing resource profile from a resource profile package After you have imported a resource profile package, at a later date you might want to update it with the "latest copy" of the resource profile (for example, the script may have been updated). To update a resource profile package - In the Admin Portal, navigate to Settings > Resources > Resource Profiles. Choose a profile to update. Navigate to Actions and choose Update. Here, you can update the: - Script. - Manifest. - Icon. - Password Profile. - Make changes as needed and click Update. 
The profile package will appear. Confirm all the components are correct and click Save. Manually creating and modifying a resource profile package You can manually create and modify resource profile packages. To create a resource package manually: - Write a script and create a manifest file (using the editor of your choice). - Create an icon (optional). - Create Resource Profile Package password profile file. - Create a zip with these files. The zip is a resource profile package that can be imported. Resource Profile package manifest The manifest file is JSON as in the following example: { "Identifier": "Pan311", "Name": "PAN 311", "Description": "{ \"en\": \"PAN 311 Description\", \"es\": \"Descripcion de PAN 311\" }", "Author": "Rich Smith", "Version": "4.4.4.4" } Resource Profile package password profile The optional password profile allows a package developer to suggest the settings for a password profile that works for the device (example: has particular device requirements for password generation). Note: When importing a package, you can ignore the password profile in the package in favor of your own password profile. The password profile is JSON as in the following example: { "Name": "dev2 profile", "Description": "dev2 password profile", "MinimumPasswordLength": 6, "MaximumPasswordLength": 8, "AtLeastOneLowercase": true, "AtLeastOneUppercase": true, "AtLeastOneDigit": true, "ConsecutiveCharRepeatAllowed": true, "AtLeastOneSpecial": true, "MaximumCharOccurrenceCount": 2, "SpecialCharSet": "!$%&()*+,-./:;<=>?[\\]^_{|}~", "FirstCharacterType": "AnyChar", "LastCharacterType": "AnyChar", "MinimumAlphabeticCharacterCount": 2, "MinimumNonAlphabeticCharacterCount": 2 } Using sets with resource profiles You can add sets to resource profiles. For more information on managing sets, see Managing sets.
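Returning to the manual packaging steps above: the zip can be produced with any archiver, as long as the files sit at the top level of the archive with no internal directories. For example, run the following from the directory containing the four files (the package name is arbitrary):
# Create a resource profile package with no internal directory structure.
zip -j ExampleProfile.zip manifest.json script.js icon.png password-profile.json
The -j option junks any paths so that the entries are stored flat, matching the package layout described earlier.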
https://docs.centrify.com/Content/Infrastructure/resources-add/importing-exporting-resource-profile-packages.htm
2020-11-24T01:21:18
CC-MAIN-2020-50
1606141169606.2
[]
docs.centrify.com
FlashWindow function (winuser.h) Flashes the specified window one time. It does not change the active state of the window. To flash the window a specified number of times, use the FlashWindowEx function. Syntax BOOL FlashWindow( HWND hWnd, BOOL bInvert ); Parameters hWnd A handle to the window to be flashed. The window can be either open or minimized. bInvert If this parameter is TRUE, the window is flashed from one state to the other. If it is FALSE, the window is returned to its original state (either active or inactive). Remarks Flashing a window means changing the appearance of its caption bar as if the window were changing from inactive to active status, or vice versa. (An inactive caption bar changes to an active caption bar; an active caption bar changes to an inactive caption bar.) Typically, a window is flashed to inform the user that the window requires attention but that it does not currently have the keyboard focus. The FlashWindow function flashes the window only once; for repeated flashing, the application should create a system timer.
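To see the behavior described above without writing a full C program, the function can be exercised from Python through ctypes on Windows; this is only a quick sketch, and flashing the current foreground window is just a convenient stand-in for a real window handle:

```python
import ctypes

user32 = ctypes.windll.user32  # Windows only

# For demonstration, flash whichever window currently has the foreground.
hwnd = user32.GetForegroundWindow()

# One flash of the caption bar; the window's active state is not changed.
user32.FlashWindow(hwnd, True)
```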
https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-flashwindow
2020-11-24T01:51:44
CC-MAIN-2020-50
1606141169606.2
[]
docs.microsoft.com
Name Values Description The sub-premise will primarily represent a unit type and number. Ex: Suite 5, Flat 16 or Apartment 9. Depending on the level of data available, the sub-premise unit type may be abbreviated or spelled out completely. The alpha-numeric value for the floor. Ex: The number 5 in 'Floor 5' or the letter D in 'Floor D' True / False Status Note Indicates that the postal code was not found in our table of records. This could indicate that the postal code is bad, but it is also possible that we simply do not have it.
https://docs.serviceobjects.com/display/devguide/AVI+-+InformationComponents+and+Status+Codes
2020-11-24T00:16:23
CC-MAIN-2020-50
1606141169606.2
[]
docs.serviceobjects.com
Customer Insights available in Microsoft Dynamics 365 Online Government Feature details With more and more channels for interactions, citizen data is scattered across myriad systems, leading to siloed data and a fragmented view of information about citizen interactions. Without a complete view of each citizen's interactions across channels, it's impossible for governments to modernize at scale. Microsoft is committed to supporting the technology needs of government to keep up with citizen expectations for consistent and responsive experiences. Dynamics 365 Customer Insights is available for the Government Community Cloud (GCC), an environment built to meet the higher compliance needs of United States government agencies. Agencies gain a unified view of citizens and use prebuilt AI to derive insights that improve interactions, empower employees, and transform communities, while reducing IT complexity and meeting US compliance and security standards. Dynamics 365 Government meets the demanding requirements of the US Federal Risk and Authorization Management Program (FedRAMP), enabling US federal agencies to benefit from the cost savings and rigorous security of the Microsoft cloud platform.
https://docs.microsoft.com/en-us/dynamics365-release-plan/2020wave1/artificial-intelligence/dynamics365-customer-insights/customer-insights-dynamics-365-online-government
2020-11-24T01:34:56
CC-MAIN-2020-50
1606141169606.2
[]
docs.microsoft.com
The Text Mesh generates 3D geometry that displays text strings. You can create a new Text Mesh from Component > Mesh > Text Mesh. Text Meshes can be used for rendering road signs, graffiti etc. The Text Mesh places text in the 3D scene. To make generic 2D text for GUIs, use a GUI Text component instead. Follow these steps to create a Text Mesh with a custom Font: import the font into the Assets folder (Project tab), then assign it to the Text Mesh's Font property. Note: If you want to change the font for a Text Mesh, you need to set the component's font property and also set the texture of the font material to the correct font texture. This texture can be located using the font asset's foldout. If you forget to set the texture then the text in the mesh will appear blocky and misaligned.
https://docs.unity3d.com/2019.3/Documentation/Manual/class-TextMesh.html
2020-11-24T01:54:26
CC-MAIN-2020-50
1606141169606.2
[]
docs.unity3d.com
Plugins¶ One of the most useful aspects of the Girder platform is its ability to be extended in almost any way by custom plugins. Developers looking for information on writing their own plugins should see the Plugin Development section. Below is a listing and brief documentation of some of Girder’s standard plugins that come pre-packaged with the application. Audit Logging¶ PyPI package: girder-audit-logs Auto Join¶ PyPI package: girder-autojoin. DICOM Viewer¶ PyPI package: girder-dicom-viewer: Download Statistics¶ PyPI package: girder-download-statistics This plugin tracks and records file download activity. The recorded information (downloads started, downloads completed, and total requests made) is stored on the file model: file['downloadStatistics']['started'] file['downloadStatistics']['requested'] file['downloadStatistics']['completed'] Google Analytics¶ PyPI package: girder-google-analytics The Google Analytics plugin enables the use of Google Analytics to track page views with the Girder one-page application. It is primarily a client-side plugin with the tracking ID stored in the database. Each routing change will trigger a page view event and the hierarchy widget has special handling (though it does not technically trigger routing events for hierarchy navigation). To use this plugin, simply copy your tracking ID from Google Analytics into the plugin configuration page. Gravatar Portraits¶ PyPI package: girder-gravatar This lightweight plugin makes all users’ Gravatar image URLs available for use in clients. When enabled, user documents sent through the REST API will contain a new field gravatar_baseUrl if the value has been computed. If that field is not set on the user document, instead use the URL /user/:id/gravatar under the Girder API, which will compute and store the correct Gravatar URL, and then redirect to it. The next time that user document is sent over the REST API, it should contain the computed gravatar_baseUrl field. Javascript clients¶ The Gravatar plugin’s javascript code extends the Girder web client’s girder.models.UserModel by adding the getGravatarUrl(size) method that adheres to the above behavior internally. You can use it on any user model with the _id field set, as in the following example: import { getCurrentUser } from '@girder/core/auth'; const currentUser = getCurrentUser(); if (currentUser) { this.$('div.gravatar-portrait').css( 'background-image', `url(${currentUser.getGravatarUrl(36)})`); } Note Gravatar images are always square; the size parameter refers to the side length of the desired image in pixels. Hashsum Download¶ PyPI package: girder-hashsum-download. PyPI package: girder-homepage The Homepage plugin allows the default Girder front page to be replaced by content written in Markdown format. After enabling this plugin, visit the plugin configuration page to edit and preview the Markdown. Item Licenses¶ PyPI package: girder-item-licenses Jobs¶ PyPI package: girder-jobs The jobs plugin is useful for representing long-running (usually asynchronous) jobs in the Girder data model. Since the notion of tracking batch jobs is so common to many applications of Girder, this plugin is very generic and is meant to be an upstream dependency of more specialized plugins that actually create and execute the batch jobs. The job resource that is the primary data type exposed by this plugin has many common and useful fields, including: title: The name that will be displayed in the job management console. 
type: The type identifier for the job, used by downstream plugins opaquely. args: Ordered arguments of the job (a list). kwargs: Keyword arguments of the job (a dictionary). created: Timestamp when the job was created. progress: Progress information about the job's execution. status: The state of the job, e.g. Inactive, Running, Success. log: Log output from this job's execution. handler: An opaque value used by downstream plugins to identify what should handle this job. meta: Any additional information about the job should be stored here by downstream plugins. Jobs should be created with the createJob method of the job model. Downstream plugins that are in charge of actually scheduling a job for execution should then call scheduleJob, which triggers the jobs.schedule event with the job document as the event info. The jobs plugin contains several built-in status codes within the girder.plugins.jobs.constants.JobStatus namespace. These codes represent various states a job can be in, which are: - INACTIVE (0) - QUEUED (1) - RUNNING (2) - SUCCESS (3) - ERROR (4) - CANCELED (5) Downstream plugins that wish to expose their own custom job statuses must hook into the jobs.status.validate event for any new valid status value, which by convention must be an integer value. To validate a status code, the default must be prevented on the event, and the handler must add a True response to the event. For example, a downstream plugin with a custom job status with the value 1234 would add the following hook: from girder import events def validateJobStatus(event): if event.info == 1234: event.preventDefault().addResponse(True) def load(info): events.bind('jobs.status.validate', 'my_plugin', validateJobStatus) LDAP Authentication¶ PyPI package: girder-ldap. Installation of this plugin requires LDAP and SASL shared libraries to be installed and available to the Girder process. These may be installed system-wide via package managers in the following way: - On Ubuntu 18.04, install the libldap2-dev and libsasl2-dev APT packages. - On RHEL (CentOS) 7, install the openldap-devel and cyrus-sasl-devel RPM packages. OAuth2 Login¶ PyPI package: girder-oauth This plugin allows users to log in using OAuth against a set of supported providers, rather than storing their credentials in the Girder instance. Specific instructions for each provider can be found below. Google¶ On the plugin configuration page, you must enter a Client ID and Client secret. Those values can be created in the Google Developer Console, in the APIS & AUTH > Credentials section. When you create a new Client ID, you must enter the AUTHORIZED_JAVASCRIPT_ORIGINS and AUTHORIZED_REDIRECT_URI fields. These must point back to your Girder instance. For example, if your Girder instance is hosted at, then you should specify the following values: AUTHORIZED_JAVASCRIPT_ORIGINS: AUTHORIZED_REDIRECT_URI: After successfully creating the Client ID, copy and paste the client ID and client secret values into the plugin's configuration page, and hit Save. Users should then be able to log in with their Google account when they visit the login page and select the option to log in with Google. Note If event.preventDefault() is called in the event handler for oauth.auth_callback.before or oauth.auth_callback.after, the OAuth callback does not create a new Girder Token, nor does it set a new authentication cookie. README¶ PyPI package: girder-readme This plugin will render a README item found in a folder on the folder hierarchy view.
Sentry¶ PyPI package: girder-sentry The Sentry plugin enables the use of Sentry to detect and report errors in Girder. PyPI package: girder-terms. Thumbnails¶ PyPI package: girder-thumbnails User and Collection Quotas¶ PyPI package: girder-user-quota Virtual Folders¶ PyPI package: girder-virtual-folders Worker¶ This plugin should be enabled if you want to use the Girder worker distributed processing engine to execute batch jobs initiated by the server. This is useful for deploying service architectures that involve both data management and scalable offline processing. This plugin provides utilities for sending generic tasks to worker nodes for execution. The worker itself uses celery to manage the distribution of tasks, and builds in some useful Girder integrations on top of celery. Namely, - Data management: This plugin provides python functions for building task input and output specs that refer to data stored on the Girder server, making it easy to run processing on specific folders, items, or files. The worker itself knows how to authenticate and download data from the server, and upload results back to it. - Job management: This plugin depends on the Jobs plugin. Tasks are specified as python dictionaries inside of a job document and then scheduled via celery. The worker automatically updates the status of jobs as they are received and executed so that they can be monitored via the jobs UI in real time. If the script prints any logging information, it is automatically collected in the job log on the server, and if the script raises an exception, the job status is automatically set to an error state.
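As a rough sketch of the create-then-schedule flow described in the Jobs plugin section above (the import path and keyword arguments shown here are indicative only and vary between Girder versions, so check the API of your installation):

```python
# Indicative sketch of the Jobs plugin workflow: create a job document,
# then schedule it so the jobs.schedule event fires for a handler plugin.
from girder.plugins.jobs.models.job import Job  # path differs by Girder version

def create_and_schedule(user):
    job_model = Job()
    job = job_model.createJob(
        title='Example processing job',
        type='example_type',          # opaque type identifier
        args=['/path/to/input'],      # ordered arguments (list)
        kwargs={'threshold': 0.5},    # keyword arguments (dict)
        user=user,
        handler='my_plugin_handler',  # identifies which plugin should run it
    )
    job_model.scheduleJob(job)        # triggers the jobs.schedule event
    return job
```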
https://girder.readthedocs.io/en/v3.1.3/plugins.html
2020-11-24T00:16:48
CC-MAIN-2020-50
1606141169606.2
[array(['_images/dicom-viewer.png', '_images/dicom-viewer.png'], dtype=object) ]
girder.readthedocs.io
If you're doing this to install a new supervisor and want to run the latest version of Qube! on the new supervisor, match versions between the supervisors first, and then upgrade the new supervisor once the databases have been migrated over to it. If you're running Qube 6.0 or later, you will need to contact Sales at PipelineFX in order to get your Qube licenses moved over to the new machine. They'll need a MAC address for the new supervisor host. To migrate a Qube supervisor, you need to migrate both the PostgreSQL databases and the job logs if the job logs are stored on the supervisor's local disk. If your job logs are stored on the network you will not have to move them, simply set the supervisor_logpath on the new supervisor to point to the same network directory as the old supervisor. It is recommended that you start moving the job logs before you start migrating the PostgreSQL databases, as the job logs will take longer to move. Don't forget to duplicate any settings you've changed in the old supervisor's qb.conf. To migrate the PostgreSQL database: Stop the postgresql-pfx service on the old and new supervisors, then just copy the data directory to the new supervisor host. Make sure that the file ownership and permissions are all preserved (they should belong to the "pfx" user). The location of the data directory is: Linux: /usr/local/pfx/pgsql/data Mac: /Applications/pfx/pgsql/data Windows: C:\Program Files\pfx\pgsql\data Once that's done, you should start the postgresql-pfx service and the supervisor service on the new machine. On the new supervisor, verify that the jobs are present: qbjobs -u all Verify that you can see the logs for a random job: qbout <someJobId>
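If you prefer scripting the copy of the PostgreSQL data directory rather than doing it by hand, the following is a minimal sketch for a Linux supervisor; it assumes you run it as root with the postgresql-pfx service stopped on both machines, and that the old supervisor's data directory has already been staged on the new host at the (made-up) SRC path. The only firm requirements come from the note above: preserve permissions and leave everything owned by the pfx user.

```python
import os
import shutil

SRC = "/mnt/old-supervisor/pgsql-data"   # assumed staging location of the old data dir
DST = "/usr/local/pfx/pgsql/data"        # Linux path from the list above

# Copy the tree, preserving modes and timestamps.
shutil.copytree(SRC, DST, symlinks=True, dirs_exist_ok=True)

# copytree does not preserve ownership, so hand the tree back to "pfx".
shutil.chown(DST, user="pfx", group="pfx")
for root, dirs, files in os.walk(DST):
    for name in dirs + files:
        shutil.chown(os.path.join(root, name), user="pfx", group="pfx")
```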
http://docs.pipelinefx.com/display/QUBE/How+to+migrate+a+Qube!+supervisor?src=search
2021-04-11T00:56:05
CC-MAIN-2021-17
1618038060603.10
[]
docs.pipelinefx.com
Summary Data Manager RT 1.1.18 release was geared primarily to pricing and publishing changes for MTD and Briggs. Bug Fixes - Pricing import needs to allow for a value of zero in order to remove price values. - Briggs: Image upload fails when there is text in the File Path text box without a file extension. New Features - The user will want to bulk update workflow on items.
https://aridocs.com/docs/data-manager-rt/release-notes/1-1-18-release/
2021-04-11T01:43:37
CC-MAIN-2021-17
1618038060603.10
[]
aridocs.com
The Recurly source supports Full Refresh syncs. That is, every time a sync is run, Airbyte will copy all rows in the tables and columns you set up for replication into the destination in a new table. Several output streams are available from this source: Automated Exports, Measured Units. If there are more endpoints you'd like Airbyte to support, please create an issue. The Recurly connector should not run into Recurly API limitations under normal usage. Please create an issue if you see any rate limit issues that are not automatically retried successfully. Requirements: a Recurly account and a Recurly API key. Generate an API key using the Recurly documentation. We recommend creating a restricted, read-only key specifically for Airbyte access. This will allow you to control which resources Airbyte should be able to access.
https://docs.airbyte.io/integrations/sources/recurly
2021-04-11T00:29:46
CC-MAIN-2021-17
1618038060603.10
[]
docs.airbyte.io
You must enable the features below before installing the Celigo integrator.io (ID: 20038) bundle in NetSuite. Enabling these features is necessary so that you can install the bundle successfully in your NetSuite account. Enable these features in NetSuite 1. In NetSuite, go to Setup > Company > Enable Features. 2. In the Company tab, under the Data Management section, check the File Cabinet box. 3. In the SuiteCloud tab: - Under the SuiteBuilder section, check the Custom Records checkbox. - Under the SuiteScript section, check the Client SuiteScript and Server SuiteScript checkboxes. - Under the SuiteTalk (Web Services) section, check the SOAP Web Services and REST Web Services checkboxes. 4. Click Save. FAQs How can I tell if the integrator.io (20038) bundle in NetSuite has been installed correctly? In general, to check whether your integrator.io (20038) bundle has installed correctly in NetSuite, go to Customization > SuiteBundler > Search & Install Bundles > List. If you see an error like the one shown in the screenshot below, it is very likely that you did not enable the features mentioned in this article before installing the integrator.io (20038) bundle. To resolve this error, enable the features mentioned in this article in NetSuite and then contact Celigo Support with your NetSuite Account ID. As this is a managed bundle, we will push the bundle to your account.
https://docs.celigo.com/hc/en-us/articles/360015946372-Before-you-install-integrator-io-20038-bundle-in-NetSuite
2021-04-11T00:55:37
CC-MAIN-2021-17
1618038060603.10
[array(['/hc/article_attachments/360011760511/Setup_Company_File_Cabinet.png', 'Setup_Company_File_Cabinet.png'], dtype=object) array(['/hc/article_attachments/360011779812/Setup_SuiteCloud.png', 'Setup_SuiteCloud.png'], dtype=object) array(['/hc/article_attachments/360011779772/Path_to_check_install_base_for_failed_bundle.png', 'Path_to_check_install_base_for_failed_bundle.png'], dtype=object) array(['/hc/article_attachments/360011779732/Error_Mesage_Bundle_Failed.png', 'Error_Mesage_Bundle_Failed.png'], dtype=object) ]
docs.celigo.com
Read Access: The role allows the user to see the status of devices and deployments, but not to make any modifications. This role is well suited for limited technical support users, or team leads who need an overview of deployment status or individual devices but are not involved in day-to-day deployment management. Release Manager: Intended for Continuous Integration systems. It can only manage Mender Artifacts, for example uploading and deleting Artifacts. Deployments Manager: Intended for users responsible for managing deployments. With this role, users can create and abort deployments. On its own, this role won't make devices visible in the UI; you must pair it with Read Access for that. Troubleshooting: A user with this role has access to the troubleshooting features such as Remote Terminal, File Transfer, and Port Forwarding. On its own, this role won't make devices visible in the UI; you must pair it with Read Access for that.
https://docs.mender.io/development/overview/role.based.access.control
2021-04-11T02:28:43
CC-MAIN-2021-17
1618038060603.10
[]
docs.mender.io
scripting language (for the Administration Console) - TeamDrive Registration Server code (developed in PBT), executed by the Yvva Runtime Environment Apache module mod_yvva. - A background process td-regserver, to handle recurring tasks (e.g. sending mails, expiring licenses, etc.), based on the Yvva Runtime Environment daemon yvvad. See chapter Auto Tasks for details. See the TeamDrive Registration Server Installation Guide for detailed installation instructions.
https://docs.teamdrive.net/RegServer/3.5.3/html/TeamDrive-Registration-Server-Reference-Guide-en/RegServer_SoftwareComponents.html
2021-04-11T01:52:05
CC-MAIN-2021-17
1618038060603.10
[]
docs.teamdrive.net
for a saved connection. Authentication fail status code (optional): If the server you are connecting to returns authentication errors other than 401, enter the code that indicates that authentication failed. Authentication fail path (optional): If the server puts the authentication error in the HTTP body, enter the JSON path to the error. Authentication fail values (optional): If you supplied a fail path above, enter the values that, when found at that path, indicate that the connection failed to authenticate. D. Edit common HTTP settings Additional HTTP settings are found in the other sections of the Create connection pane (optional sections are collapsed by default): - Application details - Nonstandard API rate limiter - How to test.
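To make the fail path and fail values settings concrete: conceptually, the path points at a field inside the HTTP response body and the values are the field contents that should be treated as an authentication failure. The snippet below only illustrates that idea with an invented response shape; it is not how integrator.io evaluates these settings internally.

```python
import json

# Hypothetical error body; the field names are invented for illustration.
body = json.loads('{"response": {"error": {"code": "INVALID_TOKEN"}}}')

fail_path = "response.error.code"           # "Authentication fail path"
fail_values = {"INVALID_TOKEN", "EXPIRED"}  # "Authentication fail values"

# Walk the dotted path down into the parsed body.
value = body
for key in fail_path.split("."):
    value = value.get(key, {}) if isinstance(value, dict) else {}

auth_failed = value in fail_values
print(auth_failed)  # True -> treat the response as a failed authentication
```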
https://docs.celigo.com/hc/en-us/articles/360050722452-Set-up-a-basic-auth-HTTP-connection
2021-04-11T01:53:53
CC-MAIN-2021-17
1618038060603.10
[array(['/hc/article_attachments/360073167352/connections.png', None], dtype=object) array(['/hc/article_attachments/360073428131/http-oauth-sel.png', None], dtype=object) array(['/hc/article_attachments/360073428191/http-oauth-flow-sel.png', None], dtype=object) array(['/hc/article_attachments/360073428391/http-oauth-flow-source.png', None], dtype=object) array(['/hc/article_attachments/360073428491/http-oauth-gen.png', None], dtype=object) array(['/hc/article_attachments/360073186552/basic-http.png', None], dtype=object) array(['/hc/article_attachments/360073459491/amazon-redshift-confirm.png', None], dtype=object) ]
docs.celigo.com
puppet agent Man Page Included in Puppet Enterprise 3.3. SYNOPSIS puppet agent [--certname NAME] [...] [-l|--logdest syslog|file|console] [--no-client] [--noop] [-o|--onetime] [-t|--test] [-v|--verbose] [-V|--version] [-w|--waitforcert seconds] OPTIONS - --certname - --digest Change the certificate fingerprinting digest algorithm. The default is SHA256. Valid values depend on the version of OpenSSL installed, but will likely contain MD5, MD2, SHA1 and SHA256. - --disable - --logdest Where to send messages. Choose between syslog, the console, and a log file. Defaults to sending messages to syslog, or the console if debugging or verbosity is enabled. - --masterport The port on which to contact the puppet master. (This is a Puppet setting, and can go in puppet.conf.) - --no-client Do not create a config client. This will cause the daemon to start but not check configuration unless it is triggered with puppet kick. This only makes sense when puppet agent is being run with listen = true in puppet.conf or was started with the --listen option. - --noop - --onetime - --test Enable the most common options used for testing. These are 'onetime', 'verbose', 'ignorecache', 'no-daemonize', 'no-usecacheonfailure', 'detailed-exitcodes', 'no-splay', and 'show_diff'. - --verbose Turn on verbose reporting. - --version Print the puppet version number and exit. Puppet Labs, LLC. Licensed under the Apache 2.0 License
https://docs.puppet.com/puppet/3.6/man/agent.html
2018-12-09T22:19:53
CC-MAIN-2018-51
1544376823183.3
[]
docs.puppet.com
#include <wx/bitmap.h> This is the base class for implementing bitmap file loading/saving, and bitmap creation from data. It is used within wxBitmap and is not normally seen by the application. If you wish to extend the capabilities of wxBitmap, derive a class from wxBitmapHandler and add the handler using wxBitmap::AddHandler() in your application initialization. Note that all wxBitmapHandlers provided by wxWidgets are part of the wxCore library. For details about the default handlers, please see the note in the wxBitmap class documentation. Default constructor. In your own default constructor, initialise the members m_name, m_extension and m_type. Destroys the wxBitmapHandler object. Gets the file extension associated with this handler. Gets the name of this handler. Gets the bitmap type associated with this handler. Loads a bitmap from a file or resource, putting the resulting data into bitmap. When loading from a Windows BMP resource (i.e. using wxBITMAP_TYPE_BMP_RESOURCE as type), the light grey colour is considered to be transparent, for historical reasons. If you want to handle the light grey pixels normally instead, call SetMask(NULL) after loading the bitmap. Saves a bitmap in the named file. Sets the handler extension. Sets the handler name. Sets the handler type.
https://docs.wxwidgets.org/stable/classwx_bitmap_handler.html
2018-12-09T21:44:57
CC-MAIN-2018-51
1544376823183.3
[]
docs.wxwidgets.org
Contents Certain adapters and operators require access to implementing JAR files provided by vendors of software external to TIBCO StreamBase® and TIBCO® Live Datamart. Examples of external JAR files include the JARs that implement TIBCO Enterprise Message Service™ or a third-party JAR file from a database vendor that implements JDBC access to that vendor's database system. External JAR files may include proprietary technology from their vendors and are made available only to valid license-holders of those products. Other external JAR files are made publicly available, either from a vendor's download site or from a public Maven repository such as Maven Central. You can also use the instructions on this page to install a WAR file, such as the one that implements TIBCO LiveView™ Web functionality, or other file types similar to JARs. Whatever their source, external JAR files must be obtained from their vendor and integrated into a Maven repository accessible to your EventFlow or LiveView fragment project. Projects generally do not pass StreamBase typechecking until all required external JAR files are identified as accessible to the Maven build system. Maven repositories can be located in several layers: A public repository, such as Maven Central. EventFlow and LiveView fragment projects have access to Maven Central by default. A vendor-provided public repository specific to that vendor's products. For example, Oracle Corporation provides an Oracle-specific repository. A site-specific network repository for your organization or department. In general, a site-specific central repository offers the most flexibility for the installation of required external JAR files and for the deployment of Maven artifacts. A read-only on-disk repository installed by a software package. Before release 10.3.0, TIBCO StreamBase and TIBCO Live Datamart installed a read-only on-disk Maven repository as part of its installation directory structure. If you are using a StreamBase release before 10.3.0, projects can pull StreamBase-provided artifacts from this installed repository, but you cannot add external JAR files to it. This feature is not used in release 10.3.0 and later. A machine-specific local repository. This is usually the ~/.m2/repositoryfolder of the current user's home directory on your local machine. Using the local machine's repository provides convenience, but any JAR file installed locally for one machine must be installed locally again for every developer and for any QA and deployment machines. You must install required external JAR files to the Maven system for your EventFlow or LiveView fragment project in one of several ways. The following ways are listed here in preference order. If your JAR file is made available by its vendor in a public repository such as Maven Central, then you only need to specify the artifact's name and the version you wish to include, using the> option. The JDBC driver for the MySQL database is an example of a JAR file in this category. If your JAR file is not available from a public repository, but is downloadable under license from its vendor, you can add such downloaded JAR files to your local machine's Maven repository, or to a site-specific network repository. In Maven terminology, this is known as installing a file. You can run a Maven Install operation with a Studio Run Configuration, or with command-line mvn install command. The Maven Install options are described in the next sections. 
To use either method, you must obtain the external JAR file from its vendor, and obtain, as far as possible, its exact groupId, artifactId, and version number. It is possible for you to run a Maven Install that specifies a JAR dependency based on a local system path. In this case, you specify an absolute path to the JAR file on your local system, or a path relative to your project folder. This method is not recommended because it is not portable. Your system path on a development system is likely not the same as the path to the same JAR file on a QA or deployment system. This method is not described on this page. For short-term testing ONLY, you can place an implementing JAR file in your project's src/main/resourcesfolder. However, you must remove such JAR files before creating the archive for your EventFlow fragment or LiveView fragment project. Otherwise, you could end up distributing another company's proprietary JAR files as part of your fragment, possibly without a license to do so. This is especially important to remember for JAR files that implement proprietary interfaces. Using a Run Configuration in StreamBase Studio is the best method to run a Maven Install operation if you prefer to not use the included mvn command line tool. The Run Configuration method of Maven-installing a file is described on Editing Maven Install Configurations. To add a third-party JAR file to your local Maven repository, you can use a mvn install command at the StreamBase Command Prompt (Windows) or in a StreamBase-configured shell (macOS and Linux). Command-line mvn is included with StreamBase release 10.3.0 and later. If you are using an earlier release, look for a mvn command installed with your operating system, or you may have to install Maven separately. You can install the JAR file to the local Maven repository, which is specific to the current user on the current machine. By default, this is the .m2 directory of the current user's home directory. Use command syntax like the following. (This command is shown on multiple lines, but must be entered as one long command.) mvn install:install-file -Dfile= path-to-file-DgroupId= group-id-DartifactId= artifact-id-Dversion= version-Dpackaging=jar -DgeneratePom=true Fill in the groupId, artifactId, and version information as obtained from the vendor when you downloaded the JAR file. If unavailable, use reasonable values that honor the vendor origin of the file, as described in the next section. The following example shows the command to install the JAR file that implements the JDBC driver for access to the Microsoft SQL Server database. This example presumes that the downloaded JDBC JAR file, sqljdbc4.jar, is in the current directory. (Again, this command is shown on multiple lines, but must be entered as one long command.) mvn install:install-file -Dfile=sqljdbc4.jar -DgroupId=com.microsoft -DartifactId=sqlserver-driver -Dversion=4.0 -Dpackaging=jar -DgeneratePom=true If your site has a central Maven repository, you can also add the -DLocalRepositoryPath= parameter to install the JAR file in your site's shared repository. Consult your local Maven administrators for the value to use at your site. It is not always obvious what values to fill in for the groupId, artifactId, and version fields for a vendor-provided JAR file. Many vendors assume their JAR files will be used with simple references to the file, and do not provide Maven-compatible information. 
Try the following ways to determine the best values to use with Maven Install, whether in a Run Configuration or on the command line. In all cases, use reasonable values that honor the origin vendor's trade names. Do not invent arbitrary names and version numbers, and do not provide values that match the StreamBase artifact you are building instead of matching the vendor of origin. If you obtain a JAR file from a public repository such as Maven Central, then by definition, the JAR file contains Maven-compatible information. This will be loaded automatically as Maven adds the dependency for the file from the public repository. You can almost always provide the exact version number for an external JAR file. The vendor's web site from which you download the external JAR file generally allows you to choose among several versions. You thus know the version number of the file you chose to download. If you have a Java JRE or JDK on your search PATH, you also have the jar command. You can use the jar command to look inside a downloaded JAR file for the values to use. Use a command like the following to view the JAR file's table of contents: jar tf filename.jar JAR files are archived with a Zip-compatible compression method, so you can also use an unzip utility or app, if your system has one. First, look for the presence of a META-INF/mavenfolder in the JAR. If that is present, and contains a pom.xmlfile, then you do not need to look farther. In your Maven Install Run Configuration, you only need to specify the fileparameter, and the other values are filled in at install time from the JAR's internal pom.xmlfile. However, if you are using command line Maven, you must still specify all values as arguments on the mvn install:install-file command line. In this case, you can extract the JAR file's pom.xmlto see what values to enter. For example: jar xf jms-2.0.jar META-INF/maven/javax.jms/jms/pom.xml If there is no mavenfolder in your JAR file, then extract the META-INF/MANIFEST.MFfile. For example, use the following command on the tibjms.jarfile shipped with TIBCO Enterprise Message Service™ 8.4: jar xf tibjms.jar META-INF/MANIFEST.MF Opening the MANIFEST.MFfile shows the following contents: Manifest-Version: 1.0 Implementation-Title: com/tibco/tibjms Implementation-Version: 8.4.0 Specification-Vendor: TIBCO Software Inc. Specification-Title: TIBCO Enterprise Message Service Specification-Version: 8.4.0 Created-By: 1.8.0_74 (Oracle Corporation) Implementation-Vendor: TIBCO Software Inc. Main-Class: com.tibco.tibjms.version From this information, you can now provide the following values for your Maven Install configuration or command: The JAR file's MANIFEST.MFfile may not contain as much Maven-compatible information as the previous example. It will almost always provide you with at least a version number. In this case, provide a reasonable groupId value that is the reverse URL for the vendor, and an artifactId that reflects the intended use for the file. You can use the basename of the JAR file as the artifactId. For example: When you install a JAR file into a local machine repository or a site-specific network repository, the installed file may not immediately appear in response to a search in the> dialog. To get the file to appear, you must rebuild Studio's index of the files contained in that repository. Use the following method for the local machine's .m2 repository. (Ask your site's Maven administrators for the version of this procedure that applies to a site-specific network repository.) 
In Studio, open the Maven Repositories view: Invoke> > . In the search field, enter maven. This restricts the list of views to those with Maven in their names. Select Maven Repositories. In the Maven Repositories view: Open Local Repositories. Select Local Repository. Right-click and select Rebuild Index from the context menu. Click. Watch for status messages in Studio's status bar in the lower right. This command sometimes does not start a rebuild on first invocation. If the command returns very quickly with no status messages, re-run the Rebuild Index command. Watch for Studio status bar messages. The command should take 5 to 15 seconds with many status messages and possibly a dialog box or two. On macOS, the newly added JAR file should now appear in response to a search in the> dialog, and should now be selectable from the results view of that dialog. On Windows, the newly added JAR file may still not appear in the Group Id, Artifact Id, and Version fields manually in the Add Dependency dialog.> dialog even after a successful index rebuild. In this case, if you saw a BUILD SUCCESS message when you installed the file, then it is safely installed. You can fill in the
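If you would rather not shell out to the jar command described earlier, the same inspection (checking for an embedded pom.xml and reading MANIFEST.MF for version and vendor hints) can be sketched in a few lines of Python; this is purely a convenience alternative, not part of the StreamBase tooling:

```python
import zipfile

def inspect_jar(path):
    """Print Maven-relevant hints found inside a JAR file."""
    with zipfile.ZipFile(path) as jar:
        names = jar.namelist()
        has_pom = any(n.startswith("META-INF/maven/") and n.endswith("pom.xml")
                      for n in names)
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8", "replace")
    print("Embedded Maven pom.xml:", has_pom)
    for line in manifest.splitlines():
        # Implementation-/Specification- entries suggest version and vendor.
        if line.startswith(("Implementation-", "Specification-")):
            print(line)

# Example: inspect_jar("tibjms.jar")
```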
http://docs.streambase.com/latest/topic/com.streambase.tsds.ide.help/data/html/authoring/externaljars.html
2018-12-09T22:48:23
CC-MAIN-2018-51
1544376823183.3
[]
docs.streambase.com
The RhoMobile Suite provides several APIs for handling device data. Released with RhoMobile Suite 5.3 is the ORM common API, which supports JavaScript and Ruby. Pay attention to ORM Common API has not been officially released. It should not be used in a production environment. RMS 5.2 and earlier versions support the original Rhom API for Ruby apps and the ORM API, which adds JavaScript support to Rhom via the OPAL library. RMS 5.3 and higher: RMS 5.2 and lower: Documentation: In general computing, ORM refers to the object-relational mapping technique that permits records of a relational database to be stored and retrieved programatically as objects. For RhoMobile, the ORM API provides a powerful high-level interface to an on-device SQLite database that can work alone or with the RhoConnectClient to enable two-way synchronization between your application and a RhoConnect server. One of the main benefits of using an ORM is the simplicity it brings to database operations. Instead of having to write complex SQL statements by hand, an app can perform database actions by getting and setting properties on model objects. For example: Update a record with a SQL command: update product set price=119,brand='Symbol' where object='12345' Update the same record with ORM: product.updateAttibutes({price: 119, brand: "Symbol"}); Delete a record with SQL: delete from product where object='12345' Delete an object with ORM: product.destroy(); In general, RhoMobile applications follow the Model-View-Controller (MVC) pattern. In RhoMobile, a model can store information from two sources: Each model contains attributes (aka ‘fields’) that store information relating to that model. For example, a Product model might have the attributes of name, brand and price. Applications will normally have a model for each entity that they handle (i.e. Customer, Product, Invoice, InvoiceLine, Order, LineItem, etc). RhoMobile apps can use two kinds of models: Each model type has advantages and disadvantages depending on the application. fixed schema model, each model has a separate database table and attributes forms the columns of that table. In this sense, the fixed schema model is similar to a traditional relational database. By default, RhoMobile apps will be built to use the older Rhom implementation (for Ruby) and ORM implementation (for JavaScript). To activate the newer ORM Common API (which supports both JavaScript and Ruby), add the following line to application’s rhoconfig.txt file: use_new_orm = 1 Possible Values: If your application requires local (on-device) database encryption, enable it by setting a flag in build.yml: encrypt_database: 1
http://docs.tau-technologies.com/en/6.0/guide/local_database
2018-12-09T21:48:44
CC-MAIN-2018-51
1544376823183.3
[]
docs.tau-technologies.com
Contents This sample demonstrates how to use a capture field in the schema of a Query Table to make that table reusable in different copies of its containing module. Each instance of the table holds key-value data with different value data types for different instances. This sample also includes a Java operator (defined in CaptureFieldsAwareOperator.java) that demonstrates the use of Operator API methods that affect how operators handle data from streams that include capture fields. The schema for a Query Table that holds key-value pairs would normally have two fields, a key field and a data field of a particular data type. The schema for such a table might be, for example, {key long, data string}. Before capture fields, to store a key-value pair of with a different data type, you would need a separate Query Table with a different schema, such as {key long, data double}. The Query Table in this sample instead uses a capture field for the data field. Capture fields are only active in the context of a module, so the Query Table, GenericDataStore, is placed in the module GenericDataStore.sbapp. Notice that its schema is {id long, data capture}. This capture field's field name is data, while its defined data type is dataFields. This schema means: expect the first field to be named id and to have data type long; then expect any number of fields with different types thereafter. The inner module is completed with a Query operator to insert values, another to report the count of rows accumulated in the table so far, and one to read all rows so you can confirm its contents. It also contains two instances of the CaptureFieldsAwareOperator Java operator. See the Java comments in the source file for details on that operator. The inner module, GenericDataStore.sbapp is referenced twice in the outer module, TopLevel.sbapp. In the first reference, an input stream, Points, with schema {id long, x int, y int} feeds into the inner module's DataIn stream. The first field, id, matches the Query Table's requirement for a first field of type long named id. The x and y fields are captured by the Query Table's capture field, which adapts the first instance of the GenericDataStore Query Table to have the same schema as the Points input stream. In the second reference, the input stream, Names, with schema {id long, name string} feeds into another instance of the inner module's DataIn stream. In this case, the Query Table in the second instance of the inner module adapts its schema to match the Names input stream. Reusing the inner module does not mean there is a single Query Table that changes its schema. It means there are two instances of the Query Table, one per module reference, with different schemas that match the two input streams. Notice that the same abstract Query Table schema definition is used without change in both module references, yet each instance of the Query Table ends up with different concrete schemas at runtime. A simple feed simulation is provided that feeds generated values to all four input streams as follows: The result of running the feed simulation is that the two instances of the inner module's Query Table are populated with generated values, each of the appropriate data type. Meanwhile, once per second, Studio's Output Streams view reports the number of rows accumulated so far in each of the two tables. You can send an empty tuple at any time to the ReadTable stream for each module reference, and see the contents of the table at that time on the respective TableContents output stream. 
While the application is running, observe the console output from the Java operator (emitting on its logger at INFO level) to observe runtime output demonstrating the Operator API features for capture fields. In StreamBase Studio, import this sample with the following steps: From the top-level menu, select> . In the search field, type captureto narrow the list of samples. Select CaptureGenericDataStore from the Data Constructs and Operators category. Click. StreamBase Studio creates a single project containing the sample files. In the Project Explorer view, double-click to open the TopLevel.sbappmodule. Make sure the module is the currently active tab in the EventFlow Editor. Click the Run button. This opens the SB Test/Debug perspective and starts the module. Select the Feed Simulations view, select the TopLevel.sbfsfeed simulation, and click . View the results in the Output Streams view. In the Console view, for each tuple enqueued by the feed simulation, the two Java operators in the GenericDataStore.sbappmodule emit two messages (for a total of four messages per enqueued tuple). The messages show the different effects of using the FLATTEN and NEST strategies for Java operators accessing streams with capture fields. You can rerun the feed simulation to continue adding values to the sample's Query Table._CaptureGenericDataStore See Default Installation Directories for the default location of studio-workspace on your system.
http://docs.streambase.com/latest/topic/com.streambase.tsds.ide.help/data/html/samplesinfo/CaptureGenericDataStore.html
2018-12-09T22:15:48
CC-MAIN-2018-51
1544376823183.3
[]
docs.streambase.com
What is dbt? dbt (data build tool) is a command line tool that enables data analysts and engineers to transform data in their warehouses by simply writing select statements. dbt handles turning these select statements into tables and views. dbt helps do the T in ELT (Extract, Load, Transform) processes – it doesn’t extract or load data, but it’s extremely good at transforming data that’s already loaded into your warehouse. The role of dbt within a modern data stack is discussed in more detail here. dbt also enables analysts to work more like software engineers, in line with the dbt Viewpoint. How do I use dbt? To use dbt to transform your data, you need three things: 1. A Project A project is a directory of .sql and . yml files. The directory must contain at a minimum: - Models: A model - model - A `model` is a single SQL file containing a single _select_ statement that either transforms raw data into a dataset that is ready for analytics, or, more often, is an intermediate step in such a transformation. is a single .sqlfile. Each model contains a single selectstatement that either transforms raw data into a dataset that is ready for analytics, or, more often, is an intermediate step in such a transformation. - A project file: a .ymlfile which specifies how dbt operates on your project. Your project may also contain extra information which specifies how your models should be built in your warehouse, for example, whether they should be built as a view or table. This is known as a materialization - materialization - A build strategy that turns a select statement in a model into a relation in a data warehouse. Built-in materializations are `view`, `table`, `ephemeral` and `incremental`. . If you are starting a project from scratch, see Create a project. Alternatively, if your organization already has a dbt project, see Use an existing project . 2. A Profile A profile contains information about how dbt should connect to your data warehouse. Profiles are defined in a profiles.yml file. A profile consists of targets, and a specified default target. Each target specifies the type of warehouse you are connecting to, the credentials to connect to the warehouse, and some dbt-specific configurations, most importantly the target schema for dbt to build relations (e.g. tables, views) in. See Configure your profile for more information. 3. A Command A dbt command is an instruction, issued from the command line, to execute dbt. When you issue a dbt command - command - An instruction, issued from the command line, to execute dbt. , such as run, dbt: - Determines the order to execute the models in your project in. - Wraps the select statement in each model in a create table/view statement, as per the model's materialization - materialization - A build strategy that turns a select statement in a model into a relation in a data warehouse. Built-in materializations are `view`, `table`, `ephemeral` and `incremental`. . - Executes the compiled queries against your data warehouse, using the credentials specified in the target - target - A set of connection details for a data warehouse (e.g. username and password), and a default schema for dbt to build relations (e.g. tables, views) in. defined in your profile - profile - A dictionary containing sets of connection details for a data warehouse, known as targets, as well as a default target. . Executing these queries creates relations in the target schema in your data warehouse. These relations contain transformed data, ready for analysis. 
A list of commands can be found in the reference section of these docs. What makes dbt so powerful? As a dbt user, your main focus will be on writing models (i.e. select queries) that reflect core business logic – there’s no need to write boilerplate code to create tables and views, or to define the order of execution of your models. Instead, dbt handles turning these models into objects in your warehouse for you. dbt handles boilerplate code to materialize queries as relations. For each model you create, you can easily configure a materialization - materialization - A build strategy that turns a select statement in a model into a relation in a data warehouse. Built-in materializations are `view`, `table`, `ephemeral` and `incremental`. . A materialization represents a build strategy for your select query – the code behind a materialization is robust, boilerplate SQL that wraps your select query in a statement to create a new, or update an existing, relation. dbt ships with the following built-in materializations: view(default): The model is built as a view in the database. table: The model is built as a table in the database. ephemeral: The model is not directly build in the database, but is instead pulled into dependent models as common table expressions. incremental: The model is initially built as a table, and in subsequent runs, dbt inserts new rows and updates changed rows in the table. Custom materializations can also be built if required. Materializations are discussed further in Configuring models. dbt determines the order of model execution. Often when transforming data, it makes sense to do so in a staged approach. dbt provides a mechanism to implement transformations in stages through the ref function. Rather than selecting from existing tables and views in your warehouse, you can select from another model, like so: select orders.id, orders.status, sum(case when payments.payment_method = 'bank_transfer' then payments.amount else 0 end) as bank_transfer_amount, sum(case when payments.payment_method = 'credit_card' then payments.amount else 0 end) as credit_card_amount, sum(case when payments.payment_method = 'gift_card' then payments.amount else 0 end) as gift_card_amount, sum(amount) as total_amount from {{ ref('base_orders') }} as orders left join {{ ref('base_payments') }} as payments on payments.order_id = orders.id When compiled to executable SQL, dbt will replace the model specified in the ref function with the relation name. Importantly, dbt also uses the ref function to determine the sequence in which to execute the models – in the above example, base_orders and base_payments need to be built prior to building the orders model. dbt builds a directed acyclic graph (DAG) based on the interdepencies between models – each node of the graph represents a model, and edges between the nodes are defined by ref functions, where a model specified in a ref function is recognized as a predecessor of the current model. When dbt runs, models are executed in the order specified by the DAG – there’s no need to explicitly define the order of execution of your models. Building models in staged transformations also reduces the need to repeat SQL, as a single transformation (for example, renaming a column) can be shared as a predecessor for a number of downstream models. For more information see Ref. Want to see a DAG visualization for your project? What else can dbt do? 
dbt has a number of additional features that make it even more powerful, including: Code compiler: In dbt, SQL files can contain Jinja, a lightweight templating language. Using Jinja in SQL provides a way to use control structures (e.g. if statements and for loops) in your queries. It also enables repeated SQL to be shared through macros. The power of using Jinja in your queries is discussed in Design Patterns. Documentation: dbt provides a mechanism to write, version-control, and share documentation for your dbt models. Descriptions (in plain text, or markdown) can be written for each model and field. These descriptions, along with additional implicit information (for example, the model lineage, or the field data type and tests applied), can be generated as a website and shared with your wider team, providing an easily referenceable databook for anyone that interacts with dbt models. For more information see Documentation. Tests: SQL can be difficult to test, since the underlying data is frequently changing. dbt provides a way to improve the integrity of the SQL in each model by making assertions about the results generated by a model. Out of the box, you can test whether a specified column in a model only contains: - Non-null values - Unique values - Values that have a corresponding value in another model (e.g. a customer_idfor an ordercorresponds to an idin the customersmodel) - Values from a specified list Tests can be easily extended to suit business logic specific to your organization – any assertion that you can make about your model in the form of a select query can be turned into a test. To learn more about writing tests for your models, see Testing. Package management: dbt ships with a package manager, which allows analysts to use and publish both public and private repositories of dbt code which can then be referenced by others. This means analysts can leverage libraries that provide commonly-used macros like dbt_utils, or dataset-specific projects for software services like Snowplow and Stripe, to hit the ground running. For more information, see Package Management. Seed file loader: Often in analytics, raw values need to be mapped to a more readable value (e.g. converting a country-code to a country name) or enriched with static, or infrequently changing data (e.g. using revenue targets set each year to assess your actuals). These data sources, known as seed files, can be saved as a CSV file in your project and loaded into your data warehouse through use of the seed command. The documentation for the seed command can be found here. Who should use dbt? dbt is appropriate for anyone who interacts with a data warehouse. It can be used by data engineers, data analysts and data scientists, or anyone that knows how to write select queries in SQL. For dbt users that are new to programming, you may also need to spend some time getting to know the basics of the command line, and familiarizing yourself with git. To make full use of dbt, it may also be beneficial to know some programming basics, such as for loops and if statements, to use Jinja effectively in your models. What's Next Ready to start modeling? Check out the installation docs
https://docs.getdbt.com/docs/introduction
2018-12-09T21:20:19
CC-MAIN-2018-51
1544376823183.3
[]
docs.getdbt.com
Kv and Kt Kv and Kt are important concepts. These constants affect all motor parameters, including motor efficiency and torque. The following video explains those concepts. To help illustrate the concept of Kv and Kt, we worked with a student from Carleton University, @juandadelto. He developed a web page demonstrating how the motor parameters affect the Torque-Speed graph. Don't hesitate to play with the motor parameters to understand intuitively how changing them will change the mechanical power output, the heat losses, and the total power used.
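For reference, and not taken from the video: under the usual ideal-motor assumptions, and with $K_v$ expressed in rad/s per volt (divide the familiar rpm-per-volt figure by $60/2\pi \approx 9.55$), the two constants are tied together as

$$\tau = K_t I, \qquad e = \frac{\omega}{K_v}, \qquad K_t = \frac{1}{K_v}\ \text{(SI units)},$$

so a high-$K_v$ motor necessarily has a low torque constant, which is why changing either constant shifts the whole Torque-Speed graph.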
https://docs.rcbenchmark.com/en/dynamometer/theory/kv-kt.html
2018-12-09T21:59:56
CC-MAIN-2018-51
1544376823183.3
[]
docs.rcbenchmark.com
Executes a command available in Panther's database drivers dbms dbmsStmt dbmsStmt The command to execute, where dbmsStmtcan include one of the following: - SQL statements preceded by the keyword RUN or QUERY. - Directives that are a part of Panther's database drivers—for example, fetch the next 10 rows. - Directives that are not standardized across dialects of SQL, such as commit transaction. The dbmscommand executes the specified command after colon expansion and syntax checking. These commands control the connections to database engines, process information fetched in SELECTstatements, and update database information. For information on available commands, refer to Chapter 11, "DBMS Statements and Commands." There are three methods of executing SQL statements: - QUERY and RUN pass the statement directly to the database engine. - DECLARE CURSOR creates a named cursor to use for executing the SQL statement. For more information, refer to Chapter 28, "Writing SQL Statements," in Application Development Guide. Because each database engine has unique features, refer to Database Drivers for information about database-specific features and commands. Additional forms of colon expansion–colon plus processing and colon equal processing—are available with the dbmscommand to help format information before passing it to the database engine. For more information, refer to Chapter 29, "Reading Information from the Database," in Application Development Guide. // Fetch next set of rows dbms continue// Commit transaction dbms commit// SQL statement dbms QUERY select * FROM titles WHERE title_id = :+title_id
http://docs.prolifics.com/panther/html/prg_html/jplref8.htm
2018-12-09T22:33:34
CC-MAIN-2018-51
1544376823183.3
[]
docs.prolifics.com
On this page: Related pages: In the AppDynamics application model, a business transaction represents the end-to-end, cross-tier processing path used to fulfill a request for a service provided by the application. This topic introduces and describes business transactions. View Business Transactions To view business transactions for a business application, click Business Transactions in the application navigation tree. The business transaction list shows key metrics for business transactions for the selected time range. Only business transactions that have performance data in the selected time range appear in the list by default. You can show inactive business transactions for the time range by modifying the filter view options. Other ways to modify the default view include showing transactions that belong to business transaction groups or transactions that exceed a configurable average response time. You can also choose which performance metrics appear for business transactions in the list from the View Options menu. To see the actions you can perform on business transactions, open the More Actions menu. Actions include viewing health rule violations, configuring thresholds, renaming business transactions, grouping business transactions, starting a diagnostic session for the transaction, and classifying a business transaction as a background task. Transaction Entry Points and Exit Points When you install an app agent, the agent detects incoming calls and registers transactions based on the default transaction detection rules. Automatic detection rules describe entry points for transactions based on supported frameworks. Usually, more than one tier participates in the processing of a transaction. A request to the originating tier may invoke services on: - Another instrumented tier, called a downstream tier. - Remote services that are not instrumented. Outbound requests from an instrumented application tier are called exit points. Downstream tiers may in turn have exit points that invoke other services or backend requests. App agents tag exit point calls with metadata describing the existing transaction. When an agent on a downstream tier detects an entry point that includes transaction metadata from another AppDynamics app agent, it treats the entry point as a continuation of the transaction initiated on the upstream tier. This linking of upstream exit points to downstream entry points is called correlation. Correlation maintains the client request context as it is processed by various tiers in your business application. Sample Business Transaction Consider, for example, the fictional ACME Online application. The application exposes a checkout service at. A user request to the service triggers the following distributed processing flow and actions: - The business transaction entry point at the originating tier is /checkout URI, which is mapped to a Servlet called CheckoutServlet. - The request results in the originating tier invoking the createOrder method on a downstream tier, the ECommerce-Services server. - The inventory tier application calls a backend database, which is an exit point for the business transaction. The request context is maintained across tiers, including calls to backend tiers. - Any user request on the entry point is similarly categorized as this business transaction, the Checkout business transaction. To enable detection of all the components in a distributed business transaction, downstream agents must be at the same AppDynamics release or newer than upstream agents. 
This is true whether the tiers are all built on the same platform (for example all Java) or multiple platforms (a Node.js tier calling a Java tier, calling a .NET tier and so on). Refine Business Transactions While the default rules can go a long way towards getting you a useful list of business transactions to track, an important part of implementing AppDynamics is verifying and refining the business transactions used to monitor your application. The business transaction you are monitoring should reflect those operations that are important to your application and business. It is important to consider the limits on business transactions, and apply your refinements accordingly. Refining your business transaction list requires a solid understanding of the important business processes in your environment. Identify the 5 to 20 most important operations in the application. These are the key operations that must work well for the application to be successful. Important services can be indicated by the number of calls or calls per minute received by the business transactions generated for the services. You can refine the list of transactions you want to monitor by locking down critical transactions and enabling automatic cleanup of stale transactions. For the Java and .NET environments, you can use interactive Live Preview tools to help identify important transactions. You can add business transactions manually from a virtual business transaction called "All Other Traffic", which is populated with transactions once the business transaction registration limits are reached, as described below. To customize the business transaction list, you can use either of these approaches: - You can modify existing business transactions by grouping, renaming, or removing the business transactions. Most of these operations are available from the business transaction list. Use this approach to apply relatively minor, small scale changes to the current business transaction list. For more information, see Organize Business Transactions. - You can affect how business transactions are created by modifying the automatic discovery rules. You can modify rule to similarly achieve business transaction grouping and naming, and to exclude transactions. Discovery rules also enable you to define new entry points for business transactions. Discovery rule modification is a powerful mechanism for changing transaction discovery on a larger scale. For more information, see Transaction Detection Rules. Business Transaction Limits When reviewing and refining your business transaction limits, it's important to consider the business transaction limits for the Controller and app server agents. Business transaction limits prevent boundless growth of the business transaction list. The default limits are: - Business Application Limits: Each application is limited to 200 registered business transactions. - App Server Agent Limits: Each agent is limited to 50 registered business transactions. There is no limit at the tier level. Also note that the app agent limit applies to each app agent, not to the machine. If you have multiple app agents on a single machine, the machine the business transactions originating from the machine could be up to the number of agents times 50. Correlate Business Transaction Logs For those times when tracing application code doesn't provide enough clues to track down the cause of a problem, AppDynamics provides visibility into the transaction logs that can be correlated to specific business transaction requests. 
Log correlation visibility requires a license for both Transaction Analytics and Log Analytics. See Business Transaction and Log Correlation.
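To make the entry point/exit point correlation described above concrete, here is a toy sketch of how an upstream tier might tag an outbound call with transaction metadata and a downstream tier might continue the same business transaction. This is not AppDynamics' actual header format or API; the header name and structure are invented purely for illustration.

import uuid

CORRELATION_HEADER = "x-demo-transaction"   # invented name, not AppDynamics' real header

def start_or_continue_transaction(headers):
    """Entry point: reuse upstream transaction metadata if present, else start a new one."""
    meta = headers.get(CORRELATION_HEADER)
    if meta:
        return meta                          # continuation of an upstream business transaction
    return {"bt": "Checkout", "guid": str(uuid.uuid4())}   # originating tier registers a new BT

def exit_point_headers(txn):
    """Exit point: tag the outbound call so the downstream tier can correlate."""
    return {CORRELATION_HEADER: txn}

# Originating tier handles /checkout, then calls the downstream services tier.
txn = start_or_continue_transaction({})
outbound = exit_point_headers(txn)

# Downstream tier sees the tagged entry point and joins the same transaction.
downstream_txn = start_or_continue_transaction(outbound)
assert downstream_txn["guid"] == txn["guid"]
print("Both tiers report business transaction:", downstream_txn["bt"])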
https://docs.appdynamics.com/display/PRO43/Business+Transactions
2018-12-09T21:38:14
CC-MAIN-2018-51
1544376823183.3
[]
docs.appdynamics.com
Configure email forwarding in Office 365: To forward to multiple email addresses, create a distribution list, add the addresses to it, and then set up forwarding to point to the DL using the instructions in this article...
https://docs.microsoft.com/en-us/office365/admin/email/configure-email-forwarding?redirectSourcePath=%252ffr-fr%252farticle%252fConfigurer-le-transfert-du-courrier-dans-Office-365-AB5EB117-0F22-4FA7-A662-3A6BDB0ADD74&view=o365-worldwide
2018-12-09T22:33:15
CC-MAIN-2018-51
1544376823183.3
[]
docs.microsoft.com
Margins Chart margins are the distances from the outermost chart borders to the borders of the plot area. Margins are set through the RadChart PlotArea.Dimensions.Margins property and are specified in pixels or percentages. Percentages refer to a percentage of the RadChart width. In the figure below, the dimensions are populated with some values in percentages and some in fixed pixels. To provide extra space for positioning legends, labels and the title, use greater margin values for the PlotArea.
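A percentage margin resolves to pixels relative to the RadChart width, so mixed specifications like the ones in the figure can be normalized before layout. The helper below only illustrates that arithmetic; it is not Telerik code, and the 800-pixel width and the sample margins are assumptions.

def resolve_margin(value, chart_width):
    """Return a margin in pixels; '10%' is relative to the chart width, '25px' or 25 is absolute."""
    if isinstance(value, str) and value.endswith("%"):
        return chart_width * float(value[:-1]) / 100.0
    if isinstance(value, str) and value.endswith("px"):
        return float(value[:-2])
    return float(value)

chart_width = 800          # assumed RadChart width in pixels
margins = {"Left": "10%", "Top": "8%", "Right": "25px", "Bottom": 30}
resolved = {side: resolve_margin(v, chart_width) for side, v in margins.items()}
print(resolved)            # {'Left': 80.0, 'Top': 64.0, 'Right': 25.0, 'Bottom': 30.0}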
https://docs.telerik.com/devtools/winforms/controls/chart/understanding-radchart-elements/margins
2018-12-09T21:11:05
CC-MAIN-2018-51
1544376823183.3
[array(['images/chart-undestanding-radchart-elements-margins001.png', 'chart-undestanding-radchart-elements-margins 001'], dtype=object) array(['images/chart-undestanding-radchart-elements-margins002.png', 'chart-undestanding-radchart-elements-margins 002'], dtype=object)]
docs.telerik.com
LANSA ends commitment control automatically under certain circumstances. Refer to Commitment Control for details about when this occurs. The name of the program used to end commitment control is F@ENDCMT. Its source can be found in DC@F28. You may create your own version of F@ENDCMT based on this source. When it is called it is passed the following parameters.
https://docs.lansa.com/14/en/lansa010/content/lansa/ladtgubh_0155.htm
2018-12-09T22:53:43
CC-MAIN-2018-51
1544376823183.3
[]
docs.lansa.com
Node names always take the form nodeName. clusterName, in which you both specify a name for the node and assign it membership in a cluster. When you run a fragment in StreamBase Studio, Studio assigns each fragment a node and cluster name, following a defined naming pattern. For EventFlow fragments, Studio's default node name is: project_ module_ nn, where projectis the Studio project's name, moduleis the basename of the primary EventFlow module in the fragment, and nnis an integer sequence number for this fragment, if more than one instance is running. For LiveView fragments, Studio's default node name is: project_ nn, where projectis the Studio project's name, and nnis an integer sequence number for this fragment if more than one instance is running. For both EventFlow and LiveView fragments, Studio's default cluster name is your system login name. This keeps your Studio-launched nodes in a separate cluster from other colleagues using StreamBase in the same subnet. You can customize these defaults as follows: For any fragment, you can customize the node name to be assigned to the next launch of that fragment using its Run Configuration dialog. You can specify default node and cluster name assignments for all fragment launches in Studio Preferences, in the > panel. When using the epadmin command to install a node on the command line, you assign a name to the node with the nodename parameter, using the nodename. clusterName format. Do not re-use any node name currently in use, including any node names assigned by Studio launches. The cluster name is your system login name by default, but can be any string. For example: epadmin install node nodename=B.sbuser application=... If not specified, the default node name is hostname.cluster, where hostname is the return value from the hostname command, and where the cluster name is the literal string " cluster". The application archive being installed can declare one or more node names in the NodeDeploy > nodes > name object of a HOCON configuration file of type c.t.ep.dtm.c.node. In this case, you must use one of the configuration-declared node names when installing that application archive with epadmin.
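The naming pattern above (nodeName.clusterName, with Studio deriving the node name from the project, module, and a sequence number) can be restated with a small helper. This is just string formatting that mirrors the documented defaults; it is not part of any StreamBase tooling, and the two-digit zero-padding of the sequence number is an assumption.

import getpass
import socket

def studio_eventflow_node_name(project, module, seq, cluster=None):
    """Default Studio-style name for an EventFlow fragment: project_module_nn.cluster."""
    cluster = cluster or getpass.getuser()      # Studio defaults the cluster to your login name
    return f"{project}_{module}_{seq:02d}.{cluster}"

def epadmin_default_node_name():
    """Default used by 'epadmin install node' when nodename is omitted: hostname.cluster."""
    return f"{socket.gethostname()}.cluster"

print(studio_eventflow_node_name("MyProject", "MainFlow", 1))   # e.g. MyProject_MainFlow_01.sbuser
print(epadmin_default_node_name())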
http://docs.streambase.com/latest/topic/com.streambase.tsds.ide.help/data/html/conceptgd/nodename.html
2018-12-09T22:34:57
CC-MAIN-2018-51
1544376823183.3
[]
docs.streambase.com
Welcome to the WebEngage REST API. This document covers how to use the WebEngage REST API. Using the REST API, you can access WebEngage Feedback data, Survey questionnaires and their responses (coming soon), and Notification details and statistics (coming soon).

General Notes

API Host: Access to all the resources available via the API is over HTTPS and goes through the host api.webengage.com

Response Format: All the data is sent in JSON or XML format. The format parameter can be passed in the URL as json or xml. The default value is json. (Any parameter can be passed as an HTTP query string parameter in the API URLs.)

Date Format: All timestamps are returned in the following format: yyyy-MM-ddTHH:mm:ssZ e.g. 2013-01-26T07:31+0000

Response Container: Every API response is wrapped in a main container called response. It has three properties, namely status, message and data. status will be either success or error depending upon the server response. message will be the related text for the corresponding status. data will contain the data for the requested resource. It could be Feedback data, Feedback Reply data, etc.

Authentication

The WebEngage API follows the "Bearer" authentication scheme for API access.

API KEY: You can get your API key for an account from your dashboard - click the account menu next to the user icon on the top right and then click "API Key". Every account manager for an account will have a different API key. Access restriction for an API key is driven by the access privileges of that account manager. You have to pass the API key in the header, as shown below:

curl -H "Authorization: bearer your_api_key" -d "format=xml"

Errors

Following is the list of errors you may receive. Invalid Resource/Parameters: an invalid resource id will result in: 400 Bad Request
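A call following the conventions above (HTTPS host, Bearer token in the Authorization header, a format query parameter, and a reply wrapped in a response container) might look like the sketch below. The /feedback resource path is an assumption used only for illustration; check the resource-specific documentation for real endpoint paths.

import requests

API_KEY = "your_api_key"
BASE_URL = "https://api.webengage.com"
RESOURCE = "/feedback"          # assumed path, used here only as an example

resp = requests.get(
    BASE_URL + RESOURCE,
    headers={"Authorization": f"bearer {API_KEY}"},
    params={"format": "json"},
    timeout=10,
)

body = resp.json()["response"]          # every reply is wrapped in a 'response' container
if body["status"] == "success":
    print(body["data"])                  # e.g. feedback data
else:
    print("API error:", body["message"])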
https://docs.webengage.com/docs/rest-api-overview
2017-09-19T15:16:03
CC-MAIN-2017-39
1505818685850.32
[]
docs.webengage.com
Tasklets¶ New in version 0.8: Tasklets were added in version 0.8, starting with the betas (named 0.7.9..) A Tasklet is a light-weight task. It looks very similar to a Task except that it does not save its results to disk. Every time you need its output, it is recomputed. Other than that, you can pass it around, just like a Task. They also get sometimes automatically generated. Before Tasklets¶ Before Tasklets were a part of jug, often there was a need to write some connector Tasks: def select_first(t): return t[0] result = Task(...) result0 = Task(select_first, result) next = Task(next_step,result0) This obviously works, but it has two drawbacks: - It is not natural Python - It is space inefficient on disk. You are saving resultand then result0. With Tasklets¶ First version: def select_first(t): return t[0] result = Task(...) result0 = Tasklet(select_first, result) next = Task(next_step,result0) If you look closely, you will see that we are now using a Tasklet. This jugfile will work exactly the same, but it will not save the result0 to disk. So you have a performance gain. It is still not natural Python. However, the Task can generate Tasklets automatically: Second version: result = Task(...) next = Task(next_function,result[0]) Now, we have a version that is both idiomatic Python and efficient.
http://jug.readthedocs.io/en/latest/tasklets.html
2017-09-19T15:18:14
CC-MAIN-2017-39
1505818685850.32
[]
jug.readthedocs.io
Warning IRISPy is still under heavy development and has not yet seen its first release. If you want to help test IRISPy follow these installation instructions. IRISPy is a SunPy-affiliated package that provides the tools to read in and analyze data from the IRIS solar-observing satellite in Python. The Interface Region Imaging Spectrograph is a NASA-funded Small Explorer which uses a high-frame-rate ultraviolet imaging spectrometer to make observations of the Sun. It provides 0.3 arcsec angular resolution and sub-angstrom spectral resolution. For more information see the mission/instrument paper which is available online for free.
http://docs.sunpy.org/projects/irispy/en/latest/index.html
2017-09-19T15:19:22
CC-MAIN-2017-39
1505818685850.32
[]
docs.sunpy.org
This section describes the Windows Driver Model (WDM), and discusses types of WDM drivers, device configuration, driver layering, and WDM versioning. WDM simplifies the design of kernel-mode drivers that are written to run on multiple versions of the Windows operating system.
https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/windows-driver-model
2017-09-19T15:06:26
CC-MAIN-2017-39
1505818685850.32
[]
docs.microsoft.com
TransactionId suspend()

void resume(TransactionId transactionId)
    transactionId - the transaction to resume

IllegalStateException - if more than one listener exists on this cache

TransactionListener[] getListeners()
    Returns the TransactionListeners; an empty array if no listeners

@Deprecated void setWriter(TransactionWriter writer)
    writer - the TransactionWriter

TransactionWriter getWriter()
    Returns the TransactionWriter

See also: setWriter(TransactionWriter)
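The suspend/resume contract listed above (suspend returns a transaction id and detaches the transaction from the calling thread; resume takes that id and reattaches it) can be modeled in a few lines of Python. This is only a sketch of the semantics, not GemFire's implementation.

import itertools
import threading

class ToyTransactionManager:
    """Minimal model of suspend()/resume(): one active transaction per thread."""
    _ids = itertools.count(1)

    def __init__(self):
        self._active = threading.local()     # per-thread active transaction
        self._suspended = {}                 # transaction id -> transaction state

    def begin(self):
        self._active.txn = {"id": next(self._ids), "ops": []}

    def suspend(self):
        txn = self._active.txn               # detach from the current thread
        self._active.txn = None
        self._suspended[txn["id"]] = txn
        return txn["id"]                     # the id to pass to resume() later

    def resume(self, transaction_id):
        self._active.txn = self._suspended.pop(transaction_id)

mgr = ToyTransactionManager()
mgr.begin()
tid = mgr.suspend()      # do non-transactional work, or hand the id to another thread
mgr.resume(tid)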
http://gemfire-82-javadocs.docs.pivotal.io/com/gemstone/gemfire/cache/CacheTransactionManager.html
2017-09-19T15:29:55
CC-MAIN-2017-39
1505818685850.32
[]
gemfire-82-javadocs.docs.pivotal.io
ReceiptRuleSetMetadata

Information about a receipt rule set. A receipt rule set is a collection of rules that specify what Amazon SES should do with mail it receives on behalf of your account's verified domains. For information about setting up receipt rule sets, see the Amazon SES Developer Guide.

Contents

- CreatedTimestamp
  The date and time the receipt rule set was created.
  Type: Timestamp
  Required: No

- Name
  The name of the receipt rule set. The name must:
  - Contain only ASCII letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-).
  - Start and end with a letter or number.
  - Contain less than 64 characters.
  Type: String
  Required: No

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following:
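With the AWS SDK for Python (boto3), ReceiptRuleSetMetadata objects come back from calls such as ListReceiptRuleSets. The sketch below assumes boto3 is already configured with credentials and a region where SES email receiving is available.

import boto3

ses = boto3.client("ses")                      # classic SES API, which owns receipt rule sets

resp = ses.list_receipt_rule_sets()
for metadata in resp.get("RuleSets", []):      # each entry is a ReceiptRuleSetMetadata
    print(metadata["Name"], metadata["CreatedTimestamp"])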
https://docs.aws.amazon.com/ses/latest/APIReference/API_ReceiptRuleSetMetadata.html
2018-10-15T17:31:07
CC-MAIN-2018-43
1539583509336.11
[]
docs.aws.amazon.com
Create a site collection This article shows how Office 365 global admins and SharePoint admins can create classic SharePoint Online site collections from the Microsoft a language for the site collection. You can enable the SharePoint multiple language interface on your sites, but the primary language for the site collection will remain the one you select here. Note It's important to select the appropriate language for the site collection, because once it's set, it cannot be changed. After creating a site collection, verify the locale and regional settings are accurate. (For example, a site created for Chinese will have its locale set to China.) In the Template Selection section, under Select a template, choose the template that most closely describes the purpose of your site collection. For example, if your site collection will be used for a team collaboration, choose Team Site. Tip For more information on templates, see Using templates to create different kinds of SharePoint sites. In the Time Zone box, select the time zone that's appropriate for the location of the site collection. In the Administrator box, type the user name of your site collection administrator. You can also use the Check Names or Browse button to find a user to make site collection administrator. In the Storage Quota box, type the number of megabytes (MB) you want to allocate to this site collection. Do not exceed the available amount that is displayed next to the box. In the Server Resource Quota box,
https://docs.microsoft.com/en-us/sharepoint/create-site-collection?redirectSourcePath=%252fda-dk%252farticle%252foprette-eller-slette-en-gruppe-af-websteder-3a3d7ab9-5d21-41f1-b4bd-5200071dd539
2018-10-15T18:24:55
CC-MAIN-2018-43
1539583509336.11
[]
docs.microsoft.com
Introduction to a Resource in Direct3D 11 Resources are the building blocks of your scene. They contain most of the data that Direct3D uses to interpret and render your scene. Resources are areas in memory that can be accessed by the Direct3D pipeline. Resources contain the following types of data: geometry, textures, shader data. This topic introduces Direct3D resources such as buffers and textures. You can create resources that are strongly typed or typeless; you can control whether resources have both read and write access; you can make resources accessible to only the CPU, GPU, or both. Up to 128 resources can be active for each pipeline stage. Direct3D guarantees to return zero for any resource that is accessed out of bounds. The lifecycle of a Direct3D resource is: - Create a resource using one of the create methods of the ID3D11Device interface. - Bind a resource to the pipeline using a context and one of the set methods of the ID3D11DeviceContext interface. - Deallocate a resource by calling the Release method of the resource interface. This section contains the following topics: Strong vs Weak Typing There are two ways to fully specify the layout (or memory footprint) of a resource: - Typed - fully specify the type when the resource is created. - Typeless - fully specify the type when the resource is bound to the pipeline. unless the resource was created with the D3D10_DDI_BIND_PRESENT flag. If D3D10_DDI_BIND_PRESENT is set render-target or shader resource views can be created on these resources using any of the fully typed members of the appropriate family, even if the original resource was created as fully typed. resource view. As the texture format remains flexible until the texture is bound to the pipeline, the resource is referred to as weakly typed storage. Weakly typed storage has the advantage that it can be reused or reinterpreted in another format as long as the number of components and the bit count of each component are the same in both formats. a DXGI_FORMAT_R32G32B32A32_FLOAT and a DXGI_FORMAT_R32G32B32A32_UINT at different locations in the pipeline simultaneously. Resource Views Resources can be stored in general purpose memory formats so that they can be shared by multiple pipeline stages. A pipeline stage interprets resource data using a view. A resource view is conceptually similar to casting the resource data so that it can be used in a particular context. A view can be used with a typeless resource. That is, you can create a resource at compile time and declare the data type when the resource is bound to the pipeline. A view created for a typeless resource always has the same number of bits per component; the way the data is interpreted is dependent on the format specified. The format specified must be from the same family as the typeless format used when creating the resource. For example, a resource created with the R8G8B8A8_TYPELESS format cannot be viewed as a R32_FLOAT resource even though both formats may be the same size in memory. A view also exposes other capabilities such as the ability to read back depth/stencil surfaces in a shader, generating a dynamic cubemap in a single pass, and rendering simultaneously to multiple slices of a volume. Raw Views of Buffers You can think of a raw buffer, which can also be called a byte address buffer, as a bag of bits to which you want raw access, that is, a buffer that you can conveniently access through chunks of one to four 32-bit typeless address values. 
You indicate that you want raw access to a buffer (or, a raw view of a buffer) when you call one of the following methods to create a view to the buffer: - To create a shader resource view (SRV) to the buffer, call ID3D11Device::CreateShaderResourceView with the flag D3D11_BUFFEREX_SRV_FLAG_RAW. You specify this flag in the Flags member of the D3D11_BUFFEREX_SRV structure. You set D3D11_BUFFEREX_SRV in the BufferEx member of the D3D11_SHADER_RESOURCE_VIEW_DESC structure to which the pDesc parameter of ID3D11Device::CreateShaderResourceView points. You also set the D3D11_SRV_DIMENSION_BUFFEREX value in the ViewDimension member of D3D11_SHADER_RESOURCE_VIEW_DESC to indicate that the SRV is a raw view. - To create an unordered access view (UAV) to the buffer, call ID3D11Device::CreateUnorderedAccessView with the flag D3D11_BUFFER_UAV_FLAG_RAW. You specify this flag in the Flags member of the D3D11_BUFFER_UAV structure. You set D3D11_BUFFER_UAV in the Buffer member of the D3D11_UNORDERED_ACCESS_VIEW_DESC structure to which the pDesc parameter of ID3D11Device::CreateUnorderedAccessView points. You also set the D3D11_UAV_DIMENSION_BUFFER value in the ViewDimension member of D3D11_UNORDERED_ACCESS_VIEW_DESC to indicate that the UAV is a raw view. You can use the HLSL ByteAddressBuffer and RWByteAddressBuffer object types when you work with raw buffers. To create a raw view to a buffer, you must first call ID3D11Device::CreateBuffer with the D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS flag to create the underlying buffer resource. You specify this flag in the MiscFlags member of the D3D11_BUFFER_DESC structure to which the pDesc parameter of ID3D11Device::CreateBuffer points. You can't combine the D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS flag with D3D11_RESOURCE_MISC_BUFFER_STRUCTURED. Also, if you specify D3D11_BIND_CONSTANT_BUFFER in BindFlags of D3D11_BUFFER_DESC, you can't also specify D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS in MiscFlags. This is not a limitation of just raw views because constant buffers already have a constraint that they can't be combined with any other view. Other than the preceding invalid cases, when you create a buffer with D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS, you aren't limited in functionality versus not setting D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS. That is, you can use such a buffer for non-raw access in any number of ways that are possible with Direct3D. If you specify the D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS flag, you only increase the available functionality. Related topics
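The "same component count, same bit count, different interpretation" rule for typeless resources and their views has a loose analogy in NumPy: a buffer of 32-bit floats can be reinterpreted as 32-bit unsigned integers without copying, much like viewing R32-family data as R32_FLOAT in one place and R32_UINT in another. This is only an analogy to build intuition; it is unrelated to the Direct3D API itself.

import numpy as np

# One chunk of memory, 32 bits per element (think: an R32_TYPELESS-style resource).
raw = np.array([1.0, 0.5, -2.0], dtype=np.float32)

as_float = raw.view(np.float32)     # interpret the bits as R32_FLOAT-like values
as_uint = raw.view(np.uint32)       # reinterpret the same bits as R32_UINT-like values

print(as_float)   # [ 1.   0.5 -2. ]
print(as_uint)    # the raw IEEE-754 bit patterns, e.g. 1065353216 for 1.0

# A view with a different per-element size would not be a plain reinterpretation,
# mirroring the rule that the bit count of each component must match between the
# typeless resource and the view format.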
https://docs.microsoft.com/en-us/windows/desktop/direct3d11/overviews-direct3d-11-resources-intro
2018-10-15T17:37:33
CC-MAIN-2018-43
1539583509336.11
[]
docs.microsoft.com
Deploy multiple MID Servers

Depending upon how you use the MID Server (for an external integration, Discovery, Service Mapping or Orchestration) and the load placed on it, you might find it necessary to deploy multiple MID Servers in your network. You can install each MID Server on a separate machine or install multiple MID Servers on a single machine (including virtual machines).

MID Server integrations: Factors determining the number of MID Servers your network will require to support external applications that integrate with ServiceNow include the following.

Multiple MID Server deployment for Discovery and Service Mapping: The same considerations for deploying multiple MID Servers apply to both Discovery and Service Mapping.

Orchestration with multiple MID Servers: When determining if multiple MID Servers are necessary to execute Orchestration activities, consider the following factors.
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/product/mid-server/concept/c_DeployMultipleMIDServers.html
2018-10-15T17:38:18
CC-MAIN-2018-43
1539583509336.11
[]
docs.servicenow.com
Scalar and Vector Field Functionality¶ Introduction¶ Vectors and Scalars¶ In physics, in the \(\mathbf{X}\), \(\mathbf{Y}\) and \(\mathbf{Z}\) directions}}\). Fields¶ In general, a \(field\) is a vector or scalar quantity that can be specified everywhere in space as a function of position (Note that in general a field may also be dependent on time and other custom variables). In this module, we deal with 3-dimensional spaces only. Hence, a field is defined as a function of the \(x\), \(y\) and \(z\) coordinates corresponding to a location in 3D space. For example, temperate}}\). Implementation of fields in sympy.physics.vector¶ In sympy.physics.vector, every ReferenceFrame instance is assigned basis vectors corresponding to the \(X\), \(Y\) and \(Z\) directions. These can be accessed using the attributes named x, y and z respectively. Hence, to define a vector \(\mathbf{v}\) of the form \(3\mathbf{\hat{i}} + 4\mathbf{\hat{j}} + 5\mathbf{\hat{k}}\) with respect to a given frame \(\mathbf{R}\), you would do >>> from sympy.physics.vector import ReferenceFrame >>> R = ReferenceFrame('R') >>> v = 3*R.x + 4*R.y + 5*R.z Vector math and basic calculus operations with respect to vectors have already been elaborated upon in other sections of this module’s documentation. On the other hand, base scalars (or coordinate variables) are implemented as special SymPy Symbol s assigned to every frame, one for each direction from \(X\), \(Y\) and \(Z\). For a frame R, the \(X\), \(Y\) and \(Z\) base scalar Symbol s can be accessed using the R[0], R[1] and R[2] expressions respectively. Therefore, to generate the expression for the aforementioned electric potential field \(2{x}^{2}y\), you would have to do >>> from sympy.physics.vector import ReferenceFrame >>> R = ReferenceFrame('R') >>> electric_potential = 2*R[0]**2*R[1] >>> electric_potential 2*R_x**2*R_y In string representation, R_x denotes the \(X\) base scalar assigned to ReferenceFrame R. Essentially, R_x is the string representation of R[0]. Scalar fields can be treated just as any other SymPy expression, for any math/calculus functionality. Hence, to differentiate the above electric potential with respect to \(x\) (i.e. R[0]), you would have to use the diff method. >>> from sympy.physics.vector import ReferenceFrame >>> R = ReferenceFrame('R') >>> electric_potential = 2*R[0]**2*R[1] >>> from sympy import diff >>> diff(electric_potential, R[0]) 4*R_x*R_y Like vectors (and vector fields), scalar fields can also be re-expressed in other frames of reference, apart from the one they were defined in – assuming that an orientation relationship exists between the concerned frames. This can be done using the express method, in a way similar to vectors - but with the variables parameter set to True. >>> from sympy.physics.vector import ReferenceFrame >>> R = ReferenceFrame('R') >>> electric_potential = 2*R[0]**2*R[1] >>> from sympy.physics.vector import dynamicsymbols, express >>> q = dynamicsymbols('q') >>> R1 = R.orientnew('R1', rot_type = 'Axis', amounts = [q, R.z]) >>> express(electric_potential, R1, variables=True) 2*(R1_x*sin(q(t)) + R1_y*cos(q(t)))*(R1_x*cos(q(t)) - R1_y*sin(q(t)))**2 Moreover, considering scalars can also be functions of time just as vectors, differentiation with respect to time is also possible. Depending on the Symbol s present in the expression and the frame with respect to which the time differentiation is being done, the output will change/remain the same. 
>>> from sympy.physics.vector import ReferenceFrame >>> R = ReferenceFrame('R') >>> electric_potential = 2*R[0]**2*R[1] >>> q = dynamicsymbols('q') >>> R1 = R.orientnew('R1', rot_type = 'Axis', amounts = [q, R.z]) >>> from sympy.physics.vector import time_derivative >>> time_derivative(electric_potential, R) 0 >>> time_derivative(electric_potential, R1).simplify() 2*(R1_x*cos(q(t)) - R1_y*sin(q(t)))*(3*R1_x**2*cos(2*q(t))/2 - R1_x**2/2 - 3*R1_x*R1_y*sin(2*q(t)) - 3*R1_y**2*cos(2*q(t))/2 - R1_y**2/2)*Derivative(q(t), t)
https://docs.sympy.org/latest/modules/physics/vector/fields.html
2018-10-15T16:45:21
CC-MAIN-2018-43
1539583509336.11
[]
docs.sympy.org
Clients:Domains Tab The Domains tab is accessed via the Clients > View/Search Clients page, select a client, then click the tab marked "Domains". It contains details for all a client's domains, as well as the ability to edit nameservers and whois details, apply and remove the Registry lock, move to another client and delete the domain. Contents - 1 Managing a Client's Domain - 2 Payment Settings - 3 Running Module Commands - 4 Domain Specific - 5 Moving a Domain to another Client - 6 Invoices - 7 Misc. Options Managing a Client's Domain You can locate products/services to manage in a number of ways: - Search for the Client in Clients > View/Search Clients, and then from the Client Summary page click the ID of the domain you want to manage from the list - Search for the Domain in Clients > Domain Registrations > Search/Filter, then click the domain ID to be taken to the domain details. - Using the Intelligent Search The Domain details page inside a clients profile allows you to view and modify all of a products settings. After making any changes, you need to click the Save Changes button to save your edits. The first few fields are fairly self-explanatory, such as domain and registrar. When a registrar is selected in the Registrar dropdown, WHMCS will query that registrar live every time the page is loaded and obtain the current details they hold about the domain - ensuing the nameservers and whois details displayed in WHMCS are always accurate. Payment Settings First Payment Amount The sum total due for the initial payment for this service. It includes the domain price + domain addons - discounts. This value will be used to generate an invoice when Registration Date = Next Due Date Recurring Amount The sum total that will be invoiced for this service on renewal. It includes domain price + domain addons - discounts. This value will be used to generate an invoice when Registration Date =/= Next Due Date Auto Recalculate on Save - This checkbox option located to the bottom right of the domain details screen updates the recurring amount field when checked - It can be used after changing the registration period or promo code to auto calculate the new recurring price - It is off by default so that any discounted rates or custom pricing are not overwritten as these aren't taken into account by it Next Due Date The date upon which the next renewal invoice is due to be paid. A renewal invoice will be generated in advance of this date in accordance with the Automation_Settings for the Recurring Amount. Registration Period The frequency with which the domain will be invoiced. For example if this is set to "1" and the Recurring amount "5.00", the client will be invoiced 5.00 once per year for this domain. Similarly if this is set to "2" and the Recurring amount "5.00", the client will be invoiced 5.00 once every other year for this domain. Changing the value will not by itself change the price the client is invoiced, to do this tick the #Auto Recalculate on Save checkbox before clicking Save Changes. Payment Method Defines the payment method used for invoices generated by this domain . With this option it's possible to use a different payment gateway for each of a client's domains. The client may ultimately pay using a different payment method if permitted in the General Settings. Promotion Code If you wish to apply a promotional discount to this domain, select it from this dropdown menu. 
Changing the value will not by itself change the price the client is invoiced, to do this tick the #Auto Recalculate on Save checkbox before clicking Save Changes. Running Module Commands If the domain is linked to a module via the Registrar dropdown selection, you will have a Module Commands row towards the bottom of the page. This allows you to execute any of the commands available in that module. Modules can have custom functions but the most common ones are: - Register - runs the domain registration routine and sets the product status to active - Transfer - runs the domain transfer routine and sets the product status to pending transfer. - Renew - runs the domain renewal routine for the number of years entered in the Registration Period field. - Modify Contact Details - displays a page where the whois records of the domain can be edited. These whois details are queried live from your registrar - Get EPP Code - if the domain supports EPP codes, the EPP code from the registrar will be displayed on-screen - Request Delete - if your registrar allows domains to be deleted, this button runs the domain deletion routine - Release Domain - change the IPS tag on the domain, a popup will appear into which the new IPS tag can be entered. Used for .uk domains. Domain Specific Nameservers Fields are available to specify up to five nameservers on the domain's record. Usually a minimum of two are required, but if you do not wish to specify a third, forth or fifth nameserver then leave the field empty. Upon loading the page the current nameservers will be queried from your domain registrar and displayed in the fields. To change the nameserver records simply adjust the values of the fields and click Save Changes, this will pass the new values to your selected domain registrar. Registrar Lock Tick/untick this option and click save changes to change the status of the registrar lock on the domain. The registrar lock usually needs to be disabled before modifying the nameservers or whois contact details. Management Tools Domain Addons (DNS Management, ID Protection and Email Forwarding) can be enabled/disabled from this page. Ticking/unticking the appropriate checkbox and clicking Save Changes will enable the feature within the client area and adjust the Recurring Amount accordingly. The Disable Auto Renew option is alos located here. When unticked WHMCS will invoice the domain autoamtically for renewal in accordance with the Next Due Date. When ticked the renewal invoice will not be automatically generated and the domain left to expire. For more information refer to Domain Management. Additional Domain Fields Certain domain names require additional information in order to process the registration request. Clients will be prompted to provide this information upon ordering, the values are saved locally in the WHMCS database and displayed in fields below the Management Tools section. Changing the values of these fields does not update any record with the domain registrar.. Misc. Options Admin Notes Here staff can enter private notes about the client to be displayed to other staff viewing this service under the Products/Services tab. Notes entered here are separate from those entered under any other service, domain or the client's Summary tab.. Send Message Use the dropdown located at the bottom of the page to send a 'Domain' type email template to the client, or select the "New Message" option to compose a new email from scratch. 
General and Product type emails can be sent using the dedicated dropdowns under the Summary and Product/Services tabs.
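The pricing fields described above follow simple arithmetic: both the First Payment Amount and the Recurring Amount are the domain price plus any domain addons minus discounts, invoiced once per registration period. The sketch below just restates that arithmetic with made-up numbers; it is not WHMCS code and ignores taxes and anything beyond a flat discount.

def domain_amount(base_price, addon_prices=(), discount=0.0):
    """domain price + domain addons - discounts, as used for first and recurring amounts."""
    return base_price + sum(addon_prices) - discount

base_price = 12.00                       # assumed yearly price for the TLD
addons = [2.00, 3.00]                    # e.g. ID Protection and DNS Management
discount = 1.50                          # promotion applied to this domain

recurring_amount = domain_amount(base_price, addons, discount)
registration_period_years = 2            # invoiced once every other year

print(f"Invoice {recurring_amount:.2f} every {registration_period_years} year(s)")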
https://docs.whmcs.com/Clients:Domains_Tab
2018-10-15T17:17:08
CC-MAIN-2018-43
1539583509336.11
[]
docs.whmcs.com
MathJax Safe-mode¶ MathML includes the ability to include hyperlinks within your mathematics, and such links could be made to javascript: URL’s. For example, the expression <math> <mtext href="javascript:alert('Hello!')">Click Me</mtext> </math> would display the words “Click Me” that when clicked would generate an alert message in the browser. This is a powerful feature that provides authors the ability to tie actions to mathematical expressions. Similarly, MathJax provides an HTML extension for the TeX language that allows you to include hyperlinks in your TeX formulas: $E \href{javascript:alert("Einstein says so!")}{=} mc^2$ Here the equal sign will be a link that pops up the message about Einstein. Both MathML and the HTML extension for TeX allow you to add CSS styles, classes, and id’s to your math elements as well. These features can be used to produce interactive mathematical expressions to help your exposition, improve student learning, and so on. If you are using MathJax in a community setting, however, like a question-and-answer forum, a wiki, a blog with user comments, or other situations where your readers can enter mathematics, then your readers would be able to use such powerful tools to corrupt the page, or fool other readers into giving away sensitive information, or interrupt their reading experience in other ways. In such environments, you may want to limit these abilities so that your readers are protected form these kinds of malicious actions. (Authors who are writing pages that don’t allow users to enter data on the site do not have to worry about such problems, as the only mathematical content will be their own. It is only when users can contribute to the page that you have to be careful.) MathJax provides a Safe extension to help you limit your contributors’ powers. There are two ways to load it. The easiest is to add ,Safe after the configuration file when you are loading MathJax.js: <script src=""></script> This causes MathJax to load the TeX-AMS_HTML configuration file, and then the Safe configuration, which adds the Safe extension to your extensions array so that it will be loaded with the other extensions. Alternatively, if you are using in-line configuration, you could just include "Safe.js" in your extensions array directly: <script type="text/x-mathjax-config"> MathJax.Hub.Config({ jax: ["input/TeX","output/HTML-CSS"], extensions: ["tex2jax.js","Safe.js"] }); </script> <script src=""></script> The Safe extension has a number of configuration options that let you fine-tune what is allowed and what is not. See the Safe extension options for details.
http://docs.mathjax.org/en/latest/safe-mode.html
2017-08-16T15:10:14
CC-MAIN-2017-34
1502886102307.32
[]
docs.mathjax.org
Using Automatic Time-based Scaling Time-based scaling lets you control how many instances a layer should have online at certain times of day or days of the week by starting or stopping instances on a specified schedule. AWS OpsWorks Stacks checks every couple of minutes and starts or stops instances as required. You specify the schedule separately for each instance, as follows: Time of day. You can have more instances running during the day than at night, for example. Day of the week. You can have more instances running on weekdays than weekends, for example. Note You cannot specify particular dates. Adding a Time-Based Instance to a Layer You can either add a new time-based instance to a layer, or use an existing instance. To add a new time-based instance On the Instances page, click + Instance to add an instance. On the New tab, click Advanced >> and then click time-based. Configure the instance as desired. Then click Add Instance to add the instance to the layer. To add an existing time-based instance to a layer On the Time-based Instances page, click + Instance if a layer already has a time-based instance. Otherwise, click Add a time-based instance. Then click the Existing tab. On the Existing tab, select an instance from the list. The list shows only time-based instances. Note If you change your mind about using an existing instance, click the New tab to create a new instance, as described in the preceding procedure. Click Add instance to add the instance to the layer. Configuring a Time-Based Instance After you add a time-based instance to a layer, you configure its schedule as follows. To configure a time-based instance In the navigation pane, click Time-based under Instances. Specify the online periods for each time-based instance by clicking the appropriate boxes below the desired hour. To use the same schedule every day, click the Every day tab and then specify the online time periods. To use different schedules on different days, click each day and select the appropriate time periods. Note Make sure to allow for the amount of time it takes to start an instance and the fact that AWS OpsWorks Stacks checks only every few minutes to see if instances should be started or stopped. For example, if an instance should be running by 1:00 UTC, start it at 0:00 UTC. Otherwise, AWS OpsWorks Stacks might not start the instance until several minutes past 1:00 UTC, and it will take several more minutes for it to come online. You can modify an instance's online time periods at any time using the previous configuration steps. The next time AWS OpsWorks Stacks checks, it uses the new schedule to determine whether to start or stop instances. Note You can also add a new time-based instance to a layer by going to the Time-based page and clicking Add a time-based instance (if you have not yet added a time-based instance to the layer) or +Instance (if the layer already has one or more time-based instances). Then configure the instance as described in the preceding procedures.
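Conceptually, a time-based instance is just a weekly table of "online" hours that the service re-checks every few minutes. The sketch below models such a schedule in plain Python; it is not the OpsWorks API (boto3 exposes this through the OpsWorks client), and the hours chosen are arbitrary.

from datetime import datetime, timezone

# Hours (UTC) during which the instance should be online, per day of week.
schedule = {
    "Monday":    set(range(8, 18)),     # weekdays: 08:00-17:59 UTC
    "Tuesday":   set(range(8, 18)),
    "Wednesday": set(range(8, 18)),
    "Thursday":  set(range(8, 18)),
    "Friday":    set(range(8, 18)),
    "Saturday":  set(),                 # weekends: stay offline
    "Sunday":    set(),
}

def should_be_online(now=None):
    """What the periodic check decides for the given (or current) UTC time."""
    now = now or datetime.now(timezone.utc)
    day = now.strftime("%A")            # English weekday names assumed
    return now.hour in schedule.get(day, set())

print("start instance" if should_be_online() else "stop instance")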
http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling-timebased.html
2017-08-16T15:30:26
CC-MAIN-2017-34
1502886102307.32
[]
docs.aws.amazon.com
What placeholders can I use in archive names and what do they mean?

In some settings in BackWPup you need to choose a file name. For example, you can create a name for the archive of a job, which can consist of characters and placeholders. Here is a list of the available placeholders and their meanings:

- %d = Day of month, two digits with leading zero
- %j = Day of month, without leading zero
- %m = Month number, with leading zero
- %n = Month number, without leading zero
- %Y = Year, four digits
- %y = Year, two digits
- %a = Lowercase ante meridiem (am) and post meridiem (pm)
- %A = Uppercase ante meridiem (AM) and post meridiem (PM)
- %B = Swatch Internet Time
- %g = Hour in 12-hour format, without leading zero
- %G = Hour in 24-hour format, without leading zero
- %h = Hour in 12-hour format, with leading zero
- %H = Hour in 24-hour format, with leading zero
- %i = Minute, two digits, with leading zero
- %s = Second, two digits, with leading zero

The placeholders allow you to create individual file names for the backup files. Example: backwpup_611506_%Y-%m-%d_%H-%i-%s creates, if the job runs on 17 June 2016 at 13:11:10, an archive named backwpup_611506_2016-06-17_13-11-10.zip.
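Most of these placeholders map directly onto strftime-style date codes, so the expansion can be checked with a few lines of Python. The mapping below covers only the date/time placeholders needed for the example archive name; it is an illustration, not BackWPup's own implementation.

from datetime import datetime

# BackWPup placeholder -> Python strftime code (subset needed for the example).
MAP = {"%d": "%d", "%m": "%m", "%Y": "%Y", "%y": "%y",
       "%H": "%H", "%i": "%M", "%s": "%S"}

def expand(name, when):
    for placeholder, strf in MAP.items():
        name = name.replace(placeholder, when.strftime(strf))
    return name

run_time = datetime(2016, 6, 17, 13, 11, 10)
print(expand("backwpup_611506_%Y-%m-%d_%H-%i-%s", run_time) + ".zip")
# -> backwpup_611506_2016-06-17_13-11-10.zip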
http://docs.backwpup.com/article/128-what-placeholders-can-i-use-in-archive-names-and-what-do-they-mean
2017-08-16T14:55:29
CC-MAIN-2017-34
1502886102307.32
[]
docs.backwpup.com
Power BI Documentation Power BI amplifies your insights and the value of your data. With Power BI documentation, you get expert information and answers to address your needs, regardless of how you use Power BI. Power BI service The Power BI service is the online service where you'll find dashboards, apps and published reports. Power BI Desktop Power BI Desktop lets you build advanced queries, models, and reports that visualize data. Power BI mobile apps View and interact with your Power BI dashboards and reports on your mobile device. Power BI developer Power BI offers a wide range of options for developers. This ranges from embedding to custom visuals and streaming datasets. Power BI Report Server Create, deploy, and manage Power BI, mobile and paginated reports on-premises. Guided learning Start your learning journey through Power BI with this sequenced collection of courses.
https://docs.microsoft.com/en-sg/power-bi/
2018-05-20T17:43:39
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
Hardware Station overview and extensibility Important This topic applies to Dynamics 365 for Retail and Dynamics 365 for Finance and Operations. This topic explains how to extend Hardware Station to add support for new devices and new device types for existing devices. Retail Hardware Station overview Retail Hardware Station is used by Retail Modern POS and Cloud POS to connect to retail hardware peripherals such as printers, cash drawers, scanners, and payment terminals. Retail Hardware Station setup Before you start, use the information in Retail hardware station configuration and installation to install Hardware Station, and to get a feel of what hardware is and how it's installed. Retail Hardware Station architecture Hardware Station exposes Web API for Hardware Station application programming interfaces (APIs). Hardware Station can be extended either by implementing a new controller for a new device (for example, a cash dispenser) or by overriding an existing controller for an existing device type (for example, a new Audio Jack magnetic stripe reader (MSR) implementation). Retail Hardware Station extensibility scenarios Extensibility in Hardware Station is achieved by using Managed Extensibility Framework (MEF), which is supported by .NET. Extensibility guideline: Always write your extension in your own extension assembly. That way, you're writing a true extension, and upgrades will be much easier. There are two basic scenarios for extension: - Adding a new device – The out-of-box Hardware Station doesn't already support the device (for example, a cash dispenser). Therefore, you must add support for the new device in Hardware Station. - Adding a new device type for an existing device – The out-of-box Hardware Station implementation already supports the device (for example, an MSR), but you must add support for a specific device type (for example, an Audio Jack MSR implementation). Scenario 1: Adding a new device For this scenario, we will add support for a cash dispenser device in Hardware Station. In our example, we will create a fake cash dispenser that dispenses cash in the Notepad file. However, this example will help you understand the end-to-end extensibility of Hardware Station. - The Retail software development kit (SDK) has a cash dispenser sample. See RetailSdk\SampleExtensions\HardwareStation. - In this case, we must add a new Web API controller and helper properties/methods. - The new CashDispenser controller must extend ApiController and IHardwareStationController. - The Export attribute string here specifies the device that this controller is used for: [Export("CASHDISPENSER", typeof(IHardwareStationController))] namespace Contoso { namespace Commerce.HardwareStation.CashDispenserSample { using System; using System.Composition; using System.Web.Http; using Microsoft.Dynamics.Commerce.HardwareStation; using Microsoft.Dynamics.Retail.Diagnostics; /// <summary> /// Cash dispenser web API controller class. /// </summary> [Export("CASHDISPENSER", typeof(IHardwareStationController))] public class CashDispenserController : ApiController, IHardwareStationController { // Add your controller code here } Scenario 2: Adding a new device type for an existing device For this scenario, we will add support for a new device type for an existing device (an Audio Jack MSR implementation). 
- The Export attribute string specifies the device that this controller is used for: [Export("MSR", typeof(IHardwareStationController))] - Because there will be multiple controllers for MSRs, Hardware Station uses the configuration file to determine which implementation to use at run time. For more information, see the "Retail Hardware Station extensibility configuration" section later in this article. namespace Contoso { namespace Commerce.HardwareStation.RamblerService { using System; using System.Composition; using System.Threading.Tasks; using System.Web.Http; using System.Web.Http.Controllers; using Microsoft.Dynamics.Commerce.HardwareStation; using Microsoft.Dynamics.Commerce.HardwareStation.DataEntity; using Microsoft.Dynamics.Commerce.HardwareStation.Models; using Microsoft.Dynamics.Retail.Diagnostics; /// <summary> /// MSR device web API controller class. /// </summary> [Export("MSR", typeof(IHardwareStationController))] [Authorize] public class AudioJackMSRController : ApiController, IHardwareStationController { // Add controller implementation here } Retail Hardware Station extensibility configuration Configuration for IIS-hosted Hardware Station Before Hardware Station can consume your extension, the composition section in the Hardware Station Web.config file must be updated so that it includes an entry for your extension. The order of the composition targets in the configuration file determines precedence. Configuration for local IPC-based Hardware Station Before local Hardware Station can consume your extension, the composition section in the Modern POS DLLHost.exe.config file (C:\Program Files (x86)\Microsoft Dynamics AX\70\Retail Modern POS\ClientBroker) must be updated so that it includes an entry for your extension. The order of the composition targets in the configuration file determines precedence. [
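The Export attribute essentially registers a controller class in a lookup keyed by a device-type string ("CASHDISPENSER", "MSR", ...), and the host resolves an implementation by that key at run time. The Python sketch below mimics that registration pattern with a decorator; it is an analogy for how the extension point works, not the actual MEF/Hardware Station mechanism.

# Toy registry playing the role of [Export("MSR", typeof(IHardwareStationController))]:
# map a device-type key to one or more controller classes.
CONTROLLERS = {}

def export(device_type):
    def register(cls):
        CONTROLLERS.setdefault(device_type, []).append(cls)
        return cls
    return register

@export("CASHDISPENSER")
class CashDispenserController:
    def dispense(self, amount):
        print(f"dispensing {amount}")

@export("MSR")
class AudioJackMSRController:       # a second implementation could also register under "MSR"
    def read_card(self):
        print("card swiped")

# The host picks an implementation for a device type at run time; when several are
# registered, configuration decides which one wins, as described for the composition
# section of the Web.config / DLLHost.exe.config files above.
controller = CONTROLLERS["MSR"][0]()
controller.read_card()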
https://docs.microsoft.com/es-es/dynamics365/unified-operations/retail/dev-itpro/hardware-station-extensibility
2018-05-20T18:06:36
CC-MAIN-2018-22
1526794863662.15
[array(['media/hws-dll-host-local-config.png', 'Local Hardware station config'], dtype=object)]
docs.microsoft.com
Enforcing Policy with Runtime Validations¶ Overview¶ This example shows you how to write a runtime validation in Ludwig and apply it to your processes using Fugue, ensuring your infrastructure is compliant with your organization’s policies. runtime validation. This composition demonstrates runtime validation by prohibiting resources from being created in AWS’s Canada region. If you have any running processes in ca-central-1, you’ll need to modify the walkthrough to prohibit a region you aren’t using. - Edit CanadaRegionValidation.lwon line 7 to prohibit a different AWS region. - Edit CanadaRegionFail.lwon line 8 to use the prohibited region. What We’ll Do In This Example¶ We’ll cover how to write Ludwig to make a runtime validation enforcing certain rules about your infrastructure. We’ll also discuss how to create the validation module on the Conductor, how to delete it, and how to list all validation modules on the Conductor. Finally, we’ll demonstrate how the validation prevents noncompliant processes from being created. What We’ll Have When We’re Done¶ A compliant process that creates a Virtual Private Cloud (VPC) in AWS, complete with tags. Download¶ You can download the source code for this example here: If you executed init during the Quick Setup, download the compositions to the directory where you ran the command. Get editor plug-ins here. Let’s Go!¶ Writing the Validation Module¶ Runtime validation is very similar to design-time validation, but instead of having the validation enforced locally through the compiler, it’s enforced by the Conductor. This means that anyone who tries to run a noncompliant composition will encounter an error message stating why validation failed. With design-time validation, the validation module must either be imported into the composition or specified with the --validation-modules option in lwc. With runtime validation, the validation module is created on the Conductor, so it’s not strictly necessary to import the validation module in your composition for runtime validation – but it is a best practice for local development. First, we need to write the validation module that will be applied to the Conductor. We’ll say that our organization has a policy that prohibits resources from being created in Canada ( ca-central-1). So, this validation checks AWS.Region types for a specific constructor. If the constructor AWS.Ca-central-1 is present in the composition, the validation will fail. Validation Function¶ Next up is writing the validation function. If you want a refresher on writing functions, check out the Functions Tutorial. fun noCaCentral1(region: AWS.Region) -> Validation: case region of | AWS.Ca-central-1 -> Validation.error {message: "Error: Region Ca-central-1 is prohibited"} | _ -> Validation.success We named this function noCaCentral1, and it takes an AWS.Region and returns a Validation. If the AWS.Region constructor is AWS.Ca-central-1, the function returns Validation.error and the Conductor returns the error message we specified, “Error: Region Ca-central-1 is prohibited.” If the AWS.Region constructor is anything else, then the function returns Validation.success. That means when this validation module is uploaded to the Conductor, any composition containing AWS.Ca-central-1 will fail validation and will not be run. Compositions in non-Canadian regions will pass validation and will be run. For more information on writing validation functions, see Design-Time Validations or Runtime Validations. 
Validation Registration¶ Now that we’ve written the validation function, we must register its name with the validate keyword so that it takes effect on all AWS.Region resources in scope. validate noCaCentral1 Registering the validation ensures that when the validation module is applied to the Conductor, all current and future instances of AWS.Region in Fugue processes will be tested for compliance. Creating the Validation Module on the Conductor¶ The next step is to create the validation module on the Conductor. This is the step that allows our validation to take effect across all current and future Fugue processes. We can accomplish this through the policy subcommand validation-add. This subcommand requires a <validation_file> filename and a --name <name> argument, which is the name of our validation module as it is known to Fugue. We’ll name it no-canada. The whole command looks like this: fugue policy validation-add --name no-canada CanadaRegionValidation.lw The CLI compiles the composition, uploads it to S3, and then asks the Conductor to create the new validation module: [fugue validation] Compiling Ludwig File CanadaRegionValidation.lw [ OK ] Successfully compiled. No errors. Uploading validation module to S3 ... [ OK ] Successfully uploaded. Requesting the Conductor create new validation module ... [ DONE ] Validation module 'no-canada' uploaded and added to the Conductor. Note: If you have any currently running processes that would fail validation, the validation module is not uploaded and you’ll see an error message. In this case, you’ll need to see our instructions in the Prerequisites for modifying the validation and test composition. Listing Validation Modules on the Conductor¶ Let’s view a list of the validation modules on the Conductor: fugue policy validation-list We’ll see our no-canada validation module, along with some information about when it was created, what the module’s filename was, and the first 8 characters of its SHA-256 hash: [fugue validation] List Validation Modules Fugue Validation Modules for user/xxxxxxxxxxxx - Fri Jul 7 2017 3:06pm Name Created File Name Sha256 --------- --------- ------------------------- -------- no-canada 3:05pm CanadaRegionValidation.lw be99a230 Testing A Noncompliant Composition¶ Looks good, right? Let’s put our new validation module to the test. We’ll run the following composition: composition import Fugue.AWS as AWS import Fugue.AWS.EC2 as EC2 simpleVpc: EC2.Vpc.new { cidrBlock: "10.0.0.0/16", region: AWS.Ca-central-1, tags: [simpleTag] } simpleTag: AWS.tag("Application", "Validation Test") As you can see, it’s a simple composition that creates a tagged VPC in the Canada region. Since our validation function tests for the constructor AWS.Ca-central-1, this composition will fail validation. Let’s give it a whirl and see what happens when we try to run it. fugue run CanadaRegionFail.lw --alias canada-vpc We see this output: [ fugue run ] Running CanadaRegionFail.lw Run Details: Account: default Alias: canada-vpc Compiling Ludwig file /Users/main-user/projects/CanadaRegionFail.lw [ OK ] Successfully compiled. No errors. Uploading compiled Ludwig composition to S3... [ OK ] Successfully uploaded. Requesting the Conductor to create and run process based on composition ... 
[ ERROR ] ludwig (validation error): "/tmp/784284401/composition/src/CanadaRegionFail.lw" (line 8, column 11): Validations failed: 8| region: AWS.Ca-central-1, ^^^^^^^^^^^^^^^^ - Error: Region Ca-central-1 is prohibited (from CanadaRegionValidation.noCaCentral1) Hooray, an error! That’s exactly what we expect to see. Let’s dig into what’s happening here: The CLI compiles the composition locally, uploads it to S3, and then asks the Conductor to run the composition. The Conductor then checks the composition against the uploaded validation modules, and the composition fails validation. That’s why we see an error message: “Error: Region Ca-central-1 is prohibited.” We also see where the error message comes from – we wrote it in the noCaCentral1 function in the CanadaRegionValidation.lw composition. Finally, the output tells us exactly where validation failed – line 8, column 11 of CanadaRegionFail.lw. In sum, Fugue has prevented us from running a composition that would have violated company policy. Neat, huh? Testing a Compliant Composition¶ Now let’s see what happens when we attempt to run a composition that is compliant. We’ll run this composition: composition import Fugue.AWS as AWS import Fugue.AWS.EC2 as EC2 simpleVpc: EC2.Vpc.new { cidrBlock: "10.0.0.0/16", region: AWS.Us-west-2, tags: [simpleTag] } simpleTag: AWS.tag("Application", "Validation Test") This is the exact same composition as the one we tried to run a moment ago, except the VPC is created in the company-approved Oregon region, AWS.Us-west-2. Let’s run it: fugue run CanadaRegionSuccess.lw --alias oregon-vpc We’ll see this: [ fugue run ] Running CanadaRegionSuccess.lw Run Details: Account: default Alias: oregon-vpc Compiling Ludwig file /Users/main-user/projects/CanadaRegionSuccess.lw [ OK ] Successfully compiled. No errors. Uploading compiled Ludwig composition to S3... [ OK ] Successfully uploaded. Requesting the Conductor to create and run process based on composition ... [ DONE ] Process created and running. State Updated Created Account FID Alias Flags Last Message Next Command ------- --------- --------- ------------------- ------------------------------------ ------------- ------- -------------- -------------- Running 3:12pm 3:12pm fugue-1499179370468 9f0af271-2f68-4be9-a93d-ab09287d50dc oregon-vpc -e run [ HELP ] Run the "fugue status" command to view details and status for all Fugue processes. As you can see, the CLI once again compiles the composition locally, uploads it to S3, and asks the Conductor to run it. The Conductor again checks the composition against the uploaded validation modules, and this time, the composition passes validation. The compliant process is successfully created! Deleting Validation Modules on the Conductor¶ Now that we’ve demonstrated how validation modules work on the Conductor, let’s delete the no-canada validation module we created. It’s simple: fugue policy validation-delete no-canada The CLI will prompt for confirmation and, upon y, delete the validation module: [fugue validation] Deleting Validation Module: 'no-canada' [ WARN ] Are you sure you want to delete this validation module: 'no-canada'? [y/N]: y Requesting the Conductor to delete validation module: 'no-canada' ... [ DONE ] The Conductor is deleting the validation module: 'no-canada' You can then execute fugue policy validation-list again to confirm that there are no validation modules on the Conductor. 
You’ll see output like this: [fugue validation] List Validation Modules Fugue Validation Modules for user/xxxxxxxxxxxx - Tue Jul 4 2017 11:52am Name Created File Name Sha256 ------ --------- ----------- -------- Killing the Fugue Process¶ Now that we’ve deleted the validation module on the Conductor, the only thing left to do is to kill the compliant, running process. We don’t need that VPC hanging around in our account anymore. fugue kill oregon-vpc -y The CLI returns this output: [ fugue kill ] Killing running process with Alias: oregon-vpc Requesting the Conductor to kill running composition with Alias: oregon-vpc... [ Done ] The conductor is killing the process with Alias: oregon-vpc All done! We’ve successfully created a validation module on the Conductor, tested it with a noncompliant composition, tested it with a compliant composition, deleted the validation module, and killed the running process. We’ve demonstrated how we can write validations to enforce company policy, and we’ve seen how Fugue protects against process noncompliance at the Conductor level. Nice job! Next Steps¶ Now that you’ve seen how runtime validation works, read all about design-time validation. Or, learn more about writing Ludwig functions at the Functions Tutorial. You can also check out our other examples and walkthroughs. And as always, reach out to [email protected] with any questions.
https://docs.fugue.co/fugue-by-example-validations.html
2018-05-20T17:33:19
CC-MAIN-2018-22
1526794863662.15
[]
docs.fugue.co
e-ATS Newsletter Get the e-ATS Newsletter RSS feed To enable the e-ATS Newsletter RSS feed, simply drag this link into your RSS reader. What is RSS? RSS (Really Simple Syndication) is an XML-based format for sharing and distributing Web content, such as news headlines. RSS provides interested scholars with convenient feeds of newly posted e-ATS Newsletter content. When new articles appear in e-ATS Newsletter, our corresponding RSS feeds are updated and your RSS reader alerts you of the new content.
http://docs.rwu.edu/atsnews/announcements.html
2016-12-03T04:34:44
CC-MAIN-2016-50
1480698540839.46
[]
docs.rwu.edu
Please note that the content on this page is currently incomplete. Please treat it as a work in progress. The 1.5 specification is described in Creating a language definition file. Additional information from a new thread So, the implications on ini file format: Using the native PHP ini parser for language files has many benefits, including much faster performance. Basically, you have two options: Note from nikosdion: I have written a reliable Joomla! 1.5 to Joomla! 1.6 language converter which abides by these conventions. You can find it at my Snipt page.
http://docs.joomla.org/index.php?title=Specification_of_language_files&curid=4457&diff=86153&oldid=85775
2013-12-05T00:24:02
CC-MAIN-2013-48
1386163037893
[]
docs.joomla.org
This reference provides an overview of replica set configuration options and settings. Use rs.conf() in the mongo shell to retrieve this configuration. Note that default values are not explicitly displayed. The following document provides a representation of a replica set configuration document. Angle brackets (e.g. < and >) enclose all optional fields. { _id : <setname>, version: <int>, members: [ { _id : <ordinal>, host : hostname<:port>, <arbiterOnly : <boolean>,> <buildIndexes : <boolean>,> <hidden : <boolean>,> <priority: <priority>,> <tags: { <document> },> <slaveDelay : <number>,> <votes : <number>> } , ... ], <settings: { <getLastErrorDefaults : <lasterrdefaults>,> <chainingAllowed : <boolean>,> <getLastErrorModes : <modes>> }> } Type: string Value: <setname> An _id field holding the name of the replica set. This reflects the set name configured with replSet or mongod --replSet. Type: array Contains an array holding an embedded document for each member of the replica set. The members document contains a number of fields that describe the configuration of each member of the replica set. The members field in the replica set configuration document is a zero-indexed array. Type: ordinal Provides the zero-indexed identifier of every member in the replica set. Note When updating the replica configuration object, access the replica set members in the members array with the array index. The array index begins with 0. Do not confuse this index value with the value of the _id field in each document in the members array. Type: <hostname><:port> Identifies the host name of the set member with a hostname and port number. This name must be resolvable for every host in the replica set. Warning host cannot hold a value that resolves to localhost or the local interface unless all members of the set are on hosts that resolve to localhost. Optional. Type: boolean Default: false Identifies an arbiter. For arbiters, this value is true, and is automatically configured by rs.addArb()”. Optional. Type: boolean Default: true Determines whether the mongod builds indexes on this member. Do not set to false for instances that receive queries from clients. Omitting index creation, and thus this setting, may be useful, if: If set to false, secondaries configured with this option do build indexes on the _id field, to facilitate operations required for replication. Warning You may only set this value when adding a member to a replica set. You may not reconfigure a replica set to change the value of the buildIndexes field after adding the member to the set. buildIndexes is only valid when priority is 0 to prevent these members from becoming primary. Make all instances that do not build indexes hidden. Other secondaries cannot replicate from a members Optional. Type: Number, between 0 and 100.0 including decimals. Default: 1 Specify higher values to make a member more eligible to become primary, and lower values to make the member less eligible to become primary. priority and Replica Set Elections. Optional. Type: MongoDB Document Default: none Used to represent arbitrary values for describing or tagging members for the purposes of extending write concern to allow configurable data center awareness. Use in conjunction with getLastErrorModes and getLastErrorDefaults and db.getLastError() (i.e. getLastError.) For procedures on configuring tag sets, see Configure Replica Set Tag Sets. Important In tag sets, all tag values must be strings. Optional. Type: Integer. (seconds.) 
Default: 0 Describes the number of seconds “behind” the primary that this replica set member should “lag.” Use this option to create delayed members, that maintain a copy of the data that reflects the state of the data set at some amount of time in the past, specified in seconds. Typically such delayed members help protect against human error, and provide some measure of insurance against the unforeseen consequences of changes and updates. Optional. Type: Integer Default: 1 Controls the number of votes a server will cast in a replica set election. The number of votes each member has can be any non-negative integer, but it is highly recommended each member has 1 or 0 votes. If you need more than 7 members in one replica set, use this setting to add additional non-voting members with a votes value of 0. For most deployments and most members, use the default value, 1, for votes. Optional. Type: MongoDB Document The settings document configures options that apply to the whole replica set. Optional. Type: boolean Default: true New in version 2.2.4. When chainingAllowed is true, the replica set allows secondary members to replicate from other secondary members. When chainingAllowed is false, secondaries can replicate only from the primary. When you run rs.config() to view a replica set’s configuration, the chainingAllowed field appears only when set to false. If not set, chainingAllowed is true. See also Manage Chained Replication Optional. Type: MongoDB Document Specify arguments to the getLastError that members of this replica set will use when no arguments to getLastError has no arguments. If you specify any arguments, getLastError , ignores these defaults. Optional. Type: MongoDB Document Defines the names and combination of members for use by the application layer to guarantee write concern to database using the getLastError command to provide data-center awareness. Most modifications of replica set configuration use the mongo shell. Consider the following reconfiguration operation: Example Given the following replica set configuration: { "_id" : "rs0", "version" : 1, "members" : [ { "_id" : 0, "host" : "mongodb0.example.net:27017" }, { "_id" : 1, "host" : "mongodb1.example.net:27017" }, { "_id" : 2, "host" : "mongodb2.example.net:27017" } ] } The following reconfiguration operation updates the priority of the replica set members: cfg = rs.conf() cfg.members[0].priority = 0.5 cfg.members[1].priority = 2 cfg.members[2].priority = 2 rs.reconfig(cfg) First, this operation sets the local variable cfg to the current replica set configuration using the rs.conf() method. Then it adds priority values to the cfg document for the three sub-documents in the members array, accessing each replica set member with the array index and not the replica set member’s _id field. Finally, it calls the rs.reconfig() method with the argument of cfg to initialize this new configuration. The replica set configuration after this operation will resemble the following: { "_id" : "rs0", "version" : 1, "members" : [ { "_id" : 0, "host" : "mongodb0.example.net:27017", "priority" : 0.5 }, { "_id" : 1, "host" : "mongodb1.example.net:27017", "priority" : 2 }, { "_id" : 2, "host" : "mongodb2.example.net:27017", "priority" : 1 } ] } Using the “dot notation” demonstrated in the above example, you can modify any existing setting or specify any of optional replica set configuration variables. Until you run rs.reconfig(cfg) at the shell, no changes will take effect. 
You can issue cfg = rs.conf() at any time before using rs.reconfig() to undo your changes and start from the current configuration. If you issue cfg as an operation at any point, the mongo shell will output the complete document with your modifications for review. The rs.reconfig() operation has a “force” option, to make it possible to reconfigure a replica set if a majority of the replica set is not visible and there is no primary member of the set. Use the following form: rs.reconfig(cfg, { force: true } ) Warning Forcing a rs.reconfig() can lead to rollbacks and other situations that are difficult to recover from. Exercise caution when using this option. Note The rs.reconfig() shell method can force the current primary to step down and trigger an election in some situations. When the primary steps down, all clients will disconnect. This is by design. Since this typically takes 10-20 seconds, attempt to make such changes during scheduled maintenance periods.
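The tags and getLastErrorModes fields described above are easiest to see together. The following mongo shell sketch is illustrative only: the tag key dc, the values east/west, and the mode name multiDC are made-up names, and assigning cfg.settings directly replaces any existing settings subdocument. It tags each member with its data center and then defines a write concern mode requiring acknowledgement from two distinct dc values.

```javascript
// Illustrative only: "dc", "east"/"west" and "multiDC" are hypothetical names.
cfg = rs.conf()
cfg.members[0].tags = { "dc": "east" }
cfg.members[1].tags = { "dc": "west" }
cfg.members[2].tags = { "dc": "west" }

// A mode maps a tag key to the number of distinct tag values that must acknowledge a write.
cfg.settings = { getLastErrorModes: { multiDC: { "dc": 2 } } }

rs.reconfig(cfg)

// Clients can then request that write concern by name:
db.runCommand( { getLastError: 1, w: "multiDC" } )
```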
http://docs.mongodb.org/manual/reference/replica-configuration/
2013-12-05T00:19:01
CC-MAIN-2013-48
1386163037893
[]
docs.mongodb.org
What happens when a user resets a BlackBerry device after you turn on content protection for the device transport key When a user resets the BlackBerry device, it: - turns off the data connection over the wireless network - suspends serial bypass connections if your organization's environment includes an enterprise Wi-Fi® network and the BlackBerry device can connect directly to a BlackBerry Router - frees the memory that is associated with all data and keys, including the decrypted principal encryption key - locks itself The BlackBerry device is designed to turn off the data connection and serial bypass connection while the content protection key is unavailable to decrypt the principal encryption key in flash memory. Until a user unlocks the BlackBerry device, the BlackBerry device cannot receive and decrypt data. The BlackBerry device does not turn off the wireless transceiver and can still receive phone calls, SMS text messages, and MMS messages.
http://docs.blackberry.com/en/admin/deliverables/16648/Encrypting_device_transport_key_on_locked_BB_834472_11.jsp
2013-12-05T00:05:46
CC-MAIN-2013-48
1386163037893
[]
docs.blackberry.com
Please note that the content on this page is currently incomplete. Please treat it as a work in progress. The aim of this document is to introduce the design of Menus and Modules. Name the menus Use names that make sense to anyone visiting the site - obvious but not always done. The Main menu acts as the Top level for this site. Modules are pre-defined Modules that ship with Joomla! and are added to a site as part of the development process. They include some essential things for a new site, including: Cross Reference: There. --Lorna Scammell February 2011
http://docs.joomla.org/index.php?title=J1.5:Design_appearance_using_Menus_and_Modules:_Joomla!_1.5&oldid=37353
2013-12-05T00:09:13
CC-MAIN-2013-48
1386163037893
[]
docs.joomla.org
Package and Distribute This document guides you how to package and distribute NW.js based app. Quick Start You can use following tools to automatically package your NW.js based app for distribution. - nwjs-builder-phoenix (recommended) - nw-builder Or your can build your app manually with the instructions below. Prepare Your App Before packaging, you should prepare all necessary files on hands. Check out following checklist to make sure you didn’t miss anything: - Source code and resources - Install NPM modules with npm install - Rebuild native Node modules - Build NaCl binaries - Compile source code and remove the original files - Icon used in manifest file Warning Do not assume your node_modules that target one platform work as is in all platforms. For instance node-email-templates has specific Windows & Mac OS X npm install commands. Besides, it requires python to install properly, which is not installed by default on Windows. As a rule of thumb npm install your package.json on each platform you target to ensure everything works as expected. Filename and Path On most Linux and some Mac OS X, the file system is case sensitive. That means test.js and Test.js are different files. Make sure the paths and filenames used in your app have the right case. Otherwise your app may look bad or crash on those file systems. Long Path on Windows The length of path used in your app may exceed the maximum length (260 characters) on Windows. That will cause various build failures. This usually happens during installing dependencies with npm install using older version of NPM (<3.0). Please build your app in the root directory, like C:\build\, to avoid this issue as much as possible. Prepare NW.js You have to redistribute NW.js with your app to get your app running. NW.js provided multiple build flavors for different requirements and package size. Choose the proper build flavor for your product or build it from source code. All files in the downloaded package should be redistributed with your product, except tools in SDK flavor including nwjc, payload and chromedriver. Package Your App There two options to pack your app: plain files or zip file. Package Option 1. Plain Files (Recommended) On Windows and Linux, you can put the files of your app in the same folder of NW.js binaries and then ship them to your users. Make sure nw (or nw.exe) is in the same folder as package.json. Or you can put the files of your app in a folder named package.nw in the same folder as nw (or nw.exe). On Mac, put the files of your app into a folder named app.nw in nwjs.app/Contents/Resources/ and done. It’s the recommended way to pack your app. Package Option 2. Zip File You can package all the files into a zip file and rename it as package.nw. And put it along with NW.js binaries for Windows and Linux. For Mac, name it app.nw and put it in nwjs.app/Contents/Resources/. Start Slow with Big Package or Too Many Files At starting time, NW.js will unzip the package into temp folder and load it from there. So it will start slower if your package is big or contains too many files. On Windows and Linux, you can even hide the zip file by appending the zip file to the end of nw or nw.exe. You can run following command on Windows to achieve this: copy /b nw.exe+package.nw app.exe or following command on Linux: cat nw app.nw > app && chmod +x app Platform Specific Steps Windows Icon for nw.exe can be replaced with tools like Resource Hacker, nw-builder and node-winresourcer. 
You can create an installer to deploy all necessary files onto the end user’s system. You can use Windows Installer, NSIS or Inno Setup. Linux On Linux, you need to create a proper .desktop file. To create a self-extractable installer script, you can use scripts like shar or makeself. To distribute your app through a package management system, like apt, yum, pacman etc., please follow their official documents to create the packages. Mac OS X On Mac OS X, you need to modify the following files to have your own icon and bundle id: Contents/Resources/nw.icns: the icon of your app. nw.icns is in Apple Icon Image Format. You can convert your icon in PNG/JPEG format into ICNS by using tools like Image2Icon. Contents/Info.plist: the Apple package description file. You can read Implementing Cocoa’s Standard About Panel for details on how this file influences your app and what fields you should modify. You should sign your Mac app, or the user won’t be able to launch the app if Gatekeeper is turned on. See Support for Mac App Store for details. References See the wiki of NW.js for more tools for packaging your app.
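For reference, a minimal manifest for the plain-files option might look like the sketch below. The field values (app name, entry page, window size) are placeholders rather than anything mandated above; the only requirement is that package.json sits next to the nw binary or inside package.nw / app.nw as described earlier.

```json
{
  "name": "my-app",
  "main": "index.html",
  "window": {
    "title": "My App",
    "width": 800,
    "height": 600
  }
}
```

If you prefer the zip option, running something like `zip -r ../package.nw .` from inside the app folder (a hypothetical layout) produces the archive that can then be appended to the binary with the copy/cat commands shown above.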
http://docs.nwjs.io/en/latest/For%20Users/Package%20and%20Distribute/
2017-08-16T19:22:06
CC-MAIN-2017-34
1502886102393.60
[]
docs.nwjs.io
Recently Viewed Topics Restore from File If you have previously saved the Appliance configuration to a file, you can restore the configuration by selecting the file from the “Choose File” button and selecting the “Whole Appliance” or individual application to be restored from the drop-down list. If the application is not contained in the backup file selected, no restore operation will be completed. Supported versions of the backups that may be restored are listed on the screen. To restore a backup to an Appliance that has enabled applications contained in the backup: - From the Backup tab, scroll to the Restore from File section, and select "Only Nessus" from the drop down menu. - Navigate to the Nessus Only backup file, and click "Upload Backup File." - Click the "Backup Existing or Restore Backup" button. - Click the "Discard Existing and Backup" button. Note: When restoring a backup file from a previous version of Tenable software, it will be upgraded to the currently installed version on the Appliance.
https://docs.tenable.com/appliance/4_6/Content/RestoreFromFile.htm
2017-08-16T19:26:59
CC-MAIN-2017-34
1502886102393.60
[array(['Resources/Images/4_0_RestoreFromFile.gif', None], dtype=object)]
docs.tenable.com
To hide a view from the recorded video, call the static method hideView in the TestFairy class: UIView *view = ... [TestFairy hideView:view]; Code example @interface MyViewController : UIViewController { IBOutlet UITextField *usernameView; IBOutlet UITextField *creditCardView; IBOutlet UITextField *cvvView; } ... @implementation MyViewController - (void)viewDidLoad { [super viewDidLoad]; [TestFairy hideView:creditCardView]; [TestFairy hideView:cvvView]; } Sample video Below is a sample screen taken from a demo video. On the left is how the app normally looks. On the right is a screenshot taken with the fields hidden. Notes - Views are hidden from screenshots before they are uploaded. - You may use hideView on multiple views. - You may add the same view multiple times; no checks are needed.
https://docs.testfairy.com/iOS_SDK/Hiding_views_from_video.html
2017-08-16T19:30:49
CC-MAIN-2017-34
1502886102393.60
[array(['../../img/ios/hidden_views/iphone-with-fields.png', None], dtype=object) array(['../../img/ios/hidden_views/iphone-no-fields.png', None], dtype=object) ]
docs.testfairy.com
OBJ Object OBJ Layer: Select the OBJ File or File Sequence imported through AE’s File Importer. To see how this works, watch this OBJ Import Tutorial OBJ Sequence Loop: Popup to choose the type of loop used to loop the OBJ Sequences. OBJ Resolution: Controls the resolution/No. of Vertices of the OBJ File. Please Note: This is computationally very expensive as it needs to recalculate geometry. It is recommended to reduce the poly count before importing the model into Plexus. For more details, refer to the Usage & Performance section. Transform OBJ: Controls the position, rotation and scale of the OBJ. Color: The default color of the vertices. Opacity: The default opacity of the vertices. Import Facets: Imports the Faces from the OBJ File, which can be rendered using the Plexus Facets Render Object. Texture Co-ordinates: You can import UV co-ordinates from the OBJ file, or generate them based on the indices, or generate them based on their position. These co-ordinates are utilized when Color Maps are applied. Group: The Group to which the Object belongs in the Plexus. Performance Tips. Other Notes - Starting with Plexus 3, UV co-ordinates and Normals are also imported into Plexus. - Although not officially supported, Trapcode Form imported OBJ File Footage is also known to work.
http://docs.rowbyte.com/plexus/geometry_objects/obj_object/
2017-08-16T19:22:50
CC-MAIN-2017-34
1502886102393.60
[array(['../images/obj_object.jpg', 'Plexus OBJ Object'], dtype=object)]
docs.rowbyte.com
Client API¶ taco¶ The taco module imports the Taco class from the taco.client module, allowing it to be imported as follows: from taco import Taco taco.client¶ - class taco.client. Taco(lang=None, script=None, disable_context=False)¶ Taco client class. Example: from taco import Taco taco = Taco(lang='python') taco.import_module('time', 'sleep') taco.call_function('sleep', 5) call_class_method(class_, name, *args, **kwargs)¶ Invoke a class method call in the connected server. The context (void / scalar / list) can be specified as a keyword argument “context” unless the “disable_context” attribute has been set. call_function(name, *args, **kwargs)¶ Invoke a function call in the connected server. The context (void / scalar / list) can be specified as a keyword argument “context” unless the “disable_context” attribute has been set. construct_object(class_, *args, **kwargs)¶ Invoke an object constructor. If successful, this should return a TacoObjectinstance which references the new object. The given arguments and keyword arguments are passed to the object constructor. import_module(name, *args, **kwargs)¶ Instruct the server to load the specified module. The interpretation of the arguments depends on the language of the Taco server implementation. function(name)¶ Convience method giving a function which calls call_function. This example is equivalent to that given for this class: sleep = taco.function('sleep') sleep(5) constructor(class_)¶ Convience method giving a function which calls construct_object. For example constructing multiple datetime objects: taco.import_module('datetime', 'datetime') afd = taco.construct_object('datetime', 2000, 4, 1) Could be done more easily: datetime = taco.constructor('datetime') afd = datetime(2000, 4, 1) taco.object¶ - class taco.object. TacoObject(client, number)¶ Taco object class. This class is used to represent objects by Taco actions. Instances of this class will returned by methods of Taco objects and should not normally be constructed explicitly. The objects reside on the server side and are referred to by instances of this class by their object number. When these instances are destroyed the destroy_object action is sent automatically. call_method(*args, **kwargs)¶ Invoke the given method on the object. The first argument is the method name. The context (void / scalar / list) can be specified as a keyword argument “context” unless the “disable_context” attribute of the client has been set. taco.error¶ - exception taco.error. TacoError¶ Base class for specific Taco client exceptions. Note that the client can also raise general exceptions, such as ValueError, if its methods are called with invalid arguments. - exception taco.error. TacoReceivedError¶ An exception raised by the Taco client. Raised if the client receives an exception action. The exception message will contain any message text received in the exception action.
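Putting the pieces above together, a short sketch of object construction and method calls might look like this. It only uses calls documented on this page (construct_object via the constructor convenience wrapper, TacoObject.call_method, and TacoReceivedError); the choice of the datetime module mirrors the examples above.

```python
from taco import Taco
from taco.error import TacoReceivedError

taco = Taco(lang='python')
taco.import_module('datetime', 'datetime')

# constructor() wraps construct_object(); calling it returns a TacoObject
datetime = taco.constructor('datetime')
afd = datetime(2000, 4, 1)

try:
    # The first argument to call_method is the method name on the remote object.
    print(afd.call_method('isoformat'))
except TacoReceivedError as exc:
    # Raised if the server side sends back an exception action.
    print('Server raised an exception:', exc)
```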
http://taco-module-for-python.readthedocs.io/en/latest/api_client.html
2017-08-16T19:14:47
CC-MAIN-2017-34
1502886102393.60
[]
taco-module-for-python.readthedocs.io
Apigee Test is an Apigee Labs feature. It's available free of charge, functions “as is,” and is not supported. There are no service-level agreements (SLAs) for bug fixes. Get help in the Apigee Community. Update a Probe PUT Consider backing up a Probe before updating it. To back it up, Get a Probe and save the response as a file. Resource URL /organizations/{org_name}/probes/{probe_id}
http://ja.docs.apigee.com/apigee-test/apis/put/organizations/%7Borg_name%7D/probes/%7Bprobe_id%7D
2017-08-16T19:22:06
CC-MAIN-2017-34
1502886102393.60
[]
ja.docs.apigee.com
InfluxDB (Database service) InfluxDB is a time series database optimized for high-write-volume use cases such as logs, sensor data, and real-time analytics. It exposes an HTTP API for client interaction. See the InfluxDB documentation for more information. Supported versions - 1.2 Relationship The format exposed in the $PLATFORM_RELATIONSHIPS environment variable: { "servicename" : [ { "scheme" : "http", "ip" : "246.0.161.240", "host" : "influx.internal", "port" : 8086 } ] } Usage example In your .platform/services.yaml: influx: type: influxdb:1.2 disk: 1024 In your .platform.app.yaml: relationships: timedb: "influx:influxdb" You can then use the service in a configuration file of your application with something like: $relationships = getenv('PLATFORM_RELATIONSHIPS'); if (!$relationships) { return; } $relationships = json_decode(base64_decode($relationships), TRUE); foreach ($relationships['timedb'] as $endpoint) { $container->setParameter('influx_host', $endpoint['host']); $container->setParameter('influx_port', $endpoint['port']); } Exporting data InfluxDB includes its own export mechanism. To gain access to the server from your local machine, open an SSH tunnel with the Platform.sh CLI: platform tunnel:open That will open an SSH tunnel to all services on your current environment, and produce output something like the following: SSH tunnel opened on port 30000 to relationship: timedb The port may vary in your case. Then, simply run InfluxDB's export commands as desired. influx_inspect export -compress
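Since the service only exposes the HTTP API, you can also talk to it directly once you know the relationship's host and port. The sketch below assumes the relationship shown above (host influx.internal, port 8086, reachable from inside the application container or through the SSH tunnel with localhost and the tunnel port) and a hypothetical database name mydb; the /query and /write endpoints with line protocol are the standard InfluxDB 1.x HTTP API.

```bash
# Create a database (hypothetical name "mydb") over the HTTP API
curl -XPOST "http://influx.internal:8086/query" \
     --data-urlencode "q=CREATE DATABASE mydb"

# Write a point using InfluxDB line protocol
curl -XPOST "http://influx.internal:8086/write?db=mydb" \
     --data-binary "cpu_load,host=app1 value=0.42"

# Read it back
curl -G "http://influx.internal:8086/query" \
     --data-urlencode "db=mydb" \
     --data-urlencode "q=SELECT * FROM cpu_load"
```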
https://docs.platform.sh/configuration/services/influxdb.html
2017-08-16T19:44:34
CC-MAIN-2017-34
1502886102393.60
[]
docs.platform.sh
New in version 2.2. - name: Snapshot volume netapp_e_snapshot_volume: ssid: "{{ ssid }}" api_url: "{{ netapp_api_url }}/" api_username: "{{ netapp_api_username }}" api_password: "{{ netapp_api_password }}" state: present storage_pool_name: "{{ snapshot_volume_storage_pool_name }}" snapshot_image_id: "{{ snapshot_volume_image_id }}" name: "{{ snapshot_volume_name }}" Common return values are documented in Return Values; the following are the fields unique to this module: Note If the snapshot volume already exists, an ok status will be returned and no other changes can be made to a pre-existing snapshot volume.
http://docs.ansible.com/ansible/latest/netapp_e_snapshot_volume_module.html
2017-08-16T19:38:28
CC-MAIN-2017-34
1502886102393.60
[]
docs.ansible.com
Dimensional Weight Dimensional weight, sometimes called volumetric weight, is a common industry practice that bases the transportation price on a combination of weight and package volume. In simple terms, dimensional weight is used to determine the shipping rate based on the amount of space a package occupies in the cargo area of the carrier. Dimensional weight is typically used when a package is relatively light compared to its volume. All major carriers now apply dimensional weight to some shipments. However, the manner in which dimensional weight pricing is applied varies from one carrier to another. We recommend that you become familiar with the method used by each carrier to determine and apply dimensional weight. If your company has a high volume of shipments, even a slight difference in shipping price can translate to thousands of dollars over the course of a year. Magento’s native shipping configuration does not include support for dimensional weight. However, WebShopApps has developed a Dimensional Shipping extension that manages rates for FedEx, UPS, and USPS. WebShopApps is a Magento Technology Partner.
http://docs.magento.com/m1/ce/user_guide/shipping/weight-dimensional.html
2017-08-16T19:22:38
CC-MAIN-2017-34
1502886102393.60
[]
docs.magento.com
Large objects¶ By default, the content of an object cannot be greater than 5 GB. However, you can use a number of smaller objects to construct a large object. The large object is comprised of two types of objects: Segment objects store the object content. You can divide your content into segments, and upload each segment into its own segment object. Segment objects do not have any special features. You create, update, download, and delete segment objects just as you would normal objects. A manifest object links the segment objects into one logical large object. When you download a manifest object, Object Storage concatenates and returns the contents of the segment objects in the response body of the request. This behavior extends to the response headers returned by GET and HEAD requests. The Content-Length response header value is the total size of all segment objects. Object Storage calculates the ETag response header value by taking the ETag value of each segment, concatenating them together, and returning the MD5 checksum of the result. The manifest object types are: - Static large objects The manifest object content is an ordered list of the names of the segment objects in JSON format. - Dynamic large objects The manifest object has an X-Object-Manifest metadata header. The value of this header is {container}/{prefix}, where {container} is the name of the container where the segment objects are stored, and {prefix} is a string that all segment objects have in common. The manifest object should have no content. However, this is not enforced. Note¶ If you make a COPY request by using a manifest object as the source, the new object is a normal, and not a segment, object. If the total size of the source segment objects exceeds 5 GB, the COPY request fails. However, you can make a duplicate of the manifest object and this new object can be larger than 5 GB. Static large objects¶ To create a static large object, divide your content into pieces and create (upload) a segment object to contain each piece. Create a manifest object. Include the multipart-manifest=put query string at the end of the manifest object name to indicate that this is a manifest object. The body of the PUT request on the manifest object comprises a JSON list, where each element is an object representing a segment. These objects may contain the following attributes: path (required). The container and object name in the format: {container-name}/{object-name} etag (optional). If provided, this value must match the ETag of the segment object. This was included in the response headers when the segment was created. Generally, this will be the MD5 sum of the segment. size_bytes (optional). The size of the segment object. If provided, this value must match the Content-Length of that object. range (optional). The subset of the referenced object that should be used for segment data. This behaves similarly to the Range header. If omitted, the entire object will be used. Providing the optional etag and size_bytes attributes for each segment ensures that the upload cannot corrupt your data. Example Static large object manifest list This example shows three segment objects. You can use several containers and the object names do not have to conform to a specific pattern, in contrast to dynamic large objects. [...] When the upload of the manifest completes, the Content-Length metadata is set to the total length of all the object segments. A similar situation applies to the ETag. If the size or ETag of any segment does not match, the operation fails. If everything matches, the manifest object is created.
The X-Static-Large-Object metadata is set to true indicating that this is a static object manifest. Normally when you perform a GET operation on the manifest object, the response body contains the concatenated content of the segment objects. To download the manifest list, use the multipart-manifest=get query string. The resulting list is not formatted the same as the manifest you originally used in the PUT operation. If you use the DELETE operation on a manifest object, the manifest object is deleted. The segment objects are not affected. However, if you add the multipart-manifest=delete query string, the segment objects are deleted and if all are successfully deleted, the manifest object is also deleted. To change the manifest, use a PUT operation with the multipart-manifest=put query string. This request creates a manifest object. You can also update the object metadata in the usual way. Dynamic large objects¶ You must segment objects that are larger than 5 GB before you can upload them. You then upload the segment objects like you would any other object and create a dynamic large manifest object. The manifest object tells Object Storage how to find the segment objects that comprise the large object. The segments remain individually addressable, but retrieving the manifest object streams all the segments concatenated. There is no limit to the number of segments that can be a part of a single large object. To ensure the download works correctly, you must upload all the object segments to the same container and ensure that each object name is prefixed in such a way that it sorts in the order in which it should be concatenated. You also create and upload a manifest file. The manifest file is a zero-byte file with the extra X-Object-Manifest {container}/{prefix} header, where {container} is the container the object segments are in and {prefix} is the common prefix for all the segments. You must UTF-8-encode and then URL-encode the container and common prefix in the X-Object-Manifest header. It is best to upload all the segments first and then create or update the manifest. With this method, the full object is not available for downloading until the upload is complete. Also, you can upload a new set of segments to a second location and update the manifest to point to this new location. During the upload of the new segments, the original manifest is still available to download the first set of segments. Note When updating a manifest object using a POST request, a X-Object-Manifest header must be included for the object to continue to behave as a manifest object. Example Upload No response body is returned. A status code of 2``nn`` (between 200 and 299, inclusive) Next, upload the manifest you created that indicates the container the object segments reside within. Note that uploading additional segments after the manifest is created causes the concatenated object to be that much larger but you do not need to recreate the manifest file for subsequent additional segments. Example Upload manifest request: HTTP}/{prefix} Example Upload manifest response: HTTP [...] The Content-Type in the response for a GET or HEAD on the manifest is the same as the Content-Type set during the PUT request that created the manifest. You can easily change the Content-Type by reissuing the PUT request. Comparison of static and dynamic large objects¶ While static and dynamic objects have similar behavior, here are their differences: End-to-end integrity¶ With static large objects, integrity can be assured. 
The list of segments may include the MD5 checksum ( ETag) of each segment. You cannot upload the manifest object if the ETag in the list differs from the uploaded segment object. If a segment is somehow lost, an attempt to download the manifest object results in an error. With dynamic large objects, integrity is not guaranteed. The eventual consistency model means that although you have uploaded a segment object, it might not appear in the container listing until later. If you download the manifest before it appears in the container, it does not form part of the content returned in response to a GET request. Upload Order¶ With static large objects, you must upload the segment objects before you upload the manifest object. With dynamic large objects, you can upload manifest and segment objects in any order. In case a premature download of the manifest occurs, we recommend users upload the manifest object after the segments. However, the system does not enforce the order. Removal or addition of segment objects¶ With static large objects, you cannot add or remove segment objects from the manifest. However, you can create a completely new manifest object of the same name with a different manifest list. With dynamic large objects, you can upload new segment objects or remove existing segments. The names must simply match the {prefix} supplied in X-Object-Manifest. Segment object size and number¶ With static large objects, the segment objects must be at least 1 byte in size. However, if the segment objects are less than 1MB (by default), the SLO download is (by default) rate limited. At most, 1000 segments are supported (by default) and the manifest has a limit (by default) of 2MB in size. With dynamic large objects, segment objects can be any size. Segment object container name¶ With static large objects, the manifest list includes the container name of each object. Segment objects can be in different containers. With dynamic large objects, all segment objects must be in the same container. Manifest object metadata¶ With static large objects, the manifest object has X-Static-Large-Object set to true. You do not set this metadata directly. Instead the system sets it when you PUT a static manifest object. With dynamic large objects, the X-Object-Manifest value is the {container}/{prefix}, which indicates where the segment objects are located. You supply this request header in the PUT operation. Copying the manifest object¶ The semantics are the same for both static and dynamic large objects. When copying large objects, the COPY operation does not create a manifest object but a normal object with content same as what you would get on a GET request to the original manifest object. To copy the manifest object, you include the multipart-manifest=get query string in the COPY request. The new object contains the same manifest as the original. The segment objects are not copied. Instead, both the original and new manifest objects share the same set of segment objects.
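As a concrete illustration of the two manifest styles, the following curl sketch uses hypothetical container and object names, with $TOKEN and $STORAGE_URL standing for your auth token and account URL; the query string and header are exactly the multipart-manifest=put and X-Object-Manifest mechanisms described above.

```bash
# Upload two segment objects (names are hypothetical)
curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @part1 "$STORAGE_URL/segments/video-part-1"
curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @part2 "$STORAGE_URL/segments/video-part-2"

# Static large object: the manifest body lists the segments explicitly
curl -X PUT -H "X-Auth-Token: $TOKEN" \
     --data-binary '[{"path": "segments/video-part-1"}, {"path": "segments/video-part-2"}]' \
     "$STORAGE_URL/videos/movie?multipart-manifest=put"

# Dynamic large object: a zero-byte manifest that points at the common prefix
curl -X PUT -H "X-Auth-Token: $TOKEN" \
     -H "X-Object-Manifest: segments/video-part-" \
     --data-binary '' \
     "$STORAGE_URL/videos/movie-dlo"
```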
http://docs.openstack.org/developer/swift/api/large_objects.html
2017-01-16T19:11:15
CC-MAIN-2017-04
1484560279248.16
[]
docs.openstack.org
README¶ Jenkins Job Builder takes simple descriptions of Jenkins jobs in YAML or JSON format and uses them to configure Jenkins. You can keep your job descriptions in human readable text format in a version control system to make changes and auditing easier. It also has a flexible template system, so creating many similarly configured jobs is easy. To install: $ pip install --user jenkins-job-builder Online documentation: Developers¶ Bug report: Repository: Cloning: git clone. More details on how you can contribute is available on our wiki at: Writing a patch¶ We ask that all code submissions be pep8 and pyflakes clean. The easiest way to do that is to run tox before submitting code for review in Gerrit. It will run pep8 and pyflakes in the same manner as the automated test suite that will run on proposed patchsets. When creating new YAML components, please observe the following style conventions: - All YAML identifiers (including component names and arguments) should be lower-case and multiple word identifiers should use hyphens. E.g., “build-trigger”. - The Python functions that implement components should have the same name as the YAML keyword, but should use underscores instead of hyphens. E.g., “build_trigger”. This consistency will help users avoid simple mistakes when writing YAML, as well as developers when matching YAML components to Python implementation. - Module Execution - Extending
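As a concrete illustration of the YAML job descriptions mentioned above, a minimal definition might look like the sketch below; the job name and shell step are placeholders. Running jenkins-jobs test against such a file renders the Jenkins XML locally without touching a Jenkins instance, which is a convenient way to check your YAML before submitting a patch.

```yaml
# example.yaml - a minimal, hypothetical job definition
- job:
    name: example-job
    description: 'Managed by Jenkins Job Builder; do not edit via the UI.'
    builders:
      - shell: 'echo "Hello from Jenkins Job Builder"'
```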
http://docs.openstack.org/infra/jenkins-job-builder/index.html
2017-01-16T19:16:25
CC-MAIN-2017-04
1484560279248.16
[]
docs.openstack.org
powerful build and software development tools, - A vibrant community. Supported chips¶ - Allwinner A10, A20, A31, H3, A64 - Amlogic S805 - Amlogic S905 - Actionsemi S500 - Freescale / NXP iMx6 - Marvell Armada A380 - Samsung Exynos 5422 Supported boards¶ Beelink X2, Orange Pi PC plus, Orange Pi Plus 2E, Orange Pi Lite, Roseapple Pi, NanoPi M1, pcDuino2, pcDuino3, Odroid C0/C1/C1+, Banana Pi M2+, Hummingboard 2, Odroid C2, Orange Pi 2, Orange Pi One, Orange Pi PC, Orange Pi Plus 1 & 2, Clearfog, Lemaker Guitar, Odroid XU Check the download page for the recently supported list. Common features¶ - First boot takes longer (up to a few minutes) than usual (20s) because it updates the package list, regenerates SSH keys and expands the partition to fit your SD card. It might reboot one time automatically. - Ready to compile external modules - /tmp & /log = RAM, the ramlog app saves logs to disk daily and on shut-down (Wheezy and Jessie w/o systemd) - automatic
https://docs.armbian.com/
2017-01-16T19:07:55
CC-MAIN-2017-04
1484560279248.16
[]
docs.armbian.com
Coding style and standards? In the example provided on the page in the section 'SQL Queries' there is a multi-line query, which contains string literals and variables. It seems that the query string is broken after the line ending with .(int) $state Shouldn't there be a period (.) at the end to lock in with the remaining part of the query string??
https://docs.joomla.org/Talk:Coding_style_and_standards
2017-01-16T19:16:52
CC-MAIN-2017-04
1484560279248.16
[]
docs.joomla.org
The Newton Installation Tutorials and Guides provide instructions for installing multiple compute nodes. To make the compute nodes highly available, you must configure the environment to include multiple instances of the API and other services. As of September 2016, the OpenStack High Availability community is designing and developing an official and unified way to provide high availability for instances. We are developing automatic recovery from failures of hardware or hypervisor-related software on the compute node, or other failures that could prevent instances from functioning correctly, such as issues with a cinder volume I/O path. More details are available in the user story co-authored by OpenStack’s HA community and Product Working Group (PWG), where this feature is identified as missing functionality in OpenStack, which should be addressed with high priority. The architectural challenges of instance HA and several currently existing solutions were presented in a talk at the Austin summit, for which slides are also available. The code for three of these solutions can be found online at the following links: Work is in progress on a unified approach, which combines the best aspects of existing upstream solutions. More details are available on the HA VMs user story wiki. To get involved with this work, see the section on the HA community.
http://docs.openstack.org/ha-guide/compute-node-ha.html
2017-01-16T19:13:00
CC-MAIN-2017-04
1484560279248.16
[]
docs.openstack.org
Multivariate Distributions¶¶ The methods listed as below are implemented for each multivariate distribution, which provides a consistent interface to work with multivariate distributions. Computation of statistics¶ Probability evaluation¶ insupport(d, x)¶ If xis a vector, it returns whether x is within the support of d. If xis a matrix, it returns whether every column in xis within the support of d. Return the probability density of distribution devaluated) Evaluate the probability densities at columns of x, and write the results to a pre-allocated array r. logpdf(d, x)¶) Evaluate the logarithm of probability densities at columns of x, and write the results to a pre-allocated array r. Note: For multivariate distributions, the pdf value is usually very small or large, and therefore direct evaluating the pdf may cause numerical problems. It is generally advisable to perform probability computation in log-scale. Sampling¶ rand(d, n) Sample n vectors from the distribution d. This returns a matrix of size (dim(d), n), where each column is a sample. rand!(d, x) Draw samples and output them to a pre-allocated array x. Here, x can be either a vector of length dim(d)or a matrix with dim(d)rows. Node: In addition to these common methods, each multivariate distribution has its own special methods, as introduced below. Multinomial Distribution¶ Multivariate Normal Distribution¶ \mathbf{I}\). To take advantage of such special cases, we introduce a parametric type MvNormal, defined as below, which allows users to specify the special structure of the mean and covariance. immutable MvNormal{Cov<:AbstractPDMat,Mean<:Union{Vector,ZeroVector}} <: AbstractMvNormal μ::Mean Σ::Cov end Here, the mean vector can be an instance of either Vector or ZeroVector, where the latter is simply an empty type indicating a vector filled with zeros. =}} Construction¶ Generally, users don’t have to worry about these internal details. We provide a common constructor MvNormal, which will construct a distribution of appropriate type depending on the input arguments. MvNormal(mu, sig)¶ Construct a multivariate normal distribution with mean muand covariance represented by sig. MvNormal(sig) Construct a multivariate normal distribution with zero mean and covariance represented by sig. Here, sigcan be in either of the following forms (with T<:Real): - an instance of a subtype of AbstractPDMat - a symmetric matrix of type Matrix{T} - a vector of type Vector{T}: indicating a diagonal covariance as diagm(abs2(sig)). MvNormal(d, sig) Construct a multivariate normal distribution of dimension d, with zero mean, and an isotropic covariance as abs2(sig) * eye(d). Note: The constructor will choose an appropriate covariance form internally, so that special structure of the covariance can be exploited. Addition Methods¶ In addition to the methods listed in the common interface above, we also provide the followinig methods for all multivariate distributions under the base type AbstractMvNormal: sqmahal(d, x)¶) Write the squared Mahalanbobis distances from each column of x to the center of d to r. Canonical form¶: immutable MvNormalCanon{P<:AbstractPDMat,V<:Union{Vector,ZeroVector}} <: =)¶ Construct a multivariate normal distribution with potential vector hand precision matrix represented by J. MvNormalCanon(J) Construct a multivariate normal distribution with zero mean (thus zero potential vector) and precision matrix represented by J. 
Here, J represents the precision matrix, which can be in either of the following forms (T<:Real): - an instance of a subtype of AbstractPDMat - a square matrix of type Matrix{T} - a vector of type Vector{T}: indicating a diagonal precision matrix as diagm(J). MvNormalCanon(d, v) Construct a multivariate normal distribution of dimension d, with zero mean and a precision matrix as v * eye(d). Note: MvNormalCanon shares the same set of methods as MvNormal. Multivariate Lognormal Distribution¶ The package provides an implementation, MvLogNormal, which wraps around MvNormal: immutable MvLogNormal <: AbstractMvLogNormal normal::MvNormal end Additional Methods¶ In addition to the methods listed in the common interface above, we also provide the following methods: location(d)¶ Return the location vector of the distribution (the mean of the underlying normal distribution). scale(d)¶ Return the scale matrix of the distribution (the covariance matrix of the underlying normal distribution). median(d)¶ Return the median vector of the lognormal distribution, which is strictly smaller than the mean. Conversion Methods¶ It can be necessary to calculate the parameters of the lognormal (location vector and scale matrix) from a given covariance and mean, median or mode. To that end, the following functions are provided. location!{D<:AbstractMvLogNormal}(::Type{D},s::Symbol,m::AbstractVector,S::AbstractMatrix,μ::AbstractVector) Calculate the location vector (as above) and store the result in μ. scale{D<:AbstractMvLogNormal}(::Type{D},s::Symbol,m::AbstractVector,S::AbstractMatrix) Calculate the scale parameter, as defined for the location parameter above. scale!{D<:AbstractMvLogNormal}(::Type{D},s::Symbol,m::AbstractVector,S::AbstractMatrix,Σ::AbstractMatrix) Calculate the scale parameter, as defined for the location parameter above, and store the result in Σ. params{D<:AbstractMvLogNormal}(::Type{D},m::AbstractVector,S::AbstractMatrix) Return (scale,location) for a given mean and covariance params!{D<:AbstractMvLogNormal}(::Type{D},m::AbstractVector,S::AbstractMatrix,μ::AbstractVector,Σ::AbstractMatrix) Calculate (scale,location) for a given mean and covariance, and store the results in μ and Σ Dirichlet Distribution¶ The Dirichlet distribution is often used
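To tie the constructors and the common interface together, here is a small usage sketch; the matrix and vector values are arbitrary, and the syntax follows the Julia 0.x-era conventions used throughout this page (e.g. the MvNormal(d, sig) isotropic form described above).

```julia
using Distributions

# Full covariance
Σ = [4.0 1.0;
     1.0 3.0]
d = MvNormal([1.0, 2.0], Σ)

x  = rand(d, 100)     # 2×100 matrix; each column is one sample
lp = logpdf(d, x)     # vector of 100 log-densities

# Isotropic covariance: zero mean, covariance abs2(2.0) * eye(3)
iso = MvNormal(3, 2.0)
mean(iso)
cov(iso)
```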
http://distributionsjl.readthedocs.io/en/latest/multivariate.html
2017-04-23T15:40:51
CC-MAIN-2017-17
1492917118713.1
[]
distributionsjl.readthedocs.io
5.3.3 Log The WITSML 2 Log is a very simple object, in that it is primarily just a container for one or more ChannelSets, as shown in Figure 5.3.3-1. Most of the information is at the ChannelSet level. The concept of multiple ChannelSets in a single Log is a significant change from WITSML, where each Log represented exactly one group of curves and their data (NOTE: Technically, the WITSML 1.4.1.1 Log allowed for multiple blocks of data, but this was just to optimize transmission of sparse data for real time. With ETP, this requirement no longer exists.). In WITSML 2, each ChannelSet represents a disjoint set of channel data. There are many possible use cases that would dictate which ChannelSets go together in a Log and how they relate to each other. Some of the possibilities: - A Log could contain a depth run in one ChannelSet with the data for a re-log of one section of the hole in another ChannelSet. - A single Log could contain all of the data for a run of a Production Logging Tool (PLT). - Data from multiple acquisition systems can be represented as multiple ChannelSets in a Log. The Log object does not include any specific information about how the ChannelSets relate to each other, except as can be gleaned from a comparison of the metadata for each object in the log.
http://docs.energistics.org/WITSML/WITSML_TOPICS/WITSML-000-058-0-C-sv2000.html
2021-05-06T00:33:42
CC-MAIN-2021-21
1620243988724.75
[array(['WITSML_IMAGES/WITSML-000-037-0-sv2000.png', None], dtype=object)]
docs.energistics.org
Configure SMTP for outbound emails To configure outbound emails from CiviCRM, follow these steps: Log in to the Drupal administration console as an administrator. Browse to the CiviCRM administration console and select the “Outbound Email Settings” item. Select “SMTP” as the mailer. Enter the following configuration options using the information below as a guideline. This example configures a Gmail account. Replace USERNAME and PASSWORD with your Gmail account username and password respectively. SMTP Server - ssl://smtp.gmail.com SMTP port - 465 Authentication? - Yes SMTP Username - USERNAME@gmail.com SMTP Password - PASSWORD Click “Save and Send Test Email” to verify that it all works.
https://docs.bitnami.com/bch/apps/civicrm/configuration/configure-smtp/
2021-05-06T01:44:56
CC-MAIN-2021-21
1620243988724.75
[]
docs.bitnami.com
Since Homebrew 1.0.0 most Homebrew users (those who haven’t run a dev-cmd or set HOMEBREW_DEVELOPER=1, which is ~99.9% based on analytics data) require tags on the Homebrew/brew repository in order to get new versions of Homebrew. There are a few steps in making a new Homebrew release: Check the master CI job (i.e. main jobs green or green after rerunning), and make sure you are confident there are no major regressions on the current master branch. Run brew release to create a new draft release. For major or minor version bumps, pass --major or --minor, respectively. If this is a major or minor release (e.g. X.0.0 or X.Y.0) then there are a few more steps: Delete any odisabled code, make any odeprecated code odisabled, uncomment any # odeprecated code and add any new odeprecations that are desired. The release notes should use the output of brew release [--major|--minor] as input but have the wording adjusted to be more human readable and explain not just what has changed but why. Please do not manually create a release based on older commits on the master branch. It’s very hard to judge whether these have been sufficiently tested by users or if they will cause negative side-effects with the current state of Homebrew/homebrew-core. If a new branch is needed ASAP but there are things on master that cannot be released yet (e.g. new deprecations and you want to make a patch release) then revert the relevant PRs, follow the process above and then revert the reverted PRs to reapply them on master.
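For a patch release the command sequence is short; the sketch below simply restates the steps above in shell form and assumes you are a maintainer with the necessary permissions on Homebrew/brew.

```bash
# Make sure your local Homebrew/brew checkout is up to date
brew update

# Cut a draft release; pass --minor or --major for X.Y.0 / X.0.0 releases
brew release
```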
https://docs.brew.sh/Releases
2021-05-06T00:27:01
CC-MAIN-2021-21
1620243988724.75
[]
docs.brew.sh
FastLED2 (community library)

Summary

A packaging of FastLED 3.1 for sparkcore and photon.

Example Build Testing

Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully.

FastLED is a library for easily and efficiently controlling a wide variety of LED chipsets, like the ones sold by Adafruit (Neopixel, LPD8806), Sparkfun (WS2801), and AliExpress. In addition to writing to the leds, this library also includes a number of functions for high-performing 8-bit math for manipulating your RGB values, as well as low level classes for abstracting out access to pins and SPI hardware, while still keeping things as fast as possible.

We have multiple goals with this library:
- Quick start for new developers - hook up your leds and go, no need to think about the specifics of the led chipsets being used.
- Zero pain switching LED chipsets - you get some new leds that the library supports, just change the definition of the LEDs you're using, et voilà! Your code is running with the new leds.
- High performance - with features like zero cost global brightness scaling, high performance 8-bit math for RGB manipulation, and some of the fastest bit-bang'd SPI support around, FastLED wants to keep as many CPU cycles available for your led patterns as possible.

Getting help

If you need help with using the library, please consider going to the Google+ community first - there are hundreds of people in that group and many times you will get a quicker answer to your question there, as you will be likely to run into other people who have had the same issue. If you run into bugs with the library (compilation failures, the library doing the wrong thing), or if you'd like to request that we support a particular platform or LED chipset, then please open an issue and we will try to figure out what is going wrong.

Supported LED chipsets
- Neopixel / WS2812 family (also supported in lo-speed mode) - a 3 wire addressable led chipset
- TM1809/4 - 3 wire chipset, cheaply available on aliexpress.com
- TM1803 - 3 wire chipset, sold by Radio Shack
- UCS1903 - another 3 wire led chipset, cheap
- GW6205 - another 3 wire led chipset
- LPD8806 - SPI based chipset, very high speed
- WS2801 - SPI based chipset, cheap and widely available
- SM16716 - SPI based chipset
- APA102 - SPI based chipset
- P9813 - aka Cool Neon's Total Control Lighting
- DMX - send rgb data out over DMX using arduino DMX libraries
- SmartMatrix panels - needs the SmartMatrix library

LPD6803, HL1606, and "595"-style shift registers are no longer supported by the library. The older Version 1 of the library ("FastSPI_LED") has support for these, but is missing many of the advanced features of current versions and is no longer being maintained.

Supported platforms

Right now the library is supported on a variety of Arduino-compatible platforms. If it's ARM or AVR and uses the Arduino software (or a modified version of it to build) then it is likely supported. Note that we have a long list of upcoming platforms to support, so if you don't see what you're looking for here, ask - it may be on the roadmap (or may already be supported). N.B. at the moment we are only supporting the stock compilers that ship with the Arduino software. Support for upgraded compilers, as well as using AVR Studio and skipping the Arduino entirely, should be coming in a near future release.

- Arduino & compatibles - straight up arduino devices: uno, duo, leonardo, mega, nano, etc.

What about that name?

Wait, what happened to FastSPI_LED and FastSPI_LED2? The library was initially named FastSPI_LED because it was focused on very fast and efficient SPI access. However, since then, the library has expanded to support a number of LED chipsets that don't use SPI, as well as a number of math and utility functions for LED processing across the board. We decided that the name FastLED more accurately represents the totality of what the library provides: everything fast, for LEDs.

For more information

Check out the official site for links to documentation, issues, and news.

TODO
- get candy

Browse Library Files
https://docs.particle.io/cards/libraries/f/FastLED2/
2021-05-06T00:45:13
CC-MAIN-2021-21
1620243988724.75
[]
docs.particle.io
This screen displays the information related to you and the setting options available in WordPress.

PERSONAL OPTIONS includes the following fields:
- Visual Editor: Check the box to disable the visual editor and use the plain HTML editor.
- Syntax Highlighting: Check the box to disable syntax highlighting when editing code.
- Admin Color Scheme: Check the radio button next to the color scheme you want for the administration screens. The left two colors are menu background colors and the right two are roll-over colors.
- Keyboard Shortcuts: For rapid navigation and to perform quick actions on comments, you can enable keyboard shortcuts for comment moderation.
- Toolbar: Check this box to show the toolbar when viewing the site.

INFORMATION:
- Email (Required): It is imperative for users to list an email address in their respective profiles. If you change this, we will send you an email at your new address to confirm it. The new address will not become active until confirmed.
- Website: Enter the website address.

ABOUT YOURSELF:
- Biographical Information: Type in a short description about yourself. This is optional information that can be displayed by your theme, if configured by the theme author.
- Profile Picture: Your Gravatar picture is shown here. To change it, access the Gravatar website.

ACCOUNT MANAGEMENT:
- Generate Password: This is used to generate a new password for the account. It shows a new field with the generated password. If you want to use your own password instead of the secure generated one, a checkbox will appear asking you to confirm that you want to use a weak password.
- Log Out Everywhere Else: You can choose to log out from all your devices, such as your phone or personal computer.

Click the "Update Profile" button to save all the changes.
https://docs.darlic.com/users/how-to-view-your-profile/
2021-05-06T00:08:58
CC-MAIN-2021-21
1620243988724.75
[]
docs.darlic.com
TreeMapSelectionChangedEventHandler Delegate

Represents a method that will handle the TreeMapControl.SelectionChanged event.

Namespace: DevExpress.Xpf.TreeMap
Assembly: DevExpress.Xpf.TreeMap.v18.2.dll

Declaration (C#)

    public delegate void TreeMapSelectionChangedEventHandler(
        object sender,
        TreeMapSelectionChangedEventArgs e
    );

Declaration (VB)

    Public Delegate Sub TreeMapSelectionChangedEventHandler(
        sender As Object,
        e As TreeMapSelectionChangedEventArgs
    )

Parameters
- sender (Object)
- e (TreeMapSelectionChangedEventArgs)

Remarks

When creating a TreeMapSelection
https://docs.devexpress.com/WPF/DevExpress.Xpf.TreeMap.TreeMapSelectionChangedEventHandler?v=18.2
2021-05-06T00:28:09
CC-MAIN-2021-21
1620243988724.75
[]
docs.devexpress.com