content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Azure Functions development and configuration with Azure SignalR Service Azure Functions applications can leverage the Azure SignalR Service bindings to add real-time capabilities. Client applications use client SDKs available in several languages to connect to Azure SignalR Service and receive real-time messages. This article describes the concepts for developing and configuring an Azure Function app that is integrated with SignalR Service. SignalR Service configuration Azure SignalR Service can be configured in different modes. When used with Azure Functions, the service must be configured in Serverless mode. In the Azure portal, locate the Settings page of your SignalR Service resource. Set the Service mode to Serverless. Azure Functions development A serverless real-time application built with Azure Functions and Azure SignalR Service typically requires two Azure Functions: - A "negotiate" function that the client calls to obtain a valid SignalR Service access token and service endpoint URL - One or more functions that handle messages from SignalR Service and send messages or manage group membership negotiate function A client application requires a valid access token to connect to Azure SignalR Service. An access token can be anonymous or authenticated to a given user ID. Serverless SignalR Service applications require an HTTP endpoint named "negotiate" to obtain a token and other connection information, such as the SignalR Service endpoint URL. Use an HTTP triggered Azure Function and the SignalRConnectionInfo input binding to generate the connection information object. The function must have an HTTP route that ends in /negotiate. With class based model in C#, you don't need SignalRConnectionInfo input binding and can add custom claims much easier. See Negotiate experience in class based model For more information on how to create the negotiate function, see the SignalRConnectionInfo input binding reference. To learn about how to create an authenticated token, refer to Using App Service Authentication. Handle messages sent from SignalR Service Use the SignalR Trigger binding to handle messages sent from SignalR Service. You can be triggered when clients send messages or clients get connected or disconnected. For more information, see the SignalR trigger binding reference. You also need to configure your function endpoint as an upstream so that service will trigger the function where there is message from client. For more information about how to configure upstream, please refer to this doc. Sending messages and managing group membership Use the SignalR output binding to send messages to clients connected to Azure SignalR Service. You can broadcast messages to all clients, or you can send them to a subset of clients that are authenticated with a specific user ID or have been added to a specific group. Users can be added to one or more groups. You can also use the SignalR output binding to add or remove users to/from groups. For more information, see the SignalR output binding reference. SignalR Hubs SignalR has a concept of "hubs". Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces. Class based model The class based model is dedicated for C#. With class based model can have a consistent SignalR server-side programming experience. It has the following features. - Less configuration work: The class name is used as HubName, the method name is used as Eventand the Categoryis decided automatically according to method name. - Auto parameter binding: Neither ParameterNamesnor attribute [SignalRParameter]is needed. Parameters are auto bound to arguments of Azure Function method in order. - Convenient output and negotiate experience. The following codes demonstrate these features: public class SignalRTestHub : ServerlessHub { [FunctionName("negotiate")] public SignalRConnectionInfo Negotiate([HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req) { return Negotiate(req.Headers["x-ms-signalr-user-id"], GetClaims(req.Headers["Authorization"])); } [FunctionName(nameof(OnConnected))] public async Task OnConnected([SignalRTrigger]InvocationContext invocationContext, ILogger logger) { await Clients.All.SendAsync(NewConnectionTarget, new NewConnection(invocationContext.ConnectionId)); logger.LogInformation($"{invocationContext.ConnectionId} has connected"); } [FunctionName(nameof(Broadcast))] public async Task Broadcast([SignalRTrigger]InvocationContext invocationContext, string message, ILogger logger) { await Clients.All.SendAsync(NewMessageTarget, new NewMessage(invocationContext, message)); logger.LogInformation($"{invocationContext.ConnectionId} broadcast {message}"); } [FunctionName(nameof(OnDisconnected))] public void OnDisconnected([SignalRTrigger]InvocationContext invocationContext) { } } All functions that want to leverage class based model need to be the method of class that inherits from ServerlessHub. The class name SignalRTestHub in the sample is the hub name. Define hub method All the hub methods must have an argument of InvocationContext decorated by [SignalRTrigger] attribute and use parameterless constructor. Then the method name is treated as parameter event. By default, category=messages except the method name is one of the following names: - OnConnected: Treated as category=connections, event=connected - OnDisconnected: Treated as category=connections, event=disconnected Parameter binding experience In class based model, [SignalRParameter] is unnecessary because all the arguments are marked as [SignalRParameter] by default except it is one of the following situations: - The argument is decorated by a binding attribute. - The argument's type is ILoggeror CancellationToken - The argument is decorated by attribute [SignalRIgnore] Negotiate experience in class based model Instead of using SignalR input binding [SignalR], negotiation in class based model can be more flexible. Base class ServerlessHub has a method SignalRConnectionInfo Negotiate(string userId = null, IList<Claim> claims = null, TimeSpan? lifeTime = null) This features user customizes userId or claims during the function execution. Use SignalRFilterAttribute User can inherit and implement the abstract class SignalRFilterAttribute. If exceptions are thrown in FilterAsync, 403 Forbidden will be sent back to clients. The following sample demonstrates how to implement a customer filter that only allows the admin to invoke broadcast. [AttributeUsage(AttributeTargets.Method, AllowMultiple = true, Inherited = true)] internal class FunctionAuthorizeAttribute: SignalRFilterAttribute { private const string AdminKey = "admin"; public override Task FilterAsync(InvocationContext invocationContext, CancellationToken cancellationToken) { if (invocationContext.Claims.TryGetValue(AdminKey, out var value) && bool.TryParse(value, out var isAdmin) && isAdmin) { return Task.CompletedTask; } throw new Exception($"{invocationContext.ConnectionId} doesn't have admin role"); } } Leverage the attribute to authorize the function. [FunctionAuthorize] [FunctionName(nameof(Broadcast))] public async Task Broadcast([SignalRTrigger]InvocationContext invocationContext, string message, ILogger logger) { } Client development SignalR client applications can leverage the SignalR client SDK in one of several languages to easily connect to and receive messages from Azure SignalR Service. Configuring a client connection To connect to SignalR Service, a client must complete a successful connection negotiation that consists of these steps: - Make a request to the negotiate HTTP endpoint discussed above to obtain valid connection information - Connect to SignalR Service using the service endpoint URL and access token obtained from the negotiate endpoint SignalR client SDKs already contain the logic required to perform the negotiation handshake. Pass the negotiate endpoint's URL, minus the negotiate segment, to the SDK's HubConnectionBuilder. Here is an example in JavaScript: const connection = new signalR.HubConnectionBuilder() .withUrl('') .build() By convention, the SDK automatically appends /negotiate to the URL and uses it to begin the negotiation. Note If you are using the JavaScript/TypeScript SDK in a browser, you need to enable cross-origin resource sharing (CORS) on your Function App. For more information on how to use the SignalR client SDK, refer to the documentation for your language: Sending messages from a client to the service If you have upstream configured for your SignalR resource, you can send messages from client to your Azure Functions using any SignalR client. Here is an example in JavaScript: connection.send('method1', 'arg1', 'arg2'); Azure Functions configuration Azure Function apps that integrate with Azure SignalR Service can be deployed like any typical Azure Function app, using techniques such as continuously deployment, zip deployment, and run from package. However, there are a couple of special considerations for apps that use the SignalR Service bindings. If the client runs in a browser, CORS must be enabled. And if the app requires authentication, you can integrate the negotiate endpoint with App Service Authentication. Enabling CORS The JavaScript/TypeScript client makes HTTP requests to the negotiate function to initiate the connection negotiation. When the client application is hosted on a different domain than the Azure Function app, cross-origin resource sharing (CORS) must be enabled on the Function app or the browser will block the requests. Localhost When running the Function app on your local computer, you can add a Host section to local.settings.json to enable CORS. In the Host section, add two properties: CORS- enter the base URL that is the origin the client application CORSCredentials- set it to trueto allow "withCredentials" requests Example: { "IsEncrypted": false, "Values": { // values }, "Host": { "CORS": "", "CORSCredentials": true } } Cloud - Azure Functions CORS To enable CORS on an Azure Function app, go to the CORS configuration screen under the Platform features tab of your Function app in the Azure portal. Note CORS configuration is not yet available in Azure Functions Linux Consumption plan. Use Azure API Management to enable CORS. CORS with Access-Control-Allow-Credentials must be enabled for the SignalR client to call the negotiate function. Select the checkbox to enable it. In the Allowed origins section, add an entry with the origin base URL of your web application. Cloud - Azure API Management Azure API Management provides an API gateway that adds capabilities to existing back-end services. You can use it to add CORS to your function app. It offers a consumption tier with pay-per-action pricing and a monthly free grant. Refer to the API Management documentation for information on how to import an Azure Function app. Once imported, you can add an inbound policy to enable CORS with Access-Control-Allow-Credentials support. <cors allow- <allowed-origins> <origin></origin> </allowed-origins> <allowed-methods> <method>GET</method> <method>POST</method> </allowed-methods> <allowed-headers> <header>*</header> </allowed-headers> <expose-headers> <header>*</header> </expose-headers> </cors> Configure your SignalR clients to use the API Management URL. Using App Service Authentication Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Azure Active Directory. This feature can be integrated with the SignalRConnectionInfo binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the SignalR output binding that are targeted to that user ID. In the Azure portal, in your Function app's Platform features tab, open the Authentication/authorization settings window. Follow the documentation for App Service Authentication to configure authentication using an identity provider of your choice. Once configured, authenticated HTTP requests will include x-ms-client-principal-name and x-ms-client-principal-id headers containing the authenticated identity's username and user ID, respectively. You can use these headers in your SignalRConnectionInfo binding configuration to create authenticated connections. Here is an example C# negotiate function that uses the x-ms-client-principal-id header. [FunctionName("negotiate")] public static SignalRConnectionInfo Negotiate( [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, [SignalRConnectionInfo (HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")] SignalRConnectionInfo connectionInfo) { // connectionInfo contains an access key token with a name identifier claim set to the authenticated user return connectionInfo; } You can then send messages to that user by setting the UserId property of a SignalR message. [FunctionName("SendMessage")] public static Task SendMessage( [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message, [SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages) { return signalRMessages.AddAsync( new SignalRMessage { // the message will only be sent to these user IDs UserId = "userId1", Target = "newMessage", Arguments = new [] { message } }); } For information on other languages, see the Azure SignalR Service bindings for Azure Functions reference. Next steps In this article, you have learned how to develop and configure serverless SignalR Service applications using Azure Functions. Try creating an application yourself using one of the quick starts or tutorials on the SignalR Service overview page.
https://docs.microsoft.com/bs-cyrl-ba/azure/azure-signalr/signalr-concept-serverless-development-config
2020-11-24T02:37:23
CC-MAIN-2020-50
1606141169606.2
[array(['media/signalr-concept-azure-functions/signalr-service-mode.png', 'SignalR Service Mode'], dtype=object) array(['media/signalr-concept-serverless-development-config/cors-settings.png', 'Configuring CORS'], dtype=object) ]
docs.microsoft.com
Adding Currency Support: edd_currencies The “edd_currencies” filter allows you to easily add support for your own additional currencies to Easy Digital Downloads. Let’s say, for example, that you want to add support for the Indian Rupee (already supported, so this is only an example). The sample function below would add the support: function pippin_extra_edd_currencies( $currencies ) { $currencies['INR'] = __('Indian Rupee', 'your_domain'); return $currencies; } add_filter('edd_currencies', 'pippin_extra_edd_currencies'); The array key is the currency code and the value is the currency label. Note, it is important that you know that not all payment gateways support all currencies. Check to make sure that your chosen gateway supports the currency you wish to use before launching your store. Currency codes may also be added without using any code through this plugin.
https://docs.easydigitaldownloads.com/article/272-adding-currency-support-eddcurrencies
2018-12-10T04:41:12
CC-MAIN-2018-51
1544376823303.28
[]
docs.easydigitaldownloads.com
QtAudioEngine.AttenuationModelInverse Defines a non-linear attenuation curve for a Sound. More... Properties Detailed Description This type is part of the QtAudioEngine 1.0 module. AttenuationModelInverse must be defined inside AudioEngine. import QtQuick 2.0 import QtAudioEngine 1.0. The default value is 1.
https://docs.ubuntu.com/phone/en/apps/api-qml-development/QtAudioEngine.AttenuationModelInverse
2018-12-10T04:54:46
CC-MAIN-2018-51
1544376823303.28
[]
docs.ubuntu.com
3.1.13. Base web application¶ 3.1.13.1. JavaScript Application¶ The client side of the web UI is written in JavaScript and based on the AngularJS framework and concepts. This is a Single Page Application” All Buildbot pages are loaded from the same path, at the master’s base URL. The actual content of the page is dictated by the fragment in the URL (the portion following the # character). Using the fragment is a common JS techique to avoid reloading the whole page over HTTP when the user changes the URI or clicks a link. AngularJS¶ The best place to learn about AngularJS is its own documentation, AngularJS strong points are: - A very powerful MVC system allowing automatic update of the UI, when data changes - A Testing Framework and philosophy - A deferred system similar to the one from Twisted. - A fast growing community and ecosystem On top of Angular we use nodeJS tools to ease development - gulp buildsystem, seemlessly build the app, can watch files for modification, rebuild and reload browser in dev mode. In production mode, the buildsystem minifies html, css and js, so that the final app is only 3 files to download (+img). - alternatively webpack build system can be used for the same purposes as gulp (in UI extensions) - coffeescript, a very expressive langage, preventing some of the major traps of JS. - pug template langage (aka jade), adds syntax sugar and readbility”) - ngGrid is a grid system for full featured searche. Extensibility¶ Buildbot UI is designed for extensibility. The base application should be pretty minimal, and only include very basic status pages. Base application cannot be disabled so any page not absolutely necessary should be put in plugins. You can also completely replace the default application by another application more suitable to your needs. The md_base application is an example rewrite of the app using material design libraries. Some Web plugins are maintained inside buildbot’s git repository, but this is. Routing¶ AngularJS uses router to match URL and choose which page to display. The router we use is ui.router. Menu is managed by guanlecoja-ui’s glMenuProvider. Please look at ui.router, and guanlecoja-ui documentation for details. Typically, a route regitration will look like following example. # ng-classify declaration. Declares a config class class State extends Config # Dependancy.1.13.2. Hacking Quick-Start¶ This section describes how to get set up quickly to hack on the JavaScript UI. It does not assume familiarity with Python, although a Python installation is required, as well as virtualenv. You will also need NodeJS, and npm installed. Prerequisites¶ Note Buildbot UI is only tested to build on node 4.x.x. Install LTS release of node.js. is a good start for windows and osx For Linux, as node.js is evolving very fast, distros versions are often too old, and sometimes distro maintainers make incompatible changes (i.e naming node binary nodejs instead of node) For Ubuntu and other Debian based distros, you want to use following method: curl -sL | sudo bash - Please feel free to update this documentation for other distros. Know good source for Linux binary distribution is: Install gulp globally. Gulp is the build system used for coffeescript development. sudo npm install -g gulp/ make frontend This will fetch a number of dependencies from pypi, the Python package repository. This will also fetch a bunch a bunch of node.js dependencies used for building the web application, and a bunch of client side js dependencies, with bower Now you’ll need to create a master instance. For a bit more detail, see the Buildbot tutorial (First Run). buildbot create-master sandbox/testmaster mv sandbox/testmaster/master.cfg.sample sandbox/testmaster/master.cfg buildbot start sandbox/testmaster If all goes well, the master will start up and begin running in the background. As you just installed www in editable mode (aka ‘develop’ mode), setup.py did build the web site in prod mode, so the everything is minified, making it hard to debug. When doing web development, you usually run: cd www/base gulp dev This will compile the base webapp in development mode, and automatically rebuild when files change.:8010, and you will access, with your own version of the UI. 3.1.13.3. Guanlecoja¶ Buildbot’s build environment has been factorized for reuse in other projects and plugins, and is callsed Guanlecoja. The documentation and meaning of this name is maintained in Guanlecoja’s own site. 3.1.13.4. Testing Setup¶ buildbot_www uses Karma to run the coffeescript test suite. This is the official test framework made for angular.js. We don’t run the front-end testsuite inside the python ‘trial’ test suite, because testing python and JS is technically very different. Karma needs a browser to run the unit test in. It supports all the major browsers. Given our current experience, we did not see any bugs yet that would only happen on a particular browser this is the reason that at the moment, only
http://docs.buildbot.net/0.9.9/developer/www-base-app.html
2018-12-10T04:00:39
CC-MAIN-2018-51
1544376823303.28
[]
docs.buildbot.net
Frontend Submissions – Shortcodes Frontend Submissions includes several shortcodes for displaying the submission forms, vendor dashboard, profile form, and more. [fes_vendor_dashboard] - This will display the main vendor dashboard. [fes_submission_form] - This will display just the product submission form. [fes_profile_form] - This will display the vendor's profile form. [fes_login_form] - This will display a login form for vendors [fes_registration_form] - This will display a registration form for new vendors. [fes_login_registration_form] - This will display both the vendor registration and login form together. [fes_vendor_contact_form] - This will display a contact form for the currently viewed vendor. A contact form for a specific vendor can be displayed with [fes_vendor_contact_form id="32"] where 32 is the ID number of the vendor.
https://docs.easydigitaldownloads.com/article/333-frontend-submissions-short-codes
2018-12-10T05:16:39
CC-MAIN-2018-51
1544376823303.28
[]
docs.easydigitaldownloads.com
States for managing Hashicorp Vault. Currently handles policies. Configuration instructions are documented in the execution module docs. New in version 2017.7.0. salt.states.vault. policy_present(name, rules)¶ Ensure a Vault policy with the given name and rules is present. demo-policy: vault.policy_present: - name: foo/bar - rules: | path "secret/top-secret/*" { policy = "deny" } path "secret/not-very-secret/*" { policy = "write" }
https://docs.saltstack.com/en/latest/ref/states/all/salt.states.vault.html
2018-12-10T04:55:47
CC-MAIN-2018-51
1544376823303.28
[]
docs.saltstack.com
QtQuick.PathAnimation Animates an item along a path More... Properties - anchorPoint : point - duration : int - easing - easing.type : enumeration - easing.amplitude : real - easing.bezierCurve : list<real> - easing.overshoot : real - easing.period : real - endRotation : real - orientation : enumeration - orientationEntryDuration : real - orientationExitDuration : real - path : Path - target : Item Detailed Description When used in a transition, the path can be specified without start or end points, for example: PathAnimation { path: Path { //no startX, startY PathCurve { x: 100; y: 100} PathCurve {} //last element is empty with no end point specified } } In the above case, the path start will be the item's current position, and the path end will be the item's target position in the target state. See also Animation and Transitions in Qt Quick and PathInterpolator. Property Documentation This property holds the anchor point for the item being animated. By default, the upper-left corner of the target (its 0,0 point) will be anchored to (or follow) the path. The anchorPoint property can be used to specify a different point for anchoring. For example, specifying an anchorPoint of 5,5 for a 10x10 item means the center of the item will follow the path. This property holds the duration of the animation, in milliseconds. The default value is 250. the easing curve used for the animation. To specify an easing curve you need to specify at least the type. For some curves you can also specify amplitude, period, overshoot or custom bezierCurve data. The default easing curve is Easing.Linear. See the PropertyAnimation::easing.type documentation for information about the different types of easing curves. This property holds the ending rotation for the target. If an orientation has been specified for the PathAnimation, and the path doesn't end with the item at the desired rotation, the endRotation property can be used to manually specify an end rotation. This property is typically used with orientationExitDuration, as specifying an endRotation without an orientationExitDuration may cause a jump to the final rotation, rather than a smooth transition. This property controls the rotation of the item as it animates along the path. If a value other than Fixed is specified, the PathAnimation will rotate the item to achieve the specified orientation as it travels along the path. - PathAnimation.Fixed (default) - the PathAnimation will not control the rotation of the item. - PathAnimation.RightFirst - The right side of the item will lead along the path. - PathAnimation.LeftFirst - The left side of the item will lead along the path. - PathAnimation.BottomFirst - The bottom of the item will lead along the path. - PathAnimation.TopFirst - The top of the item will lead along the path. This property holds the duration (in milliseconds) of the transition in to the orientation. If an orientation has been specified for the PathAnimation, and the starting rotation of the item does not match that given by the orientation, orientationEntryDuration can be used to smoothly transition from the item's starting rotation to the rotation given by the path orientation. This property holds the duration (in milliseconds) of the transition out of the orientation. If an orientation and endRotation have been specified for the PathAnimation, orientationExitDuration can be used to smoothly transition from the rotation given by the path orientation to the specified endRotation. This property holds the path to animate along. For more information on defining a path see the Path documentation. This property holds the item to animate.
https://docs.ubuntu.com/phone/en/apps/api-qml-current/QtQuick.PathAnimation
2018-12-10T04:24:36
CC-MAIN-2018-51
1544376823303.28
[]
docs.ubuntu.com
Digital Signature Algorithm (DSA and ECDSA)¶ A variant of the ElGamal signature, specified in FIPS PUB 186-4. It is based on the discrete logarithm problem in a prime finite field (DSA) or in an elliptic curve field (ECDSA). A sender can use a private key (loaded from a file) to sign a message: >>> from Crypto.Hash import SHA256 >>> from Crypto.PublicKey import ECC >>> from Crypto.Signature import DSS >>> >>> message = b'I give my permission to order #4355' >>> key = ECC.import_key(open('privkey.der').read()) >>> h = SHA256.new(message) >>> signer = DSS.new(key, 'fips-186-3') >>> signature = signer.sign(h) The receiver can use the matching public key to verify authenticity of the received message: >>> from Crypto.Hash import SHA256 >>> from Crypto.PublicKey import ECC >>> from Crypto.Signature import DSS >>> >>> key = ECC.import_key(open('pubkey.der').read()) >>> h = SHA256.new(received_message) >>> verifier = DSS.new(key, 'fips-186-3') >>> try: >>> verifier.verify(h, signature): >>> print "The message is authentic." >>> except ValueError: >>> print "The message is not authentic." Crypto.Signature.DSS. new(key, mode, encoding='binary', randfunc=None)¶ Create a signature object DSS_SigSchemethat can perform (EC)DSA signature or verification. Note Refer to NIST SP 800 Part 1 Rev 4 (or newer release) for an overview of the recommended key lengths. - class Crypto.Signature.DSS. DssSigScheme(key, encoding, order)¶ A (EC)DSA signature object. Do not instantiate directly. Use Crypto.Signature.DSS.new().
https://pycryptodome.readthedocs.io/en/latest/src/signature/dsa.html
2018-12-10T04:54:20
CC-MAIN-2018-51
1544376823303.28
[]
pycryptodome.readthedocs.io
.3 Map { RouteModel { id: routeModel } MapItemView { model: routeModel delegate: routeDelegate } Component { id: routeDelegate MapRoute { route: routeData line.color: "blue" line.width: 5 smooth: true opacity: 0.8 } } } Property Documentation This property controls whether to automatically pan and zoom the viewport to display all map items when items are added or removed. Defaults to false. This property holds the delegate which defines how each item in the model should be displayed. The Component must contain exactly one MapItem -derived object as the root object. This property holds the model that provides data used for creating the map items defined by the delegate. Only QAbstractItemModel based models are supported.
https://docs.ubuntu.com/phone/en/apps/api-qml-development/QtLocation.MapItemView
2018-12-10T05:03:25
CC-MAIN-2018-51
1544376823303.28
[]
docs.ubuntu.com
Returns. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. describe-db-snapshot-attributes --db-snapshot-identifier <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --db-snapshot-identifier (string) The identifier for the DB snapshot to describe the attributesAttributesResult -> (structure) Contains the results of a successful call to the DescribeDBSnapshotAttributes API action. Manual DB snapshot attributes are used to authorize other AWS accounts to copy or restore a manual DB snapshot. For more information, see the ModifyDBSnapshotAttribute API action. DBSnapshotIdentifier -> (string)The identifier of the manual DB snapshot that the attributes apply to. DBSnapshotAttributes -> (list) The list of attributes and values for the manual DB snapshot. (structure) Contains the name and values of a manual DB snapshot attribute Manual DB snapshot attributes are used to authorize other AWS accounts to restore a manual DB snapshot. For more information, see the ModifyDBSnapshotAttribute API. AttributeName -> (string) The name of the manual DB snapshot attribute. The attribute named restore refers to the list of AWS accounts that have permission to copy or restore the manual DB cluster snapshot. For more information, see the ModifyDBSnapshotAttribute API action. AttributeValues -> (list) The value or values for the manual DB snapshot attribute. If the AttributeName field is set to restore , then this element returns a list of IDs of the AWS accounts that are authorized to copy or restore the manual DB snapshot. If a value of all is in the list, then the manual DB snapshot is public and available for any AWS account to copy or restore. (string)
https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-snapshot-attributes.html
2018-12-10T04:34:26
CC-MAIN-2018-51
1544376823303.28
[]
docs.aws.amazon.com
Add a custom flag To add a custom flag, perform the following steps: - Create the file that corresponds to the version of Apache or PHP that will receive the custom flag. - In the file, enter the flag that you wish to add. - EasyApache parses the flag and formats the options properly before it adds them to the data structure. - Save your changes. Example of Custom Flag with data Note: Enter only one item per line. --with-module --path-to-module=/usr/bin/module --my-option= This data results in the following command-line structure: --with-module --path-to-module=/usr/bin/module --my-option Raw opts with Apache 2.4 If you use raw opts to add statically-compiled functionality into the Apache binary, then you must change the --with-module flag to the --with-module=static flag. The module will not compile if =static is not included. Raw opts with ModSecurity If you use raw opts to call the fuzzyHash operator in ModSecurity, you must install ssdeep. You must also add the --with-ssdeep=<path/to/ssdeep> flag to your raw opts file. If you do not include this, the operator will not function. Raw opts with the make command EasyApache passes the -j2 option by default if your system uses more than one core. Set options in this file to compile EasyApache with a different number of cores. Note: If this file does not exist or is empty, EasyApache uses its default behavior. How to skip your raw opts If you experience problems with your Apache builds, you can skip your raw opts before build time. This can be useful for troubleshooting purposes. To skip your raw opts with the command line interface, perform the following steps: Run the following script before you issue the build command: /scripts/easyapache --skip-rawopts Rebuild Apache with the /scripts/easyapachescript. To skip your raw opts with the WHM interface, perform the following steps: - Click the Help link. - Select the Do not use raw opts support checkbox. - Click Submit. - Return to the previous screen and build the profile. If the build completes successfully, you must reconfigure or omit your raw opts.
https://docs.cpanel.net/display/EA/Raw+Opts
2018-12-10T03:49:38
CC-MAIN-2018-51
1544376823303.28
[]
docs.cpanel.net
This topic describes how to work with notes in your edX course. As you work through an edX course, you may want to highlight a particular passage or make a note about what you have read. In some edX courses, you can highlight passages and make notes right in the course. Note You can create notes for most text in the body of the course. However, notes are currently not available for exercises, videos, or PDF textbooks. When a course includes the notes feature, every page has a Notes page at the top and a pencil icon in the lower right corner. Your notes can contain text as well as tags that help you organize and find your notes. You can see individual notes inside the course content, or you can see a list of your notes on the Notes page. For more information, see The Notes Page. You can use either the mouse or keyboard shortcuts to create, access, and delete notes. For more information about using keyboard shortcuts, see Keyboard Shortcuts for Notes. To highlight a passage or add a note that includes text and tags, follow these steps. Select the text that you want to highlight or make the note about. You can select as much text as you want. When a pencil “edit” icon appears above the selected text, select the icon to open the note editor. When the note editor opens, enter your note and any tags that you want to add. You can also save the highlight for the passage without entering a note or tag. - To highlight a passage without adding a note or tag, select Save or press Enter. When you move your cursor over the highlighted text, the note field contains the words “no comment”. - To enter a note, select Comments, and then type the text of your note. Your note can contain as many words as you want. - To add one or more tags, select Add some tags here, and then type any tags that you want to add. Tags cannot contain spaces. If you want to add a tag that has more than one word, type multiple words as one word with no spaces, or use hyphens (-) or underscores (_) to separate words in the tag. You can view your course notes in two places. On the Notes page, you can see a list of the notes you have made in your course. You can also search the text of your notes or the tags that you added to your notes. The Notes page lists your notes by the date you created or edited them, with the most recently modified first. The page shows you both the text that you selected and the note that you made. You can also see the following information next to each note. To edit a note, follow these steps. In the course body, move your cursor over the highlighted text until your note appears. When the note appears, select the pencil icon in the upper right corner to open the note editor. In the note editor, edit your note, and then select Save. To delete a note or highlight, follow these steps. By default, you can see all of your notes. You can hide your notes, and show them again, by selecting the pencil icon in the lower right corner. When the pencil icon has a dark gray background, notes are visible. When the pencil icon has a light gray background, notes are hidden. Note If you hide notes, you cannot make new notes. To make new notes, select the pencil icon to show notes. To search your notes, follow these steps. You can use keyboard shortcuts to create, edit, and delete your notes. Note These keyboard shortcuts are for both PCs and Macintosh computers. However, you can only use these keyboard shortcuts on browsers that support caret browsing. Before you use the following keyboard shortcuts, you must make sure that notes are visible. To show or hide notes, press Ctrl + Shift + left bracket ( [). To create a note using keyboard shortcuts, follow these steps. ]) to open the note editor. The note editor opens with the cursor in the text field. To close the note editor without creating a note, press Tab to move to the Cancel button, and then press Enter. You can also press Esc to close the note editor. To edit or delete a note, follow these steps. To close the note editor without making any changes, press Esc.
https://edx.readthedocs.io/projects/edx-guide-for-students/en/latest/SFD_notes.html
2018-12-10T05:31:14
CC-MAIN-2018-51
1544376823303.28
[]
edx.readthedocs.io
- Reference > mongoShell Methods > - Database Methods > - db.upgradeCheckAllDBs() db.upgradeCheckAllDBs()¶ On this page Definition¶ db. upgradeCheckAllDBs()¶ New in version 2.6. Performs a preliminary check for upgrade preparedness to 2.6. The helper, available in the 2.6 mongoshell, can run connected to either a 2.4 or a 2.6 server in the admindatabase. The method cycles through all the databases and checks for: - documents with index keys longer than the index key limit, - documents with illegal field names, - collections without an _idindex, and - indexes with invalid specifications, such as an index key with an empty or illegal field name. Additional 2.6 changes that affect compatibility with older versions require manual checks and intervention. See Compatibility Changes in MongoDB 2.6 for details. See also Behavior¶ db.upgradeCheckAllDBs() performs collection scans and has an impact on performance. To mitigate the performance impact: - For sharded clusters, configure to read from secondaries and run the command on the mongos. - For replica sets, run the command on the secondary members. db.upgradeCheckAllDBs() can miss new data during the check when run on a live system with active write operations. For index validation, db.upgradeCheckAllDBs() only supports the check of version 1 indexes and skips the check of version 0 indexes. The db.upgradeCheckAllDBs() checks all of the data stored in the mongod instance: the time to run db.upgradeCheckAllDBs() depends on the quantity of data stored by mongod. Required Access¶ On systems running with authorization, a user must have access that includes the listDatabases action on all databases and the find action on all collections, including the system collections. You must run the db.upgradeCheckAllDBs() operation in the admin database. Example¶ The following example connects to a secondary running on localhost and runs db.upgradeCheckAllDBs() against the admin database. Because the output from the method can be quite large, the example pipes the output to a file. Error Output¶ The upgrade check can return the following errors when it encounters incompatibilities in your data: Index Key Exceed Limit¶ To resolve, remove the document. Ensure that the query to remove the document does not specify a condition on the invalid field or field. Documents with Illegal Field Names¶ To resolve, remove the document and re-insert with the appropriate corrections. Index Specification Invalid¶ To resolve, remove the invalid index and recreate with a valid index specification.
https://docs.mongodb.com/v3.2/reference/method/db.upgradeCheckAllDBs/
2018-05-20T13:38:13
CC-MAIN-2018-22
1526794863570.21
[]
docs.mongodb.com
You can configure display resolution settings for your virtual machine to enable 3D accelerated graphics, retina display support, and single window and full screen settings. Enable Accelerated 3D GraphicsOn certain virtual machines, Fusion provides support for accelerated 3D graphics. Enable Retina Display SupportThe Retina display options control the appearance of virtual machines on displays with high pixel density. Configure Resolution Settings for Virtual Machine DisplayYou can configure the resolution settings that determine how a virtual machine is displayed. Parent topic: Configuring Your Virtual Machines
https://docs.vmware.com/en/VMware-Fusion/10.0/com.vmware.fusion.using.doc/GUID-CAB5888F-1008-4F12-8210-0AA603089749.html
2018-05-20T14:04:29
CC-MAIN-2018-22
1526794863570.21
[]
docs.vmware.com
java.lang.Object org.springframework.context.support.ApplicationObjectSupportorg.springframework.context.support.ApplicationObjectSupport org.springframework.web.context.support.WebApplicationObjectSupportorg.springframework.web.context.support.WebApplicationObjectSupport org.springframework.web.servlet.view.BeanNameViewResolverorg.springframework.web.servlet.view.BeanNameViewResolver public class BeanNameViewResolver Simple implementation of ViewResolver that interprets a view name as bean name in the current application context, i.e. in the XML file of the executing DispatcherServlet. public BeanNameViewResolver() public void setOrder(int order) public int getOrder() Ordered Normally starting with 0 or 1, with Ordered. getOrderin interface Ordered Ordered.LOWEST_PRECEDENCE
http://docs.spring.io/spring/docs/3.0.0.M1/javadoc-api/org/springframework/web/servlet/view/BeanNameViewResolver.html
2016-07-23T15:32:43
CC-MAIN-2016-30
1469257823072.2
[]
docs.spring.io
From the Form Design tab you can you select widgets to add to your form. - Pointer Pointer, switches to the widget selection mode. Note Selecting any widget, will switch to the widget add mode. Then you can click anywhere on the form to place the widget. - Label A Label widget displays predefined information on a form. Usually it is used as a caption next to other data-aware widgets. - Text Box A Text Box is a single line container for data contained in your table. - Text Editor A Text Editor is a multiline container for data contained in your table. - Combo Box A Combo Box displays a list of options to choose from. - Check Box A Check Box holds two or three states of data (e.g. On/Off) - Image Box An Image Box holds an image, bound to a field in a table. - Button A Button allows you to define actions to be executed upon clicking on it. - Frame A Frame is used as a container for other widgets. - Group Box A Group Box is used to group other widgets and control their state. - Tab Widget A Tab Widget is used as a container for other widgets and can have many pages that contain different widgets. - Line A Line is used as a logical separator between different parts of a form. - Web Browser A Web Browser is a widget that allows to display a web page inside the form. - Assign Action Assign Action is used to assign an action to be executed when an event occurs (e.g. clicking on a button).
https://docs.kde.org/stable4/en/calligra/kexi/the-form-design-tab.html
2016-07-23T15:01:24
CC-MAIN-2016-30
1469257823072.2
[array(['/stable4/common/top-kde.jpg', None], dtype=object) array(['kexi_form_design_tab.png', None], dtype=object)]
docs.kde.org
Letter from FDR to the Royal Couple Showing his Appreciation for their Visit THE WHITE HOUSE WASHINGTON June 15, 1939 His Majesty King George VI SS Empress of Britain , Halifax, June 15. I cannot allow you and the Queen to sail for home without expressing once more the extreme pleasure which your all too brief visit to the United States gave us. The warmth of the welcome accorded you everywhere you visited in this country was the spontaneous outpouring of Americans who were deeply touched by the tact, the graciousness, and the understanding hearts of your guests. I shall always like to think that you felt the sincerity of this manifestation of the friendship of the American people. Mrs. Roosevelt joins me in parting felicitations to Your Majesties and best wishes for a safe and pleasant voyage. Franklin D. Roosevelt (Initialed) FDR
http://docs.fdrlibrary.marist.edu/bonvoyag.html
2016-07-23T15:02:48
CC-MAIN-2016-30
1469257823072.2
[]
docs.fdrlibrary.marist.edu
This form sets forth a formal notice from a company to its employees regarding changes to an incentive payment plan, as well as an acknowledgment and agreement by the employee to such changes. It provides language to advise employees of the effective date of the change as well as notify them as to the exact changes and their effect on the employee. This document contains standard terms as well as opportunities to provide optional language. This document is useful to employers seeking to communicate changes to incentive pay plans. Get Unlimited Access to Our Complete Business Library Plus Would you be interested in taking a longer survey for a chance to win a 1-month free subscription to Docstoc Premium?
http://premium.docstoc.com/docs/111143726/Change-of-Incentive-Pay-Plan-Notice-and-Agreement
2013-05-18T21:53:44
CC-MAIN-2013-20
1368696382892
[]
premium.docstoc.com
Feature and Technical Overview Local Navigation Integrating with the BlackBerry Messenger menu With the BlackBerry Messenger SDK, you can add menu items to the BlackBerry Messenger menu. From within BBM, BlackBerry device users can click the menu items to perform actions that are specific to your application. This tight integration increases the accessibility of your application for users. For example, you can add a menu item to let a user open your application from within BBM. In a game application, you might add a menu item to let the user view your game's leaderboard from within BBM. Or, you could let a user send a message only to those contacts who have your application installed on their devices. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/developers/deliverables/36639/Adding_menu_items_into_BBM_1327371_11.jsp
2013-05-18T21:20:16
CC-MAIN-2013-20
1368696382892
[]
docs.blackberry.com
Introduction is a lightweight protocol intended for exchanging structured information in a decentralized, distributed environment. Groovy has a SOAP implementation based on which allows you to create a SOAP server and/or make calls to remote SOAP servers using Groovy.. -<<
http://docs.codehaus.org/pages/diffpages.action?pageId=49064&originalId=228168381
2013-05-18T21:38:43
CC-MAIN-2013-20
1368696382892
[array(['/download/attachments/49064/GroovySOAP.png?version=1&modificationDate=1148501980402', None], dtype=object) ]
docs.codehaus.org
java.lang.Object org.modeshape.sequencer.image.ImageMetadataorg.modeshape.sequencer.image.ImageMetadata public class ImageMetadata Get file format, image resolution, number of bits per pixel and optionally number of images, comments and physical resolution from JPEG, GIF, BMP, PCX, PNG, IFF, RAS, PBM, PGM, PPM and PSD files (or input streams). Use the class like this: ImageMetadata ii = new ImageMetadata(); ii.setInput(in); // in can be InputStream or RandomAccessFile ii.setDetermineImageNumber(true); // default is false ii.setCollectComments(true); // default is false if (!ii.check()) { System.err.println("Not a supported image file format."); return; } System.out.println(ii.getFormatName() + ", " + ii.getMimeType() + ", " + ii.getWidth() + " x " + ii.getHeight() + " pixels, " + ii.getBitsPerPixel() + " bits per pixel, " + ii.getNumberOfImages() + " image(s), " + ii.getNumberOfComments() + " comment(s)."); // there are other properties, check out the API documentationYou can also use this class as a command line program. Call it with a number of image file names and URLs as parameters: java ImageMetadata *.jpg *.png *.gif call it without parameters and pipe data to it: java ImageMetadata < image.jpg Known limitations: Requirements: The latest version can be found at. Written by Marco Schmidt. This class is contributed to the Public Domain. Use it at your own risk. History: setDetermineImageNumber(boolean)with trueas argument to identify animated GIFs ( getNumberOfImages()will return a value larger than 1). setCollectComments(boolean). That new method lets the user specify whether textual comments are to be stored in an internal list when encountered in an input image file / stream. Added two methods to return the physical width and height of the image in dpi: getPhysicalWidthDpi()and getPhysicalHeightDpi(). If the physical resolution could not be retrieved, these methods return -1. isProgressive()returns whether ImageMetadata has found that the storage type is progressive (or interlaced). Thanks to Joe Germuska for suggesting the feature. Bug fix: BMP physical resolution is now correctly determined. Released as 1.5. Vector<String>, and removed any unnecessary casting. Also removed the unnecessary else statements where the previous block ended in a return. Also renamed to ImageMetadata. public static final int FORMAT_JPEG getFormat()for JPEG streams. ImageMetadata can extract physical resolution and comments from JPEGs (only from APP0 headers). Only one image can be stored in a file. It is determined whether the JPEG stream is progressive (see isProgressive()). public static final int FORMAT_GIF getFormat()for GIF streams. ImageMetadata can extract comments from GIFs and count the number of images (GIFs with more than one image are animations). It is determined whether the GIF stream is interlaced (see isProgressive()). public static final int FORMAT_PNG getFormat()for PNG streams. PNG only supports one image per file. Both physical resolution and comments can be stored with PNG, but ImageMetadata is currently not able to extract those. It is determined whether the PNG stream is interlaced (see isProgressive()). public static final int FORMAT_BMP getFormat()for BMP streams. BMP only supports one image per file. BMP does not allow for comments. The physical resolution can be stored. public static final int FORMAT_PCX getFormat()for PCX streams. PCX does not allow for comments or more than one image per file. However, the physical resolution can be stored. public static final int FORMAT_IFF getFormat()for IFF streams. public static final int FORMAT_RAS getFormat()for RAS streams. Sun Raster allows for one image per file only and is not able to store physical resolution or comments. public static final int FORMAT_PBM getFormat()for PBM streams. public static final int FORMAT_PGM getFormat()for PGM streams. public static final int FORMAT_PPM getFormat()for PPM streams. public static final int FORMAT_PSD getFormat()for PSD streams. public ImageMetadata() public boolean check() setInput(InputStream)or setInput(DataInput). If true is returned, the file format was known and information on the file's content can be retrieved using the various getXyz methods. public int getBitsPerPixel() check()was successful, returns the image's number of bits per pixel. Does not include transparency information like the alpha channel. public String getComment(int index) index- int index of comment to return IllegalArgumentException- if index is smaller than 0 or larger than or equal to the number of comments retrieved getNumberOfComments() public int getFormat() check()was successful, returns the image format as one of the FORMAT_xyz constants from this class. Use getFormatName()to get a textual description of the file format. public String getFormatName() check()was successful, returns the image format's name. Use getFormat()to get a unique number. public int getHeight() check()was successful, returns one the image's vertical resolution in pixels. public String getMimeType() check()was successful, returns a String with the MIME type of the format. image/jpeg public int getNumberOfComments() check()was successful and setCollectComments(boolean)was called with trueas argument, returns the number of comments retrieved from the input image stream / file. Any number >= 0 and smaller than this number of comments is then a valid argument for the getComment(int)method. public int getNumberOfImages() setDetermineImageNumber(true);was called before a successful call to check(). This value can currently be only different from 1for GIF images. public int getPhysicalHeightDpi() check()was successful. Returns -1on failure. getPhysicalWidthDpi(), getPhysicalHeightInch() public float getPhysicalHeightInch() check()was successful, returns the physical width of this image in dpi (dots per inch) or -1 if no value could be found. getPhysicalHeightDpi(), getPhysicalWidthDpi(), getPhysicalWidthInch() public int getPhysicalWidthDpi() check()was successful, returns the physical width of this image in dpi (dots per inch) or -1 if no value could be found. getPhysicalHeightDpi(), getPhysicalWidthInch(), getPhysicalHeightInch() public float getPhysicalWidthInch() -1.0fif width information is not available. Assumes that check()has been called successfully. -1.0fon failure getPhysicalWidthDpi(), getPhysicalHeightInch() public int getWidth() check()was successful, returns one the image's horizontal resolution in pixels. public boolean isProgressive() public static void main(String[] args) args- the program arguments which must be file names public void setCollectComments(boolean newValue) false. If enabled, comments will be added to an internal list. newValue- if true, this class will read comments getNumberOfComments(), getComment(int) public void setDetermineImageNumber(boolean newValue) false. This is a special option because some file formats require running over the entire file to find out the number of images, a rather time-consuming task. Not all file formats support more than one image. If this method is called with trueas argument, the actual number of images can be queried via getNumberOfImages()after a successful call to check(). newValue- will the number of images be determined? getNumberOfImages() public void setInput(DataInput dataInput) RandomAccessFileimplements DataInput. dataInput- the input stream to read from public void setInput(InputStream inputStream) inputStream- the input stream to read from
http://docs.jboss.org/modeshape/2.4.0.Final/api/org/modeshape/sequencer/image/ImageMetadata.html
2013-05-18T21:40:34
CC-MAIN-2013-20
1368696382892
[]
docs.jboss.org
Talk:Adding?????????????????); }
http://docs.joomla.org/index.php?title=Talk:Adding_a_multiple_item_select_list_parameter_type&oldid=64513
2013-05-18T21:40:22
CC-MAIN-2013-20
1368696382892
[]
docs.joomla.org
View tree Close tree | Preferences | | Feedback | Legislature home | Table of contents Search Up Up. Trans 200.04 History History: 1-2-56; am. (1), Register, June, 1959, No. 42 , eff. 7-1-59; renum. from Hy 10.04 and am., Register, July, 1980, No. 295 , eff. 8-1-80; am. (1) and (2), Register, March, 1984, No. 339 , eff. 4-1-84. Trans 200.05 Trans 200.05 Warning signs for underground transmission lines. Trans 200.05(1) (1) Subject to the conditions set forth in this chapter and in compliance with the provisions of s. 86.16 , Stats., the department may grant permits to public utility companies and cooperatives to erect on highway right of way signs giving notice of the presence of underground conduit, cables or pipe for the transmission of electric power, communications or liquid or gaseous fuels. Trans 200.05(2) (2) When warning signs are permitted in accordance with this chapter, they shall be placed on highway right of way within 2 feet of the fence or right of way line in such a manner that the face of the sign roughly parallels the highway centerline and shall be so adjusted as to height that they will in no way impair vision at intersections, curves, railroad crossings or private entrances. Signs may be erected at the following prescribed locations: Trans 200.05(2)(a) (a) On one or both sides of a public highway or railroad right of way which the underground transmission line crosses. Trans 200.05(2)(b) (b) On one or both sides of a stream wider than 50 feet. In the case of navigable streams or channels, additional signs may be permitted in the stream at such locations approved by the authority having control of navigation. Trans 200.05(2)(c) (c) On one side of a small stream or drainage ditch. Trans 200.05(2)(d) (d) At such intermediate points that signs will be located at intervals of approximately one-half mile.‶x 18‶ when mounted horizontally or not larger than 12‶ x 18‶ when mounted vertically. Roof-type aerial markers shall not exceed 24‶ x 18‶measured on the plane connecting the 4 lower corners of the marker with a maximum vertical dimension of 8‶ x 12/g administrativecode/Trans 200.06(1)(g) administrativecode/Trans 200.06?
http://docs.legis.wisconsin.gov/code/admin_code/trans/200/06/1/g
2013-05-18T21:47:28
CC-MAIN-2013-20
1368696382892
[]
docs.legis.wisconsin.gov
You can use nack with consume, using those flags : - On consume method : AMQP_NOPARAM - On nack method : AMQP_REQUEUE AMQPQueue::nack (PECL amqp >= Unknown) AMQPQueue::nack — Mark a message as explicitly not acknowledged. Beschreibung . Parameter-Liste delivery_tag The delivery tag by which to identify the message. flags A bitmask of flags. Fehler/Exceptions Throws AMQPChannelException if the channel is not open. Throws AMQPConnectionException if the connection to the broker was lost. Rückgabewerte Gibt bei Erfolg TRUE zurück. Im Fehlerfall wird FALSE zurückgegeben.
http://docs.php.net/manual/de/amqpqueue.nack.php
2013-05-18T21:38:24
CC-MAIN-2013-20
1368696382892
[array(['/images/notes-add.gif', 'add a note'], dtype=object)]
docs.php.net
. List comprehensions provide a concise way to create]] >>> functions with more than one argument and to nested functions: >>> [str(round(355/113.0, i)) for i in range(1,6)] ['3.1', '3.14', '3.142', '3.1416', '3.14159'] There alway.. Another useful data type built into Python is the dictionary. their append() and extend() methods, as well as slice and indexed assignments.') vec]) # use a list comprehension {2: 4, 4: 16, 6: 36} When looping through dictionaries, the key and corresponding value can be retrieved at the same time using the iteritems() method. >>> knights = {'gallahad': 'the pure', 'robin': 'the brave'} >>> for k, v in knights.iteritems(): ... %s? It is %s.' % (q, a) ... What is your name? It is lancelot. What is your quest? It is the holy grail. What is your favorite color? It is blue. The. In general, the return value of a short-circuit. Some examples of comparisons between sequences with the same types: is legal. The outcome is deterministic but arbitrary: the types are ordered by their name. Thus, a list is always smaller than a string, a string is always smaller than a tuple, etc. Mixed numeric types are compared according to their numeric value, so 0 equals 0.0, etc.5.1
http://docs.python.org/release/2.3.5/tut/node7.html
2013-05-18T21:31:04
CC-MAIN-2013-20
1368696382892
[]
docs.python.org
time range picker noun A tool that enables you to quickly define the time range of a search when using Splunk Web. In the Search app, the time range picker appears as a menu at the end of the search bar. You can also define time range pickers with custom sets of time ranges for forms in views and dashboards. The time range picker enables you to run a search for a preset specified time period, such as Last 15 minutes or Yesterday. You can also use the time range picker to define your own custom time range for a search, or set up a data collection window for a real-time search. The time range picker is set to All time by default. This searches the entire set of data in your index, from the oldest events to the most current. If you have a lot of data, All time searches can take a lot of time. They're probably overkill if you just want to know about something that happened in the very recent past. For more information In the User Manual: - Change the time range to narrow your search - Search and report in real time - Change the time range (tutorial) In the Developer Manual:
http://docs.splunk.com/Splexicon:Timerangepicker
2013-05-18T21:18:17
CC-MAIN-2013-20
1368696382892
[]
docs.splunk.com
User Guide Local Navigation Search This Document Download a free or trial item You must be logged in to the BlackBerry App World storefront with your BlackBerry ID before you download free or trial items. BlackBerry smartphone, or in one of the following folders on the home screen of your smartphone: Downloads, Games, Applications, or Instant Messaging. Next topic: Buy an item Previous topic: About downloading items Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/40444/Download_a_free_or_trial_item_643942_11.jsp
2013-05-18T21:55:53
CC-MAIN-2013-20
1368696382892
[]
docs.blackberry.com
Groovy Monkey is a dynamic scripting tool for the Eclipse Platform that enables you to automate tasks, explore the Eclipse API and engage in rapid prototyping. In fact, I think that if you are working on automating tasks in Eclipse or doing Plugin development in general, this tool is one for you. Groovy Monkey can allow you to try out code and do rapid prototyping without the overhead of deploying a plugin or creating a seperate runtime instance. Groovy Monkey is based on the Eclipse Jobs API, which enables you to monitor the progress in the platform seemlessly and allows you to write your scripts so that users can cancel them midway. Groovy Monkey is also based on the Bean Scripting Framework (BSF) so that you can write your Groovy Monkey scripts in a number of languages ( particularly Groovy ). In fact you can write in Groovy, Beanshell, Ruby or Python. The project update site is located at the Groovy-Monkey SourceForge site ( update sites: Eclipse v3.2 or Eclipse v3.1.2 ). Direct download of Groovy Monkey directly goto Requirements Eclipse Version compatibility Eclipse 3.1 : working update site Eclipse 3.2 : working update site Java Version compatibility 1.4 5.0 6.0 Addition one: metadata keywords LANG metadata keyword First, there is a new metadata keyword called LANG, which as is implied, determines what scripting language you wish to use. Here is an example of an Groovy Monkey base example ported to Groovy: Notice the LANG tag, that is all there is to that. There is also a New Groovy Monkey Script wizard available that has the legal values in pulldown menus. Job metadata keyword The Job metadata tag allows you to specify what kind of Eclipse Job that your Groovy Monkey script will be run in. By default it is set to Job, but UIJob and WorkspaceJob are also available. In Eclipse it is best to run almost all of your code from outside the UI Thread so UIJob is not recommended. To enable you to access UI elements from within your script there is a Runner DOM that enables your script to pass a Runnable object that can be called from the asyncExec() or syncExec() methods. For Groovy the JFace DOM allows you to pass a Closure directly to be invoked from either asyncExec() or syncExec(). Exec-Mode metadata keyword The Exec-Mode metadata keyword allows you to specify whether the script should be run in the background (default ) or foreground. The foreground mode has Eclipse directly pop up a modal dialog box that shows the user the progress of the script, the background node does not. Include metadata keyword The Include metadata keyword allows you to specify a resource in your workspace and to directly add it to the classloader of your Groovy Monkey script. Examples would obviously include jar files or directories. Include-Bundle metadata keyword The Include-Bundle metadata keyword allows you to have an installed bundle be directly added to the classloader of your Groovy Monkey script.. You can double click on a type in the outline view and have it open the source directly in your editor, if you have included external plugins in your Java search path. There is also an "Installed DOMs" view that shows the installed DOM plugins currently in your Eclipse workbench. The editor also includes a right click command to display a dialog that lists the and will install available DOMs to your script.. The Groovy-SWT DOM is now included by default when you have the net.sf.groovyMonkey.groovy fragment installed. The net.sf.groovyMonkey.groovy fragment contains Groovy Monkey's support for the Groovy language.. 1 CommentHide/Show Comments Jun 14, 2010 Araceli Sargent That is well known that money makes us autonomous. But how to act when someone doesn't have money? The one way is to get the mortgage loans and financial loan.
http://docs.codehaus.org/display/GROOVY/Groovy+Monkey?showComments=true%20showCommentArea=true
2013-05-18T21:18:51
CC-MAIN-2013-20
1368696382892
[]
docs.codehaus.org
Design appearance using Menus and Modules: Joomla! 1.5 Please note that the content on this page is currently incomplete. Please treat it as a work in progress. - This article was last edited by LornaS (talk| contribs) 2 years ago. (Purge) The aim of this document is to introduce the design of Menus and Modules. Background to creating a new Joomla! 1.5 web site with menu modules. - The Menu Manager enables the creation and editing of menus. - The Menu Item Manager enables the creation and editing of menu items - The Module controls the display of the menu on the page - You control the placement Menu Item Manager picture as menu items on sample site.) Plan the Menu and Menu Items -. -.. Where Next? Further information Index to other documents in this series --Lorna Scammell February 2011
http://docs.joomla.org/index.php?title=Design_appearance_using_Menus_and_Modules:_Joomla!_1.5&oldid=37342
2013-05-18T21:21:07
CC-MAIN-2013-20
1368696382892
[]
docs.joomla.org
Your best way to get started with Python on Mac OS X is through the PythonIDE integrated development environment, see section 1.2. AppleWorks or any other word processor that can save files in ASCII is also a possibility, including TextEdit which is included with OS X. To run your script from the Terminal window you must make sure that /usr/local/bin is in your shell search path. To run your script from the Finder you have two options: PythonLauncher has various preferences to control how your script is launched. Option-dragging allows you to change these for one invocation, or use its Preferences menu to change things globally. See About this document... for information on suggesting changes.See About this document... for information on suggesting changes.
http://docs.python.org/release/2.4.1/mac/node5.html
2013-05-18T21:19:18
CC-MAIN-2013-20
1368696382892
[]
docs.python.org
Extensions¶ TextBlob supports adding custom models and new languages through “extensions”. Extensions can be installed from the PyPI. $ pip install textblob-name where “name” is the name of the package. Available extensions¶ Languages¶ - textblob-fr: French - textblob-de: German Part-of-speech Taggers¶ - textblob-aptagger: A fast and accurate tagger based on the Averaged Perceptron. Interested in creating an extension? See the Contributing guide.
https://textblob.readthedocs.io/en/latest/extensions.html
2021-06-12T23:47:05
CC-MAIN-2021-25
1623487586465.3
[]
textblob.readthedocs.io
iOS SDK Debugging Guide iOS Debug Mind Map Confirm Certificate Please confirm the availability of the certificate on the "Application Details Page": Development Environment Test Before testing the JPush iOS development environment, make sure that 3 are unified:: App is a package of development environment (development certificate Development) Upload the development certificate and verify it Release Environmental Test Before testing the JPush iOS production environment, make sure that 3 are unified: App is an ad-hoc package or App Store version (issued certificate Production) Upload the certificate and verify it Other Issues that may Exist The message received is not stable enough JPush iOS is a supplement to the push of the original official APNs, and it is encapsulated to help developers use APNs more easily. Because the APNs themselves do not promise to guarantee the arrival of messages, the connectivity between the client network and the server side has a great influence on whether the APNs could receive the message in a timely manner.
https://docs.jiguang.cn/en/jpush/client/iOS/ios_debug_guide/
2018-07-15T21:03:19
CC-MAIN-2018-30
1531676588972.37
[array(['../../image/JPushiOS.png', 'jpush_ios'], dtype=object) array(['../../image/ios_tut_cert_ok.png', 'jpush_ios_5'], dtype=object)]
docs.jiguang.cn
Fundamentally, integrations occur when two or more systems need to communicate with each other. Appian provides a comprehensive set of objects and features that enable Designers to easily integrate with external systems. These topics gather all of the relevant information dealing with integrations and puts them into an easy-to-navigate list. If you are completely new to integrations, check out the How to Think About Integrations article before continuing to any of the object-specific references. These topics are focused on the shared data and services between systems. As such, the following topics will not be discussed in this guide. Data stores are objects that connect to third-party relational database systems, via a JDBC connection. Therefore, it technically is an integration. However, because most of it’s configuration occurs by a system administrator, data stores will not be included as a topic in this guide. This guide will not include articles referring to embedded interfaces. While embedded interfaces requires integration with other system, and in their own right are integrations, the primary goal of it is to embed Appian UIs into another webpage. The focus of this integration guide is data-centric integrations. Sections are broken up into This section covers topics related to the fundamentals of an integration. How to Think About Integrations guides readers through the basics of integrations and is meant to be an introduction to the topic. Choosing the Right Type of Integration helps designers figure out the right object to use in Appian to set up a successful integration. Web APIs should be considered as a primary way for other systems to call Appian. They allow other systems to get data from, send data to and execute actions within Appian via a HTTP RESTful service Integrations are objects in Appian that can get data or invoke services via a HTTP Restful service. This section contains all of the related references and how-to content to build an integration. While web APIs and integrations could be considered the primary means for interacting with third party services, they are not the only ways to do it. These topics are broken up into a few areas: The final section contains tutorials that walk designers through step-by-step instructions for specific integration methods. In addition to being used as a learning device, users will have a functional set of object at the end of a tutorial.
https://docs.appian.com/suite/help/18.3/Getting_Started_with_Connecting_Appian.html
2019-03-18T18:03:00
CC-MAIN-2019-13
1552912201521.60
[]
docs.appian.com
Network configuration This chapter contains information about network configuration that must be in place before installing Dynamics GP. Domain To use Dynamics GP, your Web server, back office server, Remote Desktop Services (if applicable), and client workstations must belong to a domain. A domain is a group of computers that are part of a network and share a common directory database. A domain is administered as a unit with common rules and procedures. Each domain has a unique name. Network protocol tuning To optimize your network for Microsoft SQL Server and Dynamics GP, refer to the following guidelines. Limit the network to one protocol. (TCP/IP is required.) Remove unused network protocols. Use 1 GB Ethernet for optimal performance. For more information, see System requirements. Use switches rather than hubs, if optimal performance is required. TCP/IP Transport Control Protocol/Internet Protocol (TCP/IP) is required for Dynamics GP. If you’re using TCP/IP, review the information in this section to be sure that the network is set up properly. Then use your networking protocol documentation to install and test the protocol on all clients and servers before you install Dynamics GP. IP addresses Each computer that you use with Dynamics GP must have a unique IP number (Internet Protocol address) associated with it. For more information about IP addresses, consult your network administrator or refer to your networking protocol software documentation. TCP/IP name resolution You should use some type of name resolution in your network, so that each computer is identified by a unique host name. Name resolution is a method of identifying each computer and can be accomplished by having a specific server act as a domain name server or putting a hosts file on each client and server. For more information about name resolution using either a domain name server or hosts files, consult your network administrator or refer to your networking protocol software documentation. Testing TCP/IP connectivity To test connectivity between clients and servers, use an application distributed with most TCP/IP packages called ping. The ping application will attempt to send a network message or set of messages to a designated computer and inform you whether the message arrived at that computer. Be sure you ping the host name and the ID address of every computer in your system before installing Dynamics GP. Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/dynamics-gp/installation/network-configuration
2019-03-18T17:49:13
CC-MAIN-2019-13
1552912201521.60
[]
docs.microsoft.com
RetrieveMultipleRequest Class Applies To: Dynamics 365 (online), Dynamics 365 (on-premises), Dynamics CRM 2016, Dynamics CRM Online Contains the data that is needed to retrieve a collection of records that satisfy the specified query criteria. Namespace: Microsoft.Xrm.Sdk.Messages Assembly: Microsoft.Xrm.Sdk (in Microsoft.Xrm.Sdk.dll) Inheritance Hierarchy System.Object Microsoft.Xrm.Sdk.OrganizationRequest Microsoft.Xrm.Sdk.Messages.RetrieveMultipleRequest Syntax [DataContractAttribute(Namespace = "")] public sealed class RetrieveMultipleRequest : OrganizationRequest <DataContractAttribute(Namespace := "")> Public NotInheritable Class RetrieveMultipleRequest Inherits OrganizationRequest Constructors Properties MethodsMultipleResponse class. Privileges and Access Rights To perform this action, the caller must have privileges on the specified entities in the Query property and read access rights on the records that are returned from the query. For a list of the required privileges, see RetrieveMultiple message privileges. Notes for Callers This message supports queries that use Query Expression and FetchXML. For more information, see Retrieve data with queries using SDK assemblies. Microsoft Dynamics 365. Supported Entities You can use this method to retrieve any records for entities that support the RetrieveMultiple. Examples The following example shows how to use this message. For this sample to work correctly, you must be connected to the server to get an IOrganizationService interface. For the complete sample, see the link later in this topic. ' #Region "Retrieve records from an intersect table via QueryExpression" 'Create Query Expression. Dim query As New QueryExpression() With { .EntityName = "role", .ColumnSet = New ColumnSet("name") } Dim queryLinkEntity As New LinkEntity With { .LinkToEntityName = SystemUserRoles.EntityLogicalName, .LinkFromAttributeName = "roleid", .LinkToAttributeName = "roleid" } queryLinkEntity.LinkCriteria().AddCondition( "systemuserid", ConditionOperator.Equal, _userId) queryLinkEntity.LinkCriteria().FilterOperator = LogicalOperator.And query.LinkEntities().Add(queryLinkEntity) ' Obtain results from the query expression. Dim ec As EntityCollection = _serviceProxy.RetrieveMultiple(query) ' Display results. For i As Integer = 0 To ec.Entities.Count - 1 Console.WriteLine("Query Expression Retrieved: {0}", (CType(ec.Entities(i), Role)).Name) Next i ' #End Region Thread Safety Any public static ( Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. See Also RetrieveMultipleResponse Microsoft.Xrm.Sdk.Messages Namespace Retrieve data with queries using SDK assemblies Build queries with QueryExpression Build queries with FetchXML Retrieve records for many-to-many relationships using intersect entities Sample: Retrieve records from an intersect table Return to top Microsoft Dynamics 365
https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/gg327661(v=crm.8)
2019-03-18T18:32:28
CC-MAIN-2019-13
1552912201521.60
[]
docs.microsoft.com
How Does an Effect Work? An effect always needs a drawing connection and sometimes a matte or shape connection. A matte provides drawing information that determines the area on which the effect will be applied on the drawing. The details and colours within the matte drawing.
https://docs.toonboom.com/help/harmony-11/draw-network/Content/_CORE/_Workflow/031_Effects/017_H1_How_Does_An_Effect_Module_Work__.html
2019-03-18T17:57:32
CC-MAIN-2019-13
1552912201521.60
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/draw.png', 'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Effects/FX_WORK.jpg', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Effects/tone_param.png', None], dtype=object) ]
docs.toonboom.com
Resource The term Resource refers to the fundamental structural unit of a configuration management system. Clarive is such a configuration management system, keeping track of all its configuration entities as Resources in its database. In Clarive, examples of Resources can be Topics (i.e. releases, changesets, requirements, test cases etc.), servers, User IDs, agents, middleware (such as JBoss or Apache), cloud instances, mainframe nodes etc. Clarive oversees the life of the Resources through a combination of processes and tools by implementing and enabling the fundamental elements of security, change management and audits. Any activity over Resources (create, update and delete) is registered and is displayed in a Resources grid. Every Resource has its unique Master ID (MID) in Clarive.
https://docs.clarive.com/concepts/resource/
2019-03-18T17:57:19
CC-MAIN-2019-13
1552912201521.60
[]
docs.clarive.com
Data protection API (Windows Runtime apps) [ This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation ] public key contained in an X.509 certificate. The owner of the private key can decrypt the data. - You can protect data by using a symmetric key. This works, for example, to protect data to a non-AD principal such as Live ID. - You can protect data to the credentials (password) used during logon to a website. The DataProtectionProvider class contains two constructors and four methods.
https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh464970(v=win.10)
2019-03-18T17:49:17
CC-MAIN-2019-13
1552912201521.60
[]
docs.microsoft.com
add the /NONAME option. This will stop your license name details from being displayed in the OtsAV title bar. It should look like this: C:\OtsLabs\OtsAVDJ.exe /NONAME Click the Apply button. Click the OK button. Note: The command line needs to reflect the version of OtsAV you are using. OtsAVDJ.exe for OtsAV DJ Product License owners OtsAVRD.exe for OtsAV Radio Product License owners OtsAVTV.exe for OtsAV TV Product License owners How to create a new OtsAV icon Command Line feature
http://docs.otslabs.com/OtsAV/help/using_otsav/control_features/command_line_feature/how_to_remove_the_license_name_from_the_title_bar.htm
2019-03-18T18:26:33
CC-MAIN-2019-13
1552912201521.60
[]
docs.otslabs.com
... This document outlines the design of a REST-based API for Aspose Cloud, it covers both the structure of REST URLs as well as specific behavior linked to the API such as Authentication, Request Queuing, and Storage. The Aspose for Cloud API will give developers access to all the key functions of the downloadable Aspose componenets through a Software as a Service hosted model. ... - Platform - this covers elements of the Aspose for for for Cloud APIs. Base URL Aspose for Cloud API requests are made by sending a correctly constructed HTTP requests to the following address, with arguments being generally submitted as HTTP get or post arguments and data files being sent as HTTP post method where necessary. ... Default 'empty' request redirects to service start page with a link to some helpful examples. Please check following articles for information regarding Request and Response Format and how to authenticate Aspose for Cloud API request.
https://docs.aspose.cloud/pages/diffpagesbyversion.action?pageId=80317514&selectedPageVersions=2&selectedPageVersions=1
2019-03-18T18:50:57
CC-MAIN-2019-13
1552912201521.60
[]
docs.aspose.cloud
- Communatarism in Great Britain and the French Republican model of integration: With which method should the public policies be controlled while dealing with a multicultural society Résumé de l'exposé The issues of national identity, citizenship and multiculturalism are definitely in the middle of nowadays debates. In his work, John Rex develops the approach that the development of multiculturalism policies might diminish tensions between dominant and minority groups within a contemporary society. Nevertheless, to which multiculturalist policies does he refer? Indeed, it appears that various ways to handle with a multicultural society exist. This study will stress the two big models benchmark in Europe: the British conception of integration and the French one. Focusing on these two models might be relevant given that the migration of millions of people, and notably from Muslim majority countries to Western Europe has raised crucial questions for public policies about how to integrate these people; in other words, about how to make them being utterly part of the reception society. Sommaire de l'exposé - Introduction - Historical comparative perspective between two models: The English Communautarism versus the French republican model of integration - The French republican model of integration from the Jacobin origins to his current application - The British historical view of a multicultural society - Threats and failure of these models: The necessity to rethink these models on the 21st century? - The failure of the Republican pact: integration non-achieved in France - The British society evolving to a segregationist society? - Conclusion - Sources Extraits de l'exposé [...] Historical comparative perspective between two models: the English Communautarism versus the French republican model of integration Firstly, an historical review on the fundamental differences within the two national conceptions of a multicultural society (interethnic relations ) to understand the origins of these models / ideologies, is relevant. The French republican model of integration from the Jacobin origins to his current application From the Jacobin doctrine which has historically influenced and conceived the French republican model . French society has a specific tradition of integration closely linked to its national history. [...] [...] At first, only a minority of these immigrants expected to settle permanently and to stay in Britain more than a few years. In addition, the British government decided in 1962 about a bill designed to curtail New Commonwealth immigration. This Commonwealth Immigrant Act was clearly made in order to restrict nonwhite immigration. In spite of this Act, the 1950s has clearly changed the demographic, ethnic and socio-economic profile of Great-Britain. Indeed, the British tradition accords places to ?social orders, classes and particularistic communities? (William Safran). There is a recognition and a valorisation of ?cultural differences?. [...] [...] Didn't this non-recognition of minorities as an essential component of the French society lead to dissimulation of the discriminations experienced by the individuals because of their ethnic origin? Dominique Schnapper explains that is well-know and obvious that we better heal what we call?. The myth of an achieved equality between each citizen would serve to hide reality of racism. The current tendency is an increasing taking into account of particularism. This is Jean-Pierre Chevènement, a republican minister, who asks for national police which would better represent the whole nation through a diversified recruitment? in 1999. [...] [...] New questions appeared and the British society realized that it might have to rethink the way it handles with its multicultural society. Some even wanted to go back to views Margaret Thatcher shared: a forced cultural assimilationism to the benefit of the ?national culture?. N. Tebbit, from the conservative party, said that ?most of the British people didn't want to live in a multicultural society, but it has been imposed to them?. Latter, in July 1995, the Daily Mail wrote on page one ?Teach them to be British!? and wrote an article saying should teach the children of the immigrants how to be British whatever their cultural or ethnic backgrounds is in the name of collective identity?. [...] [...] ?Communautarism? in Great Britain, ?French Republican model? of integration: which way to control and deal with a multicultural society by the public policies? Sources - SAFRAN William, Pluralism and Multiculturalism in France: Post-Jacobin transformations in Political Science Quarterly, volume 118, number - FAVELL Adrian, Philosophies of Integration: Immigration and the Idea of Citizenship in France and Britain, Macmillan Press - SCHNAPPER Dominique, La République face aux communautarismes in Etudes Revue de culture contemporaine, March 2004 - REX John, Ethnic minorities in the nation-state: working in the theory of multiculturalism and political integration, Macmillan Press - MESSINA Anthony, Immigration as a political dilemma in Britain in Policy Studies Journal, volume 23, number - Various articles in a special report: La politique républicaine de l'identité in Mouvements, number 38, March-April 2005 - BROWN Judith, TALBOT Ian, Making a new home in the Diaspora: opportunities and dilemmas in the British South Asian experience in Contemporary South Asia, volume 15, number 2 - PEACH Ceri, Migration and settlement in Great-Britain in Contemporary South Asia, volume 15, number 2 - LASSALLE Didier, Les minorités ethniques en Grande-Bretagne, Ellipses - STEPAN Alfred, SULEIMAN Ezra, The French Republican model fuels alienation rather than democratic integration in the Taipei Times, 18th of November 2005 - Integration has to be voluntary in The Observer, 6th of November 2005 Issues of national identity, citizenship and multiculturalism are definitely in the middle of nowadays debates. [...] À propos de l'auteurSalima D.étudiante - Niveau - Avancé - Etude suivie - sciences... - Ecole, université - Sciences Po Descriptif de l'exposé - Date de publication - 2007-03-16 - Date de mise à jour - 2007-03-16 - Langue - anglais - Format - Word - Type - dissertation - Nombre de pages - 4 pages - Niveau - avancé - Téléchargé - 11 fois - Validé par - le comité de lecture Derniers documents en sciences politiques >
https://docs.school/sciences-politiques-economiques-administratives/sciences-politiques/dissertation/communautarisme-grande-bretagne-modele-republicain-francais-integration-autorites-publiques-controlent-24328.html
2019-03-18T18:15:25
CC-MAIN-2019-13
1552912201521.60
[array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-SP.png', None], dtype=object) ]
docs.school
Manual categorization involves creating new categories, if they are not already present, and assigning Ots Album files, Wave files, or MP3 files to the created categories. For instructions on how to create and manage categories click here. Use this method for categorizing Wave, MP3, and Ots Items. To learn what an Item is click here. Select the All Items tab in the Media Library area. Select the Item(s) you wish to categorize. Right-click, select Category -> Add to -> select the category you wish to add the Item to. The Item will be added to the category. Use this method when categorizing an entire Ots Album file or single Wave and MP3 files. To learn what an Album is click here. Select the Albums tab in the Media Library area. Select the Album(s) you wish to categorize. Right-click, select Category -> Add to -> select the category you wish to add the Album to. You will be prompted by the following: "This operation will affect all individual items contained within the album(s) you have selected. Do you wish to proceed?" Click the Yes button. How to remove an Item from a category How to remove an Album from a category How to categorize your music
http://docs.otslabs.com/OtsAV/help/using_otsav/media_library/categorizing_music/manual_categorization.htm
2019-03-18T18:26:17
CC-MAIN-2019-13
1552912201521.60
[]
docs.otslabs.com
MouseUp event From Xojo Documentation Event The mouse button was released. Use the x and y parameters to determine if the mouse button was released within the control's boundaries. Notes The parameters x and y are local coordinates, i.e. they represent the position of the mouse click relative to the upper-left corner or the Control. Mouse clicks that are released to the left or above a control are negative.
http://docs.xojo.com/index.php?title=MouseUp_event&printable=yes
2019-03-18T18:50:39
CC-MAIN-2019-13
1552912201521.60
[]
docs.xojo.com
Shuffling for GroupBy and Join¶ Operations like groupby, join, and set_index have special performance considerations that are different from normal Pandas due to the parallel, larger-than-memory, and distributed nature of Dask DataFrame. Easy Case¶ To start off, common groupby operations like df.groupby(columns).reduction() for known reductions like mean, sum, std, var, count, nunique are all quite fast and efficient, even if partitions are not cleanly divided with known divisions. This is the common case. Additionally, if divisions are known, then applying an arbitrary function to groups is efficient when the grouping columns include the index. Joins are also quite fast when joining a Dask DataFrame to a Pandas DataFrame or when joining two Dask DataFrames along their index. No special considerations need to be made when operating in these common cases. So, if you’re doing common groupby and join operations, then you can stop reading this. Everything will scale nicely. Fortunately, this is true most of the time: >>> df.groupby(columns).known_reduction() # Fast and common case >>> df.groupby(columns_with_index).apply(user_fn) # Fast and common case >>> dask_df.join(pandas_df, on=column) # Fast and common case >>> lhs.join(rhs) # Fast and common case >>> lhs.merge(rhs, on=columns_with_index) # Fast and common case Difficult Cases¶ In some cases, such as when applying an arbitrary function to groups (when not grouping on index with known divisions), when joining along non-index columns, or when explicitly setting an unsorted column to be the index, we may need to trigger a full dataset shuffle: >>> df.groupby(columns_no_index).apply(user_fn) # Requires shuffle >>> lhs.join(rhs, on=columns_no_index) # Requires shuffle >>> df.set_index(column) # Requires shuffle A shuffle is necessary when we need to re-sort our data along a new index. For example, if we have banking records that are organized by time and we now want to organize them by user ID, then we’ll need to move a lot of data around. In Pandas all of this data fits in memory, so this operation was easy. Now that we don’t assume that all data fits in memory, we must be a bit more careful. Re-sorting the data can be avoided by restricting yourself to the easy cases mentioned above. Shuffle Methods¶ There are currently two strategies to shuffle data depending on whether you are on a single machine or on a distributed cluster: shuffle on disk and shuffle over the network. Shuffle on Disk¶ When operating on larger-than-memory data on a single machine, we shuffle by dumping intermediate results to disk. This is done using the partd project for on-disk shuffles. Shuffle over the Network¶ When operating on a distributed cluster, the Dask workers may not have access to a shared hard drive. In this case, we shuffle data by breaking input partitions into many pieces based on where they will end up and moving these pieces throughout the network. This prolific expansion of intermediate partitions can stress the task scheduler. To manage for many-partitioned datasets we sometimes shuffle in stages, causing undue copies but reducing the n**2 effect of shuffling to something closer to n log(n) with log(n) copies. Selecting methods¶ Dask will use on-disk shuffling by default, but will switch to task-based distributed shuffling if the default scheduler is set to use a dask.distributed.Client, such as would be the case if the user sets the Client as default: client = Client('scheduler:8786', set_as_default=True) Alternatively, if you prefer to avoid defaults, you can configure the global shuffling method by using the dask.config.set(shuffle=...) command. This can be done globally: dask.config.set(shuffle='tasks') df.groupby(...).apply(...) or as a context manager: with dask.config.set(shuffle='tasks'): df.groupby(...).apply(...) In addition, set_index also accepts a shuffle keyword argument that can be used to select either on-disk or task-based shuffling: df.set_index(column, shuffle='disk') df.set_index(column, shuffle='tasks') Aggregate¶ Dask supports Pandas’ aggregate syntax to run multiple reductions on the same groups. Common reductions such as max, sum, and mean are directly supported: >>> df.groupby(columns).aggregate(['sum', 'mean', 'max', 'min']) Dask also supports user defined reductions. To ensure proper performance, the reduction has to be formulated in terms of three independent steps. The chunk step is applied to each partition independently and reduces the data within a partition. The aggregate combines the within partition results. The optional finalize step combines the results returned from the aggregate step and should return a single final column. For Dask to recognize the reduction, it has to be passed as an instance of dask.dataframe.Aggregation. For example, sum could be implemented as: custom_sum = dd.Aggregation('custom_sum', lambda s: s.sum(), lambda s0: s0.sum()) df.groupby('g').agg(custom_sum) The name argument should be different from existing reductions to avoid data corruption. The arguments to each function are pre-grouped series objects, similar to df.groupby('g')['value']. Many reductions can only be implemented with multiple temporaries. To implement these reductions, the steps should return tuples and expect multiple arguments. A mean function can be implemented as: custom_mean = dd.Aggregation( 'custom_mean', lambda s: (s.count(), s.sum()), lambda count, sum: (count.sum(), sum.sum()), lambda count, sum: sum / count, ) df.groupby('g').agg(custom_mean)
https://docs.dask.org/en/latest/dataframe-groupby.html
2019-03-18T18:19:50
CC-MAIN-2019-13
1552912201521.60
[]
docs.dask.org
In order to publish a version of the Passport App for members under your name and organisation, a few steps must be followed to get your iTunes development account. This will eliminate any problems setting up your accounts with Apple. You must have a D-U-N-S Number When. Step 2: Inviting Nexudus to your iTunes Connect account Next visit: If you’re prompted to login use your newly created Apple ID to sign in. On the iTunes Connect homepage, click Users and Roles Click the plus sign. Enter the individual’s user information (first name, last name, and email address), and click Next. First Name: Nexudus Last Name: Deployment Step 3: Provide us with the branding assets. In order for us to make the app look like your own app, we will need some branding material from you. Namely: The icon for the app as a (not words) A long description The hex/CSS code for the primary, secondary and tertiary colors to be used in the app icons and buttons. URL for Marketing Website: URL for Contact Page: URL for privacy policy: If you have any questions, please contact us at [email protected]
http://docs.nexudus.com:8090/display/NSKE/White-label+Mobile+App
2019-03-18T18:20:24
CC-MAIN-2019-13
1552912201521.60
[]
docs.nexudus.com:8090
Click on the icon on the toolbar. In the Import/Refresh Files to Media Library dialog box click on the Import/Refresh... Folder button. Note: If you wish to import folders, that are contained within the folder you are going to select, make sure that the Recurse sub-folders option is checked. Navigate to the folder that contains your files, the default folder for Ots Album files is C:\OtsLabs\OtsFiles\MyMusic . Click the folder. Click the OK button. Your folder will be scanned for Ots Album, Wave, and MP3 files. If any of these files are found they will be imported into the Media Library.. Tip: In most cases the Easy Scan operation is easier and achieves the same result. Easy Scan How to Import/Refresh File(s) Re-Link Unavailable Albums (File) Importing Files to the Media Library
http://docs.otslabs.com/OtsAV/help/using_otsav/media_library/importing_ots_albums_to_the_media_library/how_to_import_refresh_folder.htm
2019-03-18T18:26:00
CC-MAIN-2019-13
1552912201521.60
[]
docs.otslabs.com
CSS Grid is a game-changing approach to creating flexible and responsive layouts for the web. And with the new Pinegrow you can use powerful visual tools to work with all aspects of the Grid. Introduction Read about why CSS Grid is here to stay and how Pinegrow can help you take full advantage of the grid. A Quick Overview See this 3 minute video to get a basic idea of how CSS grid works in Pinegrow: Summer Nights CSS Grid Tutorial A step by step guide on creating a responsive CSS grid layout with fallback for older browsers. Learn CSS Grid with Pinegrow A free online course about CSS Grid that will help you to quickly start using grid in your projects.
https://docs.pinegrow.com/docs/css-grid/
2019-03-18T18:07:51
CC-MAIN-2019-13
1552912201521.60
[]
docs.pinegrow.com
Get a UAB Mathworks key From UABgrid Documentation Revision as of 10:23, 11 February 2011 by [email protected] (Talk | contribs) Print these steps or keep them open in another browser window - Go to the UAB software library page. -.
https://docs.uabgrid.uab.edu/w/index.php?title=Get_a_UAB_Mathworks_key&oldid=2479
2019-03-18T17:22:17
CC-MAIN-2019-13
1552912201521.60
[]
docs.uabgrid.uab.edu
TextFont property From Xojo Documentation(Redirected from ReportField.TextFont) a<See Below>.TextFont = newStringValue or StringValue = a<See Below>.TextFont Supported for all project types and targets. or StringValue = a<See Below>.TextFont Supported for all project types and targets. Name of the font used to display the caption or text content.". Classes implementing the TextFont property Sample Code This code sets the TextFont property.
http://docs.xojo.com/ReportField.TextFont
2019-03-18T19:00:49
CC-MAIN-2019-13
1552912201521.60
[]
docs.xojo.com
Sparse Arrays¶ By swapping out in-memory NumPy arrays with in-memory sparse arrays, we can reuse the blocked algorithms of Dask’s Array to achieve parallel and distributed sparse arrays. The blocked algorithms in Dask Array normally parallelize around in-memory NumPy arrays. However, if another in-memory array library supports the NumPy interface, then it too can take advantage of Dask Array’s parallel algorithms. In particular the sparse array library satisfies a subset of the NumPy API and works well with (and is tested against) Dask Array. Example¶ Say we have a Dask array with mostly zeros: x = da.random.random((100000, 100000), chunks=(1000, 1000)) x[x < 0.95] = 0 We can convert each of these chunks of NumPy arrays into a sparse.COO array: import sparse s = x.map_blocks(sparse.COO) Now, our array is not composed of many NumPy arrays, but rather of many sparse arrays. Semantically, this does not change anything. Operations that work will continue to work identically (assuming that the behavior of numpy and sparse are identical), but performance characteristics and storage costs may change significantly: >>> s.sum(axis=0)[:100].compute() <COO: shape=(100,), dtype=float64, nnz=100> >>> _.todense() array([ 4803.06859272, 4913.94964525, 4877.13266438, 4860.7470773 , 4938.94446802, 4849.51326473, 4858.83977856, 4847.81468485, ... ]) Requirements¶ Any in-memory library that copies the NumPy ndarray interface should work here. The sparse library is a minimal example. In particular, an in-memory library should implement at least the following operations: - Simple slicing with slices, lists, and elements (for slicing, rechunking, reshaping, etc) - A concatenatefunction matching the interface of np.concatenate. This must be registered in dask.array.core.concatenate_lookup - All ufuncs must support the full ufunc interface, including dtype=and out=parameters (even if they don’t function properly) - All reductions must support the full axis=and keepdims=keywords and behave like NumPy in this respect - The array class should follow the __array_priority__protocol and be prepared to respond to other arrays of lower priority - If dotsupport is desired, a tensordotfunction matching the interface of np.tensordotshould be registered in dask.array.core.tensordot_lookup The implementation of other operations like reshape, transpose, etc., should follow standard NumPy conventions regarding shape and dtype. Not implementing these is fine; the parallel dask.array will err at runtime if these operations are attempted. Mixed Arrays¶ Dask’s Array supports mixing different kinds of in-memory arrays. This relies on the in-memory arrays knowing how to interact with each other when necessary. When two arrays interact, the functions from the array with the highest __array_priority__ will take precedence (for example, for concatenate, tensordot, etc.).
https://docs.dask.org/en/latest/array-sparse.html
2019-03-18T17:50:59
CC-MAIN-2019-13
1552912201521.60
[]
docs.dask.org
Connecting to the web client To have the Dynamics GP web client installation work as efficiently as possible, it’s important that users follow the proper process for connecting to the web client. Information about this is divided into the following topics: - Disconnecting from a session Reconnecting to a session - Web browser security settings To sign in to the web client, use the following procedure. Open Internet Explorer or other support web browser. Enter the URL of the Dynamics GP web client site. The default address of the site is: ServerName is the fully-qualified domain name (FQDN) for the server that is hosting the web Dynamics GP web client looks similar to the following: View the sign-in page. If the site displays a security certificate error, report the issue to your system administrator, and do not continue the sign on process. Enter your user credentials (User Name and Password). These are either domain user credentials or machine user credentials, depending on how the web client installation is configured. The User Name will have the format: domain\username or machine\username Warning These are not your Dynamics GP login name and password. Specify the security level for the session. You can click Show descriptions to display details of the two security options. This is a public computer Choose this option if the computer is public or is shared by multiple users. Be sure that you close the browser window when you are finished with your session. This is a private computer Choose this option if this is a private computer that only you have access to. When you choose this security level, you can mark the Remember me check box to save your user name and password. These will be used to automatically sign in to Dynamics GP the next time you access this page for the web client. Click Sign In. If you are using a multitenant configuration, and have access to more than one tenant, you will be prompted to choose the tenant (installation of Dynamics GP) that you want to connect to. Choose the Create Session action for the tenant. If you are using a single tenant configuration, or have access to only one tenant, no prompt will be displayed. A session will be created. The window you see first will depend on settings for your Dynamics GP user. - If your Dynamics GP user ID has only SQL Server Account information, the Dynamics GP login window will be displayed. Log in with your Dynamics GP login name and password. - If your Dynamics GP user ID has Windows Account information, the Web Client SQL User will be used to access Dynamics GP data. One company If you have access to only one company, that company will automatically be used. The first page you see in Dynamics GP will be the Home page. Multiple companies If you have access to multiple companies, the Company Login window will be displayed, allowing you to select the company to use. Disconnecting from a session In general, you should avoid closing the web browser when you have an active connection to the Dynamics GP web client. When you close the web browser while connected, your web client session remains active on the server. The resources used by your session are still dedicated to it and cannot be used for other web client sessions. To help prevent you from accidentally closing the web browser, the following message is displayed when you attempt to close the web page or navigate away to another web page. If you accidentally navigated away from the web client session, click Cancel to return to the web client. Sometimes, disconnecting from a session by closing the web browser can be useful. The following are two examples: Assume you need to shut down your computer, but you have multiple windows open in the Dynamics GP web client with data displayed in them. Disconnecting from the session allows those windows to remain open. When you reconnect to the session, which is discussed in the next section, the windows will have maintained their state information. A long-running process that you want to allow to finish is another reason to close the web browser and leave the session running on the server. After the process has started, it will continue processing, even after the web browser has been closed. Reconnecting to a session Reconnecting to an existing session is just like the process of signing in to the web client. To reconnect to a session: Open Internet Explorer. Enter the URL of the Dynamics GP web client site. At the sign-in page, enter your user credentials, and click Sign In. The Session Central Service will find any existing session that you had disconnected from. These sessions will be listed. - Select the one of the sessions in the list and then click Connect to Existing Session to reconnect. The web client will restore as many of the existing session’s session characteristics as possible. The web client will does not know which area page had been displayed. It will restore the correct set of windows, though the exact placement of the windows may not match the configuration that existed when you disconnected from the session. To exit the web client, click Exit GP in the upper-right corner of the web browser. You will be logged out of Dynamics GP, the web client session will end on the server, and the exit page will be displayed. To go back into Dynamics GP, click Enter Dynamics GP. If you had chosen the option to remember your user name and password on the sign-in page, you will not be prompted for them. If you want to remove the stored user name and password from the machine, click Sign Out, in the upper-right corner of the page. When you have finished working with Dynamics GP, it’s a good idea to exit Dynamics GP, rather than to just disconnect from the session. Some of the advantages of exiting include the following: System resources are made available for other web client sessions. It releases a Dynamics GP user in the system, so you are less likely to encounter the user limit. It reduces the possibility of data loss that might be caused the web client session had to be forcibly ended. For some special circumstances, you may need to sign in to the Dynamics GP web client as the “sa” or “DYNSA” user. If you don’t have your Windows account information associated with a Dynamics GP user, you will see the standard Dynamics GP log in window that you can use to sign in as “sa” or “DYNSA”. If your Windows account information is associated with a Dynamics GP user, you will be automatically signed in as that GP user. The standard Dynamics GP log in window will not be displayed. In this situation, use the following procedure to sign in as the “sa” or “DYNSA” user: - After the initial sign in operation is complete, click the user name in the lower- left corner of the web client. - The login window will be displayed. Choose SQL Server Account as the authentication type. - Specify “sa” or “DYNSA” as the User ID. Enter the appropriate password, and then click OK. Web browser security settings You may be accessing the Dynamics GP web client through an intranet or over the Internet. Depending on the access method, you may need to adjust your web browser security settings to allow printing and local file access to work properly. Do this in the Internet Options window for Internet Explorer. You will have to do one or both of the following actions:. Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/dynamics-gp/web-components/connecting-to-the-web-client
2019-03-18T18:26:16
CC-MAIN-2019-13
1552912201521.60
[array(['media/install-web-login-03.png', 'GP login shows the login screen to dynamics gp in the browser.'], dtype=object) array(['media/install-web-login-01.png', 'GP login shows the login screen to dynamics gp using sql login.'], dtype=object) array(['media/install-web-login-02.png', 'GP login shows the login screen to a dynamics gp company.'], dtype=object) array(['media/using-web-disconnect.png', 'GP connection warning shows a warning if you close the browser without disconnecting from dynamics gp.'], dtype=object) array(['media/using-web-reconnect.png', 'GP connection shows a screen for reconnectng to earlier sessions of dynamics gp.'], dtype=object) array(['media/using-web-signout.png', 'GP connection warning shows a confirmation that you have signed out from dynamics gp.'], dtype=object) array(['media/using-web-sa-dynsa.png', 'GP login shows the standard dynamics gp log in window.'], dtype=object) array(['media/install-web-login-01.png', 'GP login shows the login screen to dynamics gp using sql login.'], dtype=object) array(['media/using-web-browser-settings.png', "Internet Options shows how you can modify the browser's settings."], dtype=object) ]
docs.microsoft.com
Pinegrow comes with a powerful and flexible user interface that gets out of your way when you don’t need it. Let’s get familiar with the user interface. This topic is also available as a video: Watch the video about the User Interface In addition, watch the video explaining how the floating panels can be used on multiple screens: Watch the video about Floating Panels Collapsing and expanding panels Double click on any tab to collapse its panel. This is useful for putting away the panels that you don’t need for the task at hand. Click on any tab in the collapse panel to expand it back to its original size. Rearanging panels Drag tabs from panel to panel, or drag them to a border between the existing panels to open them as separate panels. Drag the empty area of the panel header (on the right side of its tabs) to drag the whole panel to the new position. The page area with page views is not draggable. But its tabs can be rearanged and dragged to the side in order to split the page area. Floating panels and using multiple screens Click on in the panel header to open the panel group in a floating window. Floating panels can be freely moved around the screen(s) and resized like normal windows. Individual panels or panel groups can be dragged between windows. Close the window to dock its panels back to the main. Hidding and showing the UI Use icon in the top toolbar or press TAB (when not in code editor or in field input) to hide the UI. All panels will be collapsed. Repeat the operation to show the panels again. Toggle icon in panel headers to make that panel always visible, even when the UI is hidden. Quick windows Some tools open in quick windows. Quick windows are meant to be open for a short time, just for a specific task. You can move them around, resize them and double click the header to reposition them on screen. The most useful quick window are: - + opens the quick insert Library window. - P opens quick Element properties. - CMD + H or C brings up the quick Element code editor. - CMD + L opens the Assign classes quick window. Quick windows are handy for getting access to important features while the relevant panels are collapsed or if the whole UI is hidden. Make the UI smaller or larger Use setting to make the whole Pinegrow user interface smaller or larger. Making it larger helps with accessibility issues, while making it smaller lets you fit more panels and page views on the screen. That’s especially handy on small screens and on retina screens. Zooming the user interface affects everything in the app window, including page views. Workspaces A workspace is a certain arrangement of panels in the UI. Use Workspace menu on the top right side to switch between the predefined layouts and your own saved workspace layouts. If things get messed up using the Workspace menu, if everything else fails. The point is, don’t worry about messing things up. Any problems are easy to fix.
https://docs.pinegrow.com/docs/master-pinegrow/getting-the-most-out-of-pinegrow-ui/
2019-03-18T17:21:59
CC-MAIN-2019-13
1552912201521.60
[array(['http://download.pinegrow.com/docsmedia/master/master-ui-collapse.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui-expand.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui2-float.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui2-move-and-arange.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui2-floating-move.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui2-2-screens.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-tools-workspace.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui-hide-ui.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui-hide-ui-big.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui-hide-show.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-ui-quick-wins.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-tools-zoom-ui.jpg', None], dtype=object) array(['http://download.pinegrow.com/docsmedia/master/master-tools-workspace.jpg', None], dtype=object) ]
docs.pinegrow.com
Contents IT Operations Management Previous Topic Next Topic Example: create a blueprint for configuration management provider integration Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Configuration Management provider integration example for Cloud Management You can provision cloud resources based on resource types on your Chef server and manage newly provisioned resources through both the Chef Server and your ServiceNow instance. Chef integration involves registering your Chef server to the instance, running Discovery, and creating customized blueprints and application profiles. Requirements Role required: cloud_admin Install and configure a Chef server What to do Create a configuration management provider and run Discovery First create a configuration management provider record with the necessary credentials that can access the server. Then run Discovery on the configuration management provider to populate the CMDB. See Create a configuration management provider and run Discovery for instructions. Create an application profile and template Your users can select a virtual machine that is based on an application profile, which in turn is based on a Chef resource. You must map an application profile template to a cloud resource profile of type Application. See Example: create an application profile template for configuration management provider integration and Create a cloud resource profile for instructions. Create a blueprint Create a new blueprint with BootstrapNode and ExecuteConfigPackages resource operations and customize the form to allow the user to select the application profile template, organization, and credential ID. See Example: create a blueprint for configuration management provider integration for an example. Next steps After a user provisions a resource from the blueprint, the Stack Status indicates how the system runs through the Create node, Bootstrap, and ExecuteConfigPackage steps. You can obtain the IP address of a virtual machine in the User Portal by going to Stacks > {category} and selecting the new virtual machine. Open the Chef server to see the newly provisioned resource on the node the user specified. Create a configuration management provider and run Discovery Create a configuration management provider like Chef or Ansible and run discovery on the provider to discover the resources for the provider. Before you begin Let's take an example of a chef server provider and run Discovery. Role required: cloud_admin A Chef server Chef server credentials Procedure Navigate to Cloud Management > Cloud Admin Portal > Manage > Config Management. Click New. Fill out the form fields (see table). Obtain most of this information from your Chef server etup. Field Description Name Enter a descriptive name. URL Enter the URL of your Chef server, including port number. Organization Enter the Chef organization for access control. Version Select a version. Server Type Select a server type. Service Category Select one of the Cloud Service categories. Provider Type Select Chef Server12 or Ansible Tower. Credential Select the Chef credentials Click Submit. Click Discover Now. The discovered resources appear under Entities. The discovered resources for Chef are Chef Server Cookbook, Chef Server Node, and Cfg Installable. The discovered resources for Ansible are Ansible Inventory and Cfg Installable. Example: create an application profile template for configuration management provider integration Create an application profile template from discovered resources in a Chef server. You can use these to allow users to select the type of application profile when they provision a virtual machine. Before you begin Let's take an example of a chef server provider. Role required: cloud_admin A Chef server provider with Discovery already run. Procedure Navigate to Cloud Management > Cloud Admin Portal > Manage > Resource Profiles > Application Profile. Click New. Fill out the form fields (see table). Field Description Name Provide a descriptive name. Template ID Enter an ID to use for the template. Config Installable Click the lookup icon and select a value from tTable Name and choose the records for the table from the Document field. Config runlist provider Select a value Provider Instance Select a Chef server or Ansible Tower provider. Right-click the header and select Save. Example: create a blueprint for configuration management provider integration This example shows you how to create a custom blueprint that you can use in a Chef integration. Before you beginRole required: cloud_admin Procedure Navigate to Cloud Management > Cloud Service Design > Blueprints. Click New. Create a deployment model with a container, virtual machine, and a datacenter. Click the Operations tab, and then click Steps. Add these resource operations based on the Virtual Server: Bootstrap Node Register Node ExecuteConfigPackages You should have five operations: Blueprint Container Resource.Provision Virtual Server.Provision Bootstrap Node Register Node ExecuteConfigPackages Make the following changes to these operations and click Save after each change: Operation Make this change Blueprint Container Resource.Provision Enable WorkloadConfigProvider and WorkloadConfigProvider Type. The user uses this field to select the configuration management provider. Management Attributes Every operation will have it's own management attributes. Ensure that for the form parameters, Virtual_Server_ApplicationProfile, Virtual_Server_ConfigurationOverrides, and Virtual_Server_ManagementAttributes, the value in the Form UI Group field is set to General Info. This enables the data to load in the order catalog form for these attributes. Publish the catalog item and make it active. ResultYour users can provision a resource based on this blueprint. See Configuration Management provider integration example for Cloud Management for a high-level overview of what happens during an example Chef integration and provisioning. Related TasksCreate a Cloud Management blueprintPublish a blueprint as a cloud service catalog itemExport a catalog itemImport a blueprintRelated ReferenceBlueprint attributes On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/kingston-it-operations-management/page/product/cloud-management-v2/concept/cloud-provioning-with-chef.html
2019-03-18T18:15:32
CC-MAIN-2019-13
1552912201521.60
[]
docs.servicenow.com
For Each...Next From Xojo Documentation(Redirected from In) Language Keyword Loops through the elements of a one-dimensional array or a class that implements the Iterable interface. Usage For Each element [As datatype] In array [statements] [ Continue [For] ] [ Exit [For] ] [ Continue [For] ] Next [element] Notes A For...Each loop does not guarantee that it will loop through the values in the array in index order. Do not make any assumptions of the traversal order as it is subject to change in the future. As with other block statements, variables declared within the For...Each loop go out of scope when the loop finishes or exits. Sample Code Iterate through an array: Dim days() As Text = Array("One", "Two", "Three", "Four", "Five") Dim output As Text For Each d As Text In days output = output + d Dim output As Text For Each d As Text In days output = output + d Calculate the sum of values in an array: Dim values() As Double = Array(1.5, 5.5, 8.0, 45.0, 22.5) Dim sum As Double For Each d As Double In values sum = sum + d // sum = 82.5 Dim sum As Double For Each d As Double In values sum = sum + d // sum = 82.5 See Also For...Next, Dim statements; Array keyword
https://docs.xojo.com/In
2019-03-18T19:01:30
CC-MAIN-2019-13
1552912201521.60
[]
docs.xojo.com
The SITL (Software In The Loop) simulator allows you to run APM without any hardware. It is a build of the autopilot code using an ordinary C++ compiler and giving you a native executable that allows you to test the behaviour of the code without hardware. Refer to the APM wiki for installation instructions. sim_vehicle.sh -w Install just the command line tools for OSX including GCC then get modules installed to Python: sudo easy_install pip sudo pip install pexpect sudo pip install pyserial Install MAVLink and MAVProxy: sudo pip install pymavlink MAVProxy (or) install MAVLink and MAVProxy from source git clone git clone run this setup command in both directories sudo python setup.py build install Get the APM branch that compiles in Mac OS (this commit is relevant): git clone cd ardupilot git checkout macos cd ArduCopter make configure make sitl Source: drones-discuss
http://docs.erlerobotics.com/brains/discontinued/erle-brain-2/sofware/apm/tools/sitl
2019-03-18T17:44:53
CC-MAIN-2019-13
1552912201521.60
[]
docs.erlerobotics.com
Tmate. Peer Console Development (tmate) Screencast [email protected] Interacting with Tmate ^b + c create a new window ^b + n move to next window ^b + p move to previous window
https://docs.cloudposse.com/local-dev-environments/tmate/
2019-03-18T17:43:05
CC-MAIN-2019-13
1552912201521.60
[array(['https://cloudposse.com/wp-content/uploads/sites/29/2018/01/tmate-linuxdescomplicado.gif', None], dtype=object) ]
docs.cloudposse.com
An abstract base class for parameter objects that can be added to an AudioProcessor. More... An abstract base class for parameter objects that can be added to an AudioProcessor. Destructor. Called by the host to find out the value of this parameter. Hosts will expect the value returned to be between 0 and 1.0. This could be called quite frequently, so try to make your code efficient. It's also likely to be called by non-UI threads, so the code in here should be thread-aware. The host will call this method to change the value of a parameter. The host may call this at any time, including during the audio processing callback, so your implementation has to process this very efficiently and avoid any kind of locking.. The value passed will be between 0 and 1.0. A processor should call this when it needs to change one of its parameters. This could happen when the editor or some other internal operation changes a parameter. This method will call the setValue() method to change the value, and will then send a message to the host telling it about the change. Note that to make sure the host correctly handles automation, you should call the beginChangeGesture() and endChangeGesture() methods to tell the host when the user has started and stopped changing the parameter. Sends a signal to the host to tell it that the user is about to start changing this parameter. This allows the host to know when a parameter is actively being held by the user, and it may use this information to help it record automation. If you call this, it must be matched by a later call to endChangeGesture(). Tells the host that the user has finished changing this parameter. This allows the host to know when a parameter is actively being held by the user, and it may use this information to help it record automation. A call to this method must follow a call to beginChangeGesture(). This should return the default value for this parameter. Implemented in AudioProcessorValueTreeState::Parameter. Returns the name to display for this parameter, which should be made to fit within the given string length. Some parameters may be able to return a label string for their units. For example "Hz" or "%". Returns the number of steps that this parameter's range should be quantised into. If you want a continuous range of values, don't override this method, and allow the default implementation to return AudioProcessor::getDefaultNumParameterSteps(). If your parameter is boolean, then you may want to make this return 2. The value that is returned may or may not be used, depending on the host. If you want the host to display stepped automation values, rather than a continuous interpolation between successive values, you should override isDiscrete to return true. Reimplemented in AudioProcessorValueTreeState::Parameter, and RangedAudioParameter. Returns whether the parameter uses discrete values, based on the result of getNumSteps, or allows the host to select values continuously. This information may or may not be used, depending on the host. If you want the host to display stepped automation values, rather than a continuous interpolation between successive values, override this method to return true. Reimplemented in AudioProcessorValueTreeState::Parameter. Returns whether the parameter represents a boolean switch, typically with "On" and "Off" states. This information may or may not be used, depending on the host. If you want the host to display a switch, rather than a two item dropdown menu, override this method to return true. You also need to override isDiscrete() to return true and getNumSteps() to return 2. Reimplemented in AudioProcessorValueTreeState::Parameter. Returns a textual version of the supplied normalised parameter value. The default implementation just returns the floating point value as a string, but this could do anything you need for a custom type of value. Reimplemented in AudioPluginInstance::Parameter. Should parse a string and return the appropriate value for it. Implemented in AudioPluginInstance::Parameter. This can be overridden to tell the host that this parameter operates in the reverse direction. (Not all plugin formats or hosts will actually use this information). Returns true if the host can automate this parameter. By default, this returns true. Reimplemented in AudioProcessorValueTreeState::Parameter. Should return true if this parameter is a "meta" parameter. A meta-parameter is a parameter that changes other params. It is used by some hosts (e.g. AudioUnit hosts). By default this returns false. Reimplemented in AudioProcessorValueTreeState::Parameter. Returns the parameter's category. Returns the index of this parameter in its parent processor's parameter list. Returns the current value of the parameter as a String. This function can be called when you are hosting plug-ins to get a more specialsed textual represenation of the current value from the plug-in, for example "On" rather than "1.0". If you are implementing a plug-in then you should ignore this function and instead override getText. Registers a listener to receive events when the parameter's state changes. If the listener is already registered, this will not register it again. Removes a previously registered parameter listener.
https://docs.juce.com/master/classAudioProcessorParameter.html
2019-03-18T18:10:35
CC-MAIN-2019-13
1552912201521.60
[]
docs.juce.com
Difference between revisions of "JDatabaseQuerySQLAzure::length" From Joomla! Documentation Latest revision as of 20:19,::length Description Description:JDatabaseQuerySQLAzure::length [Edit Descripton] public function length ( $field ) - Returns Length function for the field - Defined on line 512 of libraries/joomla/database/database/sqlazurequery.php - Since See also JDatabaseQuerySQLAzure::length source code on BitBucket Class JDatabaseQuerySQLAzure Subpackage Database - Other versions of JDatabaseQuerySQLAzure::length SeeAlso:JDatabaseQuerySQLAzure::length [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JDatabaseQuerySQLAzure::length&diff=cur&oldid=56476
2015-10-04T13:21:43
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Changes related to "Latest News module with date" ← Latest News module with date This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20131017080916&target=Latest_News_module_with_date
2015-10-04T12:44:15
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Difference between revisions of "JHtmlGrid::checkedOut" From Joomla! Documentation Revision as of 18Grid::checkedOut Description Description:JHtmlGrid::checkedOut [Edit Descripton] public static function checkedOut ( &$row $i $identifier= 'id' ) - Returns - Defined on line 142 of libraries/joomla/html/html/grid.php See also JHtmlGrid::checkedOut source code on BitBucket Class JHtmlGrid Subpackage Html - Other versions of JHtmlGrid::checkedOut SeeAlso:JHtmlGrid::checkedOut [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JHtmlGrid::checkedOut&diff=next&oldid=57020
2015-10-04T13:43:50
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
With the FWA Order Ingestion API, you can retrieve financial data you need for your accounting system. This document outlines the details of the web service calls available in the API. API DETAILS The Financial Bridge contains both the most recent version of the API, as well as older legacy versions (e.g., v1 and v4). New users of the API should restrict their calls only to the most recent major version of the API. When intending to migrate to a newer version of the API (e.g., v4 to future v5), contact TAPS Support to coordinate the migration to the newer version of the API Schema. Fields whose titles include ID are context sensitive. (For example, an ID within Vendor call is Vendor ID, while an ID within GET Client Call is the Client ID.) IDs are internal STRATA IDs and cannot be updated. Fields that are Codes can be updated and are intended to refer to external system codes.
https://api-docs.freewheel.tv/financial/reference/fwa-order-ingestion-financial-api
2022-09-25T08:48:04
CC-MAIN-2022-40
1664030334515.14
[]
api-docs.freewheel.tv
projects/{project_owner}/{project_name}/files This call lists the files in the specified project. It is an alias for the call GET /files and redirects to that path. Alias call Note that since this is an alias for another call, the call cannot be made straightforwardly from all applications. In particular, to send the call using cURL, use the cURL option -Lto redirect the path of this call to `?{project_owner}/{project}. Alternatively, to list all files, you can simply use the call GET /files.{project_owner}/{project}/files Request Example request Referring to your project Note that project_owneris always case-sensitive, and that projectis not the project's name but its ID, or short name. For full details of identifying objects using the API, please see the API overview. GET /v2/projects/rfranklin/my-project/files HTTP/1.1 Host: cgc-api.sbgenomics.com X-SBG-Auth-Token: 3210a98c1db9318fa9d9273156740f74 curl -L "" -s -H "X-SBG-Auth-Token: 3210a98c1db9318fa9d9273156740f74" -H "content-type: application/json" -X GET "" Header Fields Path parameters Query parameters Response See a list of CGC-specific response codes that may be contained in the body of the response. Example response body { "href": "", "items": [ { "href": "", "id": "568cf5dce3b0307bc0462060", "name": "my_reference.vcf", "project": "RFranklin/my-project" }, { "href": "", "id": "566aad1de4b0c560b469ea90", "name": "_1_unsorted.bam", "project": "RFranklin/my-project" }, { "href": "", "id": "568cf5f4e4b0307b30462062", "name": "unsorted.bam", "project": "RFranklin/my-project" } ], "links": [] }
https://docs.cancergenomicscloud.org/reference/list-files-secondary-method
2022-09-25T08:00:12
CC-MAIN-2022-40
1664030334515.14
[]
docs.cancergenomicscloud.org
Application feelpp_hm_heat_moisture devlopment As is it coded yet, the application feelpp_hm_heat_moisture can only be used for one of the previous benchmarks at a time (drying-layer, and moisture-uptake) : the values of the parameters are distinguished with macros, and we have to build again if we want to chang e, which is not a good strategy ! The idea is now to adapt it to make it work for all types of problems we want to model, using a JSON file. 1. Complete variationnal problem This system of equations is satisfied on the domain \(\Omega\). We add those boundary counditions : We have \(\partial\Omega=\Gamma_{T,D}\cup\Gamma_{T,N}\), and \(\partial\Omega=\Gamma_{\phi,D}\cup\Gamma_{\phi,N}\). We make the usual time derivate approximation : \(\dfrac{\partial T}{\partial t}\approx\dfrac{T^{n+1}-T^n}{\Delta t}\) Let \(v\) be a test function associated to \(T\). The first equation \(\ref{eq:T}\) become, after making a partial integration : As \(v=0\) on \(\Gamma_{T,D}\) and on \(\Gamma_{\phi,D}\), and also there are the two Neimann conditions, this equation become : with : \(a(T^{n+1},v) = \displaystyle\int_\Omega (\rho C_p)_\mathrm{eff}^n\dfrac{T^{n+1}}{\Delta t}v + \int_\Omega(k_\mathrm{eff}^n\nabla T^{n+1})\nabla v\) \(b(\phi^{n+1},v) = \displaystyle\int_\Omega L_v^n\delta_\mathrm{p}^np_\mathrm{sat}^n\nabla\phi^{n+1}\nabla v + \int_\Omega L_v^n\delta_\mathrm{p}^n\nabla p_\mathrm{sat}^n \phi^{n+1}\nabla v - \int_{\Gamma_{\phi,N}}\left(L_v^n\delta_\mathrm{p}^n\nabla p_\mathrm{sat}^n\phi^{n+1}\cdot\mathbf{n}\right)v\) \(f(v) = \displaystyle\int_\Omega \left((\rho C_p)_\mathrm{eff}^n\dfrac{T^n}{\Delta t} + Q^{n+1}\right)v + \int_{\Gamma_{T,N}} k_\mathrm{eff}^n N_T^{n+1} v + \int_{\Gamma_{\phi,N}} L_v^n\delta_\mathrm{p}^n\nabla p_\mathrm{sat}^n N_\phi^{n+1} v\) Doing the same process with a test function \(q\) associated to \(\phi\), the equation \(\ref{eq:phi}\) become : with : \(d(\phi^{n+1},q) = \displaystyle\int_\Omega\xi^n\frac{\phi^{n+1}}{\Delta t}q + \int_\Omega\xi^nD_\mathrm{w}^n\nabla\phi^{n+1}\nabla q + \int_\Omega\delta_\mathrm{p}^n\nabla\phi^{n+1}p_\mathrm{sat}^n\nabla q + \int_\Omega\delta_\mathrm{p}^n\phi^{n+1}\nabla p_\mathrm{sat}^{n}\nabla q - \int_{\Gamma_{N,\phi}}\left(\delta_\mathrm{p}^n\phi^{n+1}\nabla p_\mathrm{sat}^n\cdot\mathbf{n}\right)q\) \(g(q) = \displaystyle\int_\Omega\left(\xi^n\dfrac{\phi^n}{\Delta t} + G^{n+1}\right)q + \int_{\Gamma_{N,\phi}}\left(\xi^nD_\mathrm{w}^n+\delta_\mathrm{p}^np_\mathrm{sat}^n\right)N_\phi^{n+1}q\) 2. Model properties As explained in the Feel Tutorial Dev, the same set of equation allow solving a lot of different problems. That is why we define a model in Feel. Models are represented by a class ModelProperties which contains sub-classes corresponding to the different sections of a JSON file. For example, I will show how to do it with boundary conditions, but the process is quite similar. This object is stored as a member of the class HeatMoisture, named props_. 3. Example of usage for boundary conditions "BoundaryConditions": { "field": (1) { "BC_type": (2) { "marker": (3) { "expr":"value" (4) } } } The boundary conditions are used in the code at the definition of the forms, in the functions update of the two classes (see section 5.3 of the project report) We distinguich three type of physics used in the model : heat, meaning that we solve only the heat problem moisture, meaning that we solve only the moisture problem heat-moisture, combining the two problems We define the physics in the JSON file in a section Materials, on associated markers, allowing to combine physics in a more complex geometry : "Materials": { "Omega": { "physics":["moisture"], "markers":["Omega"], } } Let’s focus on the implementation in C++ of the boundary conditions. Here is the (compacted) code of the update function in the class Heat. void update( double t, PhiT const& phi, double dt ) { this->a_.zero() ; this->l_.zero() ; for ( auto const& [name, mat] : this->props_.materials() ) (1) { if ( mat.hasPhysics( "heat" ) || mat.hasPhysics( "heat-moisture" ) ) (2) { // don't depend on boundary conditions this->a_ += integrate( _range = markedelements( this->mesh(), mat.meshMarkers() ), _expr = ... ); this->l_ += integrate( _range = markedelements( this->mesh(), mat.meshMarkers() ), _expr = ... ); if ( mat.hasPhysics( "moisture" ) || mat.hasPhysics( "heat-moisture" ) ) (2) { this->l_ += integrate( _range = markedelements( this->mesh(), mat.meshMarkers() ), _expr = ... ); } // depend on the boundary conditions for (auto const& [bcid, bc] : this->props_.boundaryConditions2().flatten()) (3) { if (bcid.type() == "Neumann") { if (bcid.field() == "T") // for heat this->l_ += integrate( _range = markedfaces( this->mesh(), bc.markers() ), _expr = ... ); if ( bcid.field() == "phi" && (mat.hasPhysics( "moisture" ) || mat.hasPhysics( "heat-moisture" )) ) // for moisture this->l_ += integrate( _range = markedfaces( this->mesh(), bc.markers() ), _expr = .... ); } } for (auto const& [bcid, bc] : this->props_.boundaryConditions2().flatten()) (4) { if (bcid.type() == "Dirichlet" && bcid.field() == "T") { auto Ts = expr( bc.expression() ) ; Ts.setParameterValues({"t",t}) ; this->a_ += on( _range=markedfaces(this->mesh(), bc.markers()), _rhs=this->l_, _element=T_, _expr=Ts) ; } } } } } 4. Difficulties encountered With all those modifications, the application work on the first try on the case drying-layer involving only the moisture transport process, but didn’t worked on moisture-uptake : after a few steps, the linear solver of Feel++ didn’t converge, and therefore the results were totally false ! When we try to compare the values of the coefficients got from this simulation and the one got on the application when it was working (without the json file, it correspond to this this state in the history), we see that atfer the first step there are mistakes. The main hypothesis of such error is because we use here expression depending on other expression : for instance, K depends on w, and w depends on phi. There is a fonctionnality in Feel++ allowing to do refactoring, which is still in development : we have to be in the branch features/refactoring of the repository feelpp. And, of course, there were parenthesis mistakes that made the solutions false ! Once corrected, all worked quite correctly ! 5. Documentation of the application 6. Test on several physics We watch here a created case, with no real correspondance in physic. It is a mix between the two previous cases, moisture-uptake and drying-layer. The geometry is represented on figure 1 : on the left domain \(\Omega_M\), the physic moisture of the drying-layer case is applied, and on the right one \(\Omega_{HM}\), the physic heat-moisture is applied. Moreover, we add a Dirichlet condition on the edge \(\Gamma_\mathrm{mid}\) with \(\phi=0.95\). What we see is not relevant in physic, but we figure that the results of the application are coherent.
https://docs.cemosis.fr/ibat/latest/reports/thomas-saigre-tardif/appli.html
2022-09-25T07:53:54
CC-MAIN-2022-40
1664030334515.14
[array(['../_images/thomas-saigre/geo_mult.png', 'geo mult'], dtype=object) array(['../_images/thomas-saigre/res_mult.png', 'res mult'], dtype=object)]
docs.cemosis.fr
Getting Started with the Clear API Learn how to get data in and out of Clear through its RESTful API. This is exactly how APIs work With the Clear API, you can access many objects that you can access through the site itself, such as regions and registrations. Authentication For most endpoints, you will need an API token and secret. If you do not have one, we have provided a public-access token/secret pair with limited capabilities: - Token: 1YZiGaj3baaLU8IKVsASRIWaNF2oJNg0 - Secret: 1COMnWyGnGBsNqkhaZ6WMBWB9UWZw6QZ Permissions For some endpoints, you will also need a special permission, such as admin and internal. These are granted to applications that have access to sensitive data or can perform destructive actions. If you really think you need one of these, ask @tjhorner on slack. API Reference Everything you can do with the Clear API is located at the API reference! Updated about 5 years ago
https://docs.clear.codeday.org/docs
2022-09-25T07:38:17
CC-MAIN-2022-40
1664030334515.14
[array(['https://files.readme.io/6851b5a-funny_illustration.png', 'funny_illustration.png 1000'], dtype=object) array(['https://files.readme.io/6851b5a-funny_illustration.png', 'Click to close... 1000'], dtype=object) ]
docs.clear.codeday.org
Use: Extract a plugin using the shim wrapper - Move the project to an external repo. We recommend preserving the path structure: for example, if your plugin was located at plugins/inputs/cpuin the Telegraf repo, move it to plugins/inputs/cpuin the new repo. - Copy main.go into your project under the cmdfolder. This serves as the entry point to the plugin when run as a stand-alone program. The shim isn’t designed to run multiple plugins at the same time, so include only one plugin per repo. - Edit the main.gofile to import your plugin. For example, _ "github.com/me/my-plugin-telegraf/plugins/inputs/cpu". See an example of where to edit main.gohere. - Add a plugin.conf for configuration specific to your plugin. This config file must be separate from the rest of the config for Telegraf, and must not be in a shared directory with other Telegraf configs. Test and run your plugin - Build the cmd/main.gousing the following command with your plugin name: go build -o plugin-name cmd/main.go - Test the binary: - If you’re building a processor or output, first feed valid metrics in on STDIN. Skip this step if you’re building an input. - Test out the binary by running it (for example, ./project-name -config plugin.conf). Metrics will be written to STDOUT. You might need to hit enter or wait for your poll duration to elapse to see data. Ctrl-Cto end your test. - Configure Telegraf to call your new plugin binary. For an input, this would look something like: [[inputs.execd]] command = ["/path/to/rand", "-config", "/path/to/plugin.conf"] signal = "none" Refer to the execd plugin documentation for more information. Publish your plugin Publishing your plugin to GitHub and open a Pull Request back to the Telegraf repo letting us know about the availability of your external plugin..
https://docs.influxdata.com/telegraf/v1.24/configure_plugins/external_plugins/shim/
2022-09-25T08:24:23
CC-MAIN-2022-40
1664030334515.14
[]
docs.influxdata.com
lbuild module: modm:communication:sab The SAB (**S**ensor **A**ctuator **B**us) is a simple master-slave bus system. It is primarily used to query simple sensors and control actuators inside our robots. One master can communicate with up to 32 slaves. The slaves are only allowed to transmit after a direct request by the master. They may signal an event by an extra IO line, but this depends on the slave. Features: SYNC- Synchronization byte (always 0x54) LENGTH- Length of the payload (without header, command and CRC byte) HEADER- Address of the slave and two flag bits COMMAND- Command code DATA- Up to 32 byte of payload CRC- CRC-8 checksum (iButton) The second bit is always false when the master is transmitting. In the other direction, when the slaves are responding, the second bit has to following meaning: true- Message is an positive response and may contain a payload false- Message signals an error condition and carries only one byte of payload. This byte is an error code. Between different boards CAN transceivers are used. Compared to RS485 the CAN transceivers have the advantage to work without a separate direction input. You can just connected the transceiver directly to the UART of your microcontroller. Within a single PCB, standard digital levels are used (either 0-3,3V or 0-5V) in a multi-drop configuration. Meaning it does not allow multiple drivers but multiple receivers. The idle state of a UART transmission line is high, so standard TTL-AND gates have to be used for bundling transmission lines from multiple slaves. Both approaches can be combined to reduce the number of needed CAN transceivers on a single board. Between two boards you should always use transceivers and therefore differential signaling to improve noise immunity. The signal lines to indicate events by the slave are strict optional, you may or may not use them (if the slave provides them). Define a sab::Action. Example: A complete example is available in the example/sab folder. Error code. Error codes below 0x20 are reserved for the system. Every other code may be used by user.
https://docs.modm.io/develop/api/attiny85v-20su/group__modm__communication__sab.html
2022-09-25T07:16:48
CC-MAIN-2022-40
1664030334515.14
[]
docs.modm.io
Tokens reference Tokens are placeholders in a check definition that the agent replaces with entity information before executing the check. You can use tokens to fine-tune check attributes (like alert thresholds) on a per-entity level while reusing the check definition. When a check is scheduled to be executed by an agent, it first goes through a token substitution step. The agent replaces any tokens with matching attributes from the entity definition, and then the check is executed. Invalid templates or unmatched tokens return an error, which is logged and sent to the Sensu backend message transport. Checks with token-matching errors are not executed. Token substitution is supported for check, hook, and dynamic runtime asset definitions. Only entity attributes are available for substitution. Token substitution is not available for event filters because filters already have access to the entity. Available entity attributes will always have string values, such as labels and annotations. Example: Token substitution for check thresholds This example demonstrates a reusable disk usage check. The check command includes -w (warning) and -c (critical) arguments with default values for the thresholds (as percentages) for generating warning or critical events. The check will compare every subscribed entity’s disk space against the default threshold values to determine whether to generate a warning or critical event. However, the check command also includes token substitution, which means you can add entity labels that correspond to the check command tokens to specify different warning and critical values for individual entities. Instead of creating a different check for every set of thresholds, you can use the same check to apply the defaults in most cases and the token-substituted values for specific entities. Follow this example to set up a reusable check for disk usage: Add the sensu/check-disk-usage dynamic runtime asset, which includes the command you will need for your check: sensuctl asset add sensu/check-disk-usage:0.6.0 You will receive a response to confirm that the asset was added: fetching bonsai asset: sensu/check-disk-usage:0.6.0 added asset: sensu/check-disk-usage:0.6.0 You have successfully added the Sensu asset resource, but the asset will not get downloaded until it's invoked by another Sensu resource (ex. check). To add this runtime asset to the appropriate resource, populate the "runtime_assets" field with ["sensu/check-disk-usage]. Create the check-disk-usagecheck: This check will run on every entity with the subscription system. According to the default values in the command, the check will generate a warning event at 80% disk usage and a critical event at 90% disk usage. To receive alerts at different thresholds for an existing entity with the systemsubscription, add disk_warningand disk_criticallabels to the entity. Use sensuctl to open an existing entity in a text editor: sensuctl edit entity ENTITY_NAME And add the following labels in the entity metadata: labels: disk_warning: "65" disk_critical: "75" After you save your changes, the check-disk-usage check will substitute the disk_warning and disk_critical label values to generate events at 65% and 75% of disk usage, respectively, for this entity only. The check will continue to use the 80% and 90% default values for other subscribed entities. Add a hook that uses token substitution Now you have a resusable check that will send disk usage alerts at default or entity-specific thresholds. You may want to add a hook to list more details about disk usage for warning and critical events. The hook in this example will list disk usage in human-readable format, with error messages filtered from the hook output. By default, the hook will list details for the top directory and the first layer of subdirectories. As with the check-disk-usage check, you can add a disk_usage_root label to individual entities to specify a different directory for the hook via token substitution. Add the hook definition: cat << EOF | sensuctl create --- type: HookConfig api_version: core/v2 metadata: name: disk_usage_details spec: command: du -h --max-depth=1 -c {{index .labels "disk_usage_root" | default "/"}} 2>/dev/null runtime_assets: null stdin: false timeout: 60 EOF cat << EOF | sensuctl create { "type": "HookConfig", "api_version": "core/v2", "metadata": { "name": "disk_usage_details" }, "spec": { "command": "du -h --max-depth=1 -c {{index .labels "disk_usage_root" | default \"/\"}} 2>/dev/null", "runtime_assets": null, "stdin": false, "timeout": 60 } } EOF Add the hook to the check-disk-usagecheck. Use sensuctl to open the check in a text editor: sensuctl edit check check-disk-usage Update the check definition to include the disk_usage_detailshook for non-zeroevents: check_hooks: - non-zero: - disk_usage_details As with the disk usage check command, the hook command includes a token substitution option. To use a specific directory instead of the default for specific entities, edit the entity definition to add a disk_usage_rootlabel and specify the directory: Use sensuctl to open the entity in a text editor: sensuctl edit entity ENTITY_NAME Add the disk_usage_rootlabel with the desired substitute directory in the entity metadata: labels: disk_usage_root: "/substitute-directory" After you save your changes, for this entity, the hook will substitute the directory you specified for the disk_usage_root label to provide additional disk usage details for every non-zero event the check-disk-usage check generates. Manage entity labels You can use token substitution with any defined entity attributes, including custom labels. read the entities reference for information about managing entity labels for proxy entities and agent entities. Manage dynamic runtime assets You can use token substitution in the URLs of your your dynamic runtime asset definitions. Token substitution allows you to host your dynamic runtime assets at different URLs (such as at different datacenters) without duplicating your assets, as shown in the following example: ---" } ] } } With this asset definition, which includes the .labels.asset_url token substitution, checks and hooks can include sensu-go-hello-world as a dynamic runtime assets and Sensu Go will use the token substitution for the agent’s entity. Handlers and mutators can also include sensu-go-hello-world as a dynamic runtime asset, but Sensu Go will use the token subtitution for the backend’s entity instead of the agent’s entity. You can also use token substitution to customize dynamic runtime asset headers (for example, to include secure information for authentication). Sensu also provides an assetPath function that allows you to substitute a dynamic runtime asset’s local path on disk. NOTE: To maintain security, you cannot use token substitution for a dynamic runtime asset’s SHA512 value. Token specification Sensu Go uses the Go template package to implement token substitution. Use double curly braces around the token and a dot before the attribute to be substituted: {{ .system.hostname }}. Token substitution syntax Tokens are invoked by wrapping references to entity attributes and labels with double curly braces, such as {{ .name }} to substitute an entity’s name. Access nested Sensu entity attributes with dot notation (for example, system.arch). {{ .name }}would be replaced with the entity nameattribute {{ .labels.url }}would be replaced with a custom label called url {{ .labels.disk_warning }}would be replaced with a custom label called disk_warning {{ index .labels "disk_warning" }}would be replaced with a custom label called disk_warning {{ index .labels "cpu.threshold" }}would be replaced with a custom label called cpu.threshold NOTE: When an annotation or label name has a dot (for example, cpu.threshold), you must use the template index function syntax to ensure correct processing because the dot notation is also used for object nesting. Token substitution default values If an attribute is not provided by the entity, a token’s default value will be substituted. Token default values are separated by a pipe character and the word “default” ( | default). Use token default values to provide a fallback value for entities that are missing a specified token attribute. For example, {{.labels.url | default ""}} would be replaced with a custom label called url. If no such attribute called url is included in the entity definition, the default (or fallback) value of will be used to substitute the token. Token substitution with quoted strings You can escape quotes to express quoted strings in token substitution templates as shown in the Go template package examples. For example, to provide "substitution" as a default value for entities that are missing the website attribute (including the quotation marks): {{ .labels.website | default "\"substitution\"" }} Unmatched tokens If a token is unmatched during check preparation, the agent check handler will return an error, and the check will not be executed. Unmatched token errors are similar to this example: error: unmatched token: template: :1:22: executing "" at <.system.hostname>: map has no entry for key "System" Check config token errors are logged by the agent and sent to Sensu backend message transport as check failures. Token data type limitations As part of the substitution process, Sensu converts all tokens to strings. This means that token substitution cannot be applied to any non-string values like numbers or Booleans, although it can be applied to strings that are nested inside objects and arrays. For example, token substitution cannot be used for specifying a check interval because the interval attribute requires an integer value. Token substitution can be used for alerting thresholds because those values are included within the command string.
https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-schedule/tokens/
2022-09-25T08:46:56
CC-MAIN-2022-40
1664030334515.14
[]
docs.sensu.io
Create a basic chart In this example you compare the counts of user actions by calculating information about the actions customers have taken on the online store website. - The number of times each product is viewed - The number of times each product is added to the cart - The number of times each product is purchased Prerequisite This example requires the productName field from the Enabling field lookups section. You must complete all of those steps before continuing with this section. If you do not configure the field lookups, the searches will not produce the correct results. Steps - Start a new search. - Set the time range to All time. - Run the followingcommand to count the number of events that are action=purchaseand action=addtocart. The search then uses the renamecommand to rename the fields that appear in the results. The chartcommand is a transforming command. The results of the search appear on the Statistics tab. - Click the Visualization tab. The search results appear in a Pie chart. - Change the display to a Column chart. Next step Create an overlay chart and explore visualization options See also chart command in the Search Reference rename command in the Search Reference Transforming commands!
https://docs.splunk.com/Documentation/Splunk/8.0.4/SearchTutorial/Basicchart
2022-09-25T08:41:29
CC-MAIN-2022-40
1664030334515.14
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
IEnchantment Link to ienchantment An IEnchantment essentially is an IEnchantmentDefinition and an enchantment level. Importing the package Link to importing-the-package It might be required for you to import the package if you encounter any issues (like casting an Array), so better be safe than sorry and add the import. import crafttweaker.enchantments.IEnchantment; ZenGetters/ZenSetters Link to zengetterszensetters ZenMethods Link to zenmethods Retrieve the Enchantment as NBT Link to retrieve-the-enchantment-as-nbt You might want to get the Enchantment's NBT-Tag. You can either cast it as IData or use the method: ZenScriptCopy ench.makeTag(); ench as crafttweaker.data.IData;
https://docs.blamejared.com/1.12/en/Vanilla/Enchantments/IEnchantment
2022-09-25T09:15:25
CC-MAIN-2022-40
1664030334515.14
[]
docs.blamejared.com
RenderTick Link to rendertick The RenderTick Event is fired for every render tick on the client. Event Class Link to event-class You will need to cast the event in the function header as this class: crafttweaker.event.RenderTickEvent You can, of course, also import the class before and use that name then. Event interface extensions Link to event-interface-extensions WorldTick Events implement the following interfaces and are able to call all of their methods/getters/setters as well: ZenGetters Link to zengetters The following additional information can be retrieved from the event:
https://docs.blamejared.com/1.12/en/Vanilla/Events/Events/RenderTick
2022-09-25T07:14:41
CC-MAIN-2022-40
1664030334515.14
[]
docs.blamejared.com
Agents in Zendesk use the Zendesk Time tracking app to record time they spend on a ticket. "Zendesk Ticket Time Tracking to NetSuite Support Case Time Tracking" data flow syncs Zendesk ticket - time tracking information to NetSuite. With this information you can track how much time an Agent has spent on a ticket. The Zendesk Ticket only time tracking information recorded by the Zendesk Time tracking app is supported. Other time tracking apps are not supported. The following screens depict the data flow: 1. Update Time Tracking in Zendesk. 2. The record is updated. In the Integrator.io data flow settings section, ensure that the data flow is turned on. The real time flow is triggered automatically (when the Zendesk record is saved). The status of the data flow is visible in the dashboard. The record is now updated in NetSuite. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/230011607-Zendesk-Ticket-Time-Tracking-To-NetSuite-Support-Case-Time-Tracking
2022-09-25T07:37:00
CC-MAIN-2022-40
1664030334515.14
[array(['/hc/en-us/article_attachments/214810087/2016-10-04_14-42-50a.png', None], dtype=object) array(['/hc/en-us/article_attachments/214732688/2016-10-04_14-43-20a.png', None], dtype=object) array(['/hc/article_attachments/360093164612/TS_flows.png', 'TS_flows.png'], dtype=object) array(['/hc/en-us/article_attachments/214810167/2016-10-04_14-46-34a.png', None], dtype=object) ]
docs.celigo.com
Sub-Document Operations with the Java SDK Sub-Document operations can be used to efficiently access and change parts of Path syntax. Considering the document: { "title": "Ayr (Scotland)", "name": "Enterkine House Hotel", "address": "by Annbank. Ayrshire", "directions": "5 miles off A77, follow B742 to Mossblown then Annbank", "phone": "+44 1292 520580", "tollfree": null, "email": null, "fax": null, "url": "", "checkin": "2.00pm", "checkout": "11am", "price": "from £100", "geo": { "lat": 55.48034590743372, "lon": -4.51612114906311, "accuracy": "ROOFTOP" }, "type": "hotel", "id": 1368, "country": "United Kingdom", "city": "South Ayrshire", "state": null, "reviews": [], "public_likes": ["], "vacancy": true, "description": "four star country house hotel situated in 350 acres of woodland estate yet only 10 mins from Prestwick ,Ayr and Troon. Award winning food by Paul Moffat and team", "alias": null, "pets_ok": false, "free_breakfast": true, "free_internet": false, "free_parking": false } The paths name, geo.lat and public_likes[0] are all valid paths. Retrieving The lookupIn operations query the document for certain path(s); these path(s) are then returned. You have a choice of actually retrieving the document path using the get Sub-Document operation, or simply querying the existence of the path using the exists Sub-Document operation. The latter saves even more bandwidth by not retrieving the contents of the path if it is not needed. The examples use the following imports: import static com.couchbase.client.java.kv.LookupInSpec.exists; import static com.couchbase.client.java.kv.LookupInSpec.get; import static com.couchbase.client.java.kv.MutateInOptions.mutateInOptions; import static com.couchbase.client.java.kv.MutateInSpec.arrayAddUnique; import static com.couchbase.client.java.kv.MutateInSpec.arrayAppend; import static com.couchbase.client.java.kv.MutateInSpec.arrayInsert; import static com.couchbase.client.java.kv.MutateInSpec.arrayPrepend; import static com.couchbase.client.java.kv.MutateInSpec.decrement; import static com.couchbase.client.java.kv.MutateInSpec.increment; import static com.couchbase.client.java.kv.MutateInSpec.insert; import static com.couchbase.client.java.kv.MutateInSpec.remove; import static com.couchbase.client.java.kv.MutateInSpec.upsert; import java.time.Duration; import java.util.Arrays; import java.util.Collections; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ExecutionException; import com.couchbase.client.core.error.CasMismatchException; import com.couchbase.client.core.error.DurabilityImpossibleException; import com.couchbase.client.core.error.subdoc.PathExistsException; import com.couchbase.client.core.error.subdoc.PathNotFoundException; import com.couchbase.client.core.msg.kv.DurabilityLevel; import com.couchbase.client.java.Bucket; import com.couchbase.client.java.Cluster; import com.couchbase.client.java.Collection; import com.couchbase.client.java.Scope; import com.couchbase.client.java.json.JsonArray; import com.couchbase.client.java.json.JsonObject; import com.couchbase.client.java.kv.GetResult; import com.couchbase.client.java.kv.LookupInResult; import com.couchbase.client.java.kv.MutateInResult; import com.couchbase.client.java.kv.MutateInSpec; import com.couchbase.client.java.kv.MutationResult; import com.couchbase.client.java.kv.PersistTo; import com.couchbase.client.java.kv.ReplicateTo; import reactor.core.publisher.Mono; LookupInResult result = collection.lookupIn("hotel_1368", Collections.singletonList(get("geo.lat"))); String str = result.contentAs(0, String.class); System.out.println("getFunc: Latitude = " + str); try { LookupInResult result = collection.lookupIn("hotel_1368", Collections.singletonList(exists("address.does_not_exist"))); } catch (PathNotFoundException e) { System.out.println("existsFunc: " + e); } Multiple operations can be combined: try { LookupInResult result = collection.lookupIn("hotel_1368", Arrays.asList(get("geo.lat"), exists("address.does_not_exist"))); } catch (PathNotFoundException e) { System.out.println("combine: " + e); } Choosing an API The Java SDK provides three APIs for all operations. There’s the simple blocking one you’ve already seen, then this asynchronous variant that returns Java CompletableFuture: CompletableFuture<LookupInResult> future = collection.async().lookupIn("hotel_1368", Collections.singletonList(get("geo.lat"))); try { LookupInResult result = future.get(); System.out.println("future: Latitude: " + result.contentAs(0, Number.class)); } catch (InterruptedException | ExecutionException e) { e.printStackTrace(); } And a third that uses reactive programming primitives from Project Reactor: Mono<LookupInResult> mono = collection.reactive().lookupIn("hotel_1368", Collections.singletonList(get("geo.lat"))); //("hotel_1368", Arrays.asList(upsert("email", "[email protected]"))); Likewise, the insert operation will only add the new value to the path if it does not exist: try { collection.mutateIn("hotel_1368", Collections.singletonList(insert("alt_email", "[email protected]"))); } catch (PathExistsException err) { System.out.println("insertFunc: exception caught, path already exists"); } Dictionary values can also be replaced or removed, and you may combine any number of mutation operations within the same general mutateIn API. Here’s an example of one which replaces one path and removes another. collection.mutateIn("hotel_1368", Arrays.asList(remove("tz"), insert("alt_email", "hotel84: MutationResult result = collection.mutateIn("hotel_1368", Collections.singletonList(arrayAppend("public_likes", Collections.singletonList("Mike Rutherford")))); /* public_likes is now: ["] */ MutationResult result = collection.mutateIn("hotel_1368", Collections.singletonList(arrayPrepend("public_likes", Collections.singletonList("John Smith")))); /* public_likes is now: ["John Smith", "] */ If your document only needs to contain an array, you do not have to create a top-level object wrapper to contain it. Simply initialize the document with an empty array and then use the empty path for subsequent Sub-Document array operations:: MutateInResult result = collection.mutateIn("hotel_14225",("hotel_14226", Collections.singletonList(arrayAddUnique("unique", 95))); try { collection.mutateIn("hotel_14226", Collections.singletonList(arrayAddUnique("unique", 95))); throw new RuntimeException("should have thrown PathExistsException"); } catch (PathExistsException err) { System.out.println("arrayUnique: caught exception, path already exists"); } Note that currently the arrayAddUnique will fail with a PathMismatchException if the array contains JSON floats, objects, or arrays. The arrayAddUnique operation will also fail with CannotInsertValueException: MutateInResult result = collection.mutateIn("hotel_1501", Collections.singletonList(arrayInsert("foo[1]", Collections.singletonList( increment and decrement full-document operations: MutateInResult result = collection.mutateIn("hotel_1368", Collections.singletonList(increment("logins", 1))); // Counter operations return the updated count Long count = result.contentAs(0, Long.class); The increment and decrement operations perform simple arithmetic against a numeric value. The updated value is returned. MutateInResult result = collection.mutateIn("hotel_1368", Collections.singletonList(decrement("logouts", 150))); // Counter operations return the updated count Long count = result.contentAs(0, Long.class);In or mutateIn command, the server will execute all the operations with the same version of the document. When submitting multiple mutation operations within a single mutateIn command, those operations are considered to be part of a single transaction: if any of the mutation operations fail, the server will logically roll-back any other mutation operations performed within the mutateIn, even if those commands would have been successful had another command not failed. When submitting multiple retrieval operations within a single lookupIn command, the status of each command does not affect any other command. This means that it is possible for some retrieval operations to succeed and others to fail. While their statuses are independent of each other, you should note that operations submitted within a single lookupIn are all executed against the same version of the document. Creating Paths Sub-Document mutation operations such as upsert orPath option may be used. MutateInResult result = collection.mutateIn("hotel_1368", thread1 = new Thread() { public void run() { collection.mutateIn("hotel_1501", Collections.singletonList(arrayAppend("foo", Collections.singletonList(99)))); } }; Thread thread2 = new Thread() { public void run() { collection.mutateIn("hotel_1501", Collections.singletonList(arrayAppend("foo", Collections.singletonList(101)))); } }; thread1.start(); thread2.start(); Even when modifying the same part of the document, operations will not necessarily conflict. For example, two concurrent arrayAppend operations to the same array will both succeed, never overwriting the other. So in some cases the application will not need to supply a CAS value to protect against concurrent modifications. If CAS is required then it can be provided like this: GetResult doc = collection.get("hotel_1368"); MutationResult result = collection.mutateIn("hotel_1368", Collections.singletonList(decrement("logouts", 150)), mutateInOptions().cas(doc.cas())); Durability Couchbase’s traditional 'client verified' durability, using PersistTo and ReplicateTo, is still available, particularly for talking to Couchbase Server 7.0 and earlier: MutationResult result = collection.mutateIn("hotel_1368", Collections.singletonList(MutateInSpec.upsert("foo", "bar")), mutateInOptions().durability(PersistTo.ACTIVE, ReplicateTo.ONE)); In Couchbase Server 7.0 and up, this is built upon with Durable Writes, which uses the concept of majority to indicate the number of configured Data Service nodes to which commitment is required: MutationResult result = collection.mutateIn("hotel_1368", Collections.singletonList(MutateInSpec.upsert().. JsonObject docContent = JsonObject.create().put("body", "value"); collection.mutateIn("hotel_14006", Arrays.asList(MutateInSpec.upsert("foo", "bar").xattr().createPath(), MutateInSpec.replace("", docContent))); The full document can be replaced using the Sub-Doc API. In the above snippet, the full document is replaced, whilst xattrs are updated with the same command. The empty "" in MutateInSpec.replace("", docContent) represents the full document.
https://docs.couchbase.com/java-sdk/current/howtos/subdocument-operations.html
2022-09-25T08:38:47
CC-MAIN-2022-40
1664030334515.14
[]
docs.couchbase.com
Remote Code Editing with Visual Studio Code and other editors What we'll cover In this tutorial, you will learn how to easily link your robot’s code directories into your local machine over SSH where local updates are securely sent to the robot instantly. Instead of using Remote Desktop, you can also use Freedom's remote SSH feature to edit code on your robot using VScode or PyCharm Professional. Since it's not forwarding graphics, the connection is much faster and so it feels like you're editing code locally! That's also useful for fixing a bug in your code while your robot is doing work outside and only has an LTE connection. Even when your robot is in the office, with DHCP its IP address might change every other week or so. In this tutorial, we'll set it up for the free open-source code editor VScode, other editors with SSH follow a similar configuration. Setup steps Install the VScode Remote SSH Extension Installing the extension only takes a few moments. The VScode docs do a good job walking you through it. Modifying your local SSH configuration file VScode uses your local ~/.ssh/config (location might differ for different Windows and Mac) to connect to the remote host and makes it easy to edit it from within the application. In the bottom left, click on the green 'Open a remote window' button and click 'Remote-SSH: Open Configuration File'. Finding the SSH config file If you haven't used ssh before, the config file might not exist yet. Create the file yourself following this guide. The instructions are similar for Windows and MacOS. Copy the following default into that file: Host myRobot User userNameOnYourRobot Port 12345 # get this number from the remote ssh page Hostname tunnel.freedomrobotics.ai # this might be different for you too! ServerAliveInterval 30 # optional: keep the connection alive longer on inactivity [min] Create an SSH tunnel to your robot While you can fill in the IP address of the robot if you're in the same Local Area Network network, that IP address can change and doesn't work if you're remote or connected to a different network connection. Instead, log in to Freedom Robotics, select the device you've installed on and 'enable remote ssh' in SETTINGS> REMOTE SSH. Once you've enabled SSH, you should see something like ssh [email protected] -p 12345. Use those values to populate the config file you opened earlier in VScode. On the 'Remote Explorer' icon on the sidebar on the left and you should see the entry you just added to the config file! Click on the small icon 'Connect to Host in new Window' next to it. It will open a new window and ask you for the robot's password. Once you're in, click the file explorer button on the top left. Pick a folder you want to open, and start coding! Editing code on a robot hundreds of miles away using VScode and Freedom remote SSH. Tunnel expires! Note that this tunnel is temporary and will expire if not used for a while for security reasons. If you disconnect for a longer time or reboot the robot, enable remote ssh again from the app to get the new credentials. That's it! I typically use VScode alongside a terminal window that's ssh'ed in as well. Since you've set up your config file, you can now SSH in using ssh myRobot! Updated over 1 year ago
https://docs.freedomrobotics.ai/docs/remote-code-editing-with-vscode
2022-09-25T07:26:58
CC-MAIN-2022-40
1664030334515.14
[array(['https://files.readme.io/dc263a2-remote-status-bar.png', 'remote-status-bar.png 370'], dtype=object) array(['https://files.readme.io/dc263a2-remote-status-bar.png', 'Click to close... 370'], dtype=object) array(['https://files.readme.io/0ea5d72-Screen_Shot_2020-07-23_at_12.28.44_PM.png', 'Screen Shot 2020-07-23 at 12.28.44 PM.png 550'], dtype=object) array(['https://files.readme.io/0ea5d72-Screen_Shot_2020-07-23_at_12.28.44_PM.png', 'Click to close... 550'], dtype=object) array(['https://files.readme.io/345929f-enable_remote_ssh.png', 'enable remote ssh.png 779'], dtype=object) array(['https://files.readme.io/345929f-enable_remote_ssh.png', 'Click to close... 779'], dtype=object) array(['https://files.readme.io/75382d7-Screen_Shot_2020-07-23_at_12.55.04_PM.png', 'Screen Shot 2020-07-23 at 12.55.04 PM.png 1608'], dtype=object) array(['https://files.readme.io/75382d7-Screen_Shot_2020-07-23_at_12.55.04_PM.png', 'Click to close... 1608'], dtype=object) ]
docs.freedomrobotics.ai
Breaking changes in 5.4 Versions 5.4.0 and forwards require SQL Server 2016 for on-premise Core deployments (UI + Log service) Updating from pre 5.3 versions: API Trigger query parameters are now typed when passed to the Process, this might affect Processes which expected them to be strings. Upcoming breaking changes in 5.5 Support for Service Bus for Windows Server is removed, when updating to 5.5 switching to Rabbit for on-premise deployments in mandatory. 5.4.4 3rd of December 2021 This service release contains the following changes/fixes: General Added support for managed hosting Frends infrastructure in City Cloud in Sweden Added support for OpenStack Ceph as Object Storage Added support for Agent Group specific RabbitMQ users that the UI can create Managed cloud Agents are not currently supported for tenants deployed in City Cloud Process Bug: Local Subprocesses are now executed concurrently (requires Process recompiling) Bug: Process Task parameters/results with JSON 'undefined' values are now logged as 'null' instead because 'undefined' is not valid JSON and will break the UI JavaScript UI It is now possible to choose which API Processes are deployed when deploying an API UI will no longer send configuration messages to out-of-date (different major or minor version) Agents to prevent errors. Dashboard's error list now shows the timestamp in the browser's timezone Bug: Process Instance List view no longer shows Development Agent Group's instances instead of the intended Agent Group's instances in some rare cases Bug: Deleted Agent Group's cleanup settings are now removed correctly Bug: It is now possible to remove the last Tag from a Process Bug: Task page no longer fails to get updated Task information from the official Task feed when there's a large amount of Tasks installed Agent Bug: Shared state 'Try remove' action no longer throws an error when it succeeds Bug: Cross-platform Agent no longer logs Trigger parameters when 'Log trigger parameters and result' is disabled Bug: API Key rate limiting now correctly refreshes quota Bug: Cross-platform Agent now reports Triggers in use correctly when running in limited mode 5.4.3 30th of August 2021 This service release contains the following changes/fixes: UI Environment Variable import now supports importing secrets It is now possible to disable/enable Monitoring rules Process variable names are now validated more rigorously Bug: Process editor no longer gets stuck when viewing old Process versions with Firefox Bug: Clicking on Task parameter tab on old Process version with Firefox no longer navigates to the Dashboard Bug: Process editor’s Task update button now updates the parameter schema correctly Bug: Improved code generation to fix building Processes targeting .NET Standard which would sometimes fail due to differences between compilers Agent Canceled Processes are shown more clearly and distinguished from canceled or timed out Tasks Bug: Agent no longer crashes when reporting File Watch Trigger initialization failure Bug: Local subprocesses are now correctly cancelled when the main Process cancels 5.4.2 25th of May 2021 This service release contains the following changes/fixes: UI Bug: Index used for Process instance deletion modified to fix slow instance cleanup Bug: Fixed slow database migration when there's a huge number of Process instances It is now possible to view Environment variable values in the Process editor Agent Bug: Cross-platform Agent now correctly logs request and response values 5.4.1 15th of April 2021 This service release contains the following changes/fixes: UI Bug: Deleting manual trigger parameter no longer misplaces description Bug: Deleting an element from a list type Environment Variable no longer always deletes the last element of the list Bug: Monitoring rules now support ending at midnight Process log retention has a default upper limit of 60 days which can altered by Frends support Log everything is disabled for new environments by default, it can be enabled by Frends support Cross-platform Agent auto-sync is less aggressive, it only sends configurations if the Agent has processed all previous configuration messages Bug: Process instance list filtering by promoted variables was done in memory instead of database Agent Bug: Fixed an issue where the Gateway Agent would not start Failing to cleanup temporary Process setup files does not fail the Process deployment any more 5.4.0 New feature: Shared state cache Agents inside of an Agent Group now have a shared cache they can access. This cache is stored in the shared database of the Agent Group and allows storing and fetching key-value based data with a time-to-live. New feature:. New feature: Note: This is the last version that will support editing old frends 4.2 style Processes, the next version (frends 5.5), they will no longer be supported. If you have old 4.2 style Processes, consider converting them to the new format.
https://docs.frends.com/en/articles/4953793-5-4-release-notes
2022-09-25T08:04:11
CC-MAIN-2022-40
1664030334515.14
[]
docs.frends.com
coreboot building Intro This documents describes the procedure for compiling coreboot for NovaCustom NV4X. Requirements - Docker - Git Procedure The easiest way to build coreboot is to use the official Docker image. Obtain the image: docker pull coreboot/coreboot-sdk:0ad5fbd48d Clone the coreboot repository: git clone Navigate to the source code directory and checkout to the desired revision: Replace the REVISION with one of the: - novacustom_nv4x/releasefor the latest released version - novacustom_nv4x_vVERSION(e.g. v1.2.1) for the given release cd coreboot git remote add dasharo git submodule update --init --recursive --checkout git fetch dasharo git checkout REVISION Build the firmware: ./build.sh build The resulting coreboot image will be placed in artifacts/dasharo_novacustom_nv4x_VERSION.rom. Warning: Do not run ./build.sh as root. This command uses docker and should be executed as your current user. If you're having trouble running build.sh on your user account, follow the Docker instructions outlined in Requirements.
https://docs.dasharo.com/variants/novacustom_nv4x/building/
2022-09-25T07:45:26
CC-MAIN-2022-40
1664030334515.14
[]
docs.dasharo.com
toctoc Outbound InterceptionOutbound Interception Automatically decrypt sensitive data after it leaves your app and before it reaches your trusted destination. By including our Node.js SDK or Python SDK, we will automatically route all requests from your backend to third-party APIs through the Evervault edge network and decrypt any fields that we detect are encrypted. This means that fields can be encrypted before they reach your backend, stored in your database and sent to third-party APIs without writing any logic for decryption, or worrying about storing the data in a secure way. How does Outbound Interception work?How does Outbound Interception work? Relay can be used to pass data to third-party services and APIs using the Relay HTTP CONNECT Proxy on relay.evervault.com:443. Relay intercepts outbound allow you to automatically forward all requests to Relay with the Proxy-Authorization header included and the Relay Root CA trusted. No additional configuration is required. Test with curlTest with curl Send an encrypted string outbound through Relay without integrating an SDK: curl -x <your destination url> -H 'Content-Type: application/json' -H 'Proxy-Authorization: <your Team's api key>' -X POST -d '{ "key": "<an Evervault encrypted string>"}' -kv Strict ModeStrict Mode When your app sends an outbound request through Relay, any encrypted fields in the payload will be decrypted. It's important that these requests only go to destinations you trust. With Strict Mode on, only requests to destinations you have chosen will be allowed through Relay. If a request is sent to non-trusted destination, Relay will respond with a 403 HTTP status and Evervault error header: x-evervault-error-code: forbidden-destination. You can configure in the Dashboard what traffic should be allowed through Relay by going to Settings -> Strict Mode.
https://docs.evervault.com/concepts/relay/outbound-interception
2022-09-25T07:53:59
CC-MAIN-2022-40
1664030334515.14
[]
docs.evervault.com
): with \(N = S + I + R\) and \(\mu\) is the birth rate and \(\nu\) \(\frac{\beta}{\gamma}\): In EMOD, the state changes at fixed time steps. The size of the time step, denoted \(\delta\), is selected to be small compared to the characteristic timescale of the disease dynamics. By default, \(\delta\) =, \(\lambda = \frac{\beta I}{N}\). Expose: Each susceptible individual becomes infected with probability \(p_i = 1-\mathrm{e}^{\lambda\delta}\), where \(\delta\), \(p_r = 1 - \text{exp}(-\gamma\delta)\). Birth: For birthrate \(\mu\), the number of new susceptible individuals will be Poisson distributed with rate \(p_b = \mu N\delta\). Natural Death: The probability of death for each individual is \(p_d = 1 - \text{exp}(-\nu\delta)\). Putting these pieces together over the course of a time step, let: With these numbers in mind and assuming that only one state transition event happens to each individual in a time step:
https://docs.idmod.org/projects/emod-vector/en/2.21_a/model-compartments.html
2022-09-25T07:40:06
CC-MAIN-2022-40
1664030334515.14
[]
docs.idmod.org
Next: Inheritance, Previous: Properties, Up: classdef Classes [Contents][Index] All class methods must be defined within methods blocks. An exception to this rule is described at the end of this subsection. Those methods blocks can have additional attributes specifying the access rights or whether the methods are static, i.e., methods that can be called without creating an object of that class. classdef some_class methods function obj = some_class () disp ("New instance created."); endfunction function disp (obj) disp ("Here is some_class."); endfunction endmethods methods (Access = mode) function r = func (obj, r) r = 2 * r; endfunction endmethods methods (Static = true) function c = circumference (radius) c = 2 * pi () .* radius; endfunction endmethods endclassdef The constructor of the class is declared in the methods block and must have the same name as the class and exactly one output argument which is an object of its class. It is also possible to overload built-in or inherited methods, like the disp function in the example above to tell Octave how objects of some_class should be displayed (see Class Methods). In general, the first argument in a method definition is always the object that it is called from. Class methods can either be called by passing the object as the first argument to that method or by calling the object followed by a dot (" .") and the method’s name with subsequent arguments: >> obj = some_class (); New instance created. >> disp (obj); # both are >> obj.disp (); # equal In some_class, the method func is defined within a methods block setting the Access attribute to mode, which is one of: public The methods can be accessed from everywhere. private The methods can only be accessed from other class methods. Subclasses of that class cannot access them. protected The methods can only be accessed from other class methods and from subclasses of that class. The default access for methods is public. Finally, the method circumference is defined in a static methods block and can be used without creating an object of some_class. This is useful for methods, that do not depend on any class properties. The class name and the name of the static method, separated by a dot (" ."), call this static method. In contrast to non-static methods, the object is not passed as first argument even if called using an object of some_class. >> some_class.circumference (3) ⇒ ans = 18.850 >> obj = some_class (); New instance created. >> obj.circumference (3) ⇒ ans = 18.850 Additionally, class methods can be defined as functions in a folder of the same name as the class prepended with the ‘@’ symbol (see Creating a Class). The main classdef file has to be stored in this class folder as well. Next: Inheritance, Previous: Properties, Up: classdef Classes [Contents][Index]
https://docs.octave.org/v5.1.0/Methods.html
2022-09-25T08:22:59
CC-MAIN-2022-40
1664030334515.14
[]
docs.octave.org
Configures the scheduled RX algorithm. #include < rail_types.h> Configures the scheduled RX algorithm. Defines the start and end times of the receive window created for a scheduled receive. If either start or end times are disabled, they will be ignored. Definition at line 3047 of file rail_types.h. Field Documentation ◆ end The time to end receive. See endMode for more information about the types of end times you can specify. Definition at line 3065 of file rail_types.h. ◆ endMode How to interpret the time value specified in the end parameter. See the RAIL_TimeMode_t documentation for more information. Note that, in this API, if you specify a RAIL_TIME_DELAY, it is relative to the start time if given and relative to now if none is specified. Also, using RAIL_TIME_DISABLED means that this window will not end unless you explicitly call RAIL_Idle() or add an end event through a future update to this configuration. Definition at line 3075 of file rail_types.h. ◆ hardWindowEnd This setting tells RAIL what to do with a packet being received when the window end event occurs. If set to 0, such a packet will be allowed to complete. Any other setting will cause that packet to be aborted. In either situation, any posting of RAIL_EVENT_RX_SCHEDULED_RX_END is deferred briefly to when the packet's corresponding RAIL_EVENTS_RX_COMPLETION occurs. Definition at line 3104 of file rail_types.h. ◆ rxTransitionEndSchedule While in scheduled RX, you can still control the radio state via state transitions. This option configures whether a transition to RX goes back to scheduled RX or to the normal RX state. Once in the normal RX state, you will effectively end the scheduled RX window and can continue to receive indefinitely depending on the state transitions. Set to 1 to transition to normal RX and 0 to stay in the scheduled RX. This setting also influences the posting of RAIL_EVENT_RX_SCHEDULED_RX_END when the scheduled Rx window is implicitly ended by a packet receive (any of the RAIL_EVENTS_RX_COMPLETION events). See that event for details. - Note - An Rx transition to Idle state will always terminate the scheduled Rx window, regardless of this setting. This can be used to ensure Scheduled RX terminates on the first packet received (or first successful packet if the RX error transition is to Rx while the Rx success transition is to Idle). Definition at line 3095 of file rail_types.h. ◆ start The time to start receive. See startMode for more information about the types of start times that you can specify. Definition at line 3052 of file rail_types.h. ◆ startMode How to interpret the time value specified in the start parameter. See the RAIL_TimeMode_t documentation for more information. Use RAIL_TIME_ABSOLUTE for absolute times, RAIL_TIME_DELAY for times relative to the current time and RAIL_TIME_DISABLED to ignore the start time. Definition at line 3060 of file rail_types.h. The documentation for this struct was generated from the following file: - common/ rail_types.h
https://docs.silabs.com/rail/2.12/struct-r-a-i-l-schedule-rx-config-t
2022-09-25T09:14:58
CC-MAIN-2022-40
1664030334515.14
[]
docs.silabs.com
Version Notes¶ These notes will only include major changes. 0.13¶ Overhauled micro moment API Product-specific demographics Passthrough calculations Added problem results methods to simulation results Profit Hessian computation Checks of pricing second order conditions Newton-based methods for computing equilibrium prices Large speedups for supply-side and micro moment derivatives Universal display for fixed point iteration progress Support adjusting for simulation error in moment covariances 0.12¶ Refactored micro moment API Custom micro moments Properly scale micro moment covariances Pickling support 0.11¶ Elasticities and diversion ratios with respect to mean utility Willingness to pay calculations 0.10¶ Simplify micro moment API Second choice or diversion micro moments Add share clipping to make fixed point more robust Report covariance matrix estimates in addition to Cholesky root Approximation to the pure characteristics model Add option to always use finite differences 0.9¶ More control over matrices of instruments Split off fixed effect absorption into companion package PyHDFE Scrambled Halton and Modified Latin Hypercube Sampling (MLHS) integration Importance sampling Quantity dependent marginal costs Speed up various matrix construction routines Option to do initial GMM update at starting values Update BLP example data to better replicate original paper Lognormal random coefficients Removed outdated default parameter bounds Change default objective scaling for more comparable objective values across problem sizes Add post-estimation routines to simplify integration error comparison 0.8¶ Micro moments that match product and agent characteristic covariances Extended use of pseudo-inverses Added more information to error messages More flexible simulation interface Alternative way to simulate data with specified prices and shares Tests of overidentifying and model restrictions Report projected gradients and reduced Hessians Change objective gradient scaling Switch to a lower-triangular covariance matrix to fix a bug with off-diagonal parameters 0.7¶ Support more fixed point and optimization solvers Hessian computation with finite differences Simplified interface for firm changes Construction of differentiation instruments Add collinearity checks Update notation and explanations 0.6¶ Optimal instrument estimation Structured all results as classes Additional information in progress reports Parametric bootstrapping of post-estimation outputs Replaced all examples in the documentation with Jupyter notebooks Updated the instruments for the BLP example problem Improved support for multiple equation GMM Made concentrating out linear parameters optional Better support for larger nesting parameters Improved robustness to overflow 0.5¶ Estimation of nesting parameters Performance improvements for matrix algebra and matrix construction Support for Python 3.7 Computation of reasonable default bounds on nonlinear parameters Additional information in progress updates Improved error handling and documentation Simplified multiprocessing interface Cancelled out delta in the nonlinear contraction to improve performance Additional example data and improvements to the example problems Cleaned up covariance estimation Added type annotations and overhauled the testing suite 0.4¶ Estimation of a Logit benchmark model Support for fixing of all nonlinear parameters More efficient two-way fixed effect absorption Clustered standard errors 0.3¶ Patsy- and SymPy-backed R-style formula API More informative errors and displays of information Absorption of arbitrary fixed effects Reduction of memory footprint 0.2¶ Improved support for longdouble precision Custom ownership matrices New benchmarking statistics Supply-side gradient computation Improved configuration for the automobile example problem 0.1¶ Initial release
https://pyblp.readthedocs.io/en/latest/versions.html
2022-09-25T08:47:30
CC-MAIN-2022-40
1664030334515.14
[]
pyblp.readthedocs.io
View source for Permissions for Group/nl ← Help4.x:Permissions for Group/nl You do not have permission to edit this page, for the following reasons below: - Editing of this page is limited to registered doc users in the group: Email Confirmed. - This page cannot be updated manually. This page is a translation of the page Help4.x:Permissions for Group and the translation can be updated using the translation tool. - You must confirm your email address before editing pages. Please set and validate your email address through your user preferences. You can view and copy the source of this page. Templates used on this page: Return to Help4.x:Permissions for Group/nl.
https://docs.joomla.org/index.php?title=Help4.x:Permissions_for_Group/nl&action=edit
2022-09-25T09:25:35
CC-MAIN-2022-40
1664030334515.14
[]
docs.joomla.org
File Upload¶ File upload functionality is accessed through the +modify item view or the Index view. To upload a file on the +modify item view, click the browsers Browse/Choose button. Use the browser’s file dialog to select an item, then click the OK button. The file will be uploaded and saved with the previously chosen content type; file name suffixes are ignored. To upload a file or files on the global index navigation view, start by clicking the New Item link to bring the Create new item dialog into view. From the Index view, there are two methods of uploading files, single file or multiple files. Uploaded files will be logically placed within the current index, sub-index or namespace. Multiple file uploads have several restrictions: - the files will be uploaded and saved using the current name - existing items with the same name will be overwritten - the file names should have a valid suffix that defines the file type - files without a known suffix will be stored as is and available for download Single File Upload¶ Enter the new item name into the input area and click the Create button. Select the content type to proceed to the +modify view. Use the browsers file dialog to select an item, then click the OK button. The file will be uploaded and saved with the chosen content type; file name suffixes are ignored. Multiple File Upload¶ Click the browsers Browse/Choose button and select one or more files from the browser’s file dialog or use the drag & drop method to copy files. The file uploads will start immediately. Upload status will be displayed by overall and individual progress bars. The names of the files successfully uploaded will be prepended to list of files in the index.
https://moin-20.readthedocs.io/en/latest/user/upload.html
2022-09-25T08:58:56
CC-MAIN-2022-40
1664030334515.14
[]
moin-20.readthedocs.io
New and improved in this release: HTTP triggers Trigger editing Logging of process executions and presenting these in a graph Foreach loops Exception handling aka error scopes Environments Environment variables Process validation Exporting and importing processes It is possible to give only viewing rights to a user Icons for latest process execution added Cancelling running processes made possible App insights set up for the Azure deployments NuGet repository can be configured Improvements to the decision editor 4.1 SR1 New and improved in this release: Fixed problems Agent's config was causing with WCF calls Fixed bug that caused Agent log store to grow without limit Fixed bug where SignalR updates starved the DB server 4.1 SR2 New and improved in this release: Allow skipping database creation on Agent install Fixed login to new environments with Azure AD Fixed UI updating very slowly after changes were made Windows Authentication can now be used for On-Premise deployment Azure Agent user's password will no longer expire Agent can now be easily installed on a VM 4.1 SR3 - February 2016 New and improved in this release: Error handlers can no longer have error handlers File watch trigger's filePaths now always returns file paths 4.1 SR4 - March 2016 New and improved in this release: HTTP triggers pass on which HTTP verb was used when triggering it Optional HTTP parameters allowed Agent heartbeat monitoring added Bugs related to process execution errors, exceptions, string validation, process updating, and NuGet importing on IE 4.1 SR5 - April 2016 New and improved in this release: Improved log message processing Added support for parallel foreach branches Fixed errors caused by erroneous NuGet blob metadata Fixed bug which caused Agent to not start if generated SQL password contained an ampersand (&) Fixed bug which caused file watch trigger to sometimes process the same file twice Fixed bug where a missing HTTP parameter caused a KeyNotFoundException Fixed bug which caused Cobalt custom editor to not be always shown Fixed bug where ScheduleTriggerManager sometimes crashed Agent Fixed bug where non-ascii characters in process name prevented process from saving Fixed bug where exceptions with cyclic property references were not reported correctly via UI
https://docs.frends.com/en/articles/2189589-4-1-release-notes
2022-09-25T09:06:51
CC-MAIN-2022-40
1664030334515.14
[]
docs.frends.com
Custom Views You can customize views. To configure custom views in the UI configuration use JSON similar to the following example. Custom View { "point":"com.reltio.plugins.ui.view", "id":"com.reltio.plugins.entity.bbChartView", "class":"com.reltio.plugins.ui.CustomActionView", "caption":"B2B Integration", "url":"", "height":200, "action":{ "files":[ "" ] } } As shown, this is a normal view configuration but with class "com.reltio.plugins.ui.CustomActionView" and an additional "action" property. Refer to Menu Items for more information about the "action" property You can use this custom view as the normal view on any perspective.
https://docs.reltio.com/hub/customviews.html
2022-09-25T07:56:48
CC-MAIN-2022-40
1664030334515.14
[]
docs.reltio.com
Performs either or both of the following functions. - Controls whether an existing method can run in protected mode as a separate process or in non-protected mode as part of the database. - Recompiles or relinks the method and redistributes it. Privileges To alter a method definition, you must have the UDTMETHOD privilege on the database SYSUDTLIB. There are no privileges granted automatically.
https://docs.teradata.com/r/Teradata-VantageTM-SQL-Data-Definition-Language-Syntax-and-Examples/September-2020/User-Defined-Method-Statements/ALTER-METHOD
2022-09-25T08:39:05
CC-MAIN-2022-40
1664030334515.14
[]
docs.teradata.com
min¶ New in version 12.0. - min¶ - Path $GLOBALS['TCA'][$table]['columns'][$field]['config'] - Type integer - Scope Display This option allows to define a minimum number of characters for the <input>field and adds a minlengthattribute to the field. If at least one character is typed in and the number of characters is less than min, the FormEngine marks the field as invalid, preventing the user to save the element. DataHandler will also take care for server-side validation and reset the value to an empty string, if minis not reached. When using minin combination with max, one has to make sure, the minvalue is less than or equal max. Otherwise the option is ignored. Empty fields are not validated. If one needs to have non-empty values, it is recommended to use required => truein combination with min.
https://docs.typo3.org/m/typo3/reference-tca/main/en-us/ColumnsConfig/Type/Input/Properties/Min.html
2022-09-25T07:28:50
CC-MAIN-2022-40
1664030334515.14
[]
docs.typo3.org
Ethtree Search… Ethtree Documentation GitBook Ethtree Documentation Welcome to our Ethtree documentation page. Here you can find descriptions of how to use our DApp and answers to commonly asked questions. Ethtree - Batch Transaction DApp On our Home page there is one important feature, our logo! Just kidding, it's of course our button that allows you to start a batch transaction on either Ethereum's Mainnet or Rinkeby Network. Rinkeby Network is Ethereum's test network, so if you want to test out Ethtree you would use the Rinkeby setting. Once you're ready for the real thing you can switch to the Mainnet setting and click the start button to begin our simple process. Page 1 - Private Key After clicking our start button, a page will appear that asks you to enter your Ethereum private key. This is the first step of linking your personal Ethereum account so Ethtree can distribute your tokens. We want to make it extremely clear that Ethtree never has your private key nor is it stored anywhere. It's 100% client side and doesn't leave your computer. Page 2 - Token Address On our next page Ethtree will ask for the token information you're sending. You will need both the contract address of the token your sending and how many decimals the token has. You can find this information on etherscan.io or ethplorer.io. Simply copy and paste them in and we're ready to move on to the next page. Page 3 - Recipient List Upload After entering your token information, the next page will be where you upload your list of recipients or addresses that you will be sending your tokens to. This list has to be in a .CSV file format. The .CSV file should read the address first, then a comma and finally the token amount. Once your address list is in the proper .CSV format you can click the upload button and navigate to your file. Once your file uploads you can hit the next button to move on! . CSV format example below: 0x8Ff3C8a1dB6F543007f1177b52506383818aB2FB,2550 0x8F43C8a0da6F343027f7163b52506623811aF4AB,1785 0x8F43C8a0dBBT443123f7145b58906383810aA7BF,2197 Once your file is uploaded you should see a file path showing where you .csv file is located. Page 4 - Gas Price Before entering any information on this page we highly suggest you head over to ethgasstation.info and see what the current suggested gas price is. After finding out the suggested gas price, you can go back to the Ethtree DApp and enter in the same number or one number higher to ensure a smooth delivery. This step is very important. We suggest monitoring the gas prices at ethgasstation.info for at least 30 to 60 minutes before starting your batch transactions. When the network is stable and the suggested gas prices have been the same for an extended period of time is the best time to start using Ethtree. Pull Gas Price from Ethgasstation Feature We created a cool little feature that actually pulls the gas prices from Ethgasstation allowing your batch transaction to process smoothly. The only caveat is if you start your batch process paying 1 Gwei, that price could increase while your batch transactions are processing. By clicking this feature, Ethtree would then start paying the higher suggested price so your transactions would continue to go through. An ideal time to use this feature would be if you see the suggested gas rising and dipping in a specific range. If you know the gas prices has stayed between 1 and 3 and you're comfortable paying that price in gas you can check this option and rest easy knowing Ethtree will pay the right amount of gas and your transactions will go through in a timely manner. After you've set your gas price its time to move on to the next page! Page 5 - Batch Transaction Summary and Pricing On this page you will find a quick recap of the info you've entered on the previous few pages. Please double check to ensure all your information is correct and your recipient list is uploaded. If this information is incorrect Ethtree is not responsible for where your tokens get sent. Entering your information correctly is very important and should be double checked at all times. Cost Breakdown On the right side of the summary page you will see the price breakdown. This shows you the number of transfers or addresses you're sending to, the Ethtree fee per address you're sending to and then the Ethereum network fee (gas) associated with each transaction. Below this info will be your total cost in Ether that you will be paying for your batch transactions. The Ethtree fee is $0.10 per address you send to, converted to Ether. The Network Fee will be whatever the gas price is at the time of your batch transaction times the amount of addresses you're sending to. Once all of your info is correct and you have enough ether in your wallet to cover the total cost you can click start. Attention After clicking start you will be prompted one last time before officially starting your batch transactions. We want to make it very clear that Ethtree is not responsible for any failed transactions or tokens sent to wrong addresses. We're extremely confident our app is fully functioning and works exactly the way we want it to as long as all your info and the gas information is correct. At the same time, we're here to help you if you do encounter an issue or need help getting your batch transaction to work. If for some reason our fee was collected but your batch transaction didn't go through, we will of course refund you the Ether sent to our contact. Thats the beauty of the blockchain, we'll easily be able to see what happened and where the process stopped. Success! All thats left to do is watch Ethtree do its thing. You will see hash links populate as your batch transactions are processing. You can click these links once you see a green checkmark to see your transactions as they process. If you need to stop the process for any reason you can click cancel and all remaining transactions wont be sent. Once Ethtree is finished you can also download a receipt of your batch transactions by clicking the receipt button. After Using Ethtree Please feel free to reach out to us on twitter at @ethtreeio or on our telegram channel t.me/ethtree to let us know how your experience was. We appreciate all feedback and any suggestions you might have to make Ethtree even better! Last modified 4yr ago Copy link Outline Ethtree - Batch Transaction DApp Page 1 - Private Key Page 2 - Token Address Page 3 - Recipient List Upload Page 4 - Gas Price Page 5 - Batch Transaction Summary and Pricing Attention Success! After Using Ethtree
https://docs.ethtree.io/
2022-09-25T09:11:04
CC-MAIN-2022-40
1664030334515.14
[]
docs.ethtree.io
Contacts Much like tenancy, contact assignment enables you to track ownership of resources modeled in NetBox. A contact represents an individual responsible for a resource within the context of its assigned role. flowchart TD ContactGroup --> ContactGroup & Contact ContactRole & Contact --> assignment([Assignment]) assignment --> Object click Contact "../../models/tenancy/contact/" click ContactGroup "../../models/tenancy/contactgroup/" click ContactRole "../../models/tenancy/contactrole/" Contact Groups Contacts can be grouped arbitrarily into a recursive hierarchy, and a contact can be assigned to a group at any level within the hierarchy. Contact Roles A contact role defines the relationship of a contact to an assigned object. For example, you might define roles for administrative, operational, and emergency contacts. Contacts A contact should represent an individual or permanent point of contact. Each contact must define a name, and may optionally include a title, phone number, email address, and related details. Contacts are reused for assignments, so each unique contact must be created only once and can be assigned to any number of NetBox objects, and there is no limit to the number of assigned contacts an object may have. Most core objects in NetBox can have contacts assigned to them. The following models support the assignment of contacts: - circuits.Circuit - circuits.Provider - dcim.Device - dcim.Location - dcim.Manufacturer - dcim.PowerPanel - dcim.Rack - dcim.Region - dcim.Site - dcim.SiteGroup - tenancy.Tenant - virtualization.Cluster - virtualization.ClusterGroup - virtualization.VirtualMachine
https://docs.netbox.dev/en/stable/features/contacts/
2022-09-25T08:16:20
CC-MAIN-2022-40
1664030334515.14
[]
docs.netbox.dev
Redpanda security on Kubernetes The custom resource definition (CRD) of a Redpanda cluster includes four APIs: Kafka API, HTTP Proxy (formerly Pandaproxy), Schema Registry, and Admin API. You can configure authentication individually for each of the APIs. Redpanda does not have authentication enabled by default, which allows you to choose the authentication method that best suits the needs of your cluster. This document provides an overview of the Redpanda supported authentication methods and the advantages of each. Authentication is how you verify that the users and clients that access the API endpoints in your cluster are who they say they are. We recommend that you enable authentication on each API for a production environment. TLS authenticates the server and encrypts communication between the server and the client, while SASL and mTLS provide client authentication. The table below shows the types of client authentication that Redpanda supports on each API: In general, the guidelines put in place by your organization will determine the type of authentication that you will use. SASL authentication with TLS encryption provides flexibility with respect to authentication, along with the added layer of security provided by TLS encryption. mTLS includes the additional layer of authentication in which the server authenticates the client. The sections below contain general information for TLS encryption and SASL and mTLS authentication. If you want to quickly set up authentication on your cluster, you can follow the steps in one of the three configuration guides: - Configuring Redpanda TLS on Kubernetes - Configuring Redpanda SASL authentication on Kubernetes - Configuring Redpanda mTLS authentication on Kubernetes TLS Transport Layer Security (TLS), previously SSL, provides authentication of the server as well as encryption for client-server communication, which prevents third parties from accessing the data that is transferred between the client and the server. You can configure TLS inside a Kubernetes cluster to establish a secure connection with encrypted communication between a client and a broker. TLS requires the server to give a certificate to the client. For information about certificates, see the Certificates section below. If you need flexibility for authorization combined with encrypted communication, you can configure TLS encryption along with SASL authentication. TLS is available on all four APIs. The table below lists the supported listener configurations for each API with TLS enabled. For more information about listeners, see the Listeners section below. Configuring TLS To enable TLS, you add the following configuration to each API in the cluster configuration file: tls: enabled: true For detailed steps on how to enable TLS, see the Configuring Redpanda TLS on Kubernetes documentation. SASL/SCRAM Simple Authentication Security Layer (SASL) is an authentication framework that allows the user to choose the authentication mechanism. Redpanda supports the Salted Challenge Response Authentication Mechanism (SCRAM) authentication method. SASL authentication is available for the Kafka API. SASL provides authentication, but not encryption. You can, however, configure TLS to only handle encryption, and use SASL for authentication. This is useful if you require flexibility in the authorization mechanisms that you use. See the rpk acl documentation for information about using rpk to manage SASL users. SCRAM SCRAM provides strong encryption for usernames and passwords by default and does not require an external data store for user information. Redpanda supports the following SASL mechanisms: SCRAM-SHA-256 SCRAM-SHA-512 When you execute a command with SASL authentication, you must include the mechanism with the following flag: --sasl-mechanism <mechanism> For example, the following command uses the SCRAM-SHA-256 mechanism: kubectl exec -c redpanda cluster-sample-sasl-0 -- rpk topic create littlefoot \ --user brontosaurus \ --password brontosaurusPassword \ --sasl-mechanism SCRAM-SHA-256 Superusers When you configure SASL authentication, you include one or more superusers in the Redpanda configuration file. This user has ALL permissions on the cluster and is the user that will grant permissions to new users. Without a superuser, you can create other users, but you will not be able to grant them permissions to the cluster. Users, including superusers, are created through the Admin API. However, you must specify the username of the superuser with the following property in the redpanda.yaml file: superUsers: - username: admin You can specify an existing user or a user that does not exist yet. Adding the username in the configuration file does not create the user, but when you do create a user with the username that you specified in the configuration file, that user will have full access to the cluster. You can then use that superuser to grant permissions to other users. As a security best practice, you do not want to use the superuser to interact with the cluster, but you also do not want to delete the superuser in case you later create new users and need to grant permissions to them. In addition, when you create the superuser, you specify a password for the user, which adds a level of security to the superuser. If you delete the user, someone else might recreate the user with a different password. Configuring SASL authentication To configure SASL authentication, add the following properties to the cluster configuration file: enableSasl: true superUsers: - username: admin This enables SASL authentication and specifies the superusers. For detailed steps on how to enable SASL authentication, see the Configuring Redpanda SASL authentication on Kubernetes documentation. SASL with TLS encryption To enable SASL authentication with TLS encryption for the Kafka API, follow the standard configuration steps to enable SASL. In addition, enable TLS by adding the highlighted lines below to the kafkaApi property in the configuration file: kafkaApi: - port: 9092 tls: enabled: true mTLS Mutual TLS (mTLS) is a method of authentication in which the client authenticates the server and the server authenticates the client. This provides an additional layer of security to TLS, where the client is not authenticated. When mTLS is enabled, the server determines whether the client can be trusted. mTLS requires the client to give a certificate in addition to the server certificate that is required in TLS. This involves more overhead to implement, but can be useful for environments that require additional security and only have a small number of verified clients. mTLS authentication is available on all four APIs. The table below lists the supported listener configurations for each API with mTLS enabled. For more information about listeners, see the Listeners section below. Redpanda does not perform user authentication on the client certificate. Because Redpanda does not associate the distinguished name (DN) in the client certificate with a Redpanda principal, you cannot distinguish between users when using mTLS. You can use mTLS with multiple users, but from Redpanda’s point of view, the users are identical. Configuring mTLS authentication To enable mTLS, you must add the following configuration to each API in the cluster configuration file: tls: enabled: true requireClientAuth: true For detailed steps on how to enable mTLS, see the Configuring Redpanda mTLS authentication on Kubernetes documentation. Certificates The Redpanda operator uses cert-manager to generate certificates for TLS and mTLS authentication (SASL does not use certificates). When the client opens a connection to Redpanda, Redpanda sends the client a certificate and the client verifies the certificate with the Certificate Authority. If mTLS is enabled, the client then sends its own certificate to Redpanda and Redpanda verifies that certificate with the Certificate Authority. For information about how certificates are generated in cert-manager, see the cert-manager Certificate documentation. The Redpanda operator uses the following certificates: If you delete the certificate, the Secret does not get deleted. This means that if you delete the certificate manually, the operator will continue to use the same Secret. For information about recreating the Secret, see the cert-manager Certificate Resources documentation. Root certificate When you configure TLS or mTLS on a Redpanda cluster and you do not provide an issuer, the Redpanda operator uses cert-manager to generate a root certificate that is local to the cluster. The operator then uses the root certificate to generate a node certificate for the listener, and for mTLS a certificate is also created for the client. If you do provide an issuer, the operator does not generate a root certificate. Node certificate The operator provides the node certificate to Redpanda. The certificate Secret is mounted as a volume that is consumed by Redpanda. For information about mounting Secrets as a volume, see the Kubernetes Secrets documentation. The node certificate Secret is named in the following way for each API: Client certificate (mTLS only) The client certificate is generated when mTLS authentication is enabled. The client certificate is held by the client so that the server can use it to verify that the client is safe. The client certificate Secret is named in the following way for each API: Providing a trusted certificate issuer or certificate For Kafka API and Schema Registry, you also have the option of providing a trusted certificate issuer or a certificate. For example, Redpanda Cloud uses a Let’s Encrypt issuer, which prevents the need for the client to to download the certificate for the cluster. Instead, the Let’s Encrypt certificate, which is available on all operating systems, is used by the client. When you provide an issuer, you add the issuerRef property to the Redpanda configuration file:field is not set, or if it is set to Issuer, an issuer with the name specified in the nameproperty that exists in the same namespace as the certificate will be used. When you provide a certificate, you add the nodeSecretRef property to the Redpanda configuration file:. Details for providing a trusted issuer or certificate issuer are included in the Configuring Redpanda TLS on Kubernetes and Configuring Redpanda mTLS authentication on Kubernetes documentation. Certificate Secrets As stated above, the Redpanda operator uses cert-manager to generate certificates. When a certificate is created, a Kubernetes Secret resource for the certificate is also created in the same namespace as the Redpanda cluster. The Secret resource is the following type: kubernetes.io/tls For information about the kubernetes.io/tls Secret type, see the Kubernetes TLS Secrets documentation. The kubernetes.io/tls resource contains the following components: tls.key tls.crt ca.crt- This is provided if you are using a self-signed Certificate Authority (i.e. you did not provide an issuer in the cluster configuration file). These components are described further in the TLS certificates with external connectivity and mTLS certificates with external connectivity sections of this article. To see the contents of kubernetes.io/tls, run this command: kubectl get secret <secret_name> -o yaml Renewing certificates The certificate renewal process is handled seamlessly by cert-manager. You do not need to do anything to facilitate the renewal. However, keep in mind that if you have a customer that is using the certificate, you will need to give the new certificate to the customer. For that reason, a new certificate is issued 30 days before the old certificate expires. In this 30-day window, the new certificate and the old certificate are active, which gives you time to update the certificate with the customer. The Redpanda operator sets the certificate duration to five years. This is non-configurable. You can run the following command to see when your certificate was issued, when a new certificate will be issued, and when your certificate will expire: kubectl describe certificate <certificate_name> If you have a security breach or for some other reason you want to manually renew your certificate, see the cert-manager Actions that will trigger a rotation of the private key documentation. Subject Alternative Name Each certificate has a Subject Alternative Name (SAN), that lists the DNS names that are secured by the certificate. When the Redpanda operator provides the certificate to the client, it provides the SAN. The SAN is structured like this: DNS: *.<cluster_name>.default.svc.cluster.local The wildcard (*) prefix indicates that the SAN is for all brokers. Redpanda does not generate certificates that are specific to brokers. The client must specify a broker when it communicates with the operator. For example, the client might use this SAN: DNS: 0.<cluster_name>.default.svc.cluster.local For external connectivity, the SAN is structured like this: DNS: *.<subdomain_name> External connectivity If the client is within the same Kubernetes cluster as Redpanda, you do not need to configure external connectivity. However, if you have communication from outside the cluster or from outside the virtual private cloud, you will need to set up external connectivity. This section contains an overview of how external connectivity works, and the Configuring Redpanda TLS on Kubernetes and Configuring Redpanda mTLS authentication on Kubernetes pages contain detailed steps to enable TLS and mTLS with external connectivity. Listeners The listener ports are the ports that the Redpanda APIs use to communicate with the client. You must configure external connectivity on each API individually. The supported listener configurations for each API with TLS and mTLS are listed in the tables in the TLS and mTLS sections above. You can specify up to two listeners for each API, but only one listener can have TLS or mTLS enabled. If you do have two listeners, one must be external. The exception is Schema Registry, which can only have one listener. The Schema Registry listener can be internal, or it can be an internal port that is used internally and externally. If you enable external connectivity on Schema Registry, the Kubernetes node port connects to the internal Redpanda port to provide external connectivity. When you configure external connectivity, can specify the external port, but you don't need to. If you do not specify a port, a port is picked from the 3000-32767 range. This range is the default specified in Kubernetes. For more information about the autogenerated port and directions on how to change the default range, see the Kubernetes Type NodePort documentation. Configuring external connectivity To enable external connectivity with TLS, add the following lines to each API in the configuration file: - external: enabled: true subdomain: <subdomain_name> The external port is generated automatically and you do not need to specify it. In the example below, TLS is enabled on the external listener for the Kafka API. Enable external connectivity the same way for See the Configuring Redpanda TLS on Kubernetes and Configuring Redpanda mTLS authentication on Kubernetes pages for detailed steps on how to enable TLS and mTLS with external connectivity. Subdomain The subdomain field allows you to specify the advertised address of the external listener. The subdomain addresses, including the brokers, must be registered with a DNS provider, such as Amazon Route 53. Each API in the configuration file must have the same subdomain specified. The configuration file uses the subdomain field to generate the advertised addresses for the external listeners. The advertised addresses for the external listeners are structured like this: <broker_id>.\<subdomain_name>:\<node_port> If you do not provide a subdomain, you cannot configure TLS or mTLS for the cluster. The Redpanda operator does not issue certificates for IP addresses. TLS certificates with external connectivity If you have external connectivity configured for your cluster and you did not provide an issuer in the configuration file, you must export the Certificate Authority's (CA) public certificate file from the node certificate Secret as a file named ca.crt. To extract ca.crt from the certificate Secret, run this command: kubectl get secret <secret_name> -o go-template='{{index .data "ca.crt"}}' | base64 -d - > ca.crt Note that the Secret names for each API are listed in the Node certificate section of this article. Once you have ca.crt stegosaurus_config.yaml, you can create a topic called stegosaurus with this command: rpk topic create stegosaurus --config stegosaurus_config.yaml mTLS certificates with external connectivity If you have external connectivity configured for your cluster and you're using mTLS, you must extract the tls.crt and tls.key files from the client certificate Secret and export them to the client. In addition, if you did not provide an issuer in the cluster configuration file, you must export ca.crt. The table below gives the command to extract each of these files. Note that the Secret names for each API are listed in the Node certificate section of this article. If you want to retrieve the entire resource to view the contents, you can use the following command, but keep in mind that the Kafka client cannot process the resource as a single file. kubectl get secret <secret_name> --namespace=default -o yaml Once you have ca.crt, tls.crt, and tls.key: key_file: <key_file_path>/tls.key cert_file: <cert_file_path>/tls.crt. key_file_path- The directory where you want to mount the tls.keyprivate client key. Generally this is /etc/tls/certs. cert_file_path- The filename and directory where you want to mount the tls.crtprivate key. Generally this is /etc/tls/certs. triceratops_config.yaml, you can create a topic called triceratops in the cluster with this command: rpk topic create triceratops --config triceratops_config.yaml
https://docs.redpanda.com/docs/22.1/security/kubernetes-security/
2022-09-25T09:14:07
CC-MAIN-2022-40
1664030334515.14
[]
docs.redpanda.com
Ansible - Management of Files¶ In this chapter you will learn how to manage files with Ansible. Objectives: In this chapter you will learn how to: modify the content of file; upload files to the targeted servers; retrieve files from the targeted servers. ansible, module, files Knowledge: Complexity: Reading time: 20 minutes Depending on your needs, you will have to use different Ansible modules to modify the system configuration files. ini_file module¶ When you want to modify an INI file (section between [] then key=value pairs), the easiest way is to use the ini_file module. The module requires: - The value of the section - The name of the option - The new value Example of use: - name: change value on inifile community.general.ini_file: dest: /path/to/file.ini section: SECTIONNAME option: OPTIONNAME value: NEWVALUE lineinfile module¶ To ensure that a line is present in a file, or when a single line in a file needs to be added or modified, use the linefile module. In this case, the line to be modified in a file will be found using a regexp. For example, to ensure that the line starting with SELINUX= in the /etc/selinux/config file contains the value enforcing: - ansible.builtin.lineinfile: path: /etc/selinux/config regexp: '^SELINUX=' line: 'SELINUX=enforcing' copy module¶ When a file has to be copied from the Ansible server to one or more hosts, it is better to use the copy module. Here we are copying myflile.conf from one location to another: - ansible.builtin.copy: src: /data/ansible/sources/myfile.conf dest: /etc/myfile.conf owner: root group: root mode: 0644 fetch module¶ When a file has to be copied from a remote server to the local server, it is best to use the fetch module. This module does the opposite of the copy module: - ansible.builtin.fetch: src: /etc/myfile.conf dest: /data/ansible/backup/myfile-{{ inventory_hostname }}.conf flat: yes template module¶ Ansible and its template module use the Jinja2 template system () to generate files on target hosts. For example: - ansible.builtin.template: src: /data/ansible/templates/monfichier.j2 dest: /etc/myfile.conf owner: root group: root mode: 0644 It is possible to add a validation step if the targeted service allows it (for example apache with the command apachectl -t): - template: src: /data/ansible/templates/vhost.j2 dest: /etc/httpd/sites-available/vhost.conf owner: root group: root mode: 0644 validate: '/usr/sbin/apachectl -t' get_url module¶ To upload files from a web site or ftp to one or more hosts, use the get_url module: - get_url: url: dest: /tmp/archive.zip mode: 0640 checksum: sha256:f772bd36185515581aa9a2e4b38fb97940ff28764900ba708e68286121770e9a By providing a checksum of the file, the file will not be re-downloaded if it is already present at the destination location and its checksum matches the value provided.
https://docs.rockylinux.org/pl/books/learning_ansible/03-working-with-files/
2022-09-25T07:38:04
CC-MAIN-2022-40
1664030334515.14
[]
docs.rockylinux.org
This page was generated from ad/methods/modeldistillation.ipynb. Model distillation Overview. The detector can be used as follows: Given an input \(x,\) an adversarial score \(S(x)\) is computed. \(S(x)\) equals the value loss function employed for distillation calculated between the original model’s output and the distilled model’s output on \(x\). If \(S(x)\) is above a threshold (explicitly defined or inferred from training data), the instance is flagged as adversarial. Usage Initialize Parameters: threshold: threshold value above which the instance is flagged as an adversarial instance. distilled_model: tf.keras.Sequentialinstance containing the model used for distillation. Example: distilled_model = tf.keras.Sequential( [ tf.keras.InputLayer(input_shape=(input_dim,)), tf.keras.layers.Dense(output_dim, activation=tf.nn.softmax) ] ) model: the classifier as a tf.keras.Model. Example: inputs = tf.keras.Input(shape=(input_dim,)) hidden = tf.keras.layers.Dense(hidden_dim)(inputs) outputs = tf.keras.layers.Dense(output_dim, activation=tf.nn.softmax)(hidden) model = tf.keras.Model(inputs=inputs, outputs=outputs) loss_type: type of loss used for distillation. Supported losses: ‘kld’, ‘xent’. temperature: Temperature used for model prediction scaling. Temperature <1 sharpens the prediction probability distribution which can be beneficial for prediction distributions with high entropy. data_type: can specify data type added to metadata. E.g. ‘tabular’ or ‘image’. Initialized detector example: from alibi_detect.ad import ModelDistillation ad = ModelDistillation( distilled_model=distilled_model, model=model, temperature=0.5 ) Fit We then need to train the detector. The following parameters can be specified: X: training batch as a numpy array. loss_fn: loss function used for training. Defaults to the custom model distillation / harmfulness model could have misclassified training instances. ad.infer_threshold(X_train, threshold_perc=95, batch_size=64) Detect We detect adversarial / harmful instances by simply calling predict on a batch of instances X. We can also return the instance level) Examples Image Harmful drift detection through model distillation on CIFAR10
https://docs.seldon.io/projects/alibi-detect/en/latest/ad/methods/modeldistillation.html
2022-09-25T08:45:52
CC-MAIN-2022-40
1664030334515.14
[]
docs.seldon.io
Create Data Volume Data volumes contain the database data. For more information on volumes, refer to the Volumes Overview section. Assigning Nodes to a Volume When creating a storage master nodes is 4. The following principles apply: - The value you set in the Number of Master Nodes field of the Add EXAStorage Volume screen must match the number of nodes you add to the volume's Nodes List. - The number of master nodes that you define for the volume must match the number of active nodes you assign to the database later when you Create a Database. Follow these steps to create a data volume: - In EXAoperation, go to Services > EXAStorage and click Add Volume. - Enter the properties for the new node, and set the Volume Type to Data. - Click Add to create the volume. The volume is added to EXAStorage.
https://docs.exasol.com/db/6.2/administration/gcp/manage_storage/create_data_volume.htm
2022-09-25T08:20:48
CC-MAIN-2022-40
1664030334515.14
[array(['../../../resource/images/administration/storage/add volume - default screen.png', 'Add Volume'], dtype=object) array(['../../../resource/images/administration/storage/add volume - data volume added.png', 'Add Volume'], dtype=object) ]
docs.exasol.com
... - Open Flow Analysis. ...
https://docs.parasoft.com/pages/diffpagesbyversion.action?pageId=80825127&selectedPageVersions=1&selectedPageVersions=2
2022-09-25T08:01:56
CC-MAIN-2022-40
1664030334515.14
[]
docs.parasoft.com
Solutions Products Support and Downloads About Zebra Solutions Products Support and Downloads About Zebra GK420d Desktop Printer User Guide About This Guide Notational Conventions Icon Conventions Introduction GK Series Thermal Printers What is in the Box Unpack and Inspect the Printer Your Printer Opening the Printer Printer Features Operator Controls Power Switch Feed Button Status Light Closing the Printer Getting Started Printer Setup Overview Attaching Power Preparing Media Placing the Roll in the Media Compartment Printing a Test (Printer Configuration) Label Connecting your Printer to a Device Connecting to a Phone or Tablet Installing Drivers and Connecting to a Windows-Based Computer Pre-install Windows® Printer Drivers Installing the Drivers Running the Printer Installation Wizard Plug'n'Play (PnP) Printer Detection and Windows operating systems Ethernet Serial Port and Windows Operating Systems Interface Cable Requirements USB Interface Requirements Serial Communications Parallel Port Ethernet Cable After Your Printer is Connected Testing Communications by Printing Test Printing with Zebra Setup Utility Test Printing with Windows Printer and Faxes Menu Test Print on an Ethernet Printer Test Print with a Copied ZPL Command File What to Do If You Forget to Install Printer Drivers First Print Operations Determining Printer Configuration Localizing the Printer Status Configuration Label Long Term Printer Inactivity or Storage Thermal Printing Modes of Printing Print Media Types Determining Thermal Media Types Performing a Media Scratch Test Replacing Supplies Adjusting the Print Width Adjusting the Print Quality Media Sensing Printing on Fanfold Media Printing with Externally Mounted Roll Media Externally Mounted Roll Media Considerations Fonts and Your Printer Localizing the Printer with Code Pages Identifying Fonts in Your Printer ZPL Fonts EPL Fonts Standalone Printing Sending Files to the Printer Print Meter EPL Line Mode Printer Options Label Dispenser Option ZebraNet 10/100 Internal (Wired) Print Server Option Printer Network Configuration Status Label Zebra KDU — Printer Accessory KDU Plus — Printer Accessory ZBI 2.0 Zebra Basic Interpreter Maintenance Cleaning Cleaning the Printhead Media Path Considerations Sensor Cleaning Platen Cleaning and Replacement Other Printer Maintenance Replacing the Printhead Troubleshooting Status Light Descriptions Status Light Error Resolutions Print Quality Problems Manual Calibration Troubleshooting Tests Resetting the Factory Default Values Communications Diagnostics Feed Button Modes Interface Wiring Universal Serial Bus Interface Parallel Interface Ethernet Interface Serial Port Interface Dimensions GK Printer External Dimensions Label Dispenser Dimensions ZPL Configuration Managing the ZPL Printer Configuration ZPL Printer Configuration Format ZPL Configuration Status to Command Cross-reference Printer Memory Management and Related Status Reports ZPL Programming for Memory Management About This Guide About This Guide About This Guide This document is intended for use by any person who needs to perform routine maintenance, upgrade, or troubleshoot problems with the printer. Notational Conventions Icon Conventions
https://docs.zebra.com/content/tcm/us/en/printers/desktop/gk420d-desktop-printer-user-guide/c-gk420d-ug-about-this-guide.html
2022-09-25T07:41:23
CC-MAIN-2022-40
1664030334515.14
[]
docs.zebra.com
pynq.pl Module¶ The pynq.pl module facilitates management of the Programmable Logic (PL). The PL module manages the PL state through the PL class. The PL class is a singleton for the Overlay class and Bitstream classes that provide user-facing methods for bitstream and overlay manipulation. The TCL in the PL module parses overlay .tcl files to determine the overlay IP, GPIO pins, Interrupts indices, and address map. The Bitstream class within the PL module manages downloading of bitstreams into the PL. - class pynq.pl. Bitstream(bitfile_name)[source]¶ Bases: object This class instantiates a programmable logic bitstream. timestamp¶ str – Timestamp when loading the bitstream. Format: year, month, day, hour, minute, second, microsecond - class pynq.pl. PL[source]¶ Bases: object Serves as a singleton for Overlay and Bitstream classes. This class stores multiple dictionaries: IP dictionary, GPIO dictionary, interrupt controller dictionary, and interrupt pins dictionary. timestamp¶ str – Bitstream download timestamp, using the following format: year, month, day, hour, minute, second, microsecond. ip_dict¶ dict – All the addressable IPs from PS7. Key is the name of the IP; value is a dictionary mapping the physical address, address range, IP type, configuration dictionary, the state associated with that IP, any interrupts and GPIO pins attached to the IP and the full path to the IP in the block design: {str: {‘phys_addr’ : int, ‘addr_range’ : int, ‘type’ : str, ‘config’ : dict, ‘state’ : str, ‘interrupts’ : dict, ‘gpio’ : dict, ‘fullpath’ : str}}. gpio_dict¶ dict – All the GPIO pins controlled by PS7. Key is the name of the GPIO pin; value is a dictionary mapping user index (starting from 0), the state associated with that GPIO pin and the pins in block diagram attached to the GPIO: {str: {‘index’ : int, ‘state’ : str, ‘pins’ : }}. hierarchy_dict¶ dict – All of the hierarchies in the block design containing addressable IP. The keys are the hiearachies and the values are dictionaries containing the IP and sub-hierarchies contained in the hierarchy and and GPIO and interrupts attached to the hierarchy. The keys in dictionaries are relative to the hierarchy and the ip dict only contains immediately contained IP - not those in sub-hierarchies. {str: {‘ip’: dict, ‘hierarchies’: dict, ‘interrupts’: dict, ‘gpio’: dict, ‘fullpath’: str}} - class pynq.pl. PLMeta[source]¶ Bases: type This method is the meta class for the PL. This is not a class for users. Hence there is no attribute or method exposed to users. Note If this metaclass is parsed on an unsupported architecture it will issue a warning and leave class variables undefined bitfile_name¶ The getter for the attribute bitfile_name. Note If this method is called on an unsupported architecture it will warn and return an empty string clear_dict()[source]¶ Clear all the dictionaries stored in PL. This method will clear all the related dictionaries, including IP dictionary, GPIO dictionary, etc. client_request(address='/home/docs/checkouts/readthedocs.org/user_builds/pynq/checkouts/v2.0/pynq/.log', key=b'xilinx')[source]¶ Client connects to the PL server and receives the attributes. This method should not be used by the users directly. To check open pipes in the system, use lsof | grep <address> and kill -9 <pid> to manually delete them. load_ip_data(ip_name, data)[source]¶ This method writes data to the addressable IP. Note The data is assumed to be in binary format (.bin). The data name will be stored as a state information in the IP dictionary. reset()[source]¶ Reset both all the dictionaries. This method must be called after a bitstream download. 1. In case there is a *.tcl file, this method will reset the IP, GPIO , and interrupt dictionaries based on the tcl file. 2. In case there is no *.tcl file, this method will simply clear the state information stored for all dictionaries. server_update(continued=1)[source]¶ Client sends the attributes to the server. This method should not be used by the users directly. To check open pipes in the system, use lsof | grep <address> and kill -9 <pid> to manually delete them. setup(address='/home/docs/checkouts/readthedocs.org/user_builds/pynq/checkouts/v2.0/pynq/.log', key=b'xilinx')[source]¶ Start the PL server and accept client connections. This method should not be used by the users directly. To check open pipes in the system, use lsof | grep <address> and kill -9 <pid> to manually delete them.
https://pynq.readthedocs.io/en/v2.0/pynq_package/pynq.pl.html
2019-06-15T23:20:45
CC-MAIN-2019-26
1560627997501.61
[]
pynq.readthedocs.io
Warning We highly recommend that users use the Anaconda python distribution. Instructions are available here Using conda. Because of the wide variety of methods for setting up a Python environment on Mac OS X, users have a number of options with how to install SunPy and its dependencies. We have a number of solutions listed below. Choose the solution which best suits you. For users who wish to have more control over the installation of Python, several alternative installation methods are provided below, including instructions for Macports and Homebrew. The following instructions are not recommended for beginners. OS X comes pre-loaded with Python but each versions of OS X (Mountain Lion, Snow Leopard, etc.) ships with a different version of Python. In the instructions below, we therefore recommend that you install your own version of Python. You will then have two versions of Python living on your system at the same time. It can be confusing to make sure that when you install packages they are installed for the correct Python so beware. Download and install the latest version of 32 bit Python 2.7.3 using their DMG installer. Next, choose your package installer of choice (either Macports or Homebrew) and follow the instructions below. If you do not have either go to their respective websites and install one of the other as needed.). Homebrew is a tool for helping to automate the installation of a number of useful tools and libraries on Mac OS X. It is similar to Macports, but attempts to improve on some of the pitfalls of Macports. Note that if you have already installed either fink or Macports on your system, it is recommended that you uninstall them before using Homebrew. Next, install and update homebrew: /usr/bin/ruby -e "$(/usr/bin/curl -fsSL)" brew doctor Using homebrew, install Qt and some of the other dependencies needed for compilation later on by pip: brew -v install gfortran pkgconfig git openjpeg readline pyqt Now on to the next steps. Most Python distributions ship with a tool called easy_install which assists with installing Python packages. Although easy_install is capable of installing most of the dependencies needed for SunPy itself, a more powerful tool called pip which provides a more flexible installation (including support for uninstalling, upgrading, and installing from remote sources such as GitHub) and should be used instead. Use easy_install to install pip: sudo easy_install pip You are now ready to install scipy, numpy, and matplotlib. If pip installed properly, then you can install NumPy simply with: pip install numpy Now under Lion, install the stable version of SciPy (0.10) by running: pip install scipy Mountain Lion users will need to install the development version of SciPy (0.11) by executing the following line: pip install -e git+ Now on to matplotlib On Lion, install matplotlib like any other package: pip install matplotlib Mountain Lion users will have to use the development version as of this writing: pip install git+ Done! You are now ready to install SunPy itself.
http://docs.sunpy.org/en/stable/guide/installation/mac.html
2018-04-19T19:22:46
CC-MAIN-2018-17
1524125937016.16
[]
docs.sunpy.org
A custom result processor must implement the GemFire XD ProcedureResultProcessor interface. GemFire XD calls the processor implementation after a client invokes a procedure using the WITH RESULT PROCESSOR clause and begins retrieving results. Procedure Result Processor Interfaces provides a complete reference to the methods defined in ProcedureResultProcessor, ProcedureProcessorContext, and IncomingResultSet.
http://gemfirexd.docs.pivotal.io/docs/1.3.0/userguide/developers_guide/topics/server-side/dap-impl-processor-interface.html
2018-04-19T19:22:26
CC-MAIN-2018-17
1524125937016.16
[]
gemfirexd.docs.pivotal.io
... Description / Features This plugin allows you to generate a project quality report in PDF format that contains with the most relevant information showed by from SonarQube TM web interface. The report aims to be a deliverable as part of project documentation. Actually the The report contains: - Global A dashboard (similar to SonarQube TM web interface dashboard) - Violations by categories - Hotspots: - Most violated rules - Most violated files - Most complex classes - Most duplicated files - Dashboard, violations and hotspots for all child modules (if they exists) You can download an example of report report here. Usage This plugin can be used as a Maven plugin (installation in SonarQube not required) or as a SonarQube plugin. SonarQube Plugin ... (recommended): ImageQube TM server URL (i.ebe TM server URL is is you you don't need to specify Removed Download the report PDF report can be downloaded from SonarQubeTM GUI: Image Removed Changelog ... Known limitations Ant task and SonarQube TM Runner are not supported.
http://docs.codehaus.org/pages/diffpages.action?pageId=116359257&originalId=231081976
2013-12-05T02:03:48
CC-MAIN-2013-48
1386163038079
[array(['/download/attachments/116359257/sonar-pdf-plugin-2.png?version=1&modificationDate=1239910382053&api=v2', 'sonar-pdf-plugin-2.png'], dtype=object) ]
docs.codehaus.org
: Keep backlighting on when you play a video Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/15974/Playlists_234393_11.jsp
2013-12-05T02:04:19
CC-MAIN-2013-48
1386163038079
[]
docs.blackberry.com
. The. The Joomla! Framework consists of several different packages.
http://docs.joomla.org/index.php?title=Framework&diff=6578&oldid=2357
2013-12-05T01:55:26
CC-MAIN-2013-48
1386163038079
[]
docs.joomla.org