Dataset columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars).
Unzer API basics
The Unzer API connects your business to the Unzer payment infrastructure.

Overview
The Unzer API is based on the representational state transfer (REST) architectural style. Exposed API resources can be manipulated using HTTP methods. Requests to a resource URL generate a response with a JSON payload containing detailed information about the requested resource. You can receive notifications about API state changes using webhooks, and HTTP response codes are used to indicate API errors. The API uses HTTP basic authentication over HTTPS, with your API key serving as the username.

Basic requirements
To learn more about the basic requirements, see Basic integration requirements.

Base URL
Unzer API URL is

Sandbox environment
We provide a sandbox in which you can make calls to test your integration. This is done using your test API keys. Requests made with test credentials never affect real payment networks and are never billed. To learn more about testing, see Get a test account.

API versioning
The current version of the Unzer API is 1.0. The version is part of the resource path: <VERSION_NUMBER>/<path-to-resource>. When we release a new API version with backward-incompatible changes, we keep the previous API versions available.
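As an illustration of the authentication and response handling described above, here is a minimal sketch using Python's requests library. The base URL, version segment, resource path, and key value are placeholders (the real values come from the Base URL, API versioning, and key sections of the documentation), so treat this as an assumption-laden sketch rather than an official example.

# Minimal sketch: HTTP basic auth over HTTPS with the API key as the username
# and an empty password, as described above. All concrete values are placeholders.
import requests

BASE_URL = "https://<unzer-api-base-url>"      # placeholder: take from the Base URL section
API_KEY = "<your-sandbox-api-key>"             # placeholder: a test key never hits real payment networks

response = requests.get(
    f"{BASE_URL}/<VERSION_NUMBER>/<path-to-resource>",  # placeholder versioned resource path
    auth=(API_KEY, ""),                        # API key as username, empty password
)

response.raise_for_status()                    # HTTP response codes signal API errors
print(response.json())                         # the response body is a JSON payload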
https://docs.unzer.com/server-side-integration/api-basics/
2022-06-25T11:17:43
CC-MAIN-2022-27
1656103034930.3
[]
docs.unzer.com
You are viewing documentation for Kubernetes version: v1.23. Kubernetes v1.23 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version. kubelet Synopsis The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
https://v1-23.docs.kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
2022-06-25T11:39:41
CC-MAIN-2022-27
1656103034930.3
[]
v1-23.docs.kubernetes.io
Controllers are the glue that binds the parts of an application together, implementing the business rules and logic that make your application behave as intended. They listen for events (usually from Views) and take some action based on the event, whether that is rendering Views, instantiating Models, or performing some other application logic. For example, if your app contains a tappable Logout button, a Controller listens to the button's tap event and takes the appropriate action, such as sending a move to another user in a game. A Controller lets Views handle the display of data and Models handle the loading and saving of data.

In Sencha's MVC package, Controllers manage Views. Views do not call out to Controllers to invoke methods. Views fire events, and Controllers respond to them. With Architect, you associate a View with a Controller by selecting the Controller in the Inspector, navigating to the Views config in the Config panel, and selecting from the list of available Views shown in the dropdown list that appears there, or by typing the name of the View you want to select. Note that you have to add and build Views in your project before you can associate a View with a Controller.

You enable a Controller to respond to an event by adding a Controller action to the Controller using the Config panel. Once you choose the target View and event, Architect has everything it needs to subscribe to all user interface controls of that type. For example, say you choose Ext.Button as the target type and the tap event. Architect automatically generates an onButtonTap method (which you can rename). It also generates a controlQuery of button. controlQuery specifies which UI controls the Controller needs to monitor. It is akin to a CSS selector that works with components instantiated on the page. Double-clicking the Controller action in the Inspector opens Code View for the action, where you can add your own code to determine the behavior that is triggered when a user taps the button.

Use the Inspector to add Controllers to your Ext JS projects: click the add button ("+") at the top right of the Inspector and select Controller from the list of components. Architect displays the Controller ("MyController") under the Controller node. From there, use the Config panel to add functionality to the Controller. See the next section to learn how to do that. You can also add Controllers through the Toolbox, where Controller, Controller Action, and Controller Reference are all available as standard components. This is not the recommended practice.

To set configs for a Controller, select that Controller in the Inspector, then open the Config panel. The most commonly used configs for Ext JS are:

Actions. Click the add button next to Actions in the Config panel to add actions to a Controller. Select either Controller Action or Application Action. For Controller Action, follow the instructions in Config to choose a target type from the list of View components and an event for the action. Double-click the Controller Action in the Inspector to add custom code to the Action. Select (single-click) the Controller Action or Application Action in the Inspector to see the available configs for them. Key Action configs are targetType, where you set the type of component targeted by the Action, and name, which binds an event to the target.

References.
Click the add button next to References in Config to add a reference to a Controller, then follow the directions in Config to enter a name for the reference and a selector. Click the reference in the Inspector to edit these values in Config (the name is contained in the ref config). You should use the exact name of a View component in the application for the name and selector to reference only that specific View.

init. Click the add button next to init in Config to add init functions to a Controller. An init function sets up how a Controller interacts with a View, and is usually used in conjunction with another Controller function -- control. Control helps the Controller listen for events and take some action with a handler function. Double-click the init function in the Inspector to open the Architect code editor and add the code needed to add functionality to the init, including control and other functions.

onLaunch. Click the add button next to Actions in Config to add onLaunch functions to a Controller. Double-click the onLaunch function in the Inspector to open the Architect code editor and add the code needed to add functionality.

Controller configs are slightly different for Modern projects. Here are the additional main configs Architect makes available for mobile apps:

Before Filters. Click the add button next to Before Filters in Config to add a before filter, then select it in the Inspector to see the available configs. These are used to define filter functions that run before the function specified in the route. Examples include user authentication/authorization for specific actions or loading classes that are not yet on the page.

Routes. Click the add button next to Routes in Config to add a route, then select it in the Inspector to see the available configs. These are used to specify the routes of interest to a Controller, which provides history support within an app as well as the ability to deep-link to any part of an app for which we provide a route.

Architect also makes the following configs available for Controllers. Typically, these parts of your application are set at the application level. You only set them for Controllers if you want them to be available only for a particular Controller and not at the application level:

Functions. Click the add button next to Functions in Config to add functions to the Controller. Select the function in the Inspector to view all function configs in the Config panel.

Models. Binds Models to the Controller. Names of Models added to a project are displayed as a scrolling list in the Value field on the right of Config; open the list by clicking the field (which by default includes the text "(none)").

Stores. Binds stores to the Controller. Names of stores added to a project are displayed as a scrolling list in the Value field on the right of Config; open the list by clicking the field (which by default includes the text "(none)").

Views. Binds Views to the Controller; only top-level Views can be selected. Names of a project's top-level Views are displayed as a scrolling list in the Value field on the right of Config; open the list by clicking the field (which by default includes the text "(none)").

For more details about using Controllers in Architect, see the following:
http://docs.sencha.com/architect/4.1/guides/creating_an_application/controllers.html
2017-02-19T14:15:17
CC-MAIN-2017-09
1487501169776.21
[]
docs.sencha.com
GSOC 2013 Project Ideas/template
https://docs.joomla.org/GSOC_2013_Project_Ideas/template
2017-02-19T14:26:26
CC-MAIN-2017-09
1487501169776.21
[]
docs.joomla.org
In this release of Ext JS, we have worked hard to minimize the impact of changes on existing code, but in some situations this was not completely achievable. This guide walks through the most important changes and their potential impact on Ext JS 5.0 applications.

We have removed the top-level "ext*.js" files from the distribution. These files were of limited use in Ext JS 5 and were only preserved for testing and debugging the examples using the dynamic loader. Because Ext JS 5 is a Sencha Cmd Package, its build content has always been in the "build" sub-folder, but the presence of these stubs in the root has generally created confusion for those familiar with previous releases. You can run the examples in their built, optimized form from the "build/examples" folder. To restore these stubs, you can run this command from the root folder of the extracted archive: sencha ant bootstrap

As mentioned in What's New in Ext JS 5.1, Ext JS 5.1 still has two Observable classes (Ext.mixin.Observable and Ext.util.Observable), but their API differences have been eliminated. There is only one exception: Ext.mixin.Observable calls initConfig in its constructor, whereas Ext.util.Observable uses the legacy Ext.apply approach to copy config object properties onto the instance. We recommend that applications use Ext.mixin.Observable going forward, but we will continue to support Ext.util.Observable for the foreseeable future since many classes internal to the framework and in user code depend upon its behavior.

In the past, the two Observables offered two different approaches to sorting listeners that needed to run in a specific order. Ext.util.Observable used a numeric "priority" option to sort the listeners, while Ext.mixin.Observable used the "order" option, which only had 3 possible values - "before", "after", and "current" (the default). Since "priority" is the more flexible of the two, we are standardizing on it going forward, but the "order" option is still supported for compatibility reasons. Along with this change we have deprecated several convenience methods for adding a listener with a particular order.

As part of the API unification process, each Observable class gains some features that were previously only available in the other Observable. Ext.util.Observable gains the following features:

- Auto-managed listeners - When calling on(), if a scope is provided, and that scope is an Observable instance, the listener will automatically become a "managed" listener. This means simply that when the scope object is destroyed, the listener will automatically be removed. This eliminates the need to use the mon() method in the majority of cases since the managing observable and the scope are typically the same object.
- The fireAction method - Fires the specified event with the passed parameters and executes a function
- The "order" option
- The "args" option

Ext.mixin.Observable gains the following features:

- Class-level observability - allows entire classes of objects to be observed via the static observe() method
- The "priority" event option

In Ext JS 5.0, component plugins are automatically destroyed when the component is destroyed. This safety enhancement ensures that all plugins are cleaned up with their component, but it can conflict with plugins written for previous versions of the framework that have their own handling of component destroy. This is more prominent in Ext JS 5.1 due to the above merging of Observables and how listeners are auto-managed.
The resolution is to remove any listeners for "destroy" events and instead rely on the destroy method being called by the component.

Prior to Ext JS 5.1, a two-way binding to a formula would not always write the value back as expected. Consider the following fiddle: The ViewModel has a formula:

Ext.define('App.view.FooModel', {
    extend: 'Ext.app.ViewModel',
    alias: 'viewmodel.foo',
    formulas: {
        x2: {
            get: function (getter) {
                return getter('x') * 2;
            },
            set: function (value) {
                this.set('x', value / 2);
            }
        }
    }
});

The "x2" property in the above ViewModel is defined as "twice the value of 'x'". The view that is created has a child component with its own ViewModel:

Ext.define('App.view.Foo', {
    extend: 'Ext.panel.Panel',
    viewModel: {
        type: 'foo',
        data: { x: 10 }
    },
    bind: {
        title: 'Formula: {x2} = 2 * {x}'
    },
    items: [{
        xtype: 'numberfield',
        viewModel: {
            data: { label: 'Something' }
        },
        bind: {
            value: '{x2}',
            fieldLabel: '{label}'
        }
    }]
});

In this contrived example, we have two components, each with their own ViewModel. Initially, the value of x was delivered to the formula and then x2 was delivered to the numberfield. When the number was increased, however, the value of x2 was simply written to the child's ViewModel. This behavior is consistent with the fact that ViewModel data is based on the JavaScript prototype chain, but was at odds with the goal of formulas. In Ext JS 5.1, writes to a formula will properly "climb" to the ViewModel with the formula defined and perform the set at that level.

moveEvent
The Ext.container.Container move event, used to indicate that a child component has had its index moved, has been renamed to childmove. This resolves the conflict between this event and the Ext.Component move event.

selectEvent
The Ext.form.field.ComboBox select event had an inconsistency where the records parameter would be passed as a single record in some cases or an array of records in others. This has now been corrected. The default behavior is to provide an array of records only when using multiSelect:true. The documentation for the event has been updated to reflect this.

Due to conflicts with built-in focus treatment introduced in Ext JS 5.0.1, Ext.FocusManager has been removed. For more details regarding accessibility, focus, and keyboard navigation improvements, see the Accessibility Guide.

Ext.menu.MenuManager no longer registers all Menus within your application. To access Menus in a global manner, use Ext.ComponentQuery.

The axis rangechange event listener signature changed to include a missing parameter: the axis itself, which is now the first parameter, e.g.:

listeners: {
    rangechange: function (axis, range) { ...

The Ext.data.proxy.Sql class has been removed for Ext JS 5.1 but will be restored in a future release. This class was not planned to be in Ext JS 5.0 but was accidentally included during the merge of the Sencha Touch and Ext JS data packages. Apologies for the inconvenience.
http://docs.sencha.com/extjs/5.1.0/guides/upgrades_migrations/extjs_upgrade_guide.html
2017-02-19T14:21:20
CC-MAIN-2017-09
1487501169776.21
[]
docs.sencha.com
Switching methods for importing data file

This page describes how to change a data set imported from a file on your local computer to a data set imported from a URL. There are several reasons you may want to do this:
- Transparency: Displayr will store the link from which you imported the file, so your source data is documented.
- Updating: Files that are imported via URLs can be set to update at regular intervals.

Method (requires access to Desktop Q)
- Download the Archive Pack.
- Open the QPack in Desktop Q and save it as a Q Project.
- Open the .Q file in a text editor (Notepad may struggle with the size of this file) and search for ImportReader.
- Search for a line starting with a tag for ImportReader that lists the file you want to change. For example, if I want to change importing for hubspot_contacts.csv, I look for a line that looks like: <ImportReader qset="d2ebb10e-5a1d-4981-9bd6-a559dc815c3d" type="CSVReader" defaultEncoding="Windows-1252" dataFileLocation="C:\Users\carmen\Documents\Sales dashboard\hubspot_contacts.csv" dataFileDateTime="2017-05-01 15:05:44Z" dataFileSize="4457355" colNames="true" headerType="OneCellPerVariable" parseMode="V2" unlimitedVarNames="true" delimiter="," />
- Replace the dataFileLocation field with the URL of the file. For example, if I now want to put in a link from Dropbox, I would replace the file path with dataFileLocation="" (note that for Dropbox links, it should not end with dl=0). It is best to change only one file at a time to get informative error messages.
- Save and close the text file. Double-click on the icon to re-open the file in Q. Note that this step may take a while as the data is validated.
- Re-run Outputs and save as a QPack.
- Re-upload to Displayr.
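If you have many ImportReader tags to update, the text-editor step can be scripted. Here is a hedged Python sketch (not part of the documented Displayr/Q workflow) that swaps the dataFileLocation attribute for one named data file; the file name, path, and URL are hypothetical placeholders, and you should still change only one file at a time and re-validate in Q.

# Hypothetical helper: rewrite dataFileLocation="...hubspot_contacts.csv" to a URL
# inside a .Q project file. Paths, file names, and the URL below are placeholders,
# and the script assumes the .Q file is readable as UTF-8 text; adjust if needed.
import re

q_file = "Sales dashboard.Q"                          # placeholder path to the saved Q Project
target = "hubspot_contacts.csv"                       # data file whose import method changes
new_url = "https://example.com/hubspot_contacts.csv"  # for Dropbox links, must not end with dl=0

with open(q_file, encoding="utf-8") as fh:
    text = fh.read()

# Match only the dataFileLocation attribute whose value ends with the target file name.
pattern = r'dataFileLocation="[^"]*%s"' % re.escape(target)
updated = re.sub(pattern, 'dataFileLocation="%s"' % new_url, text, count=1)

with open(q_file, "w", encoding="utf-8") as fh:
    fh.write(updated)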
http://docs.displayr.com/wiki/Switching_methods_for_importing_data_file
2017-12-11T03:59:58
CC-MAIN-2017-51
1512948512121.15
[]
docs.displayr.com
Module Metadata and metadata.json
Included in Puppet Enterprise 2015.3. A newer version is available; see the version menu above for details.
Best Practice: Set an Upper Bound for Dependencies - When your module depends on other modules, make sure to ...
Specifying Operating System Compatibility
https://docs.puppet.com/puppet/4.3/modules_metadata.html
2017-12-11T04:12:05
CC-MAIN-2017-51
1512948512121.15
[]
docs.puppet.com
The SQL Server Installation Wizard provides a single feature tree for an in-place upgrade of SQL Server components to SQL Server 2017.

Warning: When you upgrade SQL Server, the previous SQL Server instance will be overwritten and will no longer exist on your computer. Before upgrading, back up SQL Server databases and other objects associated with the previous SQL Server instance.

Caution: For many production and some development environments, a new installation upgrade or a rolling upgrade is more appropriate than an in-place upgrade. For more information regarding upgrade methods, see Choose a Database Engine Upgrade Method, Upgrade Data Quality Services, Upgrade Integration Services, Upgrade Master Data Services, Upgrade and Migrate Reporting Services, Upgrade Analysis Services, and Upgrade Power Pivot for SharePoint.

Prerequisites
You must run Setup as an administrator. If you install SQL Server from a remote share, you must use a domain account that has read and execute permissions on the remote share and is a local administrator.

Warning: Be aware that you cannot change the features to be upgraded, and you cannot add features during the upgrade operation. For more information about how to add features to an upgraded instance of SQL Server 2017 after the upgrade operation is complete, see Add Features to an Instance of SQL Server 2016 (Setup).

If you are upgrading the Database Engine, review Plan and Test the Database Engine Upgrade Plan and then perform the following tasks, as appropriate for your environment:
- Ensure that existing SQL Server system databases - master, model, msdb, and tempdb - are configured to autogrow, and ensure that they have sufficient hard disk space.
- Ensure that all database servers have logon information in the master database. This is important for restoring a database, as system logon information resides in master.
- Disable all startup stored procedures, as the upgrade process will stop and start services on the SQL Server instance being upgraded. Stored procedures processed at startup time might block the upgrade process.
- When upgrading instances of SQL Server where SQL Server Agent is enlisted in MSX/TSX relationships, upgrade target servers before you upgrade master servers. If you upgrade master servers before target servers, SQL Server Agent will not be able to connect to master instances of SQL Server.
- Quit all applications, including all services that have SQL Server dependencies. Upgrade might fail if local applications are connected to the instance being upgraded.
- Make sure that Replication is current and then stop Replication. For detailed steps for performing a rolling upgrade in a replicated environment, see Upgrade Replicated Databases.

Procedure
To upgrade to SQL Server 2017 from SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, or SQL Server 2014:
- On the Product Key page, click an option to indicate whether you are upgrading to a free edition of SQL Server, or whether you have a PID key for a production version of the product. For more information, see Editions and Components of SQL Server 2016.
- In the Upgrade Rules window, the setup procedure will automatically advance to the Select instance window if there are no rule errors.
- On the Select Instance page, specify the instance of SQL Server to upgrade. To upgrade Management tools and shared features, select Upgrade shared features only.
- On the Select Features... Installed instances - the grid will show instances of SQL Server that are on the computer where Setup is running.
If a default instance is already installed on the computer, you must install a named instance of SQL Server 2017. The Feature Rules window will automatically advance if all rules pass.

Next Steps
See Also: Upgrade to SQL Server 2016, Backward Compatibility
https://docs.microsoft.com/en-us/sql/database-engine/install-windows/upgrade-sql-server-using-the-installation-wizard-setup
2017-06-22T17:30:52
CC-MAIN-2017-26
1498128319636.73
[]
docs.microsoft.com
Micro and REST

A Route in Micro is an HTTP method paired with a URL-matching pattern, same as Sinatra. Routes are defined in config/routes.yml as simple Controllers/Views-to-URL-patterns bindings. The Java ecosystem is full of cool web development frameworks that use annotated code for implementing RESTful support. We are taking a different approach in Micro: declaring routes in a simple configuration file. This way you don't have to code anything in Java, nor compile anything. A Designer will appreciate this support when writing SEO-friendly web resources, and yes, he doesn't have to learn Java.

Example config/routes.yml file:

- route: "/{image_file}.{type: png|jpg|jpeg|txt}"
  method: get, head
  controller:
    name: ca.simplegames.micro.controllers.BinaryContent
    options: { mime_types: { .txt: "text/plain;charset=UTF-8" } }

- route: /system/info
  controller:
    name: ca.simplegames.micro.controllers.StatsController

- route: /system/test
  method: get
  controller:
    name: demo/Demo.bsh
    options: {foo: bar}
  view:
    repository: content
    path: system/demo.json

- route: /hello/{name}
  method: get, post, head
  controller:
    name: test/Hello.bsh

This is how you can declare RESTful services, serving dynamic contents as resources or SEO-friendly links in Micro.

Route, in detail
Let's talk about the route definition. We'll use this example:

- route: /system/test
  method: get
  controller:
    name: demo/Demo.bsh
    options: {foo: bar}
  view:
    repository: content
    path: system/demo.json

In the definition above, Micro will evaluate the demo/Demo.bsh controller and merge its resulting models by rendering the view system/demo.json from the content repository. This will happen for every web request with a URL that matches the route /system/test and has the request method GET; any other request method, such as HEAD, will be ignored.

Notable config elements:
- route - the URL-matching pattern. Route patterns may include named parameters, accessible via the params hash in the Micro context.
- method - a comma-separated list with the request methods accepted by the route. Example: get, head, delete. Default: any, if method is not specified.
- controller - an optional structure defining a Micro controller that will be evaluated for the route. The controller is optional and can be replaced by a simple view; see below. The options structure will be transmitted to the controller as the configuration parameter.
- view - a template or simple HTML resource; it can be static text or any valid web resource. You must specify the repository where the view is stored and the path to the view. The view is an optional route configuration element, some developers preferring to decide about the views in their controller implementation.

The routes are evaluated in the same order as they are defined, from top to bottom, and they will be evaluated for all the matching URLs. While a simple implementation, the Routing in Micro is quite powerful and flexible, allowing you to define complex route matching rules. As a technical detail, Micro currently embeds the Apache Wink library, and we are planning to extend this integration in the near future. Reading about Resource Matching in Wink is probably a good place to start for Developers.
http://micro-docs.simplegames.ca/routing.md
2017-06-22T16:23:15
CC-MAIN-2017-26
1498128319636.73
[]
micro-docs.simplegames.ca
With Sencha Cmd 6.0, we are excited to introduce a new and rapid tool for developing themes for Ext JS 6.0 called "Fashion". Combining Fashion and sencha app watch creates a new mode for theme development we call "Live Update". Live Update uses Fashion to compile and then inject up-to-date CSS into your running page. This means you don't have to reload to see theme changes; instead you see these updates in near real-time directly in the browser. Sencha Cmd 6 also uses Fashion when compiling themes for Ext JS 6 applications. Since Fashion is implemented in JavaScript, Ruby is no longer required.

Fashion is a compiler for .scss files used to generate CSS. Fashion also adds some new features not available in Sass that allow tools like Sencha Inspector to visually inspect and edit the variables defined by the theme (or your application). Users can extend Fashion's functionality by writing JavaScript modules. Generally speaking, Ext JS users are probably most familiar with JavaScript, so extending behavior should be much simpler than extending Compass. We'll talk more about extending Fashion below.

Now that the introductions are out of the way, let's talk about Fashion! Fashion is compatible with CSS3 syntax as well as the majority of the sass-spec validation suite. Because Fashion understands the majority of Sass code, getting existing .scss code to compile using Fashion should not be difficult. Due to the additional features discussed later, however, it is not correct to say that Fashion "is an implementation of Sass" or that the language Fashion compiles "is Sass". There are many places where the word "sass" is used as a name for configuration options or folders on disk. For compatibility reasons, these config options are still named "sass" even though the language underneath is not strictly "Sass".

You can open an application in a (modern) browser and the Sass files will be loaded instead of the generated CSS. Fashion can then react to file changes, recompile the Sass, and update the CSS without reloading the page. There are two ways to enable Fashion for use in sencha app watch. You may enable Fashion by editing the "development" object found in app.json:

...
"development": {
    "tags": [
        "fashion"
    ]
},
...

Alternatively, you can add "?platformTags=fashion:1" to your URL as you load the page. Now we are ready to launch: You should now be able to modify your theme variables and see changes almost instantly. Note: Live Update will only work when viewing the page from the Cmd web server. In the Ext JS classic toolkit, some Sass changes may require a layout or a full page reload. This will be less of an issue in the modern toolkit since it is more heavily based on CSS3 and will reflow to accommodate more aggressive changes.

Dynamic variables play a very important part in Fashion. Dynamic variables are similar to normal variables, but their value is wrapped in "dynamic()". What makes dynamic variables different from normal variables is how they interact with each other. Consider:

$bar: dynamic(blue);
$foo: dynamic($bar);  // Notice $foo depends on $bar
$bar: dynamic(red);

@debug $foo; // red

Notice that $foo is calculated from $bar. This dependency is treated specially by Fashion and the computation of $foo is deferred until the final value of $bar is known. In other words, dynamic variables are processed in two passes: assignment and evaluation.
Dynamic variables, like normal variables, are assigned in normal "cascade" order (unlike !default):

$bar: dynamic(blue);
$bar: dynamic(red);

@debug $bar; // red

This allows tooling to append custom values to any block of code and take control of its dynamic variables. Assignment to dynamic variables can only occur at file scope and outside any control structure. For example, this is illegal:

$bar: dynamic(blue);

@if something {
    $bar: dynamic(red); // ILLEGAL
}

Instead of the above, this form should be used:

$bar: dynamic(if(something, red, blue));

This requirement is necessary to enable the evaluation and hoisting behaviors discussed below. Dynamic variables can be assigned after their declaration with or without using the dynamic() wrapper.

$bar: dynamic(blue);
$bar: red; // reassigns $bar to red
$bar: green !default; // reassigns $bar to green

@debug $bar; // green

There is no such thing as "default dynamic". Dynamic variables are evaluated in dependency order, not declaration order. Declaration order only applies to the cascade of individual variable assignment. This can be seen in the above example. This ordering also means we could even remove the first setting of $bar and the code would have the same result. Consider the more elaborate example:

$bar: dynamic(mix($colorA, $colorB, 20%));
$bar: dynamic(lighten($colorC, 20%));

The original expression for $bar used $colorA and $colorB. Had that been the only assignment to $bar, then $bar would have depended on these two variables and been evaluated after them. Since $bar was reassigned and subsequently only used $colorC, in the final analysis, $bar depends only on $colorC. The original assignment to $bar might as well never have occurred. To accomplish all of this, Fashion gathers all dynamic variables and evaluates them prior to any other execution of the Sass code. In other words, similar to JavaScript variables, dynamic variables are "hoisted" to the top.

When variables are used to assign dynamic variables, these variables are elevated to dynamic.

$foo: blue;
$bar: dynamic($foo);

Even though $foo was declared as a normal variable, because $bar uses $foo, Fashion will elevate $foo to be dynamic. NOTE: This implies that $foo must now follow the rules for dynamic variables. This behavior is supported to maximize portability from previous versions of Sencha Cmd. When variables are elevated, a warning will be generated. In future releases, this warning will become an error. We recommend correcting this code to properly declare required variables as dynamic().

You can extend Fashion by writing code in JavaScript. To require this code from the Sass code that needs it, use require(). For example:

require("my-module");
// or
require("../path/file.js"); // relative to scss file

Internally, Fashion uses the ECMAScript 6 (ES6) System.import API (or its polyfill via SystemJS) to support loading standard JavaScript modules. A module can be written in pre-ES6 syntax like so:

exports.init = function(runtime) {
    runtime.register({
        magic: function (first, second) {
            // ...
        }
    });
};

Using SystemJS, you can enable "transpilers" to write ES6 code in any browser. The above would look like this in ES6:

module foo {
    export function init (runtime) {
        runtime.register({
            magic: function (first, second) {
                // ...
            }
        });
    }
}

When upgrading to Ext JS 6, the internal use of dynamic variables could impact how these variables are assigned in applications and custom themes. Although not necessary, we recommend you change variable assignments to use dynamic().
In most cases this will be mechanically replacing !default (the approach taken in previous releases) with dynamic():

// before:
$base-color: green !default;

// after:
$base-color: dynamic(green);

This will make it more obvious if an error is generated based on the more restrictive nature of assignment to dynamic variables. Compass functionality depending on Ruby code will be unavailable since Ruby is no longer being utilized. An equivalent will have to be created using JavaScript. This can be provided in many cases by using require() to implement the missing functionality in JavaScript. The Sass code from Compass, however, is included in Fashion, so not all Compass functionality is impacted. Generally speaking, if you have not used any custom or Ruby-based Compass functionality, you will most likely not need to make any changes.

We are very excited about Fashion and we hope you are too! Quickly theming your application has never been easier, and extending Sass can now be done in the same language as the framework. Please be sure to leave us any feedback on the forums.
http://docs.sencha.com/cmd/guides/fashion.html
2017-06-22T16:36:32
CC-MAIN-2017-26
1498128319636.73
[]
docs.sencha.com
Three kinds of data types can be used as input or output by WSME. The native types are a fixed set of standard Python types that different protocols map to their own basic types. The native types are:

- bytes - A pure-ascii string (wsme.types.bytes, which is str in Python 2 and bytes in Python 3).
- text - A unicode string (wsme.types.text, which is unicode in Python 2 and str in Python 3).
- int
- float
- bool
- Decimal - A fixed-width decimal (decimal.Decimal).
- date - A date (datetime.date).
- datetime - A date and time (datetime.datetime).
- time - A time (datetime.time).
- Arrays - This is a special case. When stating a list datatype, always state its content type as the unique element of a list. Example:

class SomeWebService(object):
    @expose([str])
    def getlist(self):
        return ['a', 'b', 'c']

- Dictionaries - Statically typed mappings are allowed. When exposing a dictionary datatype, you can specify the key and value types, with a restriction on the key value that must be a 'pod' type. Example:

class SomeType(object):
    amap = {str: SomeOthertype}

There are other types that are supported out of the box. See the Pre-defined user types.

User types allow you to define new, almost-native types. The idea is that you may have Python data that should be transported as base types by the different protocols, but needs conversion to/from these base types, or needs to validate data integrity. To define a user type, you just have to inherit from wsme.types.UserType and instantiate your new class. This instance will be your new type and can be used as @wsme.expose or @wsme.validate parameters. Note that protocols can choose to specifically handle a user type or a base class of user types. This is the case with the two pre-defined user types, wsme.types.Enum and wsme.types.binary.

WSME provides some pre-defined user types. These types are good examples of how to define user types. Have a look at their source code! Here is a little example that combines binary and Enum:

ImageKind = Enum(str, 'jpeg', 'gif')

class Image(object):
    name = unicode
    kind = ImageKind
    data = binary

- binary - The wsme.types.BinaryType instance to use when you need to transfer base64-encoded data.
- BinaryType - A user type that uses base64 strings to carry binary data.
- Enum - A simple enumeration type. Can be based on any non-complex type. If nullable, 'None' should be added to the values set. Example:

Gender = Enum(str, 'male', 'female')
Specie = Enum(str, 'cat', 'dog')

Complex types are structured types. They are defined as simple Python classes and will be mapped to adequate structured types in the various protocols. A base class for structured types is provided, wsme.types.Base, but it is not mandatory. The only thing it adds is a default constructor. The attributes that are set at the class level will be used by WSME to discover the structure. These attributes can be:

- A datatype - Any native, user or complex type.
- A wsattr - This allows you to add more information about the attribute, for example whether it is mandatory.
- A wsproperty - A special typed property. Works like a standard property with additional properties like wsattr.

Attributes having a leading '_' in their name will be ignored, as well as attributes that are not in the above list. This means the type can have methods; they will not get in the way.
Gender = wsme.types.Enum(str, 'male', 'female')
Title = wsme.types.Enum(str, 'M', 'Mrs')

class Person(wsme.types.Base):
    lastname = wsme.types.wsattr(unicode, mandatory=True)
    firstname = wsme.types.wsattr(unicode, mandatory=True)
    age = int
    gender = Gender
    title = Title
    hobbies = [unicode]

A few things you should know about complex types:

- The class must have a default constructor - Since instances of the type will be created by the protocols when used as input types, they must be instantiable without any argument.
- Complex types are registered automatically (and thus inspected) as soon as they are used in expose or validate, even if they are nested in another complex type. If for some reason you need to control when a type is inspected, you can use wsme.types.register_type().
- The datatype attributes will be replaced. When using the 'short' way of defining attributes, i.e. setting a simple data type, they will be replaced by a wsattr instance. So, when you write:

class Person(object):
    name = unicode

After type registration the class will actually be equivalent to:

class Person(object):
    name = wsattr(unicode)

You can still access the datatype by accessing the attribute on the class, along with the other wsattr properties:

class Person(object):
    name = unicode

register_type(Person)
assert Person.name.datatype is unicode
assert Person.name.key == "name"
assert Person.name.mandatory is False

- The default value of instance attributes is Unset.

class Person(object):
    name = wsattr(unicode)

p = Person()
assert p.name is Unset

This allows the protocol to make a clear distinction between null values that will be transmitted, and unset values that will not be transmitted. For input values, it allows the code to know if the values were, or were not, sent by the caller.

- When two complex types refer to each other, their names can be used as datatypes to avoid adding attributes afterwards:

class A(object):
    b = wsattr('B')

class B(object):
    a = wsattr(A)

File is a complex type that represents a file. In the particular case of a protocol accepting form-encoded data as input, File can be loaded from a form file field. Data samples:

{
    "content": null,
    "contenttype": null,
    "filename": null
}

<value>
    <filename nil="true" />
    <contenttype nil="true" />
    <content nil="true" />
</value>

File attributes: content - the file content; filename - the file name; contenttype - the MIME type of the content.
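As a complement to the pre-defined types described above, here is a hedged sketch of a custom user type. It assumes the contract used by the built-in binary and Enum types (a basetype attribute plus validate/tobasetype/frombasetype hooks); as the text suggests, check the source of those pre-defined types before relying on the exact method names.

# Hedged sketch of a custom WSME user type: a lowercase ASCII "slug" carried
# as a native string. Method names mirror the built-in binary/Enum types and
# should be verified against wsme.types before use.
import re
from wsme.types import UserType

class SlugType(UserType):
    basetype = str            # transported as a plain string by the protocols
    name = 'slug'

    def validate(self, value):
        # Reject anything that is not lowercase letters, digits, or dashes.
        if not re.match(r'^[a-z0-9-]+$', value):
            raise ValueError("invalid slug: %r" % (value,))
        return value

    def tobasetype(self, value):
        return value          # no conversion needed on the way out

    def frombasetype(self, value):
        return self.validate(value)

# Per the text above, the *instance* is the type you pass to @wsme.expose / @wsme.validate.
slug = SlugType()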
http://wsme.readthedocs.io/en/latest/types.html
2017-06-22T16:32:38
CC-MAIN-2017-26
1498128319636.73
[]
wsme.readthedocs.io
Warning! This page documents an old version of InfluxDB, which is no longer actively developed. InfluxDB v1.2 is the latest stable version. Getting Started: An introductory guide to reading and writing time series data using InfluxDB.
https://docs.influxdata.com/influxdb/v0.13/introduction/
2017-06-22T16:27:56
CC-MAIN-2017-26
1498128319636.73
[]
docs.influxdata.com
Examples

This section has a few examples of how to do these things.

Read and manipulate

Say we have a directory with a bunch of CSV files with information about light bulbs in a home. Each CSV file has the wattage used by the bulb as a function of time. Some of the light bulbs only send a signal when the state changes, but some send a signal every minute. We can read them with this code.

def parse_iso_datetime(value):
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")

def read_all(pattern='data/lightbulb-*.csv'):
    """Read all of the CSVs in a directory matching the filename pattern
    as TimeSeries.
    """
    result = []
    for filename in glob.iglob(pattern):
        print('reading', filename, file=sys.stderr)
        ts = traces.TimeSeries.from_csv(
            filename,
            time_column=0,
            time_transform=parse_iso_datetime,
            value_column=1,
            value_transform=int,
            default=0,
        )
        ts.compact()
        result.append(ts)
    return result

ts_list = read_all()

The call to ts.compact() will remove any redundant measurements. Depending on how often your data changes compared to how often it is sampled, this can reduce the size of the data dramatically.

Basic analysis

Now, let's say we want to do some basic exploratory analysis of how much power is used in the whole home. We'll first take all of the individual traces and merge them into a single TimeSeries where the value is the total wattage.

total_watts = traces.TimeSeries.merge(ts_list, operation=sum)

The merged time series has times that are the union of all times in the individual series. Since each time series is the wattage of the lightbulb, the values after the sum are the total wattage used over time. Here's how to check the mean power consumption in January.

histogram = total_watts.distribution(
    start=datetime(2016, 1, 1),
    end=datetime(2016, 2, 1),
)
print(histogram.mean())

Let's say we want to break this down to see how the distribution of power consumption varies by time of day.

for hour, distribution in total_watts.distribution_by_hour_of_day():
    print(hour, distribution.quantiles([0.25, 0.5, 0.75]))

Or day of week.

for day, distribution in total_watts.distribution_by_day_of_week():
    print(day, distribution.quantiles([0.25, 0.5, 0.75]))

Finally, we just want to look at the distribution of power consumption during business hours on each day in January.

for t in datetime_range(datetime(2016, 1, 1), datetime(2016, 2, 1), 'days'):
    biz_start = t + timedelta(hours=8)
    biz_end = t + timedelta(hours=18)
    histogram = total_watts.distribution(start=biz_start, end=biz_end)
    print(t, histogram.quantiles([0.25, 0.5, 0.75]))

In practice, you'd probably be plotting these distributions and time series using your tool of choice.

Transform to evenly-spaced

Now, let's say we want to do some forecasting of the power consumption of this home. There is probably some seasonality that needs to be accounted for, among other things, and we know that statsmodels and pandas are tools with some batteries included for that type of thing. Let's convert to a pandas Series.

regular = total_watts.moving_average(300, pandas=True)

That will convert to a regularly-spaced time series using a moving average to avoid aliasing (more info here). At this point, a good next step is the excellent tutorial by Tom Augspurger, starting with the Modeling Time Series section.
http://traces.readthedocs.io/en/latest/examples.html
2017-06-22T16:18:01
CC-MAIN-2017-26
1498128319636.73
[]
traces.readthedocs.io
Applies To: Windows Server 2016

You can use this topic to learn how to configure DNS policy to perform application load balancing. Previous versions of Windows Server DNS only provided load balancing by using round-robin responses; but with DNS in Windows Server 2016, you can configure DNS policy for application load balancing. When you have deployed multiple instances of an application, you can use DNS policy to balance the traffic load between the different application instances, thereby dynamically allocating the traffic load for the application.

Example of Application Load Balancing
Following is an example of how you can use DNS policy for application load balancing. This example uses one fictional company - Contoso Gift Services - which provides online gifting services, and which has a Web site named contosogiftservices.com. The contosogiftservices.com website is hosted in multiple datacenters that each have different IP addresses. In North America, which is the primary market for Contoso Gift Services, the Web site is hosted in three datacenters: Chicago, IL; Dallas, TX; and Seattle, WA. The Seattle Web server has the best hardware configuration and can handle twice as much load as the other two sites. Contoso Gift Services wants application traffic directed in the following manner.
- Because the Seattle Web server includes more resources, half of the application's clients are directed to this server
- One quarter of the application's clients are directed to the Dallas, TX datacenter
- One quarter of the application's clients are directed to the Chicago, IL datacenter
The following illustration depicts this scenario.

How Application Load Balancing Works
After you have configured the DNS server with DNS policy for application load balancing using this example scenario, the DNS server responds 50% of the time with the Seattle Web server address, 25% of the time with the Dallas Web server address, and 25% of the time with the Chicago Web server address. Thus, for every four queries the DNS server receives, it responds with two responses for Seattle and one each for Dallas and Chicago. One possible issue with load balancing with DNS policy is the caching of DNS records by the DNS client and the resolver/LDNS, which can interfere with load balancing because the client or resolver does not send a query to the DNS server. You can mitigate the effect of this behavior by using a low Time-to-Live (TTL) value for the DNS records that should be load balanced.

How to Configure Application Load Balancing
The following sections show you how to configure DNS policy for application load balancing.

Create the Zone Scopes
You must first create the scopes of the zone contosogiftservices.com for the datacenters where they are hosted:

Add-DnsServerZoneScope -ZoneName "contosogiftservices.com" -Name "SeattleZoneScope"
Add-DnsServerZoneScope -ZoneName "contosogiftservices.com" -Name "DallasZoneScope"
Add-DnsServerZoneScope -ZoneName "contosogiftservices.com" -Name "ChicagoZoneScope"

For more information, see Add-DnsServerZoneScope.

Add Records to the Zone Scopes
Now you must add the records representing the web server host into the zone scopes. In SeattleZoneScope, you can add the record with IP address 192.0.0.1, which is located in the Seattle datacenter. In ChicagoZoneScope, you can add the same record with IP address 182.0.0.1 in the Chicago datacenter. Similarly, in DallasZoneScope, you can add a record for the Dallas datacenter (scoped with -ZoneScope "DallasZoneScope"). For more information, see Add-DnsServerResourceRecord.
Create the DNS Policies
After you have created the partitions (zone scopes) and you have added records, you must create DNS policies that distribute the incoming queries across these scopes so that 50% of queries for contosogiftservices.com are responded to with the IP address for the Web server in the Seattle datacenter and the rest are equally distributed between the Chicago and Dallas datacenters. You can use the following Windows PowerShell command to create a DNS policy that balances application traffic across these three datacenters.

Note: In the example command below, the expression -ZoneScope "SeattleZoneScope,2;ChicagoZoneScope,1;DallasZoneScope,1" configures the DNS server with an array that includes the parameter combination <ZoneScope>,<weight>.

Add-DnsServerQueryResolutionPolicy -Name "AmericaPolicy" -Action ALLOW -ZoneScope "SeattleZoneScope,2;ChicagoZoneScope,1;DallasZoneScope,1" -ZoneName "contosogiftservices.com"

For more information, see Add-DnsServerQueryResolutionPolicy.

You have now successfully created a DNS policy that provides application load balancing across Web servers in three different datacenters. You can create thousands of DNS policies according to your traffic management requirements, and all new policies are applied dynamically - without restarting the DNS server - on incoming queries.
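To make the 2:1:1 weighting concrete, here is a small illustrative Python sketch (not how the Windows DNS server is implemented) that picks an answer per query with those weights and shows the resulting roughly 50/25/25 split.

# Toy illustration of the <ZoneScope>,<weight> idea: weight 2 for Seattle and
# 1 each for Chicago and Dallas yields roughly 50%/25%/25% of the answers.
import random
from collections import Counter

scopes = ["SeattleZoneScope", "ChicagoZoneScope", "DallasZoneScope"]
weights = [2, 1, 1]   # same ratios as the -ZoneScope argument above

answers = Counter(random.choices(scopes, weights=weights, k=100_000))

for scope, count in answers.most_common():
    print(f"{scope}: {count / 1000:.1f}%")   # count / 100_000 * 100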
https://docs.microsoft.com/en-us/windows-server/networking/dns/deploy/app-lb
2017-06-22T17:26:56
CC-MAIN-2017-26
1498128319636.73
[array(['../../media/dns-app-lb/dns-app-lb.jpg', 'DNS Application Load Balancing with DNS Policy'], dtype=object)]
docs.microsoft.com
Transforms the source node to a string, applying the stylesheet given by the xsltprocessor::importStylesheet() method.

doc - The DOMDocument or SimpleXMLElement object to be transformed.
Returns the result of the transformation as a string, or FALSE on error.

Example #1 Transforming to a string

<?php
// Load the XML source
$xml = new DOMDocument;
$xml->load('collection.xml');

$xsl = new DOMDocument;
$xsl->load('collection.xsl');

// Configure the transformer
$proc = new XSLTProcessor;
$proc->importStyleSheet($xsl); // attach the xsl rules

echo $proc->transformToXML($xml);
?>

The above example will output:

Hey! Welcome to Nicolas Eliaszewicz's sweet CD collection!
<h1>Fight for your mind</h1><h2>by Ben Harper - 1995</h2><hr>
<h1>Electric Ladyland</h1><h2>by Jimi Hendrix - 1997</h2><hr>

To prevent your xsl file from automatically prepending <?xml version="1.0"?> whilst keeping the output as xml, which is preferable for a validated strict xhtml file, rather specify output as <xsl:output

The transformToXML function can produce valid XHTML output - it honours the <xsl:output> element's attributes, which define the format of the output document. For instance, if you want valid XHTML 1.0 Strict output, you can provide the following attribute values for the <xsl:output> element in your XSL stylesheet: <xsl:output

The function transformToXML has a problem with the meta content type tag. It outputs it like this: <meta http- which is not correct X(HT)ML, because it closes with '>' instead of with '/>'. A way to get the output correct is to use transformToDoc followed by saveHTML instead of transformToXML:

$domTranObj = $xslProcessor->transformToDoc($domXmlObj);
$domHtmlText = $domTranObj->saveHTML();

This does fix the <meta> for valid XHTML, but it does not correctly end empty nodes like <br />, which are output as <br></br>. Some browsers treat this as two different <br /> elements. To fix this, use:

$domTranObj = $xslProcessor->transformToDoc($domXmlObj);
$domHtmlText = $domTranObj->saveXML();

How to fix: Fatal error: Call to undefined method domdocument::load()
If you get this error, visit the php.ini file and try commenting out the following, like this:
;[PHP_DOMXML]
;extension=php_domxml.dll
Suddenly, the wonderfully simple example above works as advertised.

I noticed an incompatibility between libxslt (php4) and the transformation through XSLTProcessor. Php5 and the XSLTProcessor seem to add implicit CDATA-Section-Elements. If you have an xslt like

<script type="text/javascript">
    foo('<xsl:value-of');
</script>

it will result in

<script type="text/javascript"><![CDATA[
    foo('xpath-result-of-bar');
]]></script>

(at least for output method="xml" in order to produce strict xhtml with xslt1). That brings up an error (at least) in Firefox 1.5, as it is not valid javascript. It should look like this:

<script type="text/javascript">//<![CDATA[
    foo('xpath-result-of-bar');
]]></script>

As the CDATA-Section is implicit, I was not able to disable the output or to put a '//' before it. I tried everything about xsl:text disable-output-escaping="yes". I also tried to disable implicit adding of CDATA with <output cdata- (I thought that would exclude script-tags. It didn't). The solution:

<xsl:text<script type="text/javascript"> foo('</xsl:text><xsl:value-of<xsl:text'); </script></xsl:text>

Simple, but it took me a while.
transformToXML, if you have registered PHP functions previously, does indeed attempt to execute these functions when it finds them in a php:function() pseudo-XSL function. It even finds static functions within classes, for instance: <xsl:value-of However, in this situation transformToXML does not try to execute "MyClass::MyFunction()". Instead, it executes "myclass:myfunction()". In PHP, since classes and functions are (I think) case-insensitive, this causes no problems. A problem arises when you are combining these features with the __autoload() feature. So, say I have MyClass.php which contains the MyFunction definition. Generally, if I call MyClass::MyFunction, PHP will pass "MyClass" to __autoload(), and __autoload() will open up "MyClass.php". What we have just seen, however, means that transformToXML will pass "myclass" to __autoload(), not "MyClass", with the consequence that PHP will try to open "myclass.php", which doesn't exist, instead of "MyClass.php", which does. On case-insensitive operating systems, this is not significant, but on my RedHat server, it is--PHP will give a file not found error. The only solution I have found is to edit the __autoload() function to look for class names which are used in my XSL files, and manually change them to the correct casing. Another solution, obviously, is to use all-lowercase class and file names.
http://docs.php.net/manual/it/xsltprocessor.transformtoxml.php
2017-06-22T16:40:30
CC-MAIN-2017-26
1498128319636.73
[]
docs.php.net
Since Ext JS 5.0, developers have had the ability to embed components within grid cells using the Widget Column class. Beginning in Ext JS 6.2.0, developers have the ability to configure a component to be displayed in an expansion row below (or, configurably, above) the data row using the Row Widget plugin. In this guide we will cover how to embed components in grid cells, or in an expansion row.

Widget Column allows you to easily embed any Component in grid cells. Use the widget config to define the type of component to embed in the cells. Based on the xtype contained in the widget config, the Widget Column will create and manage the lifecycle of instances of the required component. This config cannot be an instance already, because Widget Column needs one instance per rendered cell. Each instance is automatically connected with a specific record and row in the Grid. Over the lifetime of the Grid, the Widgets created for a row will be "recycled" and connected to different records and rows. The field referenced by the column's dataIndex is bound to the embedded component's defaultBindProperty. Since 6.2.0, components embedded in grids have access to the ViewModel and all the data within it. The ViewModel contains two row-specific properties: record and recordIndex.

The Row Widget plugin allows developers to specify a component to embed in an expansion row in a very similar way to using a Widget Column. To enable this, configure the RowWidget plugin with a widget property:

plugins: [{
    ptype: 'rowwidget',

    // The widget definition describes a widget to be rendered into the expansion row.
    // It has access to the application's ViewModel hierarchy. Its immediate ViewModel
    // contains a record and recordIndex property. These, or any property of the record
    // (including association stores) may be bound to the widget.
    widget: {
        xtype: 'form'
        ...

The embedded component has access to the grid's ViewModel, including the record and recordIndex properties. The grid may be configured with a rowViewModel setting which may specify a type of ViewModel to create, which may include custom data and formulas to help provide data for the widgets. See the Row Widget Grid example.

The Ext.Widget class was introduced in Ext JS 6 and brings several enhancements. These include: As with normal Components, Widgets can be added to the items of a Container. For example, we can add a Sparkline to a toolbar: In the case of Sparklines, you must provide a size (both width and height) or use an Ext JS layout manager to do so. This is because the internal drawings have no natural size.

While Ext JS Classic developers may be less familiar with Ext.Widget, it is closely related to the Ext JS Modern Component. That is because Ext.Widget is an enhanced version of Ext.AbstractComponent from Ext JS Modern. The ability to add listeners to the element template is one of those enhancements, but there are a handful of others. Refer to the documentation on Ext.Widget for more details.
http://docs.sencha.com/extjs/6.2.1/guides/components/widgets_widgets_columns.html
2017-06-22T16:34:48
CC-MAIN-2017-26
1498128319636.73
[]
docs.sencha.com
Getting Started

This section is intended to get Pylons up and running as fast as possible and provide a quick overview of the project. Links are provided throughout to encourage exploration of the various aspects of Pylons.

Requirements
- Python 2 series above and including 2.4 (Python 3 or later not supported at this time)

Installing
To avoid conflicts with system-installed Python libraries, Pylons comes with a boot-strap Python script that sets up a "virtual" Python environment. Pylons will then be installed under the virtual environment.

By the Way: virtualenv is a useful tool to create isolated Python environments, in addition to isolating packages from possible system conflicts.

Download the go-pylons.py script. Run the script and specify a directory for the virtual environment to be created under:

$ python go-pylons.py mydevenv

Tip: The two steps can be combined on unix systems with curl using the following short-cut:

$ curl | python - mydevenv

To isolate further from additional system-wide Python libraries, run with the --no-site-packages option:

$ python go-pylons.py --no-site-packages mydevenv

The go-pylons.py script is little more than a basic virtualenv bootstrap script that then does easy_install Pylons==1.0. You could do the equivalent steps by manually fetching the virtualenv.py script and then installing Pylons like so:

curl -O
python virtualenv.py mydevenv
mydevenv/bin/easy_install Pylons==1.0

This will leave a functional virtualenv and Pylons installation. Activate the virtual environment (scripts may also be run by specifying the full path to the mydevenv/bin dir):

$ source mydevenv/bin/activate

Or on Windows to activate:

> mydevenv\Scripts\activate.bat

Note: If you get an error such as ImportError: No module named _md5 during the install, it is likely that your Python installation is missing standard libraries needed to run Pylons. Debian and other systems using debian packages most frequently encounter this; make sure to install the python-dev and python-hashlib packages.

Working Directly From the Source Code
Mercurial must be installed to retrieve the latest development source for Pylons. Mercurial packages are also available for Windows, MacOSX, and other OS's.

$ hg clone

To tell setuptools to use the version in the Pylons directory:

$ cd pylons
$ python setup.py develop

The active version of Pylons is now the copy in this directory, and changes made there will be reflected for running Pylons apps.

Creating a Pylons Project
Create a new project named helloworld with the following command:

$ paster create -t pylons helloworld

Note: Windows users must configure their PATH as described in Windows Notes; otherwise they must specify the full path to the paster command (including the virtual environment bin directory). Running this will prompt for two choices:
- which templating engine to use
- whether to include SQLAlchemy support
Hit enter at each prompt to accept the defaults (Mako templating, no SQLAlchemy).
Here is the created directory structure with links to more information: - helloworld - MANIFEST.in - README.txt - development.ini - Runtime Configuration - docs - ez_setup.py - helloworld (See the nested helloworld directory) - helloworld.egg-info - setup.cfg - setup.py - Application Setup - test.ini The nested helloworld directory looks like this: - helloworld - __init__.py - config - environment.py - Environment - middleware.py - Middleware - routing.py - URL Configuration - controllers - Controllers - lib - app_globals.py - app_globals - base.py - helpers.py - Helpers - model - Models - public - templates - Templates - tests - Unit and functional testing - websetup.py - Runtime Configuration Running the application¶ Run the web application: $ cd helloworld $ paster serve --reload development.ini The command loads the project’s server configuration file in development.ini and serves the Pylons application. Note The --reload option ensures that the server is automatically reloaded if changes are made to Python files or the development.ini config file. This is very useful during development. To stop the server press Ctrl+c or the platform’s equivalent. The paster serve command can be run anywhere, as long as the development.ini path is properly specified. Generally during development it’s run in the root directory of the project. Visiting when the server is running will show the welcome page. Hello World¶ To create the basic hello world application, first create a controller in the project to handle requests: $ paster controller hello Open the helloworld/controllers/hello.py module that was created. The default controller will return just the string ‘Hello World’:): # Return a rendered template #return render('/hello.mako') # or, Return a response return 'Hello World' At the top of the module, some commonly used objects are imported automatically. Navigate to where there should be a short text string saying “Hello World” (start up the app if needed): Tip URL Configuration explains how URL’s get mapped to controllers and their methods. Add a template to render some of the information that’s in the environ. First, create a hello.mako file in the templates directory with the following contents: Hello World, the environ variable looks like: <br /> ${request.environ} The request variable in templates is used to get information about the current request. Template globals lists all the variables Pylons makes available for use in templates. Next, update the controllers/hello.py module so that the index method is as follows: class HelloController(BaseController): def index(self): return render('/hello.mako') Refreshing the page in the browser will now look similar to this:
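Putting the pieces of this walkthrough together, the finished helloworld/controllers/hello.py looks roughly like the following. The import block is the stock one that paster generates for a new controller and is assumed here; only the index() method comes from the steps above.

    import logging

    from pylons import request, response, session, tmpl_context as c
    from pylons.controllers.util import abort, redirect_to

    from helloworld.lib.base import BaseController, render

    log = logging.getLogger(__name__)


    class HelloController(BaseController):

        def index(self):
            # Return a rendered template
            #return render('/hello.mako')
            # or, return a response
            return 'Hello World'

After adding templates/hello.mako, index() is reduced to a single return render('/hello.mako') call, as shown at the end of the section.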
http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/gettingstarted.html
2017-06-22T16:28:22
CC-MAIN-2017-26
1498128319636.73
[array(['_images/helloworld.png', '_images/helloworld.png'], dtype=object) array(['_images/hellotemplate.png', '_images/hellotemplate.png'], dtype=object) ]
docs.pylonsproject.org
Fit the maxent model p whose feature expectations are given by the vector K. Model expectations are computed either exactly or using Monte Carlo simulation, depending on the ‘func’ and ‘grad’ parameters passed to this function. For ‘model’ instances, expectations are computed exactly, by summing over the given sample space. If the sample space is continuous or too large to iterate over, use the ‘bigmodel’ class instead. For ‘bigmodel’ instances, the model expectations are not computed exactly (by summing or integrating over a sample space) but approximately (by Monte Carlo simulation). The simulation draws random samples from an auxiliary (instrumental) distribution and uses importance sampling to estimate the expectations. This instrumental distribution is specified by calling setsampleFgen() with a user-supplied generator function that yields a matrix of features of a random sample and its log pdf values. The algorithm can be ‘CG’, ‘BFGS’, ‘LBFGSB’, ‘Powell’, or ‘Nelder-Mead’. The CG (conjugate gradients) method is the default; it is quite fast and requires only linear space in the number of parameters (not quadratic, like Newton-based methods). The BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm is a variable metric Newton method. It is perhaps faster than the CG method but requires O(N^2) instead of O(N) memory, so it is infeasible for more than about 10^3 parameters. The Powell algorithm doesn’t require gradients. For small models it is slow but robust. For big models (where func and grad are simulated) with large variance in the function estimates, this may be less robust than the gradient-based algorithms.
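A minimal usage sketch for the exact-expectation (‘model’) case, assuming the SciPy 0.9.x-era API documented on this page (scipy.maxentropy was removed from later SciPy releases); the sample space, feature function, and target expectation are made up for illustration:

    from scipy import maxentropy

    # Hypothetical discrete problem: three outcomes, one feature function.
    samplespace = ['heads', 'tails', 'edge']
    f = [lambda x: x == 'heads']      # feature functions
    K = [0.6]                         # desired feature expectations

    p = maxentropy.model(f, samplespace)
    p.fit(K, algorithm='CG')          # CG is the default algorithm
    print(p.params)                   # fitted parameters (Lagrange multipliers)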
http://docs.scipy.org/doc/scipy-0.9.0/reference/generated/scipy.maxentropy.model.fit.html
2014-10-20T11:23:39
CC-MAIN-2014-42
1413507442497.30
[]
docs.scipy.org
Browser security options If your email account uses a BlackBerry Enterprise Server, you might not be able to change your browser security options. If you change a browser security option, other applications on your BlackBerry smartphone that access a server might be affected.
http://docs.blackberry.com/en/smartphone_users/deliverables/38326/1477846.jsp
2014-10-20T11:49:16
CC-MAIN-2014-42
1413507442497.30
[]
docs.blackberry.com
An alternative to "{loadposition xx}" is the "{loadmodule yyy}" variation, which is handled by the same plugin. In this case the plugin looks for the first module whose type matches the string 'yyy'. So, you could load a "mod_login" module by placing {loadmodule login} in your text. If you wish to load a specific instance of a module, because you have more than one login module, e.g. titled as login..
http://docs.joomla.org/index.php?title=How_do_you_put_a_module_inside_an_article%3F&diff=106598&oldid=22277
2014-10-20T11:36:41
CC-MAIN-2014-42
1413507442497.30
[]
docs.joomla.org
Smart Tag The Smart Tags for RadDock, RadDockZone, and RadDockLayout are identical. They all let you easily change the skin for your control or quickly get help. You can display the Smart Tag by right clicking on a RadDock, RadDockZone, or RadDockLayout control and choosing "Show Smart Tag". As of Q2 2015 we have improved the smart tags of our controls by displaying the most popular control specific properties and adding links to their important online resources. The skin list previews what the RadDock control looks like for each skin. Assign a skin by selecting the one you want from the list. When you set the Skin from the RadDock Smart Tag, the selected skin applies only to that RadDock control. When you set the Skin from the RadDockZone Smart Tag, the selected skin not only affects the appearance of the RadDockZone control, it becomes the default skin for any RadDock controls nested in the RadDockZone at design time. (Setting the Skin property of individual RadDock controls overrides this default.) When you set the Skin from the RadDockLayout Smart Tag, the selected skin becomes the default skin for all RadDockZone controls nested in the RadDockLayout at design time. (Setting the Skin property of individual RadDockZone controls overrides this default.) As RadDockLayout is not rendered on client Web pages, there is no visual impact on the RadDockLayout component itself. Learning Center Links navigate you directly to RadDock examples, help, and code library.
https://docs.telerik.com/devtools/aspnet-ajax/controls/dock/design-time/smart-tag
2018-06-18T05:17:39
CC-MAIN-2018-26
1529267860089.11
[array(['images/dock-smart-tag.png', 'dock-smart-tag'], dtype=object)]
docs.telerik.com
Which type of license to buy depends on what type of user you are. Dongle A dongle is a small security device that plugs into your computer. It acts like a mobile serial number for FlowJo, allowing users to run FlowJo on any number of computers (one at a time). Serial Number A license number will run FlowJo on one, and only one, computer. It is a good option for labs and users who don’t want to worry about losing a dongle (or getting it stolen) or who only have one workstation they will be using for analysis. If you start with a serial number and your lab expands with more computers, you can convert a serial number into a dongle for a small fee of $300 plus shipping. Site License Large institutions with numerous users typically have serial numbers tied to a site license. This is the ideal licensing method and the cheapest.
http://docs.flowjo.com/d2/faq/general-faq/dongle-serial-number/
2018-06-18T05:18:58
CC-MAIN-2018-26
1529267860089.11
[]
docs.flowjo.com
This is a script for a workshop on using Voyant for the CWRC community. 1.0 Introduction - The workshop leaders will introduce themselves: - Stéfan Sinclair, McGill University - Susan Brown, University of Guelph -” (multi-tool interface) Jane Austen’s Persuasion. Cirrus (Austen’s Persuasion): (backup) The Cirrus tool shows you a word cloud of high frequency words. Some questions to ask yourself: - What words did you expect? What words are missing? What words are interesting? - How does the tool arrange words and choose colours? Is there any correspondence between size and frequency??: (backup). Note that you can create a persistent URL for your corpus – that way your link can be shared or bookmarked and you won’t need to reload the texts into Voyant. Click the save icon in the blue bar at the top and the first URL will be the link for your Voyant corpus. 5.0 Other Stuff - Other Voyant Tools: - Austen in Bubblelines - Persuasion in Bubbles - Persuasion in Knots - Austen in the Tool Browser - Other Voyant Skins: - Austen in the original Voyeur skin (more analysis, less reading) - Austen in the Scatter Skin for correspondence analysis (which sounds worse than it is:) - Austen in Desktop Skin - Other Tools
http://docs.voyant-tools.org/workshops/cwrcshop2-ryerson/
2018-06-18T06:05:23
CC-MAIN-2018-26
1529267860089.11
[]
docs.voyant-tools.org
In this section, we’ll show you in the staff client: We also provide an appendix with a listing of suggested minimum permissions for some essential groups. You can compare the existing permissions with these suggested permissions and, if any are missing, you will know how to add them. In the staff client, in the upper right corner of the screen, click on Administration >. In the staff client, in the upper right corner of the screen, navigate to Administration > Server Administration >. First, we will remove a permission from the Staff group. You can select a group of individual items by holding down the Ctrl key and clicking on them. You can select a list of items by clicking on the first item, holding down the Shift key, and clicking on the last item in the list that you want to select. Now, we will add the permission we just removed back to the Staff group. If you have saved your changes and you don’t see them, you may have to click the Reload button in the upper left side of the staff client screen.
http://docs.evergreen-ils.org/3.0/_managing_permissions_in_the_staff_client.html
2018-06-18T05:39:38
CC-MAIN-2018-26
1529267860089.11
[]
docs.evergreen-ils.org
Testing an Email Flow Rule To check your current rule configuration, you can test how the configuration will behave against specific email addresses. To test an email flow rule Open the Amazon WorkMail console at. In the navigation pane, choose Organization settings, Email flow rules. Next to Test configuration, enter a full email address to test. Choose Test. The action that will be taken for the provided email address is displayed
https://docs.aws.amazon.com/workmail/latest/adminguide/test-email-flow-rule.html
2018-06-18T05:41:33
CC-MAIN-2018-26
1529267860089.11
[]
docs.aws.amazon.com
Steffens Scleroderma Center The Steffens Scleroderma Center is an integral part of The Center for Rheumatology. Led by Dr. Lee S. Shapiro, it is the only upstate New York clinic devoted to the diagnosis and care of individuals with scleroderma and related disorders. It is also a research center and, as such, a participating center of the Scleroderma Clinical Trials Consortium. Ongoing projects focus on the "microvascular" aspects of scleroderma and on developing new treatment approaches to difficult aspects of the disease, such as calcium deposition (calcinosis) and gastrointestinal dysfunction. Research is supported, in part, by the local Steffens Scleroderma Foundation (). Accomplishments include development of the "renal crisis prevention card" and new treatment approaches to Degos Disease, another microvascular disorder. The Steffens Scleroderma Center is actively recruiting for the following studies; Vascana- a phase II, double-blinded crossover study of topical formulation of nitroglycerine versus placebo in the subjective and physiologic responses to controlled cold challenge in subjects with Raynaud Phenomenon secondary to connective tissue disease. FocuSSced- a phase III, double-blinded, randomized study to assess the efficacy and safety of tocilizumab versus placebo in patients with systemic sclerosis. New patient visits occur on Tuesdays in the Albany office and Thursdays in the Saratoga office. To schedule a visit, contact Mackenzie Rouleau at 518-584-4953. Physicians Affiliated with the Scleroderma Center: • Lee Shapiro, M.D • Aixa Toledo-Garcia, M.D • Jessica Chapman, M.D New Patient Referrals and Appointments: Appointments are available in either the Albany or Saratoga office. Please direct all requests for appointments to: Mackenzie Rouleau – New Patient Coordinator 6 Care Lane, Saratoga Springs NY 12866 Phone: (518) 584-4953 Direct Fax: (518) 533-1369 ***Please note: As a specialty provider, it is important for us to have a copy of any recent office notes, lab reports, radiological reports, including x-ray, bone density, MRI and/or bone scans. Please call your referring physician and request a copy of your records be faxed or mailed to us. Please confirm that we have received them prior to your appointment. Thank you. New Patient Form: Download NewPatientSSC_1.pdf Clinical Research Coordinators: Inquiries regarding ongoing research studies should be directed to one of the research coordinators. Roberta Lukasiewicz: Call (518) 489-4471 ext 410 Heather Sickler: Call (518) 584-4953 ext 410 To learn more about The Steffens Scleroderma Foundation and their mission, please visit
http://joint-docs.com/about-us/Steffens-Scleroderma-Center_79_pg.htm
2018-06-18T05:15:41
CC-MAIN-2018-26
1529267860089.11
[array(['../img/uploads/_middle/image/1024x1024-tower-place-albany.jpg', None], dtype=object) array(['../img/uploads/_middle/image/saratoga-location.jpg', None], dtype=object) ]
joint-docs.com
Using a custom image in the menu bar title From Joomla! Documentation To use a custom image in the Menu Bar title we need an image (obviously) and a little bit of CSS. First off, create the following folders in your administrator component (we're using a fictitious.
https://docs.joomla.org/index.php?title=Using_a_custom_image_in_the_menu_bar_title&diff=76150&oldid=71348
2015-11-25T00:31:28
CC-MAIN-2015-48
1448398444138.33
[array(['/images/c/c8/Compat_icon_1_5.png', 'Since Version 1.5'], dtype=object) ]
docs.joomla.org
http://docs.legis.wisconsin.gov/help/statutes
2015-11-25T00:11:43
CC-MAIN-2015-48
1448398444138.33
[]
docs.legis.wisconsin.gov
The resize settings live in a resize object. If you set either height or width to null it will keep the aspect ratio based on the width or height that is set. So, for the example above the width is set to 1000 pixels and since the height is set to null it will resize the image width to 1000 pixels and resize the height based on the current aspect ratio. Image quality is controlled with the quality key. Typically between 70 and 100% there is little noticeable difference in image quality, but the image size may be dramatically lower. To allow upscaling, set upsize to true. It will upsize all images to your specified resize values. Each thumbnail takes a name and a scale percentage. The name will be attached to your thumbnail image (as an example, say the image you uploaded was ABC.jpg; a thumbnail with the name of medium would now be created at ABC-medium.jpg). The scale is the percentage amount you want that thumbnail to scale. This value will be a percentage of the resize width and height if specified.
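A sketch of what such an options object can look like on a BREAD image field, assuming the JSON shape used by recent Voyager versions; the key names follow the description above, while the concrete values are only examples:

    {
        "resize": {
            "width": "1000",
            "height": "null"
        },
        "quality": "70%",
        "upsize": true,
        "thumbnails": [
            {
                "name": "medium",
                "scale": "50%"
            }
        ]
    }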
https://voyager-docs.devdojo.com/bread/introduction-1/images
2022-06-25T13:26:13
CC-MAIN-2022-27
1656103035636.10
[]
voyager-docs.devdojo.com
SDK 3.2.0 released Windowed Mode (beta) You can now choose to display your game in "Windowed Mode". That means that players can now enjoy your game without going into full-screen. This way, they can play your game while doing something else. Players will still be able to play in fullscreen if they wish so. Note, this feature is in beta. You need to contact us so we can enable it in your game. Introducing: Analytics SDK 3.1.0 released Added Texture Import Settings Applier tool. It can help you optimize your texture import settings easier; you can read more about it here. Added a new section in the SDK titled "Recommended Guides" that will link to important guides that may fit your current project. Right now, this only works with FMOD but we have plans to add more in the future. Added a memory graph in the Performance Monitor Overlay to help you measure memory usage in your games. SDK 3.0.0 released - A new feature, File Queue is now available. You can read about it here - Removed the Trail.UUIDobject; instead, use string-based IDs directly. Updated documentation to reflect this change. - Cloud Saves will now automatically upload PlayerPrefs. Note: This feature is available only by request. If you would like to enable it for your project please contact us on Discord. - Added support for Unity 2020.3 and 2021.1 SDK 2.6.1 released - Added PaymentsKit.GetEntitlementsSDK call for retrieving entitlements client-side. SDK 2.5.0 & 2.6.0 released - Removed the fixed paths from the SDK; you no longer need to have the Trail folder in the Assets' folder root. - Added support for Unity 2020.2 - Added a warning when we fail to find the Trail.asmdef SDK 2.4.1 released - Player Token ID is no longer cached. SDK 2.4.0 released - Added an automated fix to stop Unity from stripping Addressables when building for Trail. - Added a new feature; cloud saves. - Fixed a warning related to default variable initialization in the SDK. - Fix an overflow bug that affected projects with long names. - Improved the SDK references page. - Updated links in the SDK from beta.trail.gg to dev.trail.gg - Now when you print the UUIDs they'll be in lowercase instead of uppercase to follow proper conventions. SDK 2.3.0 released - Fixed a small error in the Editor Extension. - Reduced the number of logs printed when you test in the editor. - Fixed a division by zero bug in InsightsKit. - We've made it possible to access the Trail username field in AuthKit. - Fixed a warning related to accessing the same variable from multiple threads. SDK 2.2.6 released - Renamed Trail.GetUserIDto Trail.GetGameUserIDin the C API. - PaymentKit C API now returns one entitlement ID per product ID. - Changed how we deal with exceptions in the editor extension; now you'll receive a log in the console. - We've added a new feature allowing you to force an aspect ratio before the game starts. - We made the error message when you can't login to trail.gg using the editor extension clearer.
https://docs.trail.gg/changelog?page=2
2022-06-25T14:36:43
CC-MAIN-2022-27
1656103035636.10
[array(['https://files.readme.io/aa0aa8a-trail_analytics.png', 'trail analytics.png'], dtype=object) array(['https://files.readme.io/aa0aa8a-trail_analytics.png', 'Click to close...'], dtype=object) ]
docs.trail.gg
Interactive Examples using the ReactiveSearch REST API While appbase.io maintains 100% API compatibility with Elasticsearch, it also provides a declarative API to query Elasticsearch. This is the recommended way to query via web and mobile apps as it prevents query injection attacks. It composes well with Elasticsearch's query DSL, and lets you make use of appbase.io features like caching, query rules, server-side search settings, and analytics. ReactiveSearch API Examples You can read the API reference for the ReactiveSearch API over here. In the following section, we will show interactive examples for using the API. Basic Usage In the basic usage example, we will see how to apply a search query using the ReactiveSearch API. Search + Facet In this example, we will see how to apply a search and a facet query together. This makes use of two queries. We also introduce a concept for executing a query that depends on another query using the react and execute properties. Here, the search query also utilizes the value of the facet query while returning the documents. Search + Facet + Result In this example, we will be using three queries: search + facet + result. If you had a UI, visualize a scenario where the user has entered something in the searchbox and selected a value in the facet filter. These two should inform the final results that get displayed. Note: executeproperty's usage shows whether we expect the particular query to return a response. It's set to trueonly for the results (books) query, as a result, only that key is returned back. Search + Facet + Range + Result In this example, we will see a more complex use-case where an additional range filter is also applied. Search + Geo In this example, we will see an application of a search query along with a geo query. We are searching for earthquakes within 100mi distance of a particular location co-ordinate. Search on multiple indices In this example, we make two search queries - each on a different index. Return DISTINCT results In this example, we show how to only return distinct results back from a search query, the equivalent of a DISTINCT / GROUP BY clause in SQL. Use Elasticsearch Query DSL In this example, we show how to use Elasticsearch's query DSL using the defaultQuery property. This provides the flexibility of overriding the ReactiveSearch API completely for advanced use-cases. Combining ReactiveSearch API + Elasticsearch Query DSL In this example, we show how to use Elasticsearch's query DSL for writing a term query using the customQuery property. This query is then applied to the search results query, which is composed using the ReactiveSearch API. Configuring Search Settings In this example, we see usage of advanced search settings that show how to record custom analytics events, enable query rules, and enable cache (per request).
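As a quick illustration of the basic usage described above, the sketch below posts a two-query request (a facet plus a dependent search) to the ReactiveSearch endpoint. The cluster URL, credentials, index and field names are placeholders; the id, type, dataField, value, react, and execute keys follow the ReactiveSearch API reference.

    import requests

    # Placeholder cluster, credentials and index.
    url = "https://user:password@my-cluster.searchbase.io/good-books-ds/_reactivesearch"

    body = {
        "query": [
            {
                "id": "author_filter",          # facet (term) query
                "type": "term",
                "dataField": ["authors.keyword"],
                "value": "J. K. Rowling",
                "execute": False                # only used to filter the results query
            },
            {
                "id": "books",                  # search query that reacts to the facet
                "type": "search",
                "dataField": ["original_title"],
                "value": "harry",
                "react": {"and": ["author_filter"]},
                "execute": True
            }
        ]
    }

    response = requests.post(url, json=body)
    print(response.json())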
https://docs.appbase.io/api/examples/rest/
2022-06-25T13:16:38
CC-MAIN-2022-27
1656103035636.10
[]
docs.appbase.io
Verify Password Used to Protect the Worksheet Aspose.Cells APIs have enhanced the Protection class by introducing some useful properties & methods. One such method is verifyPassword, which accepts a password as an instance of String and verifies whether the same password has been used to protect the Worksheet. Verify Password Used to Protect the Worksheet The Protection.verifyPassword method returns true if the specified password matches the password used to protect the given worksheet, and false if it does not. The following piece of code uses the Protection.verifyPassword method in conjunction with the Protection.isProtectedWithPassword property to detect password protection and verify the password.
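Since the code sample itself is not shown on this page, here is a minimal sketch of how such a check could look in Java. The file name and password are placeholders, and the accessor names assume the standard Aspose.Cells for Java getter conventions for the properties named above.

    import com.aspose.cells.Protection;
    import com.aspose.cells.Workbook;

    public class VerifySheetPassword {
        public static void main(String[] args) throws Exception {
            // "protected.xlsx" and "1234" are placeholders for your own file and password.
            Workbook workbook = new Workbook("protected.xlsx");
            Protection protection = workbook.getWorksheets().get(0).getProtection();

            if (protection.isProtectedWithPassword()) {
                boolean matches = protection.verifyPassword("1234");
                System.out.println(matches ? "Password verified" : "Password does not match");
            } else {
                System.out.println("The worksheet is not protected with a password");
            }
        }
    }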
https://docs.aspose.com/cells/java/verify-password-used-to-protect-the-worksheet/
2022-06-25T13:36:03
CC-MAIN-2022-27
1656103035636.10
[]
docs.aspose.com
Debugging with Dagger Cloud Dagger Cloud is under development, but we have just released the first telemetry feature! tip Ensure you're using Dagger Engine/CLI version 0.2.18 or higher for Dagger Cloud. To take advantage of this feature, you will need to create an account by following the steps below: - Initiate a login process using the CLI with dagger login. - A new window will open in the browser requesting to sign-up using GitHub. - After authenticating, authorize the Dagger GitHub application to finish the process. - The browser window will close automatically and your CLI will be authenticated. Now, dagger doexecutions will now be listed in Dagger Cloud. Once you create an account, after running your project again, you will see in the following for each of your runs in a single dashboard: When you click on a specific run, you can see the following: - Overview - An overview of the dagger doexecution results. The presented information will highlight the status of the execution (succeed, failed), contextual information about the environment, and a detailed view of each action with their corresponding log outputs. - Shareable Run URL - A unique URL only accessible by the owner of the execution as well as some specialized Dagger engineers. - CUE Plans - The raw execution plan. This provides understanding about how Dagger resolved the action dependencies as well as the CUE evaluation results. - Actions - All the events involved in the action execution with their corresponding duration and outputs. - CLI Argument view - Arguments specified in the CLI when running dagger do <action> [flags]. With this information, we’ve made it easier for you to inspect your runs with a more verbose failure output. If you are still struggling with your run, we have provided an easy way for you to request help from our team. You can follow the instructions below to submit your request. How to Submit a Help Request Once you have created an account, you can easily create help requests that will be pre-filled with all of your run information. By providing this information, our team can help debug your issue much faster. Follow the steps below to submit your request: - Click on the “ask for help” button in your single run view - Once you click on “ask for help”, you will be redirected to a GitHub issue that is pre-filled with all of the information that our team needs to review your request. - Click “submit” to publish your issue - Now, you will be able to see your issue on the “help” tab of your account or in GitHub directly. Once we have your request, our team will assign a team member to review and help with your request. They will get back to you directly through the GitHub issue. Who can access the information in my execution URL? The Dagger team are the only ones who can access the information in the execution URL. What if I don’t want to open a public issue? A public issue is the best way for us to communicate since it helps us track the completion of your request, but you can always reach out to us on Discord directly if you don’t want to open an issue. When you reach out to us, please share your Dagger run URL, so we can troubleshoot the issue as quickly as possible. You can find the shareable Dagger run URL by clicking on the copy icon where it says “share my run”. The contents of this link are only accessible from our Dagger team and yourself. See example below:
https://docs.dagger.io/1241/dagger-cloud/
2022-06-25T14:45:37
CC-MAIN-2022-27
1656103035636.10
[array(['/assets/images/runs-8d03ccb01d43e12e28e4ec3f42176ecf.png', 'Dagger Cloud run URL'], dtype=object) array(['/assets/images/share-url-9163608173500d1c3038cc14bfbb4e83.png', 'Dagger Cloud run URL'], dtype=object) ]
docs.dagger.io
This is a typical scenario of the flow management in the SmartNIC and the application. Flow learning The SmartNIC decodes every frame and looks up the flow in the flow table, taking actions based on the rules programmed by the application. The application controls the SmartNIC flow table as well as manages the host flow table which contains complete metadata of each flow. The following figure illustrates a typical scenario of the flow management in the SmartNIC and the application. - The flow manager looks up the flow in the flow table when a frame is received in the SmartNIC. - Hit: If the flow is found in the SmartNIC flow table, the flow table is updated and actions are applied, for example, forward to a specific stream, transmit, drop etc. - Miss: If the flow is not found in the SmartNIC flow table, the frame is forwarded to the host to be processed by the application. - The application looks up the flow of the received frame. - Hit: If the flow is found in the host flow table, actions are applied. - Miss: If the flow is not found in the host flow table, the frame is forwarded to the flow learning process. - The application decides actions and performs flow learning.Note: The creation time stamp of the flow is stored in the host flow table. Flow unlearning A flow can be deleted from the flow table (flow unlearning) in three ways. When a flow is unlearned, a flow info record is generated and forwarded to the host. It is configurable on a per-flow basis whether a flow info record is generated at flow unlearning. The application can then read the flow info record and add to stored flow metadata in the host flow table. Flow info records can be used for generating NetFlow/IPFIX records in the application. - TCP termination: TCP flows can be unlearned at TCP termination. Automatic TCP flow unlearning is configurable on a per-flow basis. - Timeout: The SmartNIC flow scrubber reads the SmartNIC flow table and times out inactive flows. The timeout value is configurable globally. - The application manually unlearns a flow.
https://docs.napatech.com/r/Stateful-Flow-Management/Managing-the-Life-of-a-Flow
2022-06-25T13:44:59
CC-MAIN-2022-27
1656103035636.10
[]
docs.napatech.com
Describes how to configure a Non VMware SD-WAN Site of type Generic IKEv2 Router (Route Based VPN) in SD-WAN Orchestrator. Note: To configure a Generic IKEv2 Router (Route Based VPN) via Edge, see Configure a Non-VMware SD-WAN Site of Type Generic IKEv2 Router via Edge. IKEv2 Router (Route Based VPN). - Enter the IP address for the Primary VPN Gateway (and the Secondary VPN Gateway if necessary), and click Next. A route-based Non VMware SD-WAN Site of type IKEv2 is created and a dialog box for your Non VMware SD-WAN Site appears. - For a route-based VPN, if the user does not specify a value, Default is used as the local authentication ID. The default local authentication ID value will be the SD-WAN Gateway Interface Public IP. - Generic IKEv2 VPN gateways. - Click Save Changes.
https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-562EF637-AC76-4CE1-B49A-915D44A3E407.html
2022-06-25T15:04:26
CC-MAIN-2022-27
1656103035636.10
[]
docs.vmware.com
Write IO path: the user-issued write request first reaches the query layer through an API, which is either YSQL or YCQL. This user request is translated by the YQL layer into an internal key. Recall from the sharding section that each key is owned by exactly one tablet. The YQL layer then issues RPC requests to the YugabyteDB server hosting that tablet, which then routes the request appropriately. In practice, the use of the YugabyteDB smart client is recommended for removing the extra network hop.
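For concreteness, the kind of single-row write that follows this path could be issued through the YSQL API like this (table and values are purely illustrative):

    CREATE TABLE kv (k INT PRIMARY KEY, v TEXT);
    INSERT INTO kv (k, v) VALUES (1, 'one');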
https://docs.yugabyte.com/preview/architecture/core-functions/write-path/
2022-06-25T13:28:54
CC-MAIN-2022-27
1656103035636.10
[array(['/images/architecture/write_path_io.png', 'write_path_io'], dtype=object) ]
docs.yugabyte.com
Downstream to Kinshasa The odyssey of war crime victims in the Democratic Republic of Congo. A group of people with disabilities tries to force corrupt politicians in the capital to take action. Dieudo Hamadi's camera accompanies a group of people with disabilities on their journey to the capital of the Democratic Republic of Congo to receive pensions from authorities. The director observes his characters during a multi-day voyage and then on the streets of Kinshasa. What form must the protest take to pierce the veil of silence? Konrad Wirkowski Festivals 2020 IDFA 2020 TIFF (Amplify Voices Award) 2020 DOK Leipzig (Golden Dove, Prize of the Interreligious Jury) 2021 ZagrebDox (Big Stamp Award) 2021 DocAviv Online availability 10 dec, 09:00 - 19 dec MOJEeKINO.pl Screenings 11.12Sat16:00 Downstream to Kinshasa Kinoteka 14.12Tue18:00 Downstream to Kinshasa Kino Muranów
https://watchdocs.pl/en/watch-docs/2021/films/downstream-to-kinshasa
2022-06-25T14:51:31
CC-MAIN-2022-27
1656103035636.10
[array(['/upload/thumb/2021/12/plynac-do-kinszasy-3_auto_800x900.jpg', 'Downstream to Kinshasa'], dtype=object) ]
watchdocs.pl
CORSRule Specifies a cross-origin access rule for an Amazon S3 bucket. Contents - AllowedHeaders Headers that are specified in the Access-Control-Request-Headersheader. These headers are allowed in a preflight OPTIONS request. In response to any preflight OPTIONS request, Amazon S3 returns any requested headers that are allowed. Type: Array of strings Required: No - AllowedMethods An HTTP method that you allow the origin to execute. Valid values are GET, PUT, HEAD, DELETE. Type: Array of strings Required: Yes - AllowedOrigins One or more origins you want customers to be able to access the bucket from. Type: Array of strings Required: Yes - ExposeHeaders One or more headers in the response that you want customers to be able to access from their applications (for example, from a JavaScript XMLHttpRequestobject). Type: Array of strings Required: No - ID Unique identifier for the rule. The value cannot be longer than 255 characters. Type: String Required: No - MaxAgeSeconds The time in seconds that your browser is to cache the preflight response for the specified resource. Type: Integer Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
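As a usage sketch, the rule structure above maps directly onto the CORS configuration you pass when setting a bucket's CORS rules, for example with the Python SDK (the bucket name, origins and headers below are placeholders):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_cors(
        Bucket="my-example-bucket",            # placeholder bucket name
        CORSConfiguration={
            "CORSRules": [
                {
                    "AllowedHeaders": ["*"],
                    "AllowedMethods": ["GET", "PUT"],
                    "AllowedOrigins": ["https://www.example.com"],
                    "ExposeHeaders": ["ETag"],
                    "MaxAgeSeconds": 3000
                }
            ]
        }
    )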
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CORSRule.html
2022-06-25T15:27:28
CC-MAIN-2022-27
1656103035636.10
[]
docs.aws.amazon.com
Table of Contents The information driving the MARC 007 Field Physical Characteristics Wizard is already a part of the Evergreen database. This data can be customized by individual sites and/or updated when the Library of Congress dictates new values or positions in the 007 field. There are three relevant tables where the information that drives the wizard is stored:
http://docs-testing.evergreen-ils.org/docs/reorg/3.5/command_line_admin/_cataloging_staff_interface.html
2022-06-25T13:23:55
CC-MAIN-2022-27
1656103035636.10
[]
docs-testing.evergreen-ils.org
Configure the Client Session To improve 3DS success rates, it is recommended to pass the following elements in the Client Session: Set up the Workflow for 3D Secure There are multiple ways to trigger 3D Secure prompts within your Primer Workflow. - Dynamic 3DS via a processor - Enforced 3DS - Via a fraud provider If you rely on your processor to determine whether 3DS is required, then implement Dynamic 3DS. If you have a fraud provider or want to enforce 3DS, then implement Enforced 3DS. Dynamic 3DS When adding a processor you can optionally enable “Trigger 3DS Automatically” which will allow Primer to trigger 3DS prompts when necessary. It creates a 3DS-enabled processor in your workflow Enforced 3DS Create a 3D Secure step in your workflow. This forces a 3D Secure prompt for all payments for your provided condition. Fraud Prevention Providers If you have a fraud prevention provider connection set up (e.g. Riskified) then you can allow 3D Secure prompts from the provider to trigger 3DS in your workflow.
https://deploy-preview-320--primerapidocs.netlify.app/docs/accept-payments/three-d-secure/configuring-three-d-secure/
2022-06-25T13:33:31
CC-MAIN-2022-27
1656103035636.10
[array(['/docs/static/b063fdb26740f448a36349624b88bdec/b1665/dynamic-3ds.png', 'Dynamic 3DS Dynamic 3DS'], dtype=object) array(['/docs/static/62fc1579b9c25e421ff3e62410ba490f/d0e73/dynamic-3ds-2.png', 'Dynamic 3DS Dynamic 3DS'], dtype=object) array(['/docs/static/25b4e29a32a1bf2d3f847fa300122a78/d9199/enforced-3ds.png', 'Enforced 3DS Enforced 3DS'], dtype=object) array(['/docs/static/cec5235b035b317c4d4f597a62b8c034/d9199/fraud.png', 'Fraud Prevention Fraud Prevention'], dtype=object) ]
deploy-preview-320--primerapidocs.netlify.app
Use CloudWatch RUM With CloudWatch RUM, you can perform real user monitoring to collect and view client-side data about your web application performance from actual user sessions in near real time. The data that you can visualize and analyze includes page load times, client-side errors, and user behavior. When you view this data, you can see it all aggregated together and also see breakdowns by the browsers and devices that your customers use. You can use the collected data to quickly identify and debug client-side performance issues. CloudWatch RUM helps you visualize anomalies in your application performance and find relevant debugging data such as error messages, stack traces, and user sessions. You can also use RUM to understand the range of end user impact including the number of users, geolocations, and browsers used. End user data that you collect for CloudWatch RUM is retained for 30 days and then automatically deleted. If you want to keep the RUM events for a longer time, you can choose to have the app monitor send copies of the events to CloudWatch Logs in your account. Then, you can adjust the retention period for that log group. To use RUM, you create an app monitor and provide some information. RUM generates a JavaScript snippet for you to paste into your application. The snippet pulls in the RUM web client code. The RUM web client captures data from a percentage of your application's user sessions, which is displayed in a pre-built dashboard. You can specify what percentage of user sessions to gather data from. The RUM web client is open source. For more information, see CloudWatch RUM web client Performance considerations This section discusses the performance considerations of using CloudWatch RUM. Load performance impact— The CloudWatch RUM web client can be installed in your web application as a JavaScript module, or loaded into your web application asynchronously from a content delivery network (CDN). It does not block the application’s load process. CloudWatch RUM is designed for there to be no perceptible impact to the application’s load time. Runtime impact— The RUM web client performs processing to record and dispatch RUM data to the CloudWatch RUM service. Because events are infrequent and the amount of processing is small, CloudWatch RUM is designed for there to be no detectable impact to the application’s performance. Network impact— The RUM web client periodically sends data to the CloudWatch RUM service. Data is dispatched at regular intervals while the application is running, and also immediately before the browser unloads the application. Data sent immediately before the browser unloads the application are sent as beacons, which, are designed to have no detectable impact on the application’s unload time. With CloudWatch RUM, you incur charges for every RUM event that CloudWatch RUM receives. Each data item collected using the RUM web client is considered a RUM event. Examples of RUM events include a page view, a JavaScript error, and an HTTP error. You have options for which types of events are collected by each app monitor. You can activate or deactivate options to collect performance telemetry events, JavaScript errors, HTTP errors, and X-Ray traces. For more information about choosing these options, see Step 2: Create an app monitor and Information collected by the CloudWatch RUM web client. For more information about pricing, see Amazon CloudWatch Region availability CloudWatch RUM is currently available in the following Regions: US East (N. 
Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Stockholm), Europe (Ireland), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney).
Topics
- IAM policies to use CloudWatch RUM
- Set up an application to use CloudWatch RUM
- Configuring the CloudWatch RUM web client
- Viewing the CloudWatch RUM dashboard
- CloudWatch metrics that you can collect with CloudWatch RUM
- Data protection and data privacy with CloudWatch RUM
- Information collected by the CloudWatch RUM web client
- Manage your applications that use CloudWatch RUM
- CloudWatch RUM quotas
- Troubleshooting CloudWatch RUM
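The JavaScript snippet that RUM generates for your app monitor is the authoritative one to paste into your pages; as a rough sketch of what embedding the web client looks like when installed as a module, the example below uses the aws-rum-web package. The application ID, identity pool ID, endpoint and region are placeholders, and the exact option names may differ between client versions.

    import { AwsRum, AwsRumConfig } from 'aws-rum-web';

    try {
        const config: AwsRumConfig = {
            sessionSampleRate: 1,                                   // record 100% of sessions
            identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000",
            endpoint: "https://dataplane.rum.us-east-1.amazonaws.com",
            telemetries: ["performance", "errors", "http"],
            allowCookies: true,
            enableXRay: false
        };

        const awsRum: AwsRum = new AwsRum(
            "00000000-0000-0000-0000-000000000000",                 // app monitor ID (placeholder)
            "1.0.0",                                                // application version
            "us-east-1",                                            // application region
            config
        );
    } catch (error) {
        // Errors during web client initialization should not break the host application.
    }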
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM.html
2022-06-25T15:40:38
CC-MAIN-2022-27
1656103035636.10
[]
docs.aws.amazon.com
Ensure storage account uses the latest version of TLS encryption Error: Storage account does not use the latest version of TLS encryption Bridgecrew Policy ID: BC_AZR_STORAGE_2 Checkov Check ID: CKV_AZURE_44 Severity: HIGH Storage account does not use the latest version of TLS encryption Description Communication between a client application and an Azure Storage account is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. Azure Storage currently supports three versions of the TLS protocol: 1.0, 1.1, and 1.2. Azure Storage uses TLS 1.2 on public HTTPS endpoints, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility. To follow security best practices and the latest PCI compliance standards, Microsoft recommends enabling the latest version of TLS protocol (TLS 1.2) for all your Microsoft Azure App Service web applications. PCI DSS information security standard requires that all websites accepting credit card payments uses TLS 1.2 after June 30, 2018. Fix - Runtime Azure Portal To change the policy using the Azure Portal, follow these steps: - Log in to the Azure Portal at. - Navigate to your storage account. - Select Configuration. - Under Minimum TLS version, use the drop-down to select the minimum version of TLS required to access data in this storage account, as shown in the following image. CLI Command The minimumTlsVersion property is not set by default when you create a storage account with Azure CLI. This property does not return a value until you explicitly set it. The storage account permits requests sent with TLS version 1.0 or greater if the property value is null. az storage account create \ --name <storage-account> \ --resource-group <resource-group> \ --kind StorageV2 \ --location <location> \ --min-tls-version TLS1_1 az storage account show \ --name <storage-account> \ --resource-group <resource-group> \ --query minimumTlsVersion \ --output tsv az storage account update \ --name <storage-account> \ --resource-group <resource-group> \ --min-tls-version TLS1_2 az storage account show \ --name <storage-account> \ --resource-group <resource-group> \ --query minimumTlsVersion \ --output tsv Fix - Buildtime Terraform - Resource: azurerm_storage_account - Attribute: min_tls_version (Optional) The minimum supported TLS version for the storage account. Possible values are TLS1_0, TLS1_1, and TLS1_2. Defaults to TLS1_0 for new storage accounts. Use TLS1_2. resource "azurerm_storage_account" "test" { ... + min_tls_version = "TLS1_2" ... } ARM Template - Resource: Microsoft.Storage/storageAccounts - Argument: minimumTlsVersion To configure the minimum TLS version for a storage account with a template, create a template with the MinimumTLSVersion property set to TLS1_0, TLS1_1, or TLS1_2. { "$schema": "", "contentVersion": "1.0.0.0", "parameters": {}, "variables": { "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]" }, "resources": [ { "name": "[variables('storageAccountName')]", "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2019-06-01", "location": "<location>", "properties": { "minimumTlsVersion": "TLS1_2" }, "dependsOn": [], "sku": { "name": "Standard_GRS" }, "kind": "StorageV2", "tags": {} } ] } Updated 10 months ago
https://docs.bridgecrew.io/docs/bc_azr_storage_2
2022-06-25T15:04:37
CC-MAIN-2022-27
1656103035636.10
[]
docs.bridgecrew.io
How to debug scanning problems next. Debugging sane-airscan If your device supports eSCL or WSD (you can find it out from device specification - look for the mentioned protocols or AirScan), then its scanning functionality is supported by sane-airscan. Regarding debugging, on the top of usual logging sane-airscan gathers a communication dump and output image, which is helpful during investigation. sane-airscan debugging can be enabled in /etc/sane.d/airscan.conf by setting: [debug] trace = /path/to/dir/where/debugfiles/will/be/saved enable = true How to divide logs In case your debug log is too big for bugzilla to attach (because your issue doesn’t happen with the lowest settings or logs are big even with the lowest settings), do divide the logs to three files like this: $ grep dll debug_log > debug_log_dll $ grep <connection> debug_log > debug_log_connection $ grep <backend> debug_log > debug_log_backend <backend> is the name of backend which supports your scanner (pixma, genesys, plustek, hpaio, airscan etc.), <connection> is the type of connection you use for the device (tcp, usb). The division makes the investigation more difficult (the person needs to have three opened files at the same time), so do divide the logs only if log file is too big.
https://docs.fedoraproject.org/ne/quick-docs/how-to-debug-scanning-problems/
2022-06-25T14:42:58
CC-MAIN-2022-27
1656103035636.10
[]
docs.fedoraproject.org
Improved Updated Apache Log4j checks We updated our Apache Log4j checks so that results from Log4Shell-specific scan templates no longer get removed by subsequent scans. Improved Wordpress fingerprinting We improved WordPress fingerprinting to reduce false negatives. Windows Application Manifest file verification The Windows Application Manifest file verification now needs a file to exist before attempting to parse. Scan engine now handles assessment and on-premise Adobe Flash scans the same way. When performing an on-prem scan for Adobe Flash with a file under C:\\WINDOWS\\system32\\Macromed\\FlashFlash.ocx, scan engines now assert a version of flash with an empty version instead of throwing an exception. The scan engine now handles this case for both assessments and on-premise scans the same way. Fixed We fixed an issue where some scan engine updates were being skipped. This caused some engines to be out of sync with their updates. In Shared Scan Credential Configuration, test credentials no longer allow literal values to be passed, which could have provided a potential opportunity for an XSS attack. Thank you to Aleksey Solovev for disclosing this issue. An issue which prevented users from deleting custom policies when arf files were corrupted or missing has been fixed. The policy deletion will now complete and a warning will be displayed in the console log, highlighting the arf files. Goals dashboard cards failed to load correctly, which caused the entire dashboard not to load. Dashboards now successfully load in this case. We fixed an issue which caused some assets with the InsightVM Agent installed to fail to remediate vulnerabilities in the Console UI if the Agent data is never imported. We fixed an issue that was causing errors in the console and engine communications to be suppressed. We fixed F+ for Rule 4.2.9 in CIS IBM AIX 7.1 Benchmark 1.1.0 and for some rules in the Apache http 2.4 policy v1.3.0. We fixed an issue when asserting network interfaces. We fixed an issue that caused scans to be slow to start and consoles to lose connectivity to shared engines if a scan contained large IPv6 address ranges.
https://docs.rapid7.com/release-notes/nexpose/20220309/
2022-06-25T14:20:11
CC-MAIN-2022-27
1656103035636.10
[]
docs.rapid7.com
. Download and Install Teradata drivers To enable connectivity, you must download and install the Teradata drivers into an accessible location on the. Steps: - If you don't have a Teradata developer account, create one here: - Log in to the account. Navigate to - Download the JDBC driver in ZIP or TAR form. - Copy the downloaded ZIP or TAR file to the . - Extract and place the two JAR files into a folder accessible to the . - Verify that the is the owner of both of these JAR files and their parent folder. - Locate the data-service.classpath. To the classpath value for the drivers directory: - Add a prefix of :. - Add a suffix of /*. - Example: Was: Updated: - Whole classpath example: Was: Updated: - Save your changes and restart the platform. Increase Read Timeout Particular when reading from large Teradata tables, you might experience read timeouts in the..
https://docs.trifacta.com/pages/diffpages.action?originalId=148810776&pageId=151995031
2022-06-25T13:23:07
CC-MAIN-2022-27
1656103035636.10
[]
docs.trifacta.com
Secure Secure your deployment of YugabyteDB. Role-based access control Manage users and roles, grant privileges, implement row-level security (RLS), and column-level security. Encryption in transit Enable encryption in transit (using TLS) to secure and protect network communication. Encryption at rest Enable encryption at rest in YugabyteDB (using TLS) to secure and protect data on disk. Audit logging Configure YugabyteDB's session-level and object-level audit logging for security and compliance.
https://docs.yugabyte.com/preview/secure/
2022-06-25T13:58:13
CC-MAIN-2022-27
1656103035636.10
[array(['/images/section_icons/index/secure.png', 'Secure Secure'], dtype=object) ]
docs.yugabyte.com
Trailer The Quiet Epidemic Skip to tickets Directed By: Lindsay Keys, Winslow Crane-Murdoch Special Presentations 2022 USA English World Premiere 102 minutes When a young girl's mysterious symptoms go undiagnosed, her desperate father takes measures into his own hands, only to discover he has landed in the middle of a contentious medical debate that dates to 1975. For almost 50 years, Lyme disease has stranded patients in limbo as they seek diagnosis and treatment. The Quiet Epidemic is a deep investigation behind the scenes that reveals how and why many doctors, health insurance companies and even the CDC are motivated to keep this disease in the dark. At the centre of this fight emerges famed oncologist and researcher Dr. Neil Spector, who helps the young girl in her own treatment and brings groundbreaking research to the forefront of the medical community. With an estimated 476,000 new cases of Lyme disease per year in the US, and as climate change fuels the spread of infected ticks, the urgency of this issue has reached critical levels. Heather Haynes Read less Credits Director(s) Lindsay Keys Winslow Crane-Murdoch Producer(s) Daria Lombroso Lindsay Keys Chris Hegedus Executive Producer(s) Phyllis & Scott Bedford Alex Cohen Laure Woods Linda Giampa Ally Hilfiger Isabel Rose Sarena Snider Archive Producer(s) Shari Chertok Nancy Kern (Associate) Editor(s) Mark Harrison Winslow Crane-Murdoch Cinematography Winslow Crane-Murdoch Lindsay Keys Composer Alex Ebert Sound Morgan Johnson Noah Woodburn Graphics Krzys Pianko David Driscoll Cody Tilson Zach Booth Visit the film's website Read less Promotional Partners Special Presentations sponsored by Back to What's On See More & Save Buy your Festival ticket packages and passes today! Share
https://hotdocs.ca/whats-on/hot-docs-festival/films/2022/quiet-epidemic
2022-06-25T13:35:56
CC-MAIN-2022-27
1656103035636.10
[]
hotdocs.ca
GetBotAlias Returns information about an Amazon Lex bot alias. For more information about aliases, see Versioning and Aliases. This operation requires permissions for the lex:GetBotAlias action. Request Syntax GET /bots/ botName/aliases/ nameHTTP/1.1 URI Request Parameters The request uses the following URI parameters. Request Body The request does not have a request body. Response Syntax HTTP/1.1 200 Content-type: application/json { "botName": "string", "botVersion": "string", "checksum": "string", "conversationLogs": { "iamRoleArn": "string", "logSettings": [ { "destination": "string", "kmsKeyArn": "string", "logType": "string", "resourceArn": "string", "resourcePrefix": "string" } ] }, "createdDate": number, "description": "string", "lastUpdatedDate": number, "name": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - botName The name of the bot that the alias points to. Type: String Length Constraints: Minimum length of 2. Maximum length of 50. Pattern: ^([A-Za-z]_?)+$ - botVersion The version of the bot that the alias points to. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: \$LATEST|[0-9]+ - checksum Checksum of the bot alias. Type: String - conversationLogs The settings that determine how Amazon Lex uses conversation logs for the alias. Type: ConversationLogsResponse object - createdDate The date that the bot alias was created. Type: Timestamp - description A description of the bot alias. Type: String Length Constraints: Minimum length of 0. Maximum length of 200. - lastUpdatedDate The date that the bot alias was updated. When you create a resource, the creation date and the last updated date are the same. Type: Timestamp - name The name of the bot alias. Type: String Length Constraints: Minimum length of 1. Maximum length of 100. Pattern: ^([A-Za-z]_?)+$:
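A short sketch of calling this operation from Python with boto3; the bot and alias names are placeholders, and the response keys correspond to the elements documented above.

    import boto3

    client = boto3.client("lex-models", region_name="us-east-1")

    # "OrderFlowers" and "PROD" are hypothetical bot and alias names.
    response = client.get_bot_alias(name="PROD", botName="OrderFlowers")
    print(response["botVersion"], response["checksum"], response["lastUpdatedDate"])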
https://docs.aws.amazon.com/lex/latest/dg/API_GetBotAlias.html
2022-06-25T15:44:51
CC-MAIN-2022-27
1656103035636.10
[]
docs.aws.amazon.com
KT Cloud #Set up Kupboard clusters #Introduction This section describes how to create and configure G1,G2 instances to build a kupboard cluster on KT Cloud. #G11,G2 instances as in the table below. OS for all instances must be Ubuntu18.04 or 20.04. The size of additional disks should change depending on services or packages installed on the service cluster #Firewall Create a firewall for a subnet. #SSH Key In order to set up the servers, you need a SSH key pair to enable automatic root account login to the individual server. KT Cloud provides the SSH Key Pair root account login is automatically done. #Server Configuration Result #G1,G2 platform #SSH Key Pair #Server The password of each server is delivered with a short-lived alarm the moment the server is created. The account is root. But, the password is not required to log into the server once the public key of SSH Key Pair is mapped into the server as it is created. #Select SSH Key Pair #Networking Each public IP is mapped into a server, respectively.
https://docs.kupboard.io/docs/getstarted/kcloud/
2022-06-25T13:50:02
CC-MAIN-2022-27
1656103035636.10
[]
docs.kupboard.io
Go to the docs for the latest release. Provisioning storage You can provision additional NFS and CIFS storage for your Cloud Volumes ONTAP systems from Cloud Manager by managing volumes and aggregates. If you need to create iSCSI storage, you should do so from System Manager. Creating FlexVol volumes If you need more storage after you launch a Cloud Volumes ONTAP system, you can create new FlexVol volumes for NFS or CIFS FlexVol. Creating FlexVol. Using FlexCache volumes to accelerate data access A FlexCache volume is a storage volume that caches NFS read data from an origin (or source) volume. Subsequent reads to the cached data result in faster access to that data.. FlexCache volumes work well for system workloads that are read-intensive. Cloud Manager does not provide management of FlexCache volumes at this time, but you can use the ONTAP CLI or ONTAP System Manager to create and manage FlexCache volumes: Starting with the 3.7.2 release, Cloud Manager generates a FlexCache license for all new Cloud Volumes ONTAP systems. The license includes a 500 GB usage limit.
https://docs.netapp.com/us-en/occm37/task_provisioning_storage.html
2022-06-25T14:04:49
CC-MAIN-2022-27
1656103035636.10
[array(['./media/workflow_storage_provisioning.png', 'This illustration shows the steps to provision storage for Cloud Volumes ONTAP: if using NFS, create volumes in Cloud Manager and if using CIFS or iSCSI, create aggregates in Cloud Manager and then provision storage in System Manager.'], dtype=object) ]
docs.netapp.com
The XEM8320 supports the FrontPanel Device Settings in the table below, accessible from the FrontPanel Application as well as the Device Settings API. XEM8320 Device Settings The XEM8320 has six SYZYGY ports that support SmartVIO for automatic interface voltage selection. These settings may optionally be overridden by the settings below. SYZYGY Device Discovery The following settings are common to all FrontPanel devices that support SYZYGY either natively or through a compatible breakout board or reference platform. The XEM8320 has six ports, supporting n=0..5.
https://docs.opalkelly.com/xem8320/device-settings/
2022-06-25T14:01:28
CC-MAIN-2022-27
1656103035636.10
[]
docs.opalkelly.com
Quickstart Clients only require a valid config object. The example below loads the config file at ~/.oci/config with the default profile name DEFAULT to create an Identity client. Since we'll be using the root compartment (or tenancy) for most operations, let's also extract that from the config object:

    >>> import oci
    >>> config = oci.config.from_file()
    >>> identity = oci.identity.IdentityClient(config)
    >>> compartment_id = config["tenancy"]

Next we'll need to populate an instance of the CreateGroupDetails model with our request, and then send it; a cleaned-up sketch of this call is shown below. You can manually iterate through responses, providing the page from the previous response to the next request. For example:

    response = identity.list_users(compartment_id)
    users = response.data
    while response.has_next_page:
        response = identity.list_users(compartment_id, page=response.next_page)
        users.extend(response.data)

For examples on pagination, please check GitHub. Next, create an Object Storage client and fetch the namespace:

    >>> object_storage = oci.object_storage.ObjectStorageClient(config)
    >>> namespace = object_storage.get_namespace().data

To upload an object, we'll create a bucket:
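A cleaned-up sketch of populating CreateGroupDetails and sending it with the Identity client; the group name and description are placeholders:

    import oci
    from oci.identity.models import CreateGroupDetails

    config = oci.config.from_file()
    identity = oci.identity.IdentityClient(config)
    compartment_id = config["tenancy"]

    request = CreateGroupDetails(
        compartment_id=compartment_id,
        name="my-test-group",                          # placeholder group name
        description="Created with the OCI Python SDK"
    )
    group = identity.create_group(request).data
    print(group.id, group.lifecycle_state)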
https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/stable/quickstart.html
2022-06-25T13:42:21
CC-MAIN-2022-27
1656103035636.10
[]
oracle-cloud-infrastructure-python-sdk.readthedocs.io
I’m so sorry From Chernobyl to Fukushima. Impressive with its tasteful cinematography, a poetic image of the dangers associated with the use of nuclear energy. The latest production by Zhao Liang, one of the most important Chinese documentary artists, again impresses with its refined form. However, the dreamy, deserted landscapes and captivating frames hide terrifying secrets. The protagonists of Liang's previous film, "Behemoth" (WATCH DOCS 2015 Award), were inhumanly exploited coal miners. Continuing the theme of the threat to humanity caused by uncontrolled development, the director devotes his new documentary to the dangers of nuclear energy. He visits people living in contaminated areas, from Fukushima to Chernobyl, shows victims of radiation and the process of neutralizing toxic waste. As to mankind’s energy prospects - you can agree or disagree with Zhao Liang, but one thing is certain - in his sometimes poetically ambiguous but also brutally literal film, the Chinese artist sides with the victims, as usual. Konrad Wirkowski 2021 Festival de Cannes 2021 Busan IFF (Busan Cinephile Award)
https://watchdocs.pl/en/watch-docs/2021/films/i-m-so-sorry,111575332
2022-06-25T14:16:20
CC-MAIN-2022-27
1656103035636.10
[array(['/upload/thumb/2021/11/i-am-so-sorry_auto_800x900.jpg', 'I’m so sorry'], dtype=object) ]
watchdocs.pl
Web Interface - Alluxio Master Web UI - Alluxio Worker Web UI - Security Alluxio has a user-friendly web interface to allow users to monitor and manage the system. Each master and worker serves its own web UI. The default port for the web interface is 19999 for the master and 30000 for the workers. Alluxio Master Web UI License Summary Alluxio license information Storage Usage Summary Alluxio tiered storage information, which gives a breakdown of the amount of space used per tier across the Alluxio cluster. Configuration To check the current configuration information, click “Configuration” in the navigation bar on the top of the screen. The configuration page has two sections: Alluxio Configuration A map of all the Alluxio configuration properties and their set values. Whitelist Contains all the Alluxio path prefixes eligible to be stored in Alluxio. A request may still be made to a file not prefixed by a path in the whitelist. Only whitelisted files will be stored in Alluxio. Workers The master also lists all known Alluxio workers in the system in the “Workers” tab. In-Alluxio Data To browse all files in Alluxio, click on the “In-Alluxio Data” tab in the navigation bar. Files currently in Alluxio are listed, with the file name, file size, size for each block, whether the file is pinned or not, the file creation time, and the file modification time. Logs To browse the master node’s logs directory, click “Logs” in the navigation bar on the top of the screen. Enable/Disable Auto-Refresh To toggle whether the browser automatically refreshes the information, click the “Enable Auto-Refresh” button. Click it again to disable. Alluxio Worker Web UI Each Alluxio worker also serves its own web UI with worker-level information. BlockInfo In the “BlockInfo” page, you can see the files on the worker, and other information such as the file size and which tier each file is stored on. Also, if you click on a file, you can view all the blocks of that file. Logs To browse the worker node’s logs directory, click “Logs” in the navigation bar on the top of the screen. Metrics To access the worker metrics section, click on the “Metrics” tab in the navigation bar. This section shows all worker metrics. It includes the following sections: Worker Gauges Overall measures of the worker. Logical Operation Number of operations performed. Enable/Disable Auto-Refresh To toggle whether the browser automatically refreshes the information, click the “Enable Auto-Refresh” button. Click it again to disable. Return to Master Provides a link to the Master node’s web UI. Security Authentication A username and password pair can be set to restrict access to the web UI. Set the following properties in alluxio-site.properties to enable login authentication for the web UI: alluxio.web.login.enabled=true alluxio.web.login.username=xxx alluxio.web.login.password=xxx SSL To enable HTTPS for the web UI, set the following properties in alluxio-site.properties: alluxio.web.ssl.enabled=true alluxio.web.ssl.keystore.path=<path to the keystore containing the SSL key pairs and certificate> alluxio.web.ssl.keystore.password=<keystore password> alluxio.web.ssl.key.password=<SSL key's password in the keystore> alluxio.web.ssl.key.alias=<SSL key's alias in the keystore>
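The web UI ports themselves are also configurable. Here is a minimal alluxio-site.properties sketch; the port property names are assumed from the standard Alluxio property naming scheme rather than stated on this page, so verify them against your Alluxio version.
# Move the master and worker web UIs off the default 19999/30000 ports
alluxio.master.web.port=29999
alluxio.worker.web.port=31000

# Optional: combine with the login authentication shown above
alluxio.web.login.enabled=true
alluxio.web.login.username=admin
alluxio.web.login.password=change-me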
https://docs.alluxio.io/ee/user/1.8/en/Web-Interface.html
2020-08-03T12:21:42
CC-MAIN-2020-34
1596439735810.18
[array(['../img/screenshot_overview.png', 'Alluxio Master Home Page'], dtype=object) array(['../img/screenshot_browseFileSystem.png', 'browse'], dtype=object) array(['../img/screenshot_viewFile.png', 'viewFile'], dtype=object) array(['../img/screenshot_systemConfiguration.png', 'configurations'], dtype=object) array(['../img/screenshot_workers.png', 'workers'], dtype=object) array(['../img/screenshot_inMemoryFiles.png', 'inMemFiles'], dtype=object) array(['../img/screenshot_logs.png', 'workers'], dtype=object) array(['../img/screenshot_masterMetrics.png', 'masterMetrics'], dtype=object) array(['../img/screenshot_workersOverview.png', 'Alluxio Worker Home Page'], dtype=object) array(['../img/screenshot_logs.png', 'logs'], dtype=object) array(['../img/screenshot_workerMetrics.png', 'workerMetrics'], dtype=object) ]
docs.alluxio.io
Palette:pointMerge Summary The merge component packs the pixels of two input images into a single output image. The dimensions of the output image are automatically calculated to fit all of the pixels of the input images. The Palette:pointRepack component can be used to resize the output image if specific dimensions are required. Multiple pointMerge components can be chained together if you need to merge more than two images. Parameters - Help Page Help - Opens this page. Version Version - Current version of the COMP. Operator Inputs - Input 0 - The first input image. - Input 1 - The second input image. Operator Outputs - Output 0 - An output image that includes all of the pixels of the two input images.
https://docs.derivative.ca/Palette:pointMerge
2020-08-03T12:08:34
CC-MAIN-2020-34
1596439735810.18
[]
docs.derivative.ca
Where the syntax is used Data Modeling Syntax is used in SQL interfaces, including: - SQL editors: SQL Editor, Transform Model's creation interface - Custom Dimensions and Measures creation in Data Modeling In SQL editors Model reference In queries, you can use models just like your normal database tables. This means you can combine models and raw database tables in a query. However, it is recommended that you use models all the time for consistency and ensure dependency between SQL models. For example, we want to combine models ecommerce_orders, ecommerce_products and ecommerce_order_items: with val as ( select { { #oi.order_id }} , { { #oi.quantity }} * { { #p.price }} as total_value from { {#ecommerce_order_items oi}} left join { {#ecommerce_products p}} on { { #oi.product_id }} = { { #p.id }} ) select { { #o.* }} , val.total_value from { {#ecommerce_orders o}} left join val on { { #o.id }} = val.order_id If you do not want to use alias on models, you can use the model name, just like when you query normal tables: with val as ( select { { #oi.order_id }} , { { #oi.quantity }} * { { #p.price }} as total_value from { {#ecommerce_order_items oi}} left join { {#ecommerce_products p}} on { { #oi.product_id }} = { { #p.id }} ) select { { #ecommerce_orders.* }} -- Use model name , val.total_value from { {#ecommerce_orders}} -- No alias left join val on { { #ecommerce_orders.id }} = val.order_id -- Use model name Field reference Example of a query using field reference: -- Calculate statistics of buyers Custom Dimensions and Measures are just SQL snippets, so you must specify a field name when using them in a query. If not, the resulted column will be named automatically by your database's SQL. In Custom Dimensions & Measures To refer to a Base Dimensions, custom Dimensions or Measure in the same model, the syntax is { { #THIS.field_name }} or { { #THIS.measure_name }} How it works When you write a SQL using the syntax, Holistics's engine will parse it into a full query to be run against your database. In the final query: - Referred models will be turned into CTEs - Referred custom fields/measures will be turned into the full formula This way the dependency between models will be enforced. For example, you have model users_count created from this aggregation: select { {#u.age_group}} as age_group , { {#u.count_users}} as count_users from { {#ecommerce_users u}} group by 1 In the final query: - Model ecommerce_usersis turned into a CTE age_group's formula is used count_usersformula is used Query optimization When you are selecting from a Base Model, or a persisted Transform Model, if you use Holistics's field reference syntax, Holistics will be able to select only the necessary fields to be inserted into the CTE. If the normal SQL syntax is used, the engine will need to insert all the fields in the base table to the CTE. For example, selecting from a Base Model created from a table with more than 20 fields, using normal SQL syntax: select id , name , property_type , room_type from { {#homestay_listings}} The resulted query will include all the fields: If you use Holistics's syntax: select { {#l.id}} , { {#l.name}} , { {#l.property_type}} , { {#l.room_type}} from { {#homestay_listings l}} This is particularly important when you query from "fat tables" with large number of columns (like Snowplow event tables). Updated 2 months ago
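As a concrete illustration of the #THIS form described above, here is what a custom Measure and a custom Dimension might look like on the ecommerce_order_items model. The quantity field is taken from the examples above, but the measure and dimension themselves are illustrative additions, not part of the original article (the doubled braces are written without the inner space here).
-- Custom Measure "total_quantity" on the ecommerce_order_items model
sum({{ #THIS.quantity }})

-- Custom Dimension "order_size" on the same model
case when {{ #THIS.quantity }} >= 10 then 'bulk' else 'standard' end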
https://docs.holistics.io/docs/data-modeling-syntax
2020-08-03T11:37:54
CC-MAIN-2020-34
1596439735810.18
[array(['https://files.readme.io/6714fa6-Selection_246.png', 'Selection_246.png'], dtype=object) array(['https://files.readme.io/6714fa6-Selection_246.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/944fc06-Selection_248.png', 'Selection_248.png'], dtype=object) array(['https://files.readme.io/944fc06-Selection_248.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/205ede4-Selection_249.png', 'Selection_249.png'], dtype=object) array(['https://files.readme.io/205ede4-Selection_249.png', 'Click to close...'], dtype=object) ]
docs.holistics.io
Installation of Yatra Installing the Yatra WordPress travel plugin is quite easy. Follow these steps to install the Yatra plugin. - Go to the Plugins area of your admin panel - Click the Add New button - Type Yatra in the search box - Click the Install button, wait a couple of seconds, and then click the Activate button Yay, the Yatra plugin is now successfully installed on your website. Here is the installation video tutorial:
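If you manage your site from the command line, WP-CLI offers an equivalent one-step install; the plugin slug yatra is assumed here and should be confirmed against the WordPress.org listing.
wp plugin install yatra --activate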
https://docs.mantrabrain.com/yatra-wordpress-plugin/
2020-08-03T11:25:28
CC-MAIN-2020-34
1596439735810.18
[]
docs.mantrabrain.com
adCenter API forums back online We had some issues yesterday with the API forums during a migration of the MSDN system, but the problems appear to be resolved. Apologies for the inconvenience. Please let us know if you encounter any ongoing issues with the forum by posting via the blog comments or the Advertiser forum on this site. Also, you can send a message to the support alias for API users or to me via the web site. Thanks for taking part in the community. Chris
https://docs.microsoft.com/en-us/archive/blogs/bing_ads_api/adcenter-api-forums-back-online
2020-08-03T12:25:21
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Basic string operations in .NET Applications often respond to users by constructing messages based on user input. For example, it is not uncommon for websites to greet a returning user with a customized message that incorporates the user's name. Related sections Type Conversion in .NET Describes how to convert one type into another type. Formatting Types Describes how to format strings using format specifiers.
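A short C# sketch of the kind of basic string operations this article covers (comparison, concatenation, and building a message from user input); the names and values used here are illustrative only.
using System;

class StringBasics
{
    static void Main()
    {
        string userName = "Ada";                       // pretend this came from user input

        // Build a customized message with string interpolation
        string greeting = $"Welcome back, {userName}!";

        // Concatenate and compare strings
        string combined = string.Concat(greeting, " ", "You have 3 new notifications.");
        bool sameName = string.Equals(userName, "ada", StringComparison.OrdinalIgnoreCase);

        Console.WriteLine(combined);
        Console.WriteLine($"Case-insensitive match: {sameName}");
        Console.WriteLine(greeting.ToUpperInvariant());
    }
}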
https://docs.microsoft.com/en-us/dotnet/standard/base-types/basic-string-operations
2020-08-03T13:40:24
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Modules in v0.2.0¶ MONAI aims at supporting deep learning in medical image analysis at multiple granularities. This figure shows a typical example of the end-to-end workflow in medical deep learning area: MONAI architecture¶ The design principle of MONAI is to provide flexible and light APIs for users with varying expertise. All the core components are independent modules, which can be easily integrated into any existing PyTorch programs. Users can leverage the workflows in MONAI to quickly set up a robust training or evaluation program for research experiments. Rich examples and demos are provided to demonstrate the key features. Researchers contribute implementations based on the state-of-the-art for the latest research challenges, including COVID-19 image analysis, Model Parallel, etc. The overall architecture and modules are shown in the following figure: The rest of this page provides more details for each module. Medical image data I/O, processing and augmentation¶ Medical images require highly specialized methods for I/O, preprocessing, and augmentation. Medical images are often in specialized formats with rich meta-information, and the data volumes are often high-dimensional. These require carefully designed manipulation procedures. The medical imaging focus of MONAI is enabled by powerful and flexible image transformations that facilitate user-friendly, reproducible, optimized medical data pre-processing pipelines. 1. Transforms support both Dictionary and Array format data¶ The widely used computer vision packages (such as torchvision) focus on spatially 2D array image processing. MONAI provides more domain-specific transformations for both spatially 2D and 3D and retains the flexible transformation “compose” feature. As medical image preprocessing often requires additional fine-grained system parameters, MONAI provides transforms for input data encapsulated in python dictionaries. Users can specify the keys corresponding to the expected data fields and system parameters to compose complex transformations. There is a rich set of transforms in six categories: Crop & Pad, Intensity, IO, Post-processing, Spatial, and Utilities. For more details, please visit all the transforms in MONAI. 2. Medical specific transforms¶ MONAI aims at providing a comprehensive medical image specific transformations. These currently include, for example: LoadNifti: Load Nifti format file from provided path Spacing: Resample input image into the specified pixdim Orientation: Change the image’s orientation into the specified axcodes RandGaussianNoise: Perturb image intensities by adding statistical noises NormalizeIntensity: Intensity Normalization based on mean and standard deviation Affine: Transform image based on the affine parameters Rand2DElastic: Random elastic deformation and affine in 2D Rand3DElastic: Random elastic deformation and affine in 3D 2D transforms tutorial shows the detailed usage of several MONAI medical image specific transforms. 3. Fused spatial transforms and GPU acceleration¶ As medical image volumes are usually large (in multi-dimensional arrays), pre-processing performance affects the overall pipeline speed. MONAI provides affine transforms to execute fused spatial operations, supports GPU acceleration via native PyTorch for high performance. 
For example: # create an Affine transform affine = Affine( rotate_params=np.pi/4, scale_params=(1.2, 1.2), translate_params=(200, 40), padding_mode='zeros', device=torch.device('cuda:0') ) # convert the image using bilinear interpolation new_img = affine(image, spatial_size=(300, 400), mode='bilinear') Experiments and test results are available at Fused transforms test. Currently all the geometric image transforms (Spacing, Zoom, Rotate, Resize, etc.) are designed based on the PyTorch native interfaces. Geometric transforms tutorial indicates the usage of affine transforms with 3D medical images. 4. Randomly crop out batch images based on positive/negative ratio¶ Medical image data volume may be too large to fit into GPU memory. A widely-used approach is to randomly draw small size data samples during training and run a “sliding window” routine for inference. MONAI currently provides general random sampling strategies including class-balanced fixed ratio sampling which may help stabilize the patch-based training process. A typical example is in Spleen 3D segmentation tutorial, which achieves the class-balanced sampling with RandCropByPosNegLabel transform. 5. Deterministic training for reproducibility¶ Deterministic training support is necessary and important for deep learning research, especially in the medical field. Users can easily set the random seed to all the random transforms in MONAI locally and will not affect other non-deterministic modules in the user’s program. For example: # define a transform chain for pre-processing train_transforms = monai.transforms.Compose([ LoadNiftid(keys=['image', 'label']), RandRotate90d(keys=['image', 'label'], prob=0.2, spatial_axes=[0, 2]), ... ... ]) # set determinism for reproducibility train_transforms.set_random_state(seed=0) Users can also enable/disable deterministic at the beginning of training program: monai.utils.set_determinism(seed=0, additional_settings=None) 6. Multiple transform chains¶ To apply different transforms on the same data and concatenate the results, MONAI provides CopyItems transform to make copies of specified items in the data dictionary and ConcatItems transform to combine specified items on the expected dimension, and also provides DeleteItems transform to delete unnecessary items to save memory. Typical usage is to scale the intensity of the same image into different ranges and concatenate the results together. 7. Debug transforms with DataStats¶ When transforms are combined with the “compose” function, it’s not easy to track the output of specific transform. To help debug errors in the composed transforms, MONAI provides utility transforms such as DataStats to print out intermediate data properties such as data shape, value range, data value, Additional information, etc. It’s a self-contained transform and can be integrated into any transform chain. 8. Post-processing transforms for model output¶ MONAI also provides post-processing transforms for handling the model outputs. Currently, the transforms include: Adding activation layer (Sigmoid, Softmax, etc.). Converting to discrete values (Argmax, One-Hot, Threshold value, etc), as below figure (b). Splitting multi-channel data into multiple single channels. Removing segmentation noise based on Connected Component Analysis, as below figure (c). Extracting contour of segmentation result, which can be used to map to original image and evaluate the model, as below figure (d) and (e). 
After applying the post-processing transforms, it’s easier to compute metrics, save model output into files or visualize data in the TensorBoard. Post transforms tutorial shows an example with several main post transforms. 9. Integrate third-party transforms¶ The design of MONAI transforms emphasis code readability and usability. It works for array data or dictionary-based data. MONAI also provides Adaptor tools to accommodate different data format for 3rd party transforms. To convert the data shapes or types, utility transforms such as ToTensor, ToNumpy, SqueezeDim are also provided. So it’s easy to enhance the transform chain by seamlessly integrating transforms from external packages, including: ITK, BatchGenerator, TorchIO and Rising. For more details, please check out the tutorial: integrate 3rd party transforms into MONAI program. Datasets¶ 1. Cache IO and transforms data to accelerate training¶ Users often need to train the model with many (potentially thousands of) epochs over the data to achieve the desired model quality. A native PyTorch implementation may repeatedly load data and run the same preprocessing steps for every epoch during training, which can be time-consuming and unnecessary, especially when the medical image volumes are large. MONAI provides a multi-threads CacheDataset to accelerate these transformation steps during training by storing the intermediate outcomes before the first randomized transform in the transform chain. Enabling this feature could potentially give 10x training speedups in the CacheDataset experiment. 2. Cache intermediate outcomes into persistent storage¶ The PersistentDataset is similar to the CacheDataset, where the intermediate cache values are persisted to disk storage for rapid retrieval between experimental runs (as is the case when tuning hyperparameters), or when the entire data set size exceeds available memory. The PersistentDataset could achieve similar performance when comparing to CacheDataset in PersistentDataset experiment. 3. Zip multiple PyTorch datasets and fuse the output¶ MONAI provides ZipDataset to associate multiple PyTorch datasets and combine the output data (with the same corresponding batch index) into a tuple, which can be helpful to execute complex training processes based on various data sources. For example: class DatasetA(Dataset): def __getitem__(self, index: int): return image_data[index] class DatasetB(Dataset): def __getitem__(self, index: int): return extra_data[index] dataset = ZipDataset([DatasetA(), DatasetB()], transform) 4. Predefined Datasets for public medical data¶ To quickly get started with popular training data in the medical domain, MONAI provides several data-specific Datasets(like: MedNISTDataset, DecathlonDataset, etc.), which include downloading, extracting data files and support generation of training/evaluation items with transforms. And they are flexible that users can easily modify the JSON config file to change the default behaviors. MONAI always welcome new contributions of public datasets, please refer to existing Datasets and leverage the download and extracting APIs, etc. Public datasets tutorial indicates how to quickly set up training workflows with MedNISTDataset and DecathlonDataset and how to create a new Dataset for public data. The common workflow of predefined datasets: Losses¶ There are domain-specific loss functions in the medical imaging research which are not typically used in the generic computer vision tasks. 
As an important module of MONAI, these loss functions are implemented in PyTorch, such as DiceLoss, GeneralizedDiceLoss, MaskedDiceLoss, TverskyLoss and FocalLoss, etc. Network architectures¶ Some deep neural network architectures have shown to be particularly effective for medical imaging analysis tasks. MONAI implements reference networks with the aims of both flexibility and code readability. To leverage the common network layers and blocks, MONAI provides several predefined layers and blocks which are compatible with 1D, 2D and 3D networks. Users can easily integrate the layer factories in their own networks. For example: # import MONAI’s layer factory from monai.networks.layers import Conv # adds a transposed convolution layer to the network # which is compatible with different spatial dimensions. name, dimension = Conv.CONVTRANS, 3 conv_type = Conv[name, dimension] add_module('conv1', conv_type(in_channels, out_channels, kernel_size=1, bias=False)) And there are several 1D/2D/3D-compatible implementations of intermediate blocks and generic networks, such as UNet, DenseNet, GAN. Evaluation¶ To run model inferences and evaluate the model quality, MONAI provides reference implementations for the relevant widely-used approaches. Currently, several popular evaluation metrics and inference patterns are included: 1. Sliding window inference¶ For model inferences on large volumes, the sliding window approach is a popular choice to achieve high performance while having flexible memory requirements (alternatively, please check out the latest research on model parallel training using MONAI). It also supports overlap and blending_mode configurations to handle the overlapped windows for better performances. A typical process is: Select continuous windows on the original image. Iteratively run batched window inferences until all windows are analyzed. Aggregate the inference outputs to a single segmentation map. Save the results to file or compute some evaluation metrics. The Spleen 3D segmentation tutorial leverages SlidingWindow inference for validation. Visualization¶ Beyond the simple point and curve plotting, MONAI provides intuitive interfaces to visualize multidimensional data as GIF animations in TensorBoard. This could provide a quick qualitative assessment of the model by visualizing, for example, the volumetric inputs, segmentation maps, and intermediate feature maps. A runnable example with visualization is available at UNet training example. Result writing¶ Currently MONAI supports writing the model outputs as NIfTI files or PNG files for segmentation tasks, and as CSV files for classification tasks. And the writers can restore the data spacing, orientation or shape according to the original_shape or original_affine information from the input image. A rich set of formats will be supported soon, along with relevant statistics and evaluation metrics automatically computed from the outputs. Workflows¶ To quickly set up training and evaluation experiments, MONAI provides a set of workflows to significantly simplify the modules and allow for fast prototyping. These features decouple the domain-specific components and the generic machine learning processes. They also provide a set of unify APIs for higher level applications (such as AutoML, Federated Learning). The trainers and evaluators of the workflows are compatible with pytorch-ignite Engine and Event-Handler mechanism. There are rich event handlers in MONAI to independently attach to the trainer or evaluator. 
The workflow and event handlers are shown as below: The end-to-end training and evaluation examples are available at Workflow examples. Research¶ There are several research prototypes in MONAI corresponding to the recently published papers that address advanced research problems. We always welcome contributions in forms of comments, suggestions, and code implementations. The generic patterns/modules identified from the research prototypes will be integrated into MONAI core functionality. COPLE-Net for COVID-19 Pneumonia Lesion Segmentation¶ A reimplementation of the COPLE-Net originally proposed by: G. Wang, X. Liu, C. Li, Z. Xu, J. Ruan, H. Zhu, T. Meng, K. Li, N. Huang, S. Zhang. (2020) “A Noise-robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions from CT Images.” IEEE Transactions on Medical Imaging. 2020. DOI: 10.1109/TMI.2020.3000314 LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation¶ A reimplementation of the LAMP system originally proposed by: Wentao Zhu, Can Zhao, Wenqi Li, Holger Roth, Ziyue Xu, and Daguang Xu (2020) “LAMP: Large Deep Nets with Automated Model Parallelism for Image Segmentation.” MICCAI 2020 (Early Accept, paper link:)
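To tie several of the modules above together, here is a compact sketch in the style of the 0.2-era examples on this page: dictionary transforms feeding a CacheDataset, a UNet with DiceLoss, and sliding window inference for validation. The keys, file paths, patch size, and network hyperparameters are illustrative, and API details may differ in later MONAI releases.
import torch
from monai.transforms import Compose, LoadNiftid, AddChanneld, ToTensord
from monai.data import CacheDataset
from monai.networks.nets import UNet
from monai.losses import DiceLoss
from monai.inferers import sliding_window_inference

# Dictionary-based transform chain (keys are illustrative)
train_transforms = Compose([
    LoadNiftid(keys=['image', 'label']),
    AddChanneld(keys=['image', 'label']),
    ToTensord(keys=['image', 'label']),
])
train_transforms.set_random_state(seed=0)  # deterministic pre-processing

# Cache deterministic pre-processing results in memory (paths are placeholders)
train_files = [{'image': 'img0.nii.gz', 'label': 'seg0.nii.gz'}]
train_ds = CacheDataset(data=train_files, transform=train_transforms, cache_rate=1.0)

# 3D UNet + Dice loss
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = UNet(dimensions=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2),
             num_res_units=2).to(device)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

# Sliding window inference over a large validation volume
val_image = train_ds[0]['image'].unsqueeze(0).to(device)
with torch.no_grad():
    val_output = sliding_window_inference(val_image, roi_size=(96, 96, 96),
                                           sw_batch_size=4, predictor=model)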
https://docs.monai.io/en/latest/highlights.html
2020-08-03T12:39:47
CC-MAIN-2020-34
1596439735810.18
[array(['_images/end_to_end.png', 'image'], dtype=object) array(['_images/medical_transforms.png', 'image'], dtype=object) array(['_images/affine.png', 'image'], dtype=object) array(['_images/multi_transform_chains.png', 'image'], dtype=object) array(['_images/post_transforms.png', 'image'], dtype=object) array(['_images/cache_dataset.png', 'image'], dtype=object) array(['_images/datasets_speed.png', 'image'], dtype=object) array(['_images/dataset_progress.png', 'image'], dtype=object) array(['_images/workflows.png', 'image'], dtype=object) array(['_images/coplenet.png', 'image'], dtype=object) array(['_images/unet-pipe.png', 'image'], dtype=object)]
docs.monai.io
This board has an on-board debug probe and IS READY for debugging. You don’t need to use or buy an external debug probe.
https://docs.platformio.org/en/stable/boards/atmelavr/metro.html
2020-08-03T12:17:59
CC-MAIN-2020-34
1596439735810.18
[]
docs.platformio.org
For using the AAT SDK on a remote(or local) server, this section assumes you have basic knowledge of setting up and maintaining a server. In this tutorial, we are going to be using NPM and Node.js to setup the AAT SDK on a local node. Prerequisites: Installing the AAT-SDK On your server, create an NPM project. Then we will run the following command to download and install our dependencies. npm install --save @pokt-network/aat-js Import Pocket-AAT Library Note: If you haven't created an AAT token, then click here to learn how to obtain the: - clientPublicKey - applicationPublicKey - applicationPrivateKey Next, we will be creating a server.js file and insert the following code. // First require the PocketAAT class const PocketAAT = require('pocket-aat-js'); /* Define the arguments needed to build an AAT: - version: the version of the client - clientPublicKey: the account that the dApp is using to connect to the staked account - applicationPublicKey: the public key of the account staked on the network - applicationPrivateKey: the encrypted application private key */ const version = '0.0.1'; const clientPublicKey = 'ABCD...'; const applicationPublicKey = 'ABCD...'; const applicationPrivateKey = 'E73B...'; // Create a new PocketAAT instance const pocketAAT = PocketAAT.from(version, clientPublicKey, applicationPublicKey, applicationPrivateKey) // Example JSON output console.log(JSON.stringify(pocketAAT)); Updated 4 months ago
https://docs.pokt.network/docs/using-the-aat-sdk
2020-08-03T12:36:32
CC-MAIN-2020-34
1596439735810.18
[]
docs.pokt.network
Commit hook is a logic flow control pattern similar to trigger in relational databases. It enables to hook the CRUD events per objects of particular class. For cases when an object is being created (with a new operator), updated (by writing to a field) and deleted (when Deleteis called, and after the committed delete), additional event handlers of code might be added for execution. using System;using Starcounter;namespace TestHooks{[Database]public class Hooked{public string state { get; set; }}[Database]public class YetAnotherClass{public int Stock { get; set; }}class Program{static void Main(){Hook<Hooked>.BeforeDelete += (s, obj) =>{obj.state = "is about to be deleted";Console.WriteLine("Hooked: Object {0} is to be deleted", obj.GetObjectNo());};Hook<Hooked>.CommitInsert += (s, obj) =>{obj.state = "is created";Console.WriteLine("Hooked: Object {0} is created", obj.GetObjectNo());var nobj = new YetAnotherClass() { Stock = 42 };};Hook<Hooked>.CommitUpdate += (s, obj) =>{obj.state = "is updated";Console.WriteLine("Hooked: Object {0} is updated", obj.GetObjectNo());};Hook<Hooked>.CommitUpdate += (s, obj) => // a second callback{Console.WriteLine("Hooked: We promise you, object {0} is updated", obj.GetObjectNo());};Hook<Hooked>.CommitDelete += (s, onum) =>{Console.WriteLine("Hooked: Object {0} is deleted", onum);Hooked rp = (Hooked)DbHelper.FromID(onum); // returns null here// the following will cause an exception// Console.WriteLine("We cannot do like this: {0}", rp.state);};Hook<YetAnotherClass>.CommitInsert += (s, obj) =>{Console.WriteLine("Never triggered in this app, since it happens to get invoked inside another hook");};Hooked p = null;Db.Transact(() =>{p = new Hooked() { state = "created" };});Db.Transact(() =>{p.state = "property changed";Console.WriteLine("01: The changed object isn't yet commited", p.GetObjectNo());});Console.WriteLine("02: Change for property of {0} is committed", p.GetObjectNo());Db.Transact(() =>{Console.WriteLine("03: We have entered the transaction scope");Console.WriteLine("04: We are about to delete an object {0}, yet it still exists", p.GetObjectNo());p.state = "deleted";p.Delete();Console.WriteLine("05: The deleted object {0} is no longer be available", p.GetObjectNo());Console.WriteLine("06: Were are about to commit the deletion");});Console.WriteLine("07: Deletion is committed");}}} The output produced is as follows (accurate to ObjectNo): Hooked: Object 29 is created01: The changed object isn't yet commitedHooked: Object 29 is updatedHooked: We promise you, object 29 is updated02: Change for property of 29 is committed03: We have entered the transaction scope04: We are about to delete an object 29, yet it still existsHooked: Object 29 is to be deleted05: The deleted object 29 is no longer be available06: Were are about to commit the deletionHooked: Object 29 is deleted07: Deletion is committed Those familiar with .NET recognize Starcounter follows a convention of .NET EventHandler for commit hooks. Currently, the first argument of the callback isn't used. The second argument is a reference to an object being transacted (for create, update and pre-delete events) or an ObjectNo of the object which itself is already deleted (for post-delete event). As in the .NET convention one can have an arbitrary number of event handlers registered per event, which will be triggered in the order of registration on the event occurrence. Why there are separate pre-delete ( BeforeDelete) and post-delete ( CommitDelete) hooks? 
Remember that after an object is physically deleted at the end of a successful transaction scope, you can no longer access it in a post-delete commit hook delegate. However, you might still want to do something meaningful with it right around the moment of deletion. That is why the pre-delete hook is introduced. Note that a pre-delete hook triggers its callback inside the transaction scope, not at the end of the transaction. This means that, in case a transaction has been retried N times, any pre-delete hook for any object deleted inside this transaction will also be executed N times, while all other hooks will be executed exactly once, right after a successful transaction commit. Thus, consider the pre-delete hook to behave as a transaction side effect. How much should commit hooks be used? In general, in situations where you can choose, we recommend avoiding commit hooks. They introduce non-linear flows in the logic, producing more complicated and less maintainable code. Commit hooks are a powerful tool that should only be used in situations where the benefits of using them outweigh the drawbacks. One popular example is separate logging of changes in objects of selected classes. Can I do DB operations inside commit hooks? The answer is "Yes": since all commit hooks relate to write operations (create/update/delete), there must always be a transaction spanning these operations, and all event handlers are run inside this transaction. For example, in TestHooks we create an instance of the class YetAnotherClass inside CommitInsert, but do not introduce a transaction scope around this line. The reason is that there is already a transaction from Main which spans this call. Notes. It is currently not possible to detach commit hook event handlers. CRUD operations introduced inside a hook do not trigger additional hooks. For instance, in TestHooks the insert hook for YetAnotherClass is never invoked, because the only place it is triggered is in CommitInsert, which is itself a commit hook. It is recommended to avoid synchronous tasks in commit hooks. Instead, wrap the tasks in Session.ScheduleTask or Scheduling.ScheduleTask. In essence, when doing anything more than updating database objects, an asynchronous task should be scheduled for it. Otherwise, unexpected behavior might occur, such as Self.GET calls returning null.
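As a minimal illustration of that last recommendation, the hook below hands non-database work to a scheduled task instead of running it synchronously. The logging call is a placeholder; treat this as a sketch rather than an official pattern from the Starcounter docs.
Hook<Hooked>.CommitInsert += (s, obj) =>
{
    var objectNo = obj.GetObjectNo();

    // Do only cheap, in-transaction work here; defer everything else.
    Scheduling.ScheduleTask(() =>
    {
        // Placeholder for non-database work (HTTP call, file logging, etc.)
        Console.WriteLine("Hooked: deferred processing for object {0}", objectNo);
    });
};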
https://docs.starcounter.io/v/2.3.2/guides/transactions/commit-hooks/
2020-08-03T12:34:48
CC-MAIN-2020-34
1596439735810.18
[]
docs.starcounter.io
Create a deployment checklist To ensure the quality of a deployment pipeline, you can optionally associate environments in the pipeline with a checklist that each deployment package must satisfy before being deployed to the environment. This topic describes how to create a deployment checklist for an environment. Note: For an application to appear on the release dashboard, it must be associated with a deployment pipeline. For more information, see Create a development pipeline Step 1 - Define checklist items on udm.Environment Define all of the items that you want to add to a deployment checklist as type modifications on configuration item (CI) types in the synthetic.xml file. Add each checklist item as a property on the udm.Environment CI. The property name must start with requires, and kind must be boolean. The category can be used to group items. For example: <type-modification <property name="requiresReleaseNotes" description="Release notes are required" kind="boolean" required="false" category="Deployment Checklist" /> <property name="requiresPerformanceTested" description="Performance testing is required" kind="boolean" required="false" category="Deployment Checklist" /> <property name="requiresChangeTicketNumber" description="Change ticket number authorizing deployment is required" kind="boolean" required="false" category="Deployment Checklist" /> </type-modification> Step 2 - Define corresponding properties on udm.Version Add a corresponding property to the udm.Version CI type. This means that all deployment packages will have a property that satisfy the checklist item you created. Property name must start with satisfies. kind can be boolean, integer, or string. In the case of an integer or string, the check will fail if the field in the checklist is not empty. For example: <type-modification <property name="satisfiesReleaseNotes" description="Indicates the package contains release notes" kind="boolean" required="false" category="Deployment Checklist"/> <property name="rolesReleaseNotes" kind="set_of_string" hidden="true" default="senior-deployer" /> <property name="satisfiesPerformanceTested" description="Indicates the package has been performance tested" kind="boolean" required="false" category="Deployment Checklist"/> <property name="satisfiesChangeTicketNumber" description="Indicates the change ticket number authorizing deployment to production" kind="string" required="false" category="Deployment Checklist"> <rule type="regex" pattern="^[a-zA-Z]+-[0-9]+$" message="Ticket number should be of the form JIRA-[number]" /> </property> </type-modification> Repeat this process for each checklist item that you want available for deployment checklists. Save the synthetic.xml file and restart the Deploy server. Assign security roles to checks Optionally, you assign security roles to checks. Only users with the specified role can satisfy the checklist item. You can specify multiple roles in a comma-separated list. Roles are defined as extensions of the udm.Version CI type. The property name must start with roles, and the kind must be set_of_string. Also, the hidden property must be set to true. Note: The admin user is can satisfy checks in a checklist. Step 3 - Create a deployment checklist for an environment To build a checklist a checklist for a specific environment: - Log in to Deploy. - In the top navigation bar, click Explorer. - Expand Environments and double-click an environment. 
Go to the Deployment Checklist section and select the items you want to include in the environment checklist. - Click Save. Expand an application with a deployment pipeline that includes the environment you edited, and click one of the application versions. Note: For more information on pipelines, see create a development pipeline. On the environment tile, you can see the Deployment checklist option. Click Deployment checklist to see the items. Deployment checklist verification Deployment checklists are verified at two points during a deployment: - When a deployment is configured. - When a deployment is executed. When configuring a deployment, Deploy validates that all checks for the environment have been met for the deployment package you selected. This validation happens when Deploy calculates the steps required for the deployment. Any deployment of a package to an environment with a checklist contains an additional step at the start of the deployment. This step validates that the necessary checklist items are satisfied and writes confirmation of this to the deployment log. An administrator can verify these entries later if necessary. Verification on package import The checks in deployment checklists are stored in the udm.Version CI. When you import a deployment package (DAR file), checklist properties can be initially set to true, depending on their values in the package manifest file. Deploy can verify checklist properties on import and apply these validations upon deployment. In the hidden property verifyChecklistPermissionsOnCreate on udm.Application, set hidden to false: <type-modification <property name="verifyChecklistPermissionsOnCreate" kind="boolean" hidden="false" required="false" description="If true, permissions for changing checklist requirements will be checked on import"/> </type-modification> You can control the behavior by setting the value to true or false on the application in the repository. false is the default behavior, and true means that the validation checks are done during import. Every udm.Application CI can have a different value. Note: If you want to configure this behavior but you have not imported any applications, create a placeholder application under which deployment packages will be imported, and set the value there.
https://docs.xebialabs.com/v.9.7/deploy/how-to/create-a-deployment-checklist/
2020-08-03T12:55:48
CC-MAIN-2020-34
1596439735810.18
[]
docs.xebialabs.com
MarkLogic Server provides a rich set of monitoring features that include a pre-configured monitoring dashboard and a Management API that allows you to integrate MarkLogic Server with existing monitoring applications or create your own custom monitoring applications. This chapter includes the following sections: In general, you will use a monitoring tool for the following: The monitoring metrics and thresholds of interest will vary depending on your specific hardware. Though this guide focuses on the tools available from MarkLogic that enable you to monitor MarkLogic Server, it is strongly recommended that you select an enterprise-class monitoring tool that monitors your entire computing environment to gather application, operating system, and network metrics alongside MarkLogic Server metrics. There are many monitoring tools on the market that have key features such as alerting, trending, and log analysis to help you monitor your entire environment. MarkLogic Server includes the following monitoring tools: All monitoring tools use a RESTful Management API to communicate with MarkLogic Server. The monitoring tool sends HTTP requests to a monitor host in a MarkLogic cluster. The MarkLogic monitor host gathers the requested information from the cluster and returns it in the form of an HTTP response to the monitoring tool. The Management API is described in Using the Management API. To gain access to the monitoring features described in this guide, a user must be assigned the manage-user role. Monitoring tools should authenticate as a user with that role. The manage-user role is assigned the execute privilege and provides access to the Management API, Manage App Server, and the UI for the Configuration Manager and Monitoring Dashboard. The manage-user role also provides read-only access to all of a cluster's configuration and status information, with the exception of the security settings. For details on assigning roles to users, see Users in the Administrator's Guide. If you have enabled SSL on the Manage App Server, your URLs must start with HTTPS, rather than HTTP. Additionally, you must have a MarkLogic certificate on your browser, as described in Accessing an SSL-Enabled Server from a Browser or WebDAV Client in the Security Guide. Monitoring tools enable you to set thresholds on specific metrics to alert you when a metric exceeds a pre-specified value. The topics in this section are: Many metrics that can help in alerting and troubleshooting are meaningful only if you understand normal patterns of performance. For example, monitoring an App Server for slow queries will require a different threshold on an application that spawns many long-running queries to the task server than on an HTTP App Server where queries are normally in the 100 ms range. Most enterprise-class monitoring tools support data storage to support this type of trend analysis. Developing a starting baseline and tuning it if your application profile changes will yield better results for developing your monitoring strategy. Collecting and storing monitoring metrics has a performance cost, so you need to balance completeness of desired performance metrics against their cost. The cost of collecting monitoring metrics can differ. In general, the more resources you monitor, the greater the cost. For example, if you have a lot of hosts, server status is going to be more expensive. If you have a lot of forests, database status is going to be more expensive. 
In most cases, you will use a subset of the available monitoring metrics. And there may be circumstances in which you temporarily monitor certain metrics and, once the issue have been targeted and resolved, you no longer monitor those metrics. One balancing technique is to measure system performance on a staging environment under heavy load, then enable your monitoring tool and calculate the overhead. You can reduce overhead by reducing collection frequency, reducing the number of metrics collected, or writing a Management API plugin to produce a custom view that pinpoints the specific metrics of interest. Each response from the underlying Management API includes an elapsed time value to help you calculate the relative cost of each response. For details, see Using the Management API. Environments and workloads vary. Each environment will have a unique set of requirements based on variables including cluster configuration, hardware, operating system, patterns of queries and updates, feature sets, and other system components. For example, if replication is not configured in your environment, you can remove templates or policies that monitor that feature. This section provides a set of guiding questions to help you understand and identify the relevant metrics. The topics in this section are: MarkLogic Server is designed to fully utilize system resources. Many settings, such as cache sizes, are auto-sized by MarkLogic Server at installation. Some questions to ask are: Many problems that impact MarkLogic Server originate outside of MarkLogic Server. Consider the health of your overall environment. Some questions to ask are: When you suspect an error or performance problem originates from MarkLogic Server, some questions to ask are: Under normal circumstances you will see loads go up as rates go up. As the workload (number of queries and updates) increases, a steadily high rates value indicates the maximum database throughput has been achieved. When this occurs, you can expect to see increasing loads, which reflect the additional time requests are spending in the wait queue. As the workload decreases, you can expect to see decreasing loads, which reflect fewer requests in the wait queue. If, while the workload is steady, rates decrease and loads increase, something is probably taking away I/O bandwidth from the database. This may indicate that MarkLogic Server has started a background task, such as a merge operation or some process outside of MarkLogic Server is taking away I/O bandwidth. If you are encountering a serious problem in which MarkLogic Server is unable to effectively service your applications, some questions to ask are:
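To make the Management API discussion above concrete, here is a small sketch that requests host status as JSON from the Manage App Server. It assumes the default port 8002, digest authentication, and a user with the manage-user role; the endpoint path and query parameters follow the standard /manage/v2 conventions and should be checked against your MarkLogic version.
import requests
from requests.auth import HTTPDigestAuth

# Assumptions: default Manage App Server on port 8002, digest auth, manage-user role
base = "http://localhost:8002/manage/v2"
auth = HTTPDigestAuth("monitor-user", "password")

resp = requests.get(f"{base}/hosts",
                    params={"view": "status", "format": "json"},
                    auth=auth)
resp.raise_for_status()
status = resp.json()
print(list(status.keys()))  # inspect the returned structure; exact keys vary by version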
http://docs.marklogic.com/guide/monitoring/intro
2020-08-03T12:05:51
CC-MAIN-2020-34
1596439735810.18
[]
docs.marklogic.com
AWS Tools and SDKs Shared Configuration and Credentials Reference Guide Go right to the list of global settings. Go right to the list of per-service settings. AWS SDKs and other AWS developer tools, such as the AWS Command Line Interface (AWS CLI) enable you to interact with AWS service APIs. Before attempting that, however, you must configure the SDK or tool with the information it needs to perform the requested operation. This information includes the following items: Credentials information that identifies who is calling the API. The credentials are used to encrypt the request to the AWS servers. Using this information, AWS confirms your identity and can retrieve permission policies associated with it. Then it can determine what actions you're allowed to perform. Other configuration details that enable you to tell the AWS CLI or SDK how to process the request, where to send the request (to which AWS service endpoint), and how to interpret or display the response. About credential providers Each tool or SDK can provide multiple methods, called credential providers, that you can use to supply the required credential and configuration information. Some credential providers are unique to the tool or SDK, and you must refer to the documentation for that tool or SDK for the details on how to use that method. However, most of the AWS tools and SDKs share a few common credential providers for finding the required information. These methods are the subject of this guide. Shared AWS config and credentials files – These files enable you to store settings that your tools and applications can use. The primary file is config, and you can put all settings into it. However, by default and as a security best practice, sensitive values such as secret keys are stored in a separate credentialsfile. This enables you to separately protect those settings with different permissions. Together, these files enable you to configure multiple groups of settings. Each group of settings is called a profile. When you use an AWS tool to invoke a command or use an SDK to invoke an AWS API, you can specify which profile, and thus which configuration settings, to use for that action. One of the profiles is designated as the defaultprofile and is used automatically when you don't explicitly specify a profile to use. The settings that you can store in these files are documented in this reference guide. Environment variables – Some of the settings can alternatively be stored in the environment variables of your operating system. While you can have only one set of environment variables in effect at a time, they are easily modified dynamically as your program runs and your requirements change. Per-operation parameters – A few settings can be set on a per-operation basis, and thus changed as needed for each operation you invoke. For the AWS CLI or AWS Tools for PowerShell, these take the form of parameters that you enter on the command line. For an SDK, they can take the form of a parameter that you set when you instantiate an AWS client session or service object, or sometimes when you call an individual API. Precedence and credential provider order When an AWS tool or SDK looks for credentials or a configuration setting, it invokes each credential provider in a certain order, and stops when it finds a value that it can use. 
Most AWS tools and SDKs check the credential providers in the following order: Per-operation parameter Environment variable Shared credentials file Shared config file Some tools and SDKs might check in a different order. Also, some tools and SDKs support other methods of storing and retrieving parameters. For example, the AWS SDK for .NET supports an additional credential provider called the SDK Store. For more information about the credential provider order or credential providers that are unique to a tool or SDK, see the documentation for that tool or SDK. The order determines which methods take precedence and override others. For example, if you set up a default profile in the shared config file, it's only found and used after the SDK or tool checks the other credential providers first. This means that if you put a setting in the credentials file, it is used instead of one found in the config file. If you configure an environment variable with a setting and value, it overrides that setting in both the credentials and config files. And finally, a setting on the individual operation (CLI command-line parameter or API parameter) overrides all other values for that one command.
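For reference, a typical pair of shared files looks like the following; the profile name, region, and key values are placeholders. Selecting the non-default profile is then a matter of passing --profile on the command line or setting the AWS_PROFILE environment variable.
# ~/.aws/config
[default]
region = us-west-2
output = json

[profile dev]
region = eu-central-1

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = EXAMPLESECRETACCESSKEY

[dev]
aws_access_key_id = AKIAEXAMPLEKEYID2
aws_secret_access_key = EXAMPLESECRETACCESSKEY2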
https://docs.aws.amazon.com/credref/latest/refdocs/overview.html
2020-08-03T12:58:36
CC-MAIN-2020-34
1596439735810.18
[]
docs.aws.amazon.com
Frequently Asked Questions Check the FAQs below to get familiar with Recommendations offerings and functionalities. Which Phoenix license should I subscribe to in order to receive and view Recommendations? Recommendations is available for Phoenix customers with an Elite or Enterprise license, and comes with the Realize Storage Insights and Recommendations services enabled. I want to view Recommendations, what should I do? Recommendations are available for data protected using Druva Phoenix for NAS and File Servers (Windows/Linux) only. If you are a Druva Phoenix customer, contact your Druva Account Manager or Druva Support. What is non-critical data and what file extensions are scanned to generate Recommendations? Druva classifies as non-critical any data that may not be required to run your business or does not add value to your business decisions. Non-critical data is usually a result of applications running in the background, personal files stored by your users, files generated as part of random activities, and so on. Any type of data under protection has a cost and a management overhead associated with it. Non-critical data can be deleted or excluded from backup to eliminate the cost of protecting it. The following types of files and their extensions are flagged as non-critical data and are scanned at the backup set level for every NAS and File Server configured for protection in Druva Phoenix. After what duration is data updated in Recommendations? The Recommendations engine runs once every 24 hours on the previous day's backups of NAS and File Servers. Depending on factors like backup completion time, storage region, and Recommendations engine run time, data may be updated anytime between 24 and 48 hours after backup completion. Can I modify the default file types and extensions used to generate Recommendations? Yes. Recommendations are generated based on a Rule which defines the threshold at which Recommendations should be generated and for what file types and extensions. You can modify this Rule anytime. For instructions, see Manage Recommendations.
https://docs.druva.com/Druva_Realize/025_Recommendations/080_Frequently_Asked_Questions
2020-08-03T12:31:12
CC-MAIN-2020-34
1596439735810.18
[]
docs.druva.com
Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB's API for MongoDB Before you migrate your data from MongoDB (either on-premises or in the cloud) to Azure Cosmos DB's API for MongoDB, you should: - Read the key considerations about using Azure Cosmos DB's API for MongoDB - Choose an option to migrate your data - Estimate the throughput needed for your workloads - Pick an optimal partition key for your data - Understand the indexing policy that you can set on your data If you have already completed the above pre-requisites for migration, you can Migrate MongoDB data to Azure Cosmos DB's API for MongoDB using the Azure Database Migration Service. Additionally, if you haven't created an account, you can browse any of the Quickstarts that show the steps to create an account. Considerations when using Azure Cosmos DB's API for MongoDB The following are specific characteristics about Azure Cosmos DB's API for MongoDB: Capacity model: Database capacity on Azure Cosmos DB is based on a throughput-based model. This model is based on Request Units per second, which is a unit that represents the number of database operations that can be executed against a collection on a per-second basis. This capacity can be allocated at a database or collection level, and it can be provisioned on an allocation model, or using the autoscale provisioned throughput. Request Units: Every database operation has an associated Request Units (RUs) cost in Azure Cosmos DB. When executed, this is subtracted from the available request units level on a given second. If a request requires more RUs than the currently allocated RU/s there are two options to solve the issue - increase the amount of RUs, or wait until the next second starts and then retry the operation. Elastic capacity: The capacity for a given collection or database can change at any time. This allows for the database to elastically adapt to the throughput requirements of your workload. Automatic sharding: Azure Cosmos DB provides an automatic partitioning system that only requires a shard (or a partition key). The automatic partitioning mechanism is shared across all the Azure Cosmos DB APIs and it allows for seamless data and throughout scaling through horizontal distribution. Migration options for Azure Cosmos DB's API for MongoDB The Azure Database Migration Service for Azure Cosmos DB's API for MongoDB provides a mechanism that simplifies data migration by providing a fully managed hosting platform, migration monitoring options and automatic throttling handling. The full list of options are the following: Estimate the throughput need for your workloads In Azure Cosmos DB, the throughput is provisioned in advance and is measured in Request Units (RU's) per second. Unlike VMs or on-premises servers, RUs are easy to scale up and down at any time. You can change the number of provisioned RUs instantly. For more information, see Request units in Azure Cosmos DB. You can use the Azure Cosmos DB Capacity Calculator to determine the amount of Request Units based on your database account configuration, amount of data, document size, and required reads and writes per second. The following are key factors that affect the number of required RUs: Document size: As the size of an item/document increases, the number of RUs consumed to read or write the item/document also increases. Document property count:The number of RUs consumed to create or update a document is related to the number, complexity and length of its properties. 
You can reduce the request unit consumption for write operations by limiting the number of indexed properties. Query patterns: The complexity of a query affects how many request units are consumed by the query. The best way to understand the cost of queries is to use sample data in Azure Cosmos DB, and run sample queries from the MongoDB Shell using the getLastRequestStatistics command to get the request charge, which will output the number of RUs consumed: db.runCommand({getLastRequestStatistics: 1}) This command will output a JSON document similar to the following: { "_t": "GetRequestStatisticsResponse", "ok": 1, "CommandName": "find", "RequestCharge": 10.1, "RequestDurationInMilliSeconds": 7.2} You can also use the diagnostic settings to understand the frequency and patterns of the queries executed against Azure Cosmos DB. The results from the diagnostic logs can be sent to a storage account, an Event Hub instance or Azure Log Analytics. Choose your partition key Partitioning, also known as sharding, is a key point of consideration before migrating data. Azure Cosmos DB uses fully-managed partitioning to increase the capacity in a database to meet the storage and throughput requirements. This feature doesn't need the hosting or configuration of routing servers. In a similar way, the partitioning capability automatically adds capacity and re-balances the data accordingly. For details and recommendations on choosing the right partition key for your data, see the Choosing a Partition Key article. Index your data The Azure Cosmos DB's API for MongoDB server version 3.6 automatically indexes the _id field only. This field can't be dropped. It automatically enforces the uniqueness of the _id field per shard key. To index additional fields, you apply the MongoDB index-management commands. This default indexing policy differs from the Azure Cosmos DB SQL API, which indexes all fields by default. The indexing capabilities provided by Azure Cosmos DB include adding compound indices, unique indices and time-to-live (TTL) indices. The index management interface is mapped to the createIndex() command. Learn more in the Indexing in Azure Cosmos DB's API for MongoDB article. Azure Database Migration Service automatically migrates MongoDB collections with unique indexes. However, the unique indexes must be created before the migration. Azure Cosmos DB does not support the creation of unique indexes when there is already data in your collections. For more information, see Unique keys in Azure Cosmos DB.
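To make the indexing and unique-index guidance above concrete, here is a minimal sketch using the pymongo driver against the API for MongoDB endpoint. It is illustrative only: the connection string, database, collection and field names are placeholders, not values from this article.

import pymongo

# Placeholder connection string for the Azure Cosmos DB API for MongoDB account
client = pymongo.MongoClient("<your-cosmosdb-connection-string>")
coll = client["inventory"]["orders"]

# Unique index -- create it before migrating data, as noted above
coll.create_index([("orderId", pymongo.ASCENDING)], unique=True)

# Compound index on two fields
coll.create_index([("customerId", pymongo.ASCENDING), ("orderDate", pymongo.DESCENDING)])

# TTL index: documents expire 30 days after the timestamp stored in "createdAt"
coll.create_index("createdAt", expireAfterSeconds=2592000)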
https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-pre-migration
2020-08-03T12:43:02
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Object Store Resource Overview An Object Store provisions an Amazon S3 bucket. Amazon Simple Storage Service (S3) provides users with scalable object storage in the cloud and a simple way to retrieve your data from anywhere on the web. An object store like an Amazon S3 bucket makes it possible for you to store limitless amounts of your data and addresses any concerns regarding its growth. S3 buckets contain objects that hold the data itself, any optional metadata you choose to provide, and a unique identifier to assist with faster data retrieval. Key Features of Amazon S3 include: - High durability, availability, and scalability of your data - Support for security standards and compliance requirements for data regulation - A range of options to transfer data to and from your object store quickly - Support for popular disaster recovery architectures for data protection Stackery automatically assigns your Object Store a globally unique name that allows it to be referenced by a Function or Docker Task. These resources adopt the AWS permissions necessary to handle the creation, reading, updating, and deletion of your data in the Object Store. Event Subscription Event subscription wires (solid line) visualize and configure event subscription integrations between two resources. The following resources can be subscribed to an Object Store: - CDN (Content Delivery Network) Logical ID: Resources provisioned from an Object Store will be prefixed with this value. The identifier you provide must only contain alphanumeric characters (A-Za-z0-9) and be unique within the stack. Default Logical ID Example: ObjectStore. Enable Website Hosting: When enabled, allows you to host a static website from this Object Store. An Index Document that is stored in the Object Store will need to be specified to act as the root of the website (default page). Index Document: When Enable Website Hosting above is enabled, the HTML file specified here will be rendered when a user visits the Object Store's URL. This file will need to be inside of the Object Store. When navigating to a directory, the website will respond with the contents of the index document within the directory. IAM Permissions When connected by a service discovery wire (dashed wire), a Function or Docker Task will add the following IAM policy to its role and gain permission to access this resource. S3CrudPolicy Grants a Function or Docker Task permission to create, read, update, and delete objects from your Object Store. In addition to the above policy, Function and Docker Task resources will be granted permission to perform the following actions: s3:GetObjectAcl: Read the Access Control List specified on an object within the Object Store s3:PutObjectAcl: Modify the Access Control List of an object within the Object Store The above Access Control List actions make it possible for you to create public objects (files). Publicly accessible objects within an Object Store are typical for static websites hosted on Amazon S3. Environment Variables When connected by a service discovery wire (dashed wire), a Function or Docker Task will automatically populate and reference the following environment variables in order to interact with this resource. BUCKET_NAME The Logical ID of the Object Store resource. Example: ObjectStore2 BUCKET_ARN The Amazon Resource Name of the Amazon S3 Bucket.
Example: arn:aws:s3:::ObjectStore2 AWS SDK Code Example Language-specific examples of AWS SDK calls using the environment variables discussed above. Add a file to an Object Store // Load AWS SDK and create a new S3 object const AWS = require("aws-sdk"); const s3 = new AWS.S3(); const bucketName = process.env.BUCKET_NAME; // supplied by Function service-discovery wire exports.handler = async message => { const testObject = "Sample Text"; // Construct parameters for the putObject call const params = { Bucket: bucketName, Body: testObject, Key: 'Object Name', ACL: 'public-read' // Not required; updates the access control list of the object }; await s3.putObject(params).promise(); console.log('Object stored in ' + bucketName); } import boto3 import os # Create an S3 client s3 = boto3.client('s3') bucket_name = os.environ['BUCKET_NAME'] # Supplied by Function service-discovery wire def handler(message, context): # Add a file to your Object Store response = s3.put_object( Bucket=bucket_name, Key='Object Name', Body='Sample Text', ACL='public-read' ) return response Related AWS Documentation AWS Documentation: AWS::S3::Bucket AWS SDK Documentation: Node.js | Python | Java | .NET
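As a complement to the put examples above, the short sketch below reads the same object back with boto3. It is not part of the Stackery page itself; the object key simply mirrors the one used in the examples.

import os
import boto3

s3 = boto3.client('s3')
bucket_name = os.environ['BUCKET_NAME']  # Supplied by Function service-discovery wire

def handler(message, context):
    # Fetch the object written by the example above and decode its body
    response = s3.get_object(Bucket=bucket_name, Key='Object Name')
    body = response['Body'].read().decode('utf-8')
    print('Read back from ' + bucket_name + ': ' + body)
    return body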
https://docs.stackery.io/docs/3.12.2/api/nodes/ObjectStore/
2020-08-03T12:40:56
CC-MAIN-2020-34
1596439735810.18
[array(['/docs/assets/resources/service-discovery/func-object-store.png', 'screenshot'], dtype=object) ]
docs.stackery.io
A callback that can be populated to be notified when the client-authority state of objects changes. Whenever an object is spawned using SpawnWithClientAuthority, or the client authority status of an object is changed with AssignClientAuthority or RemoveClientAuthority, this callback is invoked. This callback is used by the NetworkMigrationManager to distribute client authority state to peers for host migration. If the NetworkMigrationManager is not being used, this callback does not need to be populated.
https://docs.unity3d.com/2018.1/Documentation/ScriptReference/Networking.NetworkIdentity-clientAuthorityCallback.html
2020-08-03T13:13:12
CC-MAIN-2020-34
1596439735810.18
[]
docs.unity3d.com
max() function The max() function selects the record with the highest _value from the input table. Function type: Selector Output data type: Object max(column: "_value") Empty tables max() drops empty tables. Parameters column The column to use to calculate the maximum value. Default is "_value". Data type: String Examples from(bucket:"example-bucket") |> range(start:-1h) |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" ) |> max()
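If you want to run the same Flux from code rather than the UI, a small sketch using the influxdb-client Python package is shown below. It is not part of the original reference page; the URL, token, org and bucket values are placeholders.

from influxdb_client import InfluxDBClient

# Connection details are placeholders
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

flux = '''
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
  |> max()
'''

# Each returned table holds the record with the highest _value per series
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())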
https://v2.docs.influxdata.com/v2.0/reference/flux/stdlib/built-in/transformations/selectors/max/
2020-08-03T12:23:34
CC-MAIN-2020-34
1596439735810.18
[]
v2.docs.influxdata.com
xdmp:estimate( $expression as item()*, [$maximum as xs:double?] ) as xs:integer Returns the number of fragments selected by an expression. This can be used as a fast estimate of the number of items in a sequence. xdmp:estimate does not require evaluating the expression; it is resolved from indexes, so if an expression cannot be resolved out of the indexes, you cannot perform an estimate operation on it. xdmp:estimate(collection()) => 10476 xdmp:estimate(/PLAY/TITLE, 1000) => Returns the number of fragments selected by the XPath expression, up to a maximum of 1000. xdmp:estimate(cts:search(fn:doc(), cts:word-query("merry"))) => Returns an estimate of the number of fragments matched by the search
http://docs.marklogic.com/xdmp:estimate
2020-08-03T12:15:26
CC-MAIN-2020-34
1596439735810.18
[]
docs.marklogic.com
Using the PowerShellGet cmdlets is the preferred installation method. Install the Az module for the current user only. This is the recommended installation scope. This method works the same on Windows, macOS, and Linux platforms. Run the following command from a PowerShell session: Install-Module -Name Az -Scope CurrentUser To start working with Azure PowerShell, sign in with your Azure credentials. # Connect to Azure with a browser sign in token Connect-AzAccount Note If you've disabled module autoloading, manually import the module with Import-Module -Name Az.
https://docs.microsoft.com/en-us/powershell/azure/install-Az-ps?view=azps-4.4.0&viewFallbackFrom=azps-3.6.1
2020-08-03T13:51:28
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Delete a bucket Use the InfluxDB user interface (UI) or the influx command line interface (CLI) to delete a bucket. Delete a bucket in the InfluxDB UI In the navigation menu on the left, select Data (Load Data) > Buckets. Hover over the bucket you would like to delete. Click Delete Bucket and Confirm to delete the bucket. Delete a bucket using the influx CLI Use the influx bucket delete command to delete a bucket by name or ID. Delete a bucket by name To delete a bucket by name, you need: - Bucket name - Bucket’s organization name or ID # Syntax influx bucket delete -n <bucket-name> -o <org-name> # Example influx bucket delete -n my-bucket -o my-org Delete a bucket by ID To delete a bucket by ID, you need: - Bucket ID (provided in the output of influx bucket list) # Syntax influx bucket delete -i <bucket-id>
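The same operation can also be performed from code. Here is a hedged sketch using the influxdb-client Python package (not covered on this page); the URL, token, org and bucket name are placeholders.

from influxdb_client import InfluxDBClient

# Connection details are placeholders
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
buckets_api = client.buckets_api()

# Look the bucket up by name, then delete it via the returned object (which carries its ID)
bucket = buckets_api.find_bucket_by_name("my-bucket")
if bucket is not None:
    buckets_api.delete_bucket(bucket)
    print("Deleted bucket:", bucket.name)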
https://v2.docs.influxdata.com/v2.0/organizations/buckets/delete-bucket/
2020-08-03T12:40:09
CC-MAIN-2020-34
1596439735810.18
[]
v2.docs.influxdata.com
Using an iPad, download the proper Erply POS app for Cayan. Using the iPad, install the Cayan root certificate. Just click on the link, download the certificate and open it to install it. Follow the onscreen instructions. Using the iPad, manually trust the Cayan root certificate on your iOS device. Go to Settings > General > About > Certificate Trust Settings. • Under "Enable full trust for root certificates," turn on trust for the certificate. (The instructions are for all certificates and were tested on an iPhone, but the process is the same for the iPad as well.) In the Erply Back Office, set the payment terminal to “Merchant Warehouse” on the register card: Retail Chain > Register > Card terminal. In Erply, in the POS settings, go to Payment Configurations > Setup payment gateway and enter the Cayan credentials. At the bottom of the page, check the box ⌧ “use of Cayan Mini”, then turn on the Cayan Mini. Launch the Cayan app; in the upper corner of the screen there is an option to connect a device. Select Bluetooth and then select the Mini device. Once connected, open the Erply app and process a Credit Card (CC) transaction. When accepting a CC payment, the Cayan app will launch and then close once the sale is complete, bringing you back to Erply POS.
http://docs-eng.nimi24.com/installation/cayan-mini-setup
2020-08-03T12:32:39
CC-MAIN-2020-34
1596439735810.18
[]
docs-eng.nimi24.com
Managing payments via Stripe What is the “live mode” and “test mode” of Stripe? The CoopCycle platform can be configured to use the “live” or “test” mode from the “settings” tab. In test mode clients will not be debited. You have to use test cards (typically 4242 4242 4242 4242). Don’t forget to switch to “live” mode before going into production! How to configure Stripe/Stripe Connect on your platform? - Create a Stripe account here, then: - Get the four Stripe API keys (Private/Public live, Private/Public test) here: (in video) - Get the two Stripe Connect identifiers (Live/Test) here: - Paste them in the corresponding fields on the administrator’s “parameters” tab. Note: Live and test keys and IDs are not displayed on the screen at the same time. There is a switch on the page to display either test or live data. - Configure the redirect URL for Stripe Connect - Go to - Click on “Add redirect URI” (this has to be done in live and test mode). Then enter the value “https://<your-platform-domain>/stripe/connect/standard” How to activate your Stripe account to use the platform? You need to activate your Stripe account to start using the platform in “live” mode. Click on “Activate your account” on the left and enter the required information. How to see the money earned through the platform? The funds earned by the platform (the delivery cooperative) are calculated as a commission on the merchant’s payment. Go to this URL. How to receive this money on your account? Payments from your Stripe account to your bank account will be made regularly (“payouts”). You can access the list of payouts here. You can also request on this page an immediate transfer to your account.
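As a quick sanity check of which mode a secret key is in, the short sketch below uses the official stripe Python library. It is not part of the CoopCycle setup itself, and the key value is a placeholder.

import stripe

# Placeholder key -- use one of the secret keys retrieved from the Stripe dashboard
stripe.api_key = "sk_test_xxxxxxxxxxxxxxxx"

# Any authenticated call reports whether the key is a live or a test key
balance = stripe.Balance.retrieve()
print("Live mode" if balance.livemode else "Test mode")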
https://docs.coopcycle.org/en/admin/stripe-payments.html
2020-08-03T12:06:52
CC-MAIN-2020-34
1596439735810.18
[]
docs.coopcycle.org
In Holistics, the Metric/KPI visualization presents your data as a single number. This is a simple yet effective way to draw attention and invite further data exploration. When to use Metric/KPI? Sometimes, a number is worth a thousand words. Metric/KPI visualization is the best choice when you just want a snapshot of your performance, or want to quickly compare your number with another value (for example, past performance or goals). Create a Metric/KPI To create a Metric/KPI, simply select the visualization and drag the necessary measure into the Value field. You can optionally include a Comparison Value to give your number more context, for example, last month's number, or a goal. When you do so, both the value and comparison value will be displayed in a tooltip. Styling options In the Styles tab, you can specify how you want to display your number. There are four modes: - Single: Only the Value will be displayed - Compare by number: the difference between Value and Comparison Value will be displayed as a raw number - Compare by percent: the difference will be displayed as a percentage - Progress: a progress bar with goal completion percentage will be displayed under the Value. - Reverse color: By default, in comparison modes, green means increasing (positive) and red means decreasing (negative). For metrics that hold negative meanings (for example, churn rate or revenue lost) you can reverse the color.
https://docs.holistics.io/docs/metric-kpi
2020-08-03T12:39:13
CC-MAIN-2020-34
1596439735810.18
[array(['https://files.readme.io/e035cb2-Selection_436.png', 'Selection_436.png'], dtype=object) array(['https://files.readme.io/e035cb2-Selection_436.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/5790dff-Selection_438.png', 'Selection_438.png'], dtype=object) array(['https://files.readme.io/5790dff-Selection_438.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/d529b0a-styling_metrickpi.png', 'styling_metrickpi.png'], dtype=object) array(['https://files.readme.io/d529b0a-styling_metrickpi.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/9db9beb-Selection_436.png', 'Selection_436.png'], dtype=object) array(['https://files.readme.io/9db9beb-Selection_436.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/6b13685-Selection_518.png', 'Selection_518.png'], dtype=object) array(['https://files.readme.io/6b13685-Selection_518.png', 'Click to close...'], dtype=object) ]
docs.holistics.io
Radar Chart presents your multivariate data on axes starting from the same point. Values on the axes of the same subject are connected with a straight line, which makes the visualization look like a "radar". When to use Radar Chart? Radar Chart is a visually striking means to show outliers and commonality between objects, or to show that a subject is superior to others in every variable. Radar Chart is only suitable for ordinal measurements, for example rankings or ratings. Create a Radar Chart To use Radar Chart, your data needs to have the following form: - Name / ID of the subject to be compared - The ordinal variables that represent the measurements. These variables must be of the same scale, and they should quantify how much better or worse the subject is. Compare multiple subjects on multiple measurements Place the Name / ID of the subject in the Legend field, and the measurements in the Y Axis field. This way the measurements will be visualized for individual subjects. Compare multiple subjects on one measurement If you only need to compare your subjects on one measurement, simply put your subject's Name / ID in the X Axis field.
https://docs.holistics.io/docs/radar-chart
2020-08-03T11:26:33
CC-MAIN-2020-34
1596439735810.18
[array(['https://files.readme.io/5199d53-Selection_508.png', 'Selection_508.png'], dtype=object) array(['https://files.readme.io/5199d53-Selection_508.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/ea9e969-Selection_509.png', 'Selection_509.png'], dtype=object) array(['https://files.readme.io/ea9e969-Selection_509.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/37b8cf7-Selection_508.png', 'Selection_508.png'], dtype=object) array(['https://files.readme.io/37b8cf7-Selection_508.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/4528afe-Selection_511.png', 'Selection_511.png'], dtype=object) array(['https://files.readme.io/4528afe-Selection_511.png', 'Click to close...'], dtype=object) ]
docs.holistics.io
What is the edition currently on sale? Our development team have worked extremely hard to ensure all of the books are accurate and error-free. You can read more about that in this article. That said, no work of a human is perfect and we constantly receive editing requests where our honourable readers and customers point out issues and errors. We take as much of that as we can on board. Since the first editions, all of our books have had incremental changes made to them. The current editions of the books that are on sale are as follows: Previous editions of all the books are now out of print and we would advise using the current editions on sale, listed above. The latest editions of our books are the most accurate representation of our works as we continuously update editions on a yearly basis.
https://docs.safarpublications.org/article/132-what-is-the-edition-currently-on-sale
2020-08-03T12:39:36
CC-MAIN-2020-34
1596439735810.18
[]
docs.safarpublications.org
In order to foster interest and excitement amongst consumers in the process of providing ratings, TruRating (TR) includes a system of consumer treats or prizes, referred to as a TruTreat. This feature is not enabled by default for a merchant. It can be enabled by TruRating for specific merchants once the relevant contracts have been put into place. The design for TruTreat has been done in a way that has the least amount of impact on partners implementing TruModule. Therefore the feature will operate seamlessly within the current specification of TruModule as it stands. The TruRating host will randomly allocate a TruTreat code to instances of TruService. TruService will then include the TruTreat code in the receipt text message for a consumer who has provided a rating. (The receipt text for consumers who don’t provide ratings will be unchanged.) Additionally, the “Thank you” screen acknowledgement for consumers who have provided ratings will change to say something along the lines of: “Congratulations, you’ve won a prize. Check your receipt for details.” Printing the TruTreat code on the receipt When a TruTreat has been awarded, it is vital that the customer receipt is printed out. The consumer has already been told to check the receipt and so will be expecting it. A number of merchants do not routinely print receipts, waiting for the customer to request one before doing so. Consequently there is no printed TruRating acknowledgement in normal transactions. Therefore, in order to programmatically ensure that the receipt is printed out when a prize code has been awarded, the receipt text message carries a priority attribute. If the priority is set to true, the receipt text will always be printed.
https://docs.trurating.com/get-started/payment-devices/core-concepts/prizes/
2020-08-03T11:52:50
CC-MAIN-2020-34
1596439735810.18
[]
docs.trurating.com
Running A Wallaroo Application Running the Application Wallaroo uses an embedded Python runtime wrapped with a C API around it that lets Wallaroo execute Python code and read Python variables. So when machida --application-module my_application is run, machida (the binary we previously compiled) loads up the my_application.py module inside of its embedded Python runtime and executes its application_setup() function to retrieve the application topology it needs to construct in order to run the application (a conceptual sketch of this loading step appears at the end of this page). machida3 does the same, but with an embedded Python 3 runtime instead of Python 2.7. Generally, in order to build a Wallaroo Python application, the following steps should be followed: - Build the machida or machida3 binary (this only needs to be done once) - import wallaroo in the Python application's .py file - Create classes that provide the correct Wallaroo Python interfaces (more on this later) - Define an application_setup function that uses the ApplicationBuilder from the wallaroo module to construct the application topology. - Run machida or machida3 with the application module as the --application-module argument Once loaded, Wallaroo executes application_setup(), constructs the appropriate topology, and enters a ready state where it awaits incoming data to process. A Note About PYTHONPATH Machida uses the PYTHONPATH environment variable to find modules that are imported by the application. You will have at least two modules in your PYTHONPATH: the application module and the wallaroo module. If you have installed Wallaroo as instructed and follow the Starting a new shell for Wallaroo instructions each time you start a new shell, the wallaroo module and any application modules in your current directory will be automatically added to the PYTHONPATH. If the Python module you want is in a different directory than your current one, like $HOME/wallaroo-tutorial/wallaroo-0.6.1/examples/python/alerts_stateless/alerts.py, then in order to use the module you would export the PYTHONPATH like this: export PYTHONPATH="$HOME/wallaroo-tutorial/wallaroo-0.6.1/examples/python/alerts_stateless:$PYTHONPATH" Next Steps To try running an example, go to the Alerts example application and follow its instructions. To learn how to write your own Wallaroo Python application, continue to Writing Your Own Application. To find out more detail on the command line arguments and other aspects of running a Wallaroo application, see the Running Wallaroo section.
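To make the loading step described at the top of this page concrete, here is a rough, Python-level illustration of what machida conceptually does with the --application-module value. This is a sketch only, not machida's actual implementation (which is an embedded runtime), and the assumption that application_setup receives the command-line arguments is mine -- check Writing Your Own Application for the exact signature.

import importlib
import sys

module_name = "my_application"  # value passed via --application-module
app_module = importlib.import_module(module_name)  # resolved via PYTHONPATH

# Call the entry point to obtain the application topology
application = app_module.application_setup(sys.argv)
print("Loaded application topology from", module_name)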
https://docs.wallaroolabs.com/python-tutorial/running-a-wallaroo-application/
2020-08-03T12:31:13
CC-MAIN-2020-34
1596439735810.18
[]
docs.wallaroolabs.com
Upgrade Citrix Provisioning supports upgrading to the latest product version from versions starting with 7.6 LTSR. Important: When upgrading from Citrix Provisioning 1808, you must uninstall Citrix Provisioning server 1808 before installing the new Citrix Provisioning server. If you are upgrading from Provisioning Services 7.17 to this version of Citrix Provisioning, you must manually uninstall CDF on the provisioning server, console, and target devices. Before attempting to upgrade a Citrix Provisioning farm: - Select a maintenance window that has the least amount of traffic - Back up the Citrix Provisioning database - Back up all virtual disks Tip: Mirror the database if you are in a high-availability scenario; for more information, see Database mirroring. No special action is required during the upgrade once mirroring is set up. When upgrading Citrix Provisioning, consider the following: - Upgrade to the latest licensing server. Note the following when upgrading the license server: - License servers are backward compatible and provide the latest security fixes. - If necessary, upgrade individual licenses. New features require that the Citrix license has a minimum subscription advantage (SA) date. - Back up the Citrix Provisioning database. While Citrix always tests to ensure a successful database upgrade, unforeseen circumstances might arise. Citrix strongly recommends backing up the database before upgrading. - Back up the Citrix Provisioning virtual disk. Citrix recommends backing up the virtual disk before upgrading. This process is only necessary if you plan to use reverse imaging with private images. - When running the installer to update either the server or console components, if an older version of Citrix Provisioning is detected, both components are automatically updated. - If you are upgrading from version 7.17 to this Citrix Provisioning 1903, you must manually uninstall CDF on the provisioning server, console, and target devices. - Files located in C:\Program Files\Citrix\PowerShell SDK might. Upgrade the environment To upgrade from a previous Citrix Provisioning farm, complete the following procedures: - Upgrade consoles. The console is a separate executable that can be installed on upgraded servers (PVS_Console.exe or PVS_Console_64.exe). Citrix recommends upgrading the console, followed by the server software for each provisioning server in the farm. Remote consoles can be upgraded at any time. - Upgrade the first provisioning server in the farm, which upgrades the Citrix Provisioning database. - Upgrade the remaining provisioning servers within the farm. - Upgrade vDisks. Important: When upgrading a virtual disk within a Citrix Virtual Apps and Desktops deployment, upgrade the master target device software before upgrading the VDA software. Upgrade utilities The Upgrade Wizard includes the following utilities: - The UpgradeAgent.exe runs on the target device to upgrade previously installed product software. - The UpgradeManager.exe runs on the provisioning server to control the upgrade process on the target device. Upgrading at a glance The information in this section provides step-by-step guidance for upgrading Citrix Provisioning components. For server upgrade information, see the server article. For information about upgrading vDisks, see vDisks. Important: When upgrading from Citrix Provisioning 1808, you must uninstall Citrix Provisioning server 1808 before installing the new Citrix Provisioning server.
Upgrade the console and server Follow these steps to upgrade the console and server: Run the console and server executables to initiate the upgrade process automatically. Citrix recommends that you upgrade the console first, followed by the server. Tip: To keep the Citrix Provisioning farm and target devices running during the upgrade process, use the rolling server upgrade procedure. The rolling server upgrade performs an upgrade on one Provisioning Server at a time. Note: While upgrading the Provisioning Server, it cannot service any target device. Ensure that the remaining servers in the farm support the target devices (clients) during the failover process while upgrading the server. To perform the rolling upgrade, update the first Provisioning Server in the farm: a. Open the services MSC file (services.msc) and halt the Citrix PVS Stream Service. This process causes all provisioning targets connected to this server to fail over to other servers in the farm. Once finished, upgrade the Provisioning Server and console components. b. Upgrade the Citrix Provisioning database. This process is only done once: - Use dbScript.exe to generate the SQL script. Choose the option to upgrade the database and enter the name of the database. Use that script in SQL Management Studio or the SQL command line to upgrade the provisioning database. - Use the configuration wizard to upgrade the provisioning database; when using this method, consider: - The Citrix Provisioning Configuration Wizard automatically starts when the Finish button is selected after successfully upgrading the Provisioning Server. - Use the default settings so that the Citrix Provisioning Configuration Wizard uses the previously configured settings. On the Farm Configuration page, select the option Farm is already configured. After all configuration information is entered, review the information on the Finish page; click Finish to begin configuring the provisioning server. At this point, the provisioning database is not configured. A message appears indicating that the database was upgraded. Click OK to confirm the message and upgrade the database. - Verify that Citrix Provisioning processes have started using services.msc. Boot a target device to confirm that it can connect to the provisioning server. Considerations for provisioning database migration using a different SQL server The Provisioning Console could fail to display the virtual disk attached to a site when migrating a database to a different SQL server. This condition exists when you use the configuration wizard to point to a different SQL server. Despite the console view, the database dbo.disk displays the updated virtual disk entries. To migrate a database: - Back up the database. - Restore the database on the new SQL server. - Run the configuration wizard and retain the default settings on all pages except the database configuration pages. - On the Farm Configuration page, select Join existing farm. - On the Database Server page, select the new database server and instance names. On the Farm Configuration page, the default option is the database imported into the new SQL server. - In the configuration wizard, choose the defaults for all other options presented by the wizard. Important: During the migration to a different SQL server, do not create a site/store. In the preceding sequence, steps 4 and 5 point to the new SQL server, instance, and database.
Upgrade remaining Provisioning servers After upgrading the first provisioning server, upgrade the remaining servers in the farm: Open the services MSC file (services.msc) and halt the Citrix Provisioning Stream Service. This process causes all provisioning targets connected to this provisioning server to fail over to other provisioning servers in the farm. Once finished, upgrade the provisioning server and console components. Tip: Once the server is successfully upgraded, the Citrix Provisioning Configuration Wizard starts automatically after clicking Finish. The provisioning database is only updated after upgrading the first provisioning server. Use the default settings. The Citrix Provisioning Configuration Wizard uses the previously configured settings. On the Farm Configuration page, make sure that the option Farm is already configured is selected. After all configuration information is entered, review the information on the Finish page; click Finish to begin configuring the provisioning server. Repeat these steps to finish upgrading all remaining provisioning servers in the farm. Rebalance Citrix Provisioning clients After upgrading and configuring all Citrix Provisioning servers, Citrix recommends that you rebalance all provisioning clients (target devices) within the farm. To rebalance provisioning clients: - Start the Citrix Provisioning console and log into the farm. - Navigate to the Servers tab. - Highlight all the provisioning servers that were recently upgraded, then right-click to expose a contextual menu. - Select Rebalance clients. Upgrade the Citrix Provisioning target device Citrix Provisioning supports three methods for upgrading target devices: - In-place upgrade - Direct VHD\VHDX boot - Manual upgrade using reverse imaging Important: Citrix strongly recommends backing up the virtual disk if versioning is not used in the upgrade process. When using Citrix Provisioning target installers: - If the system is running Provisioning Services version 7.6.2 (7.6 CU1) or a newer target device, run the new target installer. It must be the same version installed on the target device. This process effectively allows the installer to take care of the upgrade. - If the system is running Provisioning Services version 7.6.0 or earlier target devices, uninstall the old target device software. Reboot, then install the new Citrix Provisioning target device version. In-place upgrades For in-place upgrades, a maintenance version of the virtual disk is interchangeable with the private image. However, Citrix recommends that you take advantage of Citrix Provisioning versioning to perform an in-place upgrade. To perform an in-place upgrade: - Create a maintenance version of the virtual disk. - Using the provisioning console, navigate to the device’s properties and set the device type to Maintenance. - In the Boot menu, select option 1 to boot a client into virtual disk mode using the maintenance version. - Log into Windows and run the new target device installer. Install the software and perform a full installation. The target device installer performs the upgrade; do not run the imaging wizard. Reboot the target device when prompted. - Once Windows has loaded, log into the system and verify that the target device software is the expected version by viewing the status tray. If the status tray is hidden by Windows, locate it by clicking the up arrow on the status tray icon. - Shut down the target device.
- If versioning is invoked, use the provisioning console to promote the maintenance version to test version functionality. Verify the new version and promote it to the production version when it is deemed production quality. Roll this version out to users by rebooting all the target devices using this virtual disk. Upgrading using VHD\VHDX boot When using this method to upgrade a target device, consider: - Citrix Hypervisor only supports .vhd - Hyper-V 2012 and 2008 R2 only support .vhd - Hyper-V 2012 R2 and 2016 support both .vhd and .vhdx Obtain the .vhdx file. Consider: - If the virtual disk does not have a version, copy the .vhdx file to the Hyper-V server or import the file to XenServer using XenCenter (Files > Import). - If the virtual disk has a version, perform a base merge and create a .vhdx file in maintenance mode. Perform a direct VHD boot using XenServer: a. Copy the .vhd file to a system running XenCenter and import the file to XenServer using Files > Import. b. Create a VM using the imported .vhd file. Refer to the Importing and Exporting VMs section of the Citrix Virtual Apps and Desktops documentation for more information. c. Boot the VM. d. Upgrade the target device software. See the information at the beginning of this section for using the Citrix Provisioning target device installers. Perform a direct VHD\VHDX boot using Hyper-V: Copy the .vhdx file to the Hyper-V server, or create a Hyper-V VM using the "Use an existing virtual hard disk" option and point to the .vhdx file. Refer to the following links for creating VMs in Hyper-V. For Hyper-V 2012 R2 and 2016, ensure that the generated VM matches the generation of the VMs of the virtual disk: - Generation 1 = traditional BIOS VMs and systems - Generation 2 = UEFI VMs and systems For Hyper-V 2016 environments: For Hyper-V 2012 and 2012 R2: For Hyper-V 2008 R2 and 2008 R2 Sp1: Boot the VM. Upgrade the target device software. See the information at the beginning of this section for using the Citrix Provisioning target device installers. Copy the .vhdx/.vhd file back to the virtual disk store location where it was originally located: - If the .vhdx/.vhd file is taken from a base merge version, the file is ready for testing and verification. - If the file is copied from the base virtual disk, import the virtual disk into the provisioning database using the Add or import Existing vDisk option. Run this option from the virtual disk Pool\Store level in the provisioning console. Upgrading using manual reverse imaging with P2PVS Use the information in this section to upgrade Citrix Provisioning using reverse imaging with P2PVS. The following table illustrates supported upgrade methods: Boot the Citrix Provisioning target device into the virtual disk using private\maintenance mode. Install PVS_UpgradeWizard.exe or PVS_UpgradeWizard_x64.exe from the Upgrade folder of the ISO image. This folder is located in the latest Citrix Provisioning release area (containing the latest P2PVS.exe file). The upgrade wizard can also be installed through the Citrix Provisioning meta-installer using the Target Device Installation > Install Upgrade Wizard option. Run P2PVS.exe from the Citrix Provisioning upgrade wizard directory. By default, this file is located in C:\Program Files\Citrix\Citrix Provisioning Upgrade Wizard. Click the From drop-down menu to choose the Citrix Provisioning virtual disk. Click Next. In the partition screen, select the partitions undergoing reverse imaging.
All system partitions, regardless of whether they have a drive letter or not, are used in reverse imaging. Click Next. Click Convert on the final page to begin reverse imaging. Note: When using reverse imaging, consider: - Reverse imaging for BIOS systems is non-destructive. The partition table of the system is not altered. Because Citrix Provisioning imaging is block-based, the partition table of the local hard disk must be the same as that of the virtual disk. - Reverse imaging for UEFI systems is destructive. All partitions on the local hard disk are destroyed and re-created to match those of the virtual disk. Once reverse imaging finishes, reboot the VM from the hard disk without network booting. Upgrade the target device. Refer to the information at the beginning of this section for more information. Image the OS to a virtual disk again. You can accomplish this imaging by creating a virtual disk or using the existing one. Using reverse imaging to upgrade Windows 10 machines To upgrade a Windows 10 image using reverse imaging: - Create a target device with a virtual hard disk that is the same size or bigger than the virtual disk. - Network boot (PXE/ISO) the VM into the virtual disk using a maintenance version or private image mode. - If the virtual disk is using Provisioning Services 7.15 or older, install PVS_UpgradeWizard.exe or PVS_UpgradeWizard_x64.exe from the Upgrade folder of the ISO image. This process retrieves the latest P2PVS.exe file. The upgrade wizard can also be installed with the Citrix Provisioning meta-installer using the Target Device Installation > Install Upgrade Wizard option. - Run P2PVS.exe from the Citrix Provisioning target device\Upgrade Wizard directory. By default, this directory is C:\Program Files\Citrix\Citrix Provisioning, or C:\Program Files\Citrix\Citrix Provisioning Upgrade Wizard, respectively. - Click the From drop-down menu, choose Citrix Provisioning vDisk, and click Next. - In the partition screen, select the partitions for reverse imaging. All system partitions, regardless of whether they have a drive letter or not, are used in reverse imaging. Click Next. - Click Convert on the last page to begin reverse imaging. - Once reverse imaging has completed successfully, set the VM to boot from HDD and reboot the VM. - Uninstall the Citrix Provisioning target device. - Shut down the VM. Note: Check the amount of free space in the C:\ partition. Some used space can be freed up by deleting the Windows.old folder in C:. Refer to the Windows Support page for more information. - Judging by the free space on the C:\ partition, increase the size of the VM’s hard disk if needed. Note: If this operating system is Windows 10 1607 (code name Redstone 1 or Anniversary Update), Windows 10 update will create another system partition after the C:\ partition. Currently, it is not possible to increase the size of the C:\ partition. - Boot the VM. Note the VM’s local admin account and remember the local admin password. - Run Windows 10 update to upgrade Windows 10. - Use local admin credentials to log in since the Windows 10 upgrade process can impact Active Directory. - Rejoin the VM to Active Directory if needed. - Install new drivers and more Windows updates if needed. - Once updates are done, install the Citrix Provisioning target device software. - Use the Imaging Wizard or P2PVS to create a virtual disk. The old virtual disk can be used if the size of the VM’s virtual hard disk has not been increased in step 11.
https://docs.citrix.com/en-us/provisioning/1912-ltsr/upgrade.html
2020-08-03T11:27:07
CC-MAIN-2020-34
1596439735810.18
[]
docs.citrix.com
Adding target devices to the database To create target device entries in the Provisioning Services database, select one of the following methods: - Using the Console to Manually Create Target Device Entries - Using Auto-add to Create Target Device Entries - Importing Target Device Entries After the target device exists in the database, you can assign a vDisk to the device. Refer to assign a vDisk to the device for more details. Using vDisk, the machine name of the device becomes the name entered. For more information about target devices and Active Directory or NT 4.0 domains, refer to “Enabling Automatic Password Management”. - Optionally, if a collection template exists for this collection, you have the option to enable the checkbox next to Apply the collection template to this new device. - Click the Add device button. The target device inherits all the template properties except for the target device name and MAC address. - Click OK to close the dialog box. The target device is created and assigned to a vDisk. Importing Target Device Entries Target device entries can be imported into any device collection from a .csv file. The imported target devices can then inherit the properties of the template target device that is associated with that collection. For more details, refer to Importing Target Devices into Collections.
https://docs.citrix.com/en-us/provisioning/7-15/managing-target-device/target-database-add.html
2020-08-03T12:42:16
CC-MAIN-2020-34
1596439735810.18
[]
docs.citrix.com
View vulnerabilities The Vulnerabilities page in Contrast allows you to browse, search and filter through all vulnerabilities in your organization. Click on each vulnerability for more details, including guidelines for determining how to resolve them to prevent an attack. You can also go to the Vulnerabilities tab from the application's details page to see a list of all vulnerabilities found in that application. Note For Contrast to find weaknesses and present findings, you must exercise your application. The grid of discovered vulnerabilities provides basic information on each one, including the severity, type and status. Learn more about vulnerability statuses and workflows in Contrast or how to find and manage vulnerabilities. Click on a vulnerability in the grid to see all the details reported. Each tab in the page provides different sets of information for different tasks you may want to perform. In the Overview tab for each vulnerability, see where Contrast found the vulnerability. In the next tab, learn How to Fix the vulnerability with examples of the techniques discussed. In the Notes tab, Contrast provides further details about the identity, timing and location of the vulnerability, such as: Build numbers Reporting servers Category Security standards
https://docs.contrastsecurity.com/en/view-vulnerabilities.html
2020-08-03T12:09:26
CC-MAIN-2020-34
1596439735810.18
[]
docs.contrastsecurity.com
Additional BOM may be generated for RosettaNet solution with BizTalk 2013 R2 CU2 or higher and BizTalk 2016 The problem: You have RosettaNet solutions deployed on BizTalk 2013 R2. After upgrading to CU2 (or higher) or BizTalk 2016, your trading partner complains that they cannot process the documents sent from you with their existing configuration, due to an extra Unicode Byte Order Mark (BOM) at the beginning of the document. Analysis: We debugged the issue and found that the root cause is a code change within the MIMEEncoder pipeline component (in CU2). Also, setting PreserveBOM to true or false in the pipeline settings doesn't change the behavior. We've verified that the issue only occurs when content-encoding is set to 8bit or quoted-printable in the RosettaNet agreement settings. Using Base64 encoding will not hit the same problem. The issue has been reported to the BizTalk product group for further investigation and judgement about whether a fix is required. Solution: Please understand that a BOM isn't an unexpected character for Unicode-encoded content. So normally the partner should be able to configure their application to accept the documents with a BOM. If that isn't possible, a current workaround is to switch to Base64 encoding instead of 8bit or quoted-printable in the agreement settings. Best regards, WenJun Zhang
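If the receiving application can be adjusted instead, stripping a leading UTF-8 BOM is a small pre-processing step. The sketch below is generic Python for illustration only; it is not part of BizTalk or of the original post, and the file names are placeholders.

import codecs

def strip_utf8_bom(data: bytes) -> bytes:
    # Remove a leading UTF-8 BOM (EF BB BF) if present
    if data.startswith(codecs.BOM_UTF8):
        return data[len(codecs.BOM_UTF8):]
    return data

# Placeholder file names
with open("incoming_rosettanet.xml", "rb") as f:
    payload = strip_utf8_bom(f.read())

with open("incoming_rosettanet_clean.xml", "wb") as f:
    f.write(payload)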
https://docs.microsoft.com/en-us/archive/blogs/apacbiztalk/additional-bom-may-be-generated-for-rosettanet-solution-with-biztalk-2013-r2-cu2-or-higher-and-biztalk-2016
2020-08-03T12:47:18
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Heatmap visualization A Heatmap displays the distribution of data on x and y axes, where color represents different concentrations of data points. Select the Heatmap option from the visualization dropdown in the upper right. Heatmap behavior Heatmaps divide data points into "bins" – segments of the visualization with upper and lower bounds for both X and Y axes. The Bin Size option determines the bounds for each bin. The total number of points that fall within a bin determines its value and color. Warmer or brighter colors represent higher bin values or density of points within the bin. Heatmap Controls To view Heatmap controls, click Customize next to the visualization dropdown. Data - X Column: Select a column to display on the x-axis. - Y Column: Select a column to display on the y-axis. Options - Color Scheme: Select a color scheme to use for your heatmap. - Bin Size: Specify the size of each bin. Default is 10. X Axis - X Axis Label: Label for the x-axis. - X Tick Prefix: Prefix to be added to x-value. - X Tick Suffix: Suffix to be added to x-value. - X Axis Domain: The x-axis value range. - Auto: Automatically determine the value range based on values in the data set. - Custom: Manually specify the minimum x-axis value, maximum x-axis value, or range by including both. - Min: Minimum x-axis value. - Max: Maximum x-axis value. Heatmap examples Cross-measurement correlation The following example explores possible correlation between CPU and Memory usage. It uses data collected with the Telegraf Mem and CPU input plugins. Join CPU and memory usage The following query joins CPU and memory usage on _time. Each row in the output table contains _value_cpu and _value_mem columns. cpu = from(bucket: "example-bucket") |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system" and r.cpu == "cpu-total" ) mem = from(bucket: "example-bucket") |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent" ) join(tables: {cpu: cpu, mem: mem}, on: ["_time"], method: "inner") Use a heatmap to visualize correlation In the Heatmap visualization controls, _value_cpu is selected as the X Column and _value_mem is selected as the Y Column. The domain for each axis is also customized to account for the scale difference between column values. Important notes Differences between a heatmap and a scatter plot Heatmaps and Scatter plots both visualize the distribution of data points on X and Y axes. However, in certain cases, heatmaps provide better visibility into point density. For example, the dashboard cells below visualize the same query results: The heatmap indicates isolated high point density, which isn't visible in the scatter plot. In the scatter plot visualization, points that share the same X and Y coordinates appear as a single point.
https://v2.docs.influxdata.com/v2.0/visualize-data/visualization-types/heatmap/
2020-08-03T12:43:32
CC-MAIN-2020-34
1596439735810.18
[array(['/img/2-0-visualizations-heatmap-example.png', 'Heatmap example'], dtype=object) array(['/img/2-0-visualizations-heatmap-correlation.png', 'Heatmap correlation example'], dtype=object) array(['/img/2-0-visualizations-heatmap-vs-scatter.png', 'Heatmap vs Scatter plot'], dtype=object) ]
v2.docs.influxdata.com
1) In v6.4 some methods, properties and events of DataView were changed. The following API methods have been deprecated: getFocusIndex(), setFocusIndex(). These methods are still available but not recommended for use. Instead of using the getFocusIndex() method, you can get the index of an item in focus as follows: var id = dataview.getFocus(); var index = dataview.data.getIndex(id); But we recommend that you use the corresponding getFocus()/setFocus() methods for getting the id of an item in focus and setting focus to an item by its id: dataview.getFocus(); // -> "item_id" dataview.setFocus("item_id");
https://docs.dhtmlx.com/suite/dataview__migration.html
2020-08-03T12:30:09
CC-MAIN-2020-34
1596439735810.18
[]
docs.dhtmlx.com
Catalog settings What are the Catalogs for? The catalog is usually used if you want to allow users to self-register for courses. In some cases - since the courses can be divided into "categories" - it is also useful only to show the courses divided into categories. This is especially useful if there are many courses. Otherwise, it may be better to leave the "catalog" tab disabled and keep only the "elearning" button, which shows the courses to which a user is already enrolled. The possible uses of the "catalog" function are: - in general, distinguish between the courses that you want to appear in the "catalog" tab and those that should not appear - in more detail, it serves to show different catalogs to different users: for example, to show specific topics to specific organization chart nodes or specific roles. Or to specific customers. In this way we will be able to allow each type of user to see the most suitable courses and to register independently. Let's take as an example, a company that wants to offer employees in the Sales Area, a catalog that includes a list of courses from which the user can choose which one to enroll in. Below is the simple procedure for creating and configuring the catalog. Create a Catalog Go to Administration / E-learning / Courses / Course Catalog To create a catalog, click on the "New catalog" button Enter the name of the catalog and click on the "Save changes" button. Following our example, we will call the catalog "Sales Office". Configure the Catalog For the configuration of the catalog we just created, we have several available buttons / icons . Each of these allows for a different configuration. Let's see them in detail. By clicking on the first "Show catalog items" icon, you can choose and insert courses within the Catalog. Just put the check on the courses you want to insert. Following our example, we will insert "Course for Sales Area" and "Course for Advanced Sales Area" By clicking on the second "Assign Users" icon, the users that you can assign to this catalog can be chosen as follows: - users belonging to one or more specific groups - users belonging to one or more nodes of the organization chart - users who play a role that we want to be able to view the Catalog. According to our example, we choose that all users entered in the "Sales Area" node of the organization chart view the Catalog and the courses it contains. Then put the check on "Yes" and then click on "Save Changes" By clicking on the third "Register" icon, you can choose the individual users who are supposed to view the Catalog or the users belonging to one or more specific groups or the users entered at one or more nodes of the organization chart or the users who hold a role By clicking on the fourth and fifth "Edit" and "Delete" icons, you can change the title of the catalog and delete it respectively. View the Catalog in the menu Forma Lms gives the opportunity to choose whether or not to display the "Catalog" label in the student menu or to show it only to certain students and not to others. Following our example, we only want users belonging to the "Sales Area" node to see the Catalog label once they have entered the platform. How to do? 
Go to Administration / Configuration / E-learning configuration / User area in the LMS. On the page where all the labels are displayed, click on the "Users" icon relating to the "Course catalog" label. At that point you can choose which users will see the "Catalog" label: individual users, groups, nodes of the organization chart, or users who hold a certain role. Following our example, we choose that the Course Catalog label is displayed to users belonging to the "Sales Area" node. At this point, all users belonging to the "Sales Area" node of the organization chart, once they have entered the platform, see the Course Catalog item.
https://docs.formalms.org/tutorial/105-forma-lms-how-to-configure-the-catalogs.html
2020-08-03T11:30:37
CC-MAIN-2020-34
1596439735810.18
[array(['/jfiles/images/documentation/tutorial/catalogue_1.png', 'catalogue 1'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_2.png', 'catalogue 2'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_3.png', 'catalogue 3'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_4.png', 'catalogue 4'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_5.png', 'catalogue 5'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_6.png', 'catalogue 6'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_7.jpg', 'catalogue 7'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_8.png', 'catalogue 8'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_9.png', 'catalogue 9'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_10.jpg', 'catalogue 10'], dtype=object) array(['/jfiles/images/documentation/tutorial/catalogue_11.png', 'catalogue 11'], dtype=object) ]
docs.formalms.org
# Integrate a relevant search bar to your documentation You might have noticed the search bar in this documentation. And you are probably wanting the same for your own documentation! This tutorial will guide you through the steps of building a relevant and powerful search bar for your documentation 🚀 # Run a MeiliSearch Instance First of all, you need your documentation content to be scraped and pushed into a MeiliSearch instance. You can install and run MeiliSearch on your machine using curl. $ curl -L | sh $ ./meilisearch --master-key=myMasterKey There are other ways to install MeiliSearch. MeiliSearch is open-source and can run either on your server or on any cloud provider. NOTE The host URL and the API key you will provide in the next steps correspond to the credentials of this MeiliSearch instance. In the example above, the host URL is and the API key is myMasterKey. # Scrape your Content The Meili team provides and maintains a scraper tool to automatically read the content of your website and store it into an index in MeiliSearch. # Configuration File The scraper tool needs a configuration file to know what content you want to scrape. This is done by providing selectors (e.g. the HTML tag). Here is an example of a basic configuration file: { "index_uid": "docs", "start_urls": [""], "sitemap_urls": [""], "stop_urls": [], "selectors": { "lvl0": { "selector": ".docs-lvl0", "global": true, "default_value": "Documentation" }, "lvl1": { "selector": ".docs-lvl1", "global": true, "default_value": "Chapter" }, "lvl2": ".docs-content .docs-lvl2", "lvl3": ".docs-content .docs-lvl3", "lvl4": ".docs-content .docs-lvl4", "lvl5": ".docs-content .docs-lvl5", "lvl6": ".docs-content .docs-lvl6", "text": ".docs-content p, .docs-content li" } } The index_uid field is the index identifier in your MeiliSearch instance in which your website content is stored. The scraping tool will create a new index if it does not exist. The docs-content class is the main container of the textual content in this example. Most of the time, this tag is a <main> or an <article> HTML element. lvlX selectors should use the standard title tags like h1, h2, h3, etc. You can also use static classes. Set a unique id or name attribute to these elements. Every searchable lvl elements outside this main documentation container (for instance, in a sidebar) must be global selectors. They will be globally picked up and injected to every document built from your page. If you use VuePress for your documentation, you can check out the configuration file we use in production. In our case, the main container is theme-default-content and the selector the titles and sub-titles are h1, h2... TIP More optional fields are available to fit your need. # Run the Scraper You can run the scraper with Docker. With our local MeiliSearch instance set up at the first step, we run: $ docker run -t --rm \ --network=host \ -e MEILISEARCH_HOST_URL='' \ -e MEILISEARCH_API_KEY='myMasterKey' \ -v <absolute-path-to-your-config-file>:/docs-scraper/config.json \ getmeili/docs-scraper:latest pipenv run ./docs_scraper config.json NOTE If you don't want to use Docker, here are other ways to run the scraper. <absolute-path-to-your-config-file> should be the absolute path of your configuration file defined at the previous step. The API key you must provide should have the permissions to add documents into your MeiliSearch instance. 
In a production environment, we recommend providing the private key instead of the master key, as it is safer and it has enough permissions to perform such requests. More about MeiliSearch authentication. TIP We recommend running the scraper at each new deployment of your documentation, as we do for the MeiliSearch documentation itself. # Integrate the Search Bar If your documentation is not a VuePress application, you can directly go to this section. # For a VuePress Documentation If you use VuePress for your documentation, we provide a VuePress plugin. This plugin is used in production in the MeiliSearch documentation. In your VuePress project: $ yarn add vuepress-plugin-meilisearch # or $ npm install vuepress-plugin-meilisearch In your config.js file: module.exports = { plugins: [ [ "vuepress-plugin-meilisearch", { "hostUrl": "<your-meilisearch-host-url>", "apiKey": "<your-meilisearch-api-key>", "indexUid": "docs" } ], ], } These three fields are mandatory, but more optional fields are available to customize your search bar. WARNING Since the configuration file is public, we strongly recommend providing the MeiliSearch public key in a production environment, which is enough to perform search requests. Read more about MeiliSearch authentication. # For All Kinds of Documentation If you don't use VuePress for your documentation, we provide a front-end SDK to integrate a powerful and relevant search bar into any documentation website. <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="docs-searchbar.js@{version}/dist/cdn/docs-searchbar.min.css" /> </head> <body> <input type="search" id="search-bar-input"> <script src="docs-searchbar.js@{version}/dist/cdn/docs-searchbar.min.js"></script> <script> docsSearchBar({ hostUrl: '<your-meilisearch-host-url>', apiKey: '<your-meilisearch-api-key>', indexUid: 'docs', inputSelector: '#search-bar-input', debug: true // Set debug to true if you want to inspect the dropdown }); </script> </body> </html> inputSelector is the id attribute of the HTML search input tag. WARNING We strongly recommend providing the MeiliSearch public key in a production environment, which is enough to perform search requests. Read more about MeiliSearch authentication. The default behavior of this library fits a documentation search bar perfectly, but you might need some customizations. NOTE For more concrete examples, you can check out this basic HTML file or this more advanced Vue file. # What's next? At this point you should have a working search engine on your website, congrats! 🎉 You can check this tutorial if you now want to run MeiliSearch in production!
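Before or after wiring up the search bar, you can sanity-check the index that the scraper filled by querying MeiliSearch directly. The snippet below is not part of the official tutorial: it assumes the local instance from the first step (http://localhost:7700 by default), the docs index uid used above, and a key with search permissions; depending on your MeiliSearch version, the key is sent either in the X-Meili-API-Key header (older releases) or as an Authorization: Bearer token (newer releases).

# Check how many documents the scraper pushed into the "docs" index
curl -s -H "X-Meili-API-Key: myMasterKey" "http://localhost:7700/indexes/docs/stats"

# Run a quick test search against the same index
curl -s -H "X-Meili-API-Key: myMasterKey" "http://localhost:7700/indexes/docs/search?q=getting%20started"

If the stats call reports zero documents or the search returns no hits, re-check the selectors in your configuration file before moving on to the front-end integration.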
https://docs.meilisearch.com/resources/howtos/search_bar_for_docs.html
2020-08-03T11:23:10
CC-MAIN-2020-34
1596439735810.18
[array(['/tuto-searchbar-for-docs/vuepress-searchbar-demo.gif', 'MeiliSearch docs search bar demo'], dtype=object) array(['/tuto-searchbar-for-docs/vuepress-plugin-example.png', 'VuePress plugin example'], dtype=object) array(['/tuto-searchbar-for-docs/docs-searchbar-example.png', 'docs-searchbar.js example'], dtype=object) ]
docs.meilisearch.com
Meet Dallas for Victoria 8. This pack includes: Custom 3D Eyebrows especially designed for Dallas' Face and Textures Separate Head and Body Morph with slight facial asymmetry Separate Head and Body HD details (build resolution 4) Separate HD Navel Morph Separate HD Iris and Lacrimal Morph Customized Smile Full Face HD and Smile Full Face Open HD expressions High-Quality Textures created from scratch from HD photo resources No Brow Option Normal Maps - normal face map created from photometric surface.
http://docs.daz3d.com/doku.php/public/read_me/index/63665/start
2020-08-03T13:00:11
CC-MAIN-2020-34
1596439735810.18
[]
docs.daz3d.com
Constructor: chatAdminRights Represents the rights of an admin in a channel/supergroup. Attributes: change_info: Bool, post_messages: Bool, edit_messages: Bool, delete_messages: Bool, ban_users: Bool, invite_users: Bool, pin_messages: Bool, add_admins: Bool Type: ChatAdminRights Example: $chatAdminRights = ['_' => 'chatAdminRights', 'change_info' => Bool, 'post_messages' => Bool, 'edit_messages' => Bool, 'delete_messages' => Bool, 'ban_users' => Bool, 'invite_users' => Bool, 'pin_messages' => Bool, 'add_admins' => Bool]; Or, if you’re into Lua: chatAdminRights={_='chatAdminRights', change_info=Bool, post_messages=Bool, edit_messages=Bool, delete_messages=Bool, ban_users=Bool, invite_users=Bool, pin_messages=Bool, add_admins=Bool}
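For context only (this usage example is not part of the reference above), a chatAdminRights array is typically passed as the admin_rights parameter of channels.editAdmin to promote a user. The channel username, user ID, and rank below are hypothetical placeholders, the script assumes an already started $MadelineProto instance, and the exact parameter set can vary with the API layer your MadelineProto build targets.

<?php
// Sketch: grant a user a subset of admin rights in a channel/supergroup.
// '@mychannel', 'username_of_new_admin' and the rank string are placeholders.
$chatAdminRights = [
    '_'               => 'chatAdminRights',
    'change_info'     => true,
    'delete_messages' => true,
    'pin_messages'    => true,
];

// $MadelineProto is assumed to be an already started danog\MadelineProto\API instance.
$MadelineProto->channels->editAdmin([
    'channel'      => '@mychannel',
    'user_id'      => 'username_of_new_admin',
    'admin_rights' => $chatAdminRights,
    'rank'         => 'moderator',
]);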
https://docs.madelineproto.xyz/API_docs/constructors/chatAdminRights.html
2020-08-03T11:59:25
CC-MAIN-2020-34
1596439735810.18
[]
docs.madelineproto.xyz
What should I teach after Year 6 (primary syllabus)? We currently have the first book in the secondary syllabus, Textbook and Workbook 7, available to buy and use. This continues and extends the teaching from our primary syllabus, but is also great for new starters. You can find out more about the secondary syllabus here.
https://docs.safarpublications.org/article/89-what-should-i-teach-after-year-6-primary-syllabus
2020-08-03T12:58:25
CC-MAIN-2020-34
1596439735810.18
[]
docs.safarpublications.org
The error messages that might be displayed during the NFS hierarchy creation are listed below, along with a suggested explanation for each. The messages listed cover both the creation of the nfs and hanfs resources. Error messages displayed by the LifeKeeper core and by other recovery kits are not listed in this guide. Note that you may stop to correct any problem described here, and then continue with hierarchy creation from the point where you left off – including creating any new LifeKeeper resources you might need for your NFS configuration.
http://docs.us.sios.com/spslinux/9.5.0/en/topic/nfs-hierarchy-creation-errors
2020-08-03T12:57:14
CC-MAIN-2020-34
1596439735810.18
[]
docs.us.sios.com
Download New Version With the Progress Telerik UI for Silverlight Extension you can keep your projects up to date. The Latest Version Acquirer tool automatically retrieves the latest Telerik UI for Silverlight distribution available on the Telerik website. Running the Upgrade Wizard as a next step makes it easy to put the latest Telerik UI for Silverlight package to use. Once a day, upon Visual Studio launch, the Telerik Silverlight Extension queries the Telerik website for a new version of Telerik UI for Silverlight. A dialog is displayed when a new version is discovered. Once the download succeeds, the latest version of Telerik UI for Silverlight will be available for use in the Upgrade Wizard and the New Project Wizard. The Download buttons of the Upgrade Wizard and the New Project Wizard launch the Latest Version Acquirer tool too. The Latest Version Acquirer tool downloads the hotfix zip files, which contain the latest Telerik binaries and any resources vital for Telerik project creation. These get unpacked to the %appdata%\Telerik\Updates folder. If you find the list of packages offered too long and you don't need the older versions, you can close Visual Studio and use Windows Explorer to delete these distributions.
https://docs.telerik.com/devtools/silverlight/integration/visual-studio-extensions/vs-extensions-project-latest-version-acquirer
2021-01-16T01:12:32
CC-MAIN-2021-04
1610703497681.4
[array(['images/extensions_acquirertool_sl_1.png', 'extensions acquirertool sl 1'], dtype=object) array(['images/extensions_acquirertool_sl_2.png', 'extensions acquirertool sl 2'], dtype=object) array(['images/extensions_acquirertool_sl_3.png', 'extensions acquirertool sl 3'], dtype=object) array(['images/extensions_acquirertool_sl_4.png', 'extensions acquirertool sl 4'], dtype=object) array(['images/extensions_acquirertool_sl_5.png', 'extensions acquirertool sl 5'], dtype=object)]
docs.telerik.com
Assigning a default channel to a location is an ideal solution for organizations that need to quickly replace or add a device to a display without having to worry about which channel to select or which mode to put it on. This allows almost anyone to just plug in and register a digital media device, and leave the channel selection to a content team that can be based anywhere. This article provides the instructions to assign a default channel or playlist channel to a location or sub-location, so that devices are automatically assigned a channel once they are registered. Prerequisites - An existing channel or channel playlist within the location. Click here for instructions to create a playlist channel. - An Account Owner or Location Administrator. Assign Channel to Location - Log in to the Appspace console. - Click Devices from the ☰ Appspace menu. - Click the Locations tab, and select the desired location or sub-location to assign the default channel to. - Click the Overview tab, and click the None selected link for Default Channel. - In the Channel selection mode field, select one of the following options: Note: Please be reminded that only channels or playlist channels within the location can be selected as the default channel. Thus, you must create or move an existing channel to the location before being able to assign it as the default channel. - – Ideally for digital signage, as only a single channel will be played. - Once done, click SAVE. The device will begin syncing the selected channel(s). Please note that this may take some time depending on the channel size and network speed. Important: If the device is not displaying any channels, please ensure that the desired channel has been published to the location. If not, please proceed to publish content to the location by following the Publish Content to Device instructions and selecting the desired location the device is on.
https://docs.appspace.com/latest/channel/assign-default-channel-to-network/
2021-01-16T00:25:12
CC-MAIN-2021-04
1610703497681.4
[]
docs.appspace.com
Regex.InfiniteMatchTimeout Field Definition Specifies that a pattern-matching operation should not time out. public: static initonly TimeSpan InfiniteMatchTimeout; public static readonly TimeSpan InfiniteMatchTimeout; static val mutable InfiniteMatchTimeout : TimeSpan Public Shared ReadOnly InfiniteMatchTimeout As TimeSpan Field Value Remarks The Regex(String, RegexOptions, TimeSpan) class constructor and a number of static matching methods use the InfiniteMatchTimeout constant to indicate that the attempt to find a pattern match should not time out. Warning Setting the regular expression engine's time-out value to InfiniteMatchTimeout can cause regular expressions that rely on excessive backtracking to appear to stop responding when processing text that nearly matches the regular expression pattern. If you disable time-outs, you should ensure that your regular expression does not rely on excessive backtracking and that it handles text that nearly matches the regular expression pattern. For more information about handling backtracking, see Backtracking. The InfiniteMatchTimeout constant can be supplied as the value of the matchTimeout argument of the following members: Regex(String, RegexOptions, TimeSpan) RegexCompilationInfo.RegexCompilationInfo(String, RegexOptions, String, String, Boolean, TimeSpan) IsMatch(String, String, RegexOptions, TimeSpan) Match(String, String, RegexOptions, TimeSpan) Matches(String, String, RegexOptions, TimeSpan) Replace(String, String, String, RegexOptions, TimeSpan) Replace(String, String, MatchEvaluator, RegexOptions, TimeSpan) Split(String, String, RegexOptions, TimeSpan)
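For illustration only (this example is not part of the reference text above), the constant can be passed wherever a matchTimeout argument is expected; the pattern and input strings below are arbitrary placeholders.

using System;
using System.Text.RegularExpressions;

class InfiniteMatchTimeoutExample
{
    static void Main()
    {
        // Opt out of a match time-out entirely. Only do this for patterns that are
        // known not to rely on excessive backtracking (see the warning above).
        var wordPattern = new Regex(@"\b\w+\b", RegexOptions.None, Regex.InfiniteMatchTimeout);
        Console.WriteLine(wordPattern.Matches("some sample input").Count);

        // The same constant works with the static overloads that accept a matchTimeout argument.
        bool hasWord = Regex.IsMatch("some sample input", @"\b\w+\b",
                                     RegexOptions.None, Regex.InfiniteMatchTimeout);
        Console.WriteLine(hasWord);
    }
}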
https://docs.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.regex.infinitematchtimeout?view=netframework-4.7.1
2021-01-16T01:00:01
CC-MAIN-2021-04
1610703497681.4
[]
docs.microsoft.com
Using the Stack Auditor Plugin Page last updated: This topic describes how to use the Stack Auditor plugin for the Cloud Foundry Command Line Interface (cf CLI). Overview Stack Auditor is a cf CLI plugin that provides commands for listing apps and their stacks, migrating apps to a new stack, and deleting a stack. One use case for Stack Auditor is when you must migrate a large number of apps to a new stack. This includes moving from cflinuxfs2 to cflinuxfs3 in preparation to upgrade your deployment to a version that does not contain cflinuxfs2. The workflow is as follows: list the apps and their stacks, change each app to the new stack, and then delete the old stack. Install Stack Auditor To install Stack Auditor, do the following: Download the Stack Auditor binary for your OS from Releases in the Stack Auditor repository on GitHub. Unzip the binary file you downloaded: tar xvzf PATH-TO-BINARY Install the plugin with the cf CLI: cf install-plugin PATH-TO-BINARY Use Stack Auditor The sections below describe how to use Stack Auditor. List Apps and Their Stacks This section describes how to see the apps in each org and space and what stack they are using. To see which apps are using which stack, run the following command. It lists apps for each org you have access to. To see all the apps in your deployment, ensure that you are logged in to the cf CLI as a user who can access all orgs. cf audit-stack See the following example output: $ cf audit-stack first-org/development/first-app cflinuxfs2 first-org/staging/first-app cflinuxfs2 first-org/production/first-app cflinuxfs2 second-org/development/second-app cflinuxfs3 second-org/staging/second-app cflinuxfs3 second-org/production/second-app cflinuxfs3 ... Change Stacks This section describes how to change the stack that an app uses. Stack Auditor rebuilds the app onto the new stack without a change in the source code of the app. If you want to move the app to a new stack with updated source code, follow the procedure in the Changing Stacks topic. Warning: After successfully staging the app on cflinuxfs3, Stack Auditor attempts to restart the app on cflinuxfs3. This causes brief downtime. To avoid this brief downtime, use a blue-green strategy. See Using Blue-Green Deployment to Reduce Downtime and Risk. To change the stack an app uses, do the following: Target the org and space of the app: cf target -o ORG -s SPACE Where: ORG is the org the app is in SPACE is the space the app is in Run the following command: Note: If the app is in a stopped state, it remains stopped after changing stacks. Note: When attempting to change stacks, your app is stopped. If the app fails on cflinuxfs3, Stack Auditor attempts to restage your app on cflinuxfs2. cf change-stack APP-NAME STACK-NAME Where: APP-NAME is the app that you want to move to a new stack STACK-NAME is the stack you want to move the app to See the following example output: $ cf change-stack my-app cflinuxfs3 Attempting to change stack to cflinuxfs3 for my-app... Starting app my-app in org pivotal-pubtools / space pivotalcf-staging as [email protected]... Downloading staticfile_buildpack... . . . requested state: started instances: 1/1 usage: 64M x 1 instance urls: example.com last uploaded: Thu Mar 28 17:44:46 UTC 2019 stack: cflinuxfs3 buildpack: staticfile_buildpack state since cpu memory disk details #0 running 2019-04-02 03:18:57 PM 0.0% 8.2M of 64M 6.9M of 1G Application my-app was successfully changed to Stack cflinuxfs3 Delete a Stack This section describes how to delete a stack from your deployment. You must be an admin user to complete this step.
To delete a stack, run the following command. This action cannot be undone, with the following exception: If you upgrade your deployment to a version that contains the stack you deleted, the stack returns on upgrade. cf delete-stack STACK-NAME Where STACK-NAME is the name of the stack you want to delete. $ cf delete-stack cflinuxfs2 Are you sure you want to remove the cflinuxfs2 stack? If so, type the name of the stack [cflinuxfs2] >cflinuxfs2 Deleting stack cflinuxfs2... Stack cflinuxfs2 has been deleted. If you have any apps still running on cflinuxfs2, the command returns the following error: Failed to delete stack cflinuxfs2 with error: Please delete the app associations for your stack.
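The commands above can also be scripted for bulk migrations. The sketch below is not part of the official documentation: it assumes the org/space/app plus stack-name output format shown in the cf audit-stack example, that the current user can target every org and space listed, and that brief downtime per app is acceptable (otherwise use the blue-green approach mentioned earlier).

#!/usr/bin/env bash
# Sketch: move every app still on cflinuxfs2 to cflinuxfs3.
# Assumes "org/space/app stack" lines from cf audit-stack, as in the example output above.
cf audit-stack | awk '$2 == "cflinuxfs2" {print $1}' | while IFS=/ read -r org space app; do
  cf target -o "$org" -s "$space"
  cf change-stack "$app" cflinuxfs3
done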
https://docs.cloudfoundry.org/adminguide/stack-auditor.html
2021-01-16T00:12:11
CC-MAIN-2021-04
1610703497681.4
[]
docs.cloudfoundry.org
Git Integration Blog Posts You can read the CollabNet blog posts on Git integration and follow the latest developments in the CollabNet TeamForge-Git integration space. Here’s a list of a few useful blog posts: - Bulletproof, Military Grade Security – Visualizing the Access Control Mechanisms of Your SCM Solution - You shall not pass – Control your code quality gates with a wizard – Part I - You shall not pass – Control your code quality gates with a wizard – Part II - Migrating from Subversion to Git: What Your PCI-DSS Guy Will Not Tell You, Part I - Migrating from Subversion to Git: What Your PCI-DSS Guy Will Not Tell You, Part II - Seamlessly navigate between TeamForge projects and related Gerrit reviews - TeamForge Git /Gerrit Integration with Jenkins CI - CollabNet Gerrit Notifications – For all who miss the good ol’ git push notifications - TeamForge Just Got Even Better with Git Pull Request Feature! - Gerrit Rebranding – The missing Guide to a customized Look & Feel - Easy guide to mappings between Gerrit Access Control and TeamForge Source Code Permissions Mappings Between TeamForge and Gerrit These tables show how objects and relationships are mapped between TeamForge and Gerrit. Access Rights in Gerrit The Git integration maps Gerrit access rights to TeamForge Role Based Access Control (RBAC) permissions. The mappings file TeamForgeGerritMappings.xml is located in the refs/meta/config branch of the TF-Projects project. How to view/access the TeamForgeGerritMappings.xml file? Log on to TeamForge as a Site Administrator. Select My Workspace > More > Git <hostname>. Note: hostname refers to the server where your Git integration is hosted. Select Projects > List. Select TF-Projects from the list of projects. Select the Branches tab. Click Browse against the refs/meta/config branch name. The TeamForgeGerritMappings.xml file can be found here. The following table shows how TeamForge RBAC permissions are now mapped to Gerrit access rights by default. To make changes to the mappings, modify the TeamForgeGerritMappings.xml file in the refs/meta/config branch of the TF-Projects project on the server where your Git integration is hosted. For instance, if you want to add a user-defined category to your repository, first you need to add the user-defined category to the TeamForgeGerritMappings.xml file. For instructions, see Create a User-defined Repository Category. Gerrit Configuration Options Gerrit provides many configuration options. In addition, CollabNet Gerrit plugins also have configuration options. For more information on Gerrit’s configuration options, see Gerrit Code Review - Configuration. In addition, see Gerrit Performance Cheat Sheet to learn more about tuning Gerrit for optimal performance. CollabNet Gerrit plugins have these configuration options: Section.teamforge Replication Configuration This feature requires TeamForge 8.1 or later. These options are ignored if you have TeamForge 8.0 or earlier. Replication Master Configuration Replication Mirror Configuration Log Files From TeamForge 18.1, Gerrit’s internal log rotation and compression feature is disabled as it is handled automatically by the TeamForge runtime environment. Appendix - History Protection FAQs - History Protection Slide Deck - Git reflog vs History Protection - Gerrit Performance Cheat Sheet
https://docs.collab.net/teamforge191/gitreference.html
2021-01-16T00:16:01
CC-MAIN-2021-04
1610703497681.4
[]
docs.collab.net
Certificates Zercurity keeps track of all of the certificates that are used to sign the applications that your Assets run. Table view Risk Represented by either a red, orange, or green icon of the application's platform. Red Caution. The certificate is known to be malicious and will have been blocked from running. However, administrators should investigate the incident. Orange Warning. The certificate is untrusted or suspicious, or it may have just expired. This could mean the application signed by this certificate is malicious and, depending on your configuration, may have been executed as a result. You will need to check which Assets this application has been installed on. Green Approved. This is a known good and trusted certificate, which has been allowed to run on an asset. Grey Unknown. This certificate's status is unknown. It will be in the process of being checked. Name The certificate name. This is the certificate's common name. Organisation This is the name of the organisation the certificate has been issued to. Usually this is the company name of the developer of the application. Applications Represents the number of applications that this certificate has been used to sign. Issued This is the date and time when the certificate was issued and from which it is valid. Expired This is the date and time when the certificate will expire. If a certificate expires it should be considered invalid. Applications installed with an expired certificate should be updated immediately.
https://docs.zercurity.com/inventory/certificates.html
2021-01-15T23:35:33
CC-MAIN-2021-04
1610703497681.4
[array(['../_images/certificates.png', '../_images/certificates.png'], dtype=object) ]
docs.zercurity.com
Redirect a Page to the BuddyPress Member Profile In BuddyPress, we do not have a page for the member profile. This makes it difficult to redirect to the member profile. If you want to redirect to the member profile after "Submitting a Form", "Login", "Reset the Password", or after registration when the user clicks on the activation link, you need to use a custom function like the one sketched below. Copy the function into your theme's functions.php and change the ID to the ID of the page you want to redirect. If the user is logged in and visits the page, they get redirected to their profile automatically. Now, if you select this page as the redirect in your form settings or as the activation page, the user will get redirected to their profile. Please watch this video where I go over the process in detail.
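A minimal sketch of such a redirect function follows. The function name and the page ID 123 are placeholders you should replace with your own values; bp_loggedin_user_domain() is the BuddyPress helper that returns the logged-in member's profile URL.

<?php
// Sketch: redirect logged-in visitors of a specific page to their BuddyPress profile.
// Replace 123 with the ID of the page you want to redirect.
function my_redirect_page_to_bp_profile() {
    if ( is_page( 123 ) && is_user_logged_in() && function_exists( 'bp_loggedin_user_domain' ) ) {
        wp_safe_redirect( bp_loggedin_user_domain() );
        exit;
    }
}
add_action( 'template_redirect', 'my_redirect_page_to_bp_profile' );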
https://docs.buddyforms.com/article/581-redirect-a-page-to-the-buddypress-member-profile
2021-01-15T22:58:06
CC-MAIN-2021-04
1610703497681.4
[]
docs.buddyforms.com