content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Zeit Now Continuous Deployment# This guide shows you how to use Semaphore to set up continuous integration and deployment to ZEIT Now. Demo project# Semaphore provides a demo project: The demo deploys a serverless function that replies “Hello World!” to all HTTP requests. Testing serverless functions can be challenging. To emulate the cloud environment in Semaphore, the project uses a combination of Node.js, Express, Jest and Supertest. Overview of the pipelines# The pipeline performs the following tasks: - Install dependencies. - Run unit tests. - Continuously deploy the master branch to the production site. On manual approval: - Deploy to the staging site. The complete CI/CD workflow looks like this: Continuous Integration Pipeline (CI)# In the repository, the .semaphore directory contains the annotated pipeline configuration files. The CI pipeline takes place in 2 blocks: - npm install and cache: - Downloads and installs the Node.js packages. - Builds the app and saves it to the cache. - npm test: - Runs unit and coverage tests. version: v1.0 name: Build and test # An agent defines the environment in which your code runs. # It is a combination of one of: npm install and cache commands: - checkout - nvm use - node --version - npm --version - cache restore - npm install - cache store - name: Run tests task: jobs: - name: npm test commands: - checkout - nvm use - cache restore - npm test promotions: # Deployment to staging can be triggered manually: - name: Deploy to staging pipeline_file: deploy-staging.yml # Automatically deploy to production on successful builds on the master branch: - name: Deploy to production pipeline_file: deploy-production.yml auto_promote_on: - result: passed branch: - master Two promotions branch out of the CI pipeline: - Deploy to production: automatically started once all tests are green for the master branch. - Deploy to staging: can be manually initiated from a Semaphore workflow on any branch. Continuous Deployment Pipeline (CD)# The CD pipeline consists of a block with a single job: ZEIT Now is a cloud service for web services and serverless functions. Deployment is performed with the Now CLI. No configuration file is required as long as the project files are located in these special directories: publicfor static files. apior pages/apifor serverless functions. In addition, ZEIT Now can automatically build many popular frameworks. Both staging and production pipelines are almost identical. They only differ on the app name, which maps to the final deployment URL like this: - Production: semaphore-demo-zeit-now.YOUR_USERNAME.now.sh - Staging: semaphore-demo-zeit-now-staging.YOUR_USERNAME.now.sh Run The Demo Yourself# You can get started right away with Semaphore. Running and deploying the demo by yourself takes only a few minutes: Get a Token# - Create a ZEIT Now account. - Open your account Settings - Go to the Tokens tab. - Click on the Create button. - Enter a name for the token, something descriptive like: semaphore-zeit-now - Copy the generated token and keep it safe. Create the pipeline on Semaphore# Now, add the token to Semaphore: - On the left navigation bar, under Configuration click on Secrets. - Hit the Create New Secret button. - Create the secret, as shown below. Copy the token obtained earlier. To run the project on Semaphore: - Fork the Demo project on GitHub. - Clone the repository on your local machine. - In Semaphore, follow the link on the sidebar to create a new project. - Edit any file and do a push to GitHub, Semaphore starts automatically. Once the deployment is complete, the API service should be online. Browse the production URL: $ curl -w "\n" Hello World!
https://docs.semaphoreci.com/examples/zeit-now-continuous-deployment/
2020-09-18T10:43:44
CC-MAIN-2020-40
1600400187390.18
[array(['https://raw.githubusercontent.com/semaphoreci-demos/semaphore-demo-zeit-now/master/images/ci-pipeline.png', 'CI+CD Pipelines for ZEIT Now'], dtype=object) array(['https://github.com/semaphoreci-demos/semaphore-demo-zeit-now/raw/master/images/zeit-create-token.png', 'Create Token'], dtype=object) array(['https://github.com/semaphoreci-demos/semaphore-demo-zeit-now/raw/master/images/semaphore-create-secret.png', 'Create Secret'], dtype=object) ]
docs.semaphoreci.com
Adding BriteForms to Your Existing Unbounce Page Step 1 Create a BriteVerify account by visiting Step 2 Once your account has been created, you will need to upgrade to BriteForms. This can be done by either logging into BriteVerify and either clicking the "Upgrade to BriteForms" button, or by going to "Account Settings" and choosing "Plans" The Upgrade Button Selecting Your Plan Step 3 Once your account is created, click the ‘Forms’ tab and click on the ‘Trusted Domains’ button. Add the root URL., and select the page / form you would like to access. Click ‘Edit’ on the First Variant. If you have multiple variants set up for this page you’ll need to add BriteForms to each variant that contains a form. Step 6 At the very bottom of your page you’ll see Javascripts. Click the plus sign to the right of Javascripts. If you already added javascripts to this form you will see a number instead of a plus sign. mode. IMPORTANT!! By default your form is set to test mode. You will ONLY be able to test the following addresses: - [email protected]: This will simulate a valid verification. - [email protected]: This will simulate an invalid verification. - [email protected]: This will simulate an unknown verification. - [email protected]: This will simulate an accept_all verification. - [email protected]: This will simulate a submission suspected of being fraudulent. This will allow you to develop and run BriteForms from your local machine without having to authorize any domains or worry about triggering BriteVerify's fraud tools and security countermeasures. In test mode you will need to use certain emails to simulate certain events. Submissions and Verifications will still be tracked but they will all be marked as test transactions and only appear in reporting when the form is in test mode. You must take your form out of test mode in order to validate anything other than the addresses shown above. You will do this in Step 10 Back to BriteForms, click on the ‘Forms’ tab then on the BriteForm you just created to view all your test analytics. The main forms screen This screen will provide valuable information about your form health once your form is turned live. The Form Analytics Screen Step 11 When your form is ready to be turned live visit the BriteForms Form tab, click on your BriteForm and move the toggle switch at the top of your analytics page from Test to Live. You really don't want to forget to do this step NOTE: All analytics observed during test mode will be reset to 0 when you turn your form live. Step 12 Revisit your Unbounce page and Re-Publish your form. This is the other step you really don't want to forget Other Documentation BriteForms Setup BriteForms Integration
http://bv-docs.squarespace.com/unbounceaddition
2018-03-17T10:16:32
CC-MAIN-2018-13
1521257644877.27
[]
bv-docs.squarespace.com
brew tap adds more repositories to the list of formulae that brew tracks, updates, and installs from. By default, tap assumes that the repositories come from GitHub, but the command isn’t limited to any one location. brew tap) brew tapwithout arguments lists the currently tapped repositories. For example: $ brew tap homebrew/core mistydemeo/tigerbrew dunn/emacs brew tap <user/repo> makes a shallow clone of the repository at. After that, brew will be able to work on those formulae as if they were in Homebrew’s canonical repository. You can install and uninstall them with brew [un]install, and the formulae are automatically updated when you run brew update. (See below for details about how brew tap handles the names of repositories.) brew tap <user/repo> <URL> makes a shallow clone of the repository at URL. Unlike the one-argument version, URL is not assumed to be GitHub, and it doesn’t have to be HTTP. Any location and any protocol that Git can handle is fine. Add --full to either the one- or two-argument invocations above, and Git will make a complete clone rather than a shallow one. Full is the default for Homebrew developers. brew tap --repair migrates tapped formulae from a symlink-based to directory-based structure. (This should only need to be run once.) brew untap user/repo [user/repo user/repo ...] removes the given taps. The repositories are deleted and brew will no longer be aware of their formulae. brew untap can handle multiple removals at once. On GitHub, your repository must be named homebrew-something in order to use the one-argument form of brew tap. The prefix ‘homebrew-‘ is not optional. (The two-argument form doesn’t have this limitation, but it forces you to give the full URL explicitly.) When you use brew tap on the command line, however, you can leave out the ‘homebrew-‘ prefix in commands. That is, brew tap username/foobar can be used as a shortcut for the long version: brew tap username/homebrew-foobar. brew will automatically add back the ‘homebrew-‘ prefix whenever it’s necessary. If your tap contains a formula that is also present in homebrew/core, that’s fine, but it means that you must install it explicitly by default. If you would like to prioritize a tap over homebrew/core, you can use brew tap-pin username/repo to pin the tap, and use brew tap-unpin username/repo to revert the pin. Whenever a brew install foo command is issued, brew will find which formula to use by searching in the following order: If you need a formula to be installed from a particular tap, you can use fully qualified names to refer to them. For example, you can create a tap for an alternative vim formula. Without pinning it, the behaviour will be brew install vim # installs from homebrew/core brew install username/repo/vim # installs from your custom repo However if you pin the tap with brew tap-pin username/repo, you will need to use homebrew/core to refer to the core formula. brew install vim # installs from your custom repo brew install homebrew/core/vim # installs from homebrew/core Do note that pinned taps are prioritized only when the formula name is directly given by you, i.e. it will not influence formulae automatically installed as dependencies. © 2009–present Homebrew contributors Licensed under the BSD 2-Clause License.
http://docs.w3cub.com/homebrew/taps/
2018-03-17T10:29:10
CC-MAIN-2018-13
1521257644877.27
[]
docs.w3cub.com
COM Updated: February 22, 2008 Applies To: Windows Server 2008 The Component Object Model (COM) is a platform-independent, distributed, object-oriented system for creating binary software components that can interact. COM is the foundation technology for Microsoft Object Linking and Embedding (OLE) (compound documents) and ActiveX® (Internet-enabled components) technologies. Aspects The following is a list of all aspects that are part of this managed entity:
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc774403(v=ws.10)
2018-03-17T11:20:35
CC-MAIN-2018-13
1521257644877.27
[]
docs.microsoft.com
This documentation is only valid for older versions of Wordfence. If you are using Wordfence 7 or later, please visit our new documentation. Difference between revisions of "Troubleshooting and Common Errors" From Wordfence Documentation Revision as of 20:37, 28 April 2016? LiteSpeed Server Issues modified theme or plugin files - Scheduled scans are not starting - You sent me a warning that the plugin "" needs an upgrade. How do I find out what plugin is named ""?? - Why did I get a blank page after a Wordpress automatic update?
https://docs.wordfence.com/index.php?title=Troubleshooting_and_Common_Errors&diff=prev&oldid=572
2018-03-17T10:32:27
CC-MAIN-2018-13
1521257644877.27
[]
docs.wordfence.com
change_history Merge Tags {referer} Merge Tag SummaryDisplays the address of the page which referred the user agent to the current page.Usage{referer}. {admin_email} Merge Tag SummaryDisplays email address configured on the WordPress > Settings > General page.Usage{admin_email}. {user_agent} Merge Tag SummaryDisplays the browser and platform information of the machine from which the entry was submitted.Usage{user_agent}. {ip} Merge Tag SummaryDisplays the form submitter's IP address.Usage{ip}. {embed_url} Merge Tag SummaryThe {embed_url} merge tag in Gravity Forms displays the URL from which the form was submitted. Gravity Forms uses merge tags to allow you to dynamically populate submitted field values and other dynamic information in notification emails, post content templates and more. {form_title} Merge Tag SummaryDisplays the title of the submitted form.Usage{form_title}NotesCan be used in areas such as notifications and confirmations to access form properties. {form_id} Merge Tag Usage{form_id}ResultDisplays the ID of the submitted form.NotesCan be used in areas such as notifications and confirmations to access form properties. {post_edit_url} Merge Tag SummaryDisplays the edit URL for the post which was created during submission.Usage{post_edit_url}NotesOnly relevant when the form uses fields from Post Fields panel. {post_id} Merge Tag {entry_url} Merge Tag SummaryDisplays a URL that will take you to detail page for the submitted entry.Usage{entry_url}NotesThis merge tag can be used in areas such as notifications and confirmations after the entry has been saved.
https://docs.gravityforms.com/category/user-guides/merge-tags-getting-started/
2018-03-17T10:35:48
CC-MAIN-2018-13
1521257644877.27
[]
docs.gravityforms.com
PuppetDB 4.0: Installing PuppetDB via Puppet module Included in Puppet Enterprise 2016.1. A newer version is available; see the version menu above for details. Note: may find it easier to follow our guide to installing PuppetDB from packages. Step 1: Enable the Puppet package repository If you haven’t done so already, you will need to do one of the following: - Enable the Puppet package repository on your PuppetDB server and Puppet master server. - Grab the PuppetDB and PuppetDB-termini,.
https://docs.puppet.com/puppetdb/4.0/install_via_module.html
2018-03-17T10:28:26
CC-MAIN-2018-13
1521257644877.27
[]
docs.puppet.com
Conflict Resolution: NodeJS For reasons explained in the Introduction to conflict resolution, we strongly recommend adopting a conflict resolution strategy that requires applications to resolve siblings according to use-case-specific criteria. Here, we’ll provide a brief guide to conflict resolution using the official Riak Node.js client. How the Node.js Client Handles Conflict Resolution In the Riak Node.js client, the result of a fetch can possibly return an array of sibling objects. If there are no siblings, that property will return an array with one value in it. Example: creating object with siblings So what happens if the length of rslt.values is greater than 1, as in the case above? In order to resolve siblings, you need to either fetch, update and store a canonical value, or choose a sibling from the values array and store that as the canonical value. Basic Conflict Resolution Example In this example, you will ignore the contents of the values array and will fetch, update and store the definitive value. Example: resolving siblings via store Choosing a value from rslt.values This example shows a basic sibling resolution strategy in which the first sibling is chosen as the canonical value. Example: resolving siblings via first Using conflictResolver This example shows a basic sibling resolution strategy in which the first sibling is chosen as the canonical value via a conflict resolution function. Example: resolving siblings via `conflictResolver
https://docs.basho.com/riak/kv/2.2.3/developing/usage/conflict-resolution/nodejs/
2018-03-17T10:43:29
CC-MAIN-2018-13
1521257644877.27
[]
docs.basho.com
Items Grid Items within project can be displayed in a grid that can be sorted, filtered and ordered. Every row in the grid represents an individual item. - You can filter instantly to find items of interest - You can export the items to Excel - You can perform bulk updates to make updates en masse - You can run Excel based reports on the items Watch How To Do It: Item Grid The following is an overview and showcase of features available to the item grid within Gemini.
https://docs.countersoft.com/concept-items-grid/
2018-03-17T10:41:06
CC-MAIN-2018-13
1521257644877.27
[array(['grid.png', None], dtype=object)]
docs.countersoft.com
Best Folding Clothesline For A Family What Is The Best Folding Or Foldown Clothesline For A Family? Now, in our range of fold-down clothesline products, it's quite large. But, out of the different brands we do, obviously for a family line capacity is really, really important. So, under the Austral brand you'll see there's a model there called an Austral Addaline 35 Clothesline. That has really good line capacity, over 35 meters, so plenty of hanging space. If you do need a bit more though, there's another model there called the City Living Urban 3000 Clothesline. Now, that one has got huge capacity as well. So, those two models, or even the Urban 2400 model, are really good for families or larger families. If you do need to know a bit more about the best folding clothesline for a family or those particular!
https://docs.lifestyleclotheslines.com.au/article/751-best-folding-clothesline-for-a-family
2018-03-17T10:26:04
CC-MAIN-2018-13
1521257644877.27
[]
docs.lifestyleclotheslines.com.au
). To redirect Plug and Play devices. Note You can also redirect drives that will be connected after a session to a remote computer is active. To make a drive that you will connect to later available for redirection, expand Drives, and then select the Drives that I connect to later check box. - Click OK and proceed to connect to the remote computer. Note The Remote Desktop Protocol (.rdp) file created by the RemoteApp Wizard automatically enables Plug and Play device redirection. For more information about TS RemoteApp, see the TS RemoteApp Step-by-Step Guide ().. Note Plug and Play device redirection is not supported over cascaded terminal server connections. For example, if you have a Plug and Play device attached to your local client computer, you can redirect and use that Plug and Play device when you connect to a terminal server (Server1, for example). If from within your remote session on Server1, you then connect to another terminal server (Server2, for example), you will not be able to redirect and use the Plug and Play device in your remote session with Server2. You can control Plug and Play Plug and Play device redirection on the Client Settings tab in the Terminal Services Configuration tool (tsconfig.msc) by using the Supported Plug and Play Devices check box. Additional references For information about other new features in Terminal Services, see What's New in Terminal Services for Windows Server 2008.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732578(v=ws.10)
2018-03-17T10:48:00
CC-MAIN-2018-13
1521257644877.27
[]
docs.microsoft.com
Checklist: Deploying Server for NIS Updated: March 1, 2012 Applies To: Windows Server 2008 R2, Windows Server 2012 Server for NIS enables a Microsoft Windows–based Active Directory Domain Services (AD DS) domain controller to administer UNIX Network Information Service (NIS) networks. Server for NIS allows you to migrate existing NIS maps from your UNIX-based NIS servers into the AD DS schema, and manage map data by using either command-line or Windows GUI-based tools. Notes Server for NIS can only be installed on an AD DS domain controller. Additional references For more information about Server for NIS, see: - The Windows Server TechCenter for Active Directory Domain Services ()
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753136(v=ws.11)
2018-03-17T10:15:45
CC-MAIN-2018-13
1521257644877.27
[]
docs.microsoft.com
Monaca Cloud IDE consists of 5 main parts such as: In the menu bar, there are several main menus such as: For more information to enable vision control on your project, please refer to Version Control. This feature allows you to make your project available to other users by sharing the link generated after publishing your project. By accessing the generated link, users can get a copy of their own in their account. All changes made in the copies are not transferred to the original, so there will be no fear of someone messing up the original. Publishing your project is really easy and done by following the next simple steps: File→ Publish Project. Click on Publish button. Use the generated link to share your project. With this feature, we allow Monaca users to directly import published Monaca projects or projects from a given URL directly by just accessing a link. Upon accessing the link, the users will be forwarded to the following screen (if signed in), whereby just clicking the import button the project will be imported into their account. In the Project panel, there are 3 main tabs: Monaca Backend: Contains the backend settings of the project. File Tree, Grep & Backend Once Monaca Debugger is connected with Monaca Cloud IDE, you can do console debugging as well DOM inspection in this panel. For more information, please refer to Monaca Debugger with Monaca Cloud IDE. The Live Preview provides an overview of your app in real-time. You can also interact with this preview as if it is running on an actual device with the limitation of executing the specific device’s functionality (such as camera, contact and so on) and cross-origin network AJAX request. Along with the Monaca Debugger, you will have effective and efficient experiences during app development. In this tab, you can: Configureicon in that tab. Then, you will see a drop-down list of different devices such as iPad, iPhone, and Nexus. You can change the orientation of the screen as well. When using Live Preview, you should be aware of the the following limitations: Access-Control-Allow-OriginHeader (i.e., Cross-Origin Ajax Request is permitted). The Share function allows you to: You can share your project with other Monaca users. You can also add/remove other Monaca users to/from your project. In order to manage the members of your project, please do as follows: Then, the Team Member Manage screen will appear. To add a member, input the email(s) of your team member(s). Please enter one email address per line. You can also assign the role of each member as Developer or Tester by choosing from the drop-down menu. Then, click on Add Member button to send the invitation to them. To remove a member from the project, you can just click on the delete icon at the end of the row of that member’s info as shown below: Views and edits the selected file from the file tree. Various settings such as Preferences is also shown and can be edited here. Once you open a file, you can select it from the tab. The editor supports the syntax highlight of JavaScript/HTML5/CSS3. The editor also supports JavaScript and CSS autocomplete function, Emmet (Zen Coding) and Typescript. Inside this editor, there is also a small and short menu bar as shown below: Within this short menu, you can: Helpicon. Settingicon. You will see 3 menu items such as:
https://docs.monaca.io/en/products_guide/monaca_ide/overview/
2018-03-17T10:44:41
CC-MAIN-2018-13
1521257644877.27
[]
docs.monaca.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Lists all stacks that are importing an exported output value. To modify or remove an exported output value, first use this action to see which stacks are using it. To see the exported output values in your account, see ListExports. For more information about importing an exported output value, see the Fn::ImportValue function. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to ListImportsAsync. Namespace: Amazon.CloudFormation Assembly: AWSSDK.CloudFormation.dll Version: 3.x.y.z Container for the necessary parameters to execute the ListImports service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudFormation/MICloudFormationListImportsListImportsRequest.html
2018-03-17T10:43:16
CC-MAIN-2018-13
1521257644877.27
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Deletes the replication configuration from the bucket. For .NET Core, PCL and Unity this operation is only available in asynchronous form. Please refer to DeleteBucketReplicationAsync. Namespace: Amazon.S3 Assembly: AWSSDK.S3.dll Version: 3.x.y.z Container for the necessary parameters to execute the DeleteBucketReplication
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/S3/MIS3DeleteBucketReplicationDeleteBucketReplicationRequest.html
2018-03-17T10:38:42
CC-MAIN-2018-13
1521257644877.27
[]
docs.aws.amazon.com
Account setup guide Welcome to flex.bi ! To setup your account you have follow these steps: 1. Setup your data connections With flex.bi you can connect to your HansaWorld Standard ERP system, as well as import data from other systems that have open REST API connection or SQL database. You can also upload XLS, CSV files or use Google Spreadsheets integration to add values that are outside of any system. JIRA application will allow you to analyse your issues and tasks. flex.bi allows you to build your own data cubes from any of those sources as well. Before you start, we suggest you to read about Basic concepts of flex.bi. 2. See our report template library and build your first report. When you have your data imported and application set up, you can start working with reports. You can use our Report Templates Library and import ready-to-use reports in your account. You can find more information about available templates and manuals how to set up a report in Report Templates Library section. Learn how to Create your own reports and modify our templates 3. Publish reports, dashboards and wallboards to your team For each type of data you have to decide what will be the best way to communicate it to your decision makers in all company levels. For those who who will work with reports on daily basis Dashboards will be the best way - as they can use Drill into & Drill Across functionality to see related data. For occasional users you can setup Dashboards in email - to receive collection of reports with actual results as PDF. Different User rights can be specified per each person (email). Using Wallboards and Embed reports functionality, you can communicate results to team members who are not working with systems, but who need to know about company progress. If you have any questions, please, write to us at [email protected] and we will suggest you the best way for each situation. Happy analysing! flex.bi Team Setup your data connections - Hansaworld data source - Tildes Jumis (LV) - Excel and CSV files - REST API (non-HansaWorld app) - SQL data source - Google Spreadsheets - JIRA - Basic Concepts of Flex.bi - Google Analytics from REST API - Data access roles - Data Mapping - How to set up Standard ERP HTTPS with self-signed certificate - Dashboards and reports visible to non-flex.bi users - Wallboards
https://docs.flex.bi/confluence/support-center/account-setup-guide-48995807.html
2020-03-28T17:29:33
CC-MAIN-2020-16
1585370492125.18
[]
docs.flex.bi
Burning boot.img Using A Thumbdrive (U-Disk) Copy the boot.img into a thumbdrive, then plug it into your target device: Burning boot.img Using An SD-Card Copy the boot.img to an SD-card, then plug into your target device: Erase Partition You can get help for U-Boot commands by using the U-Boot help command. Typing help followed by a command name gives help regarding that particular command: Typing help at the U-Boot command prompt gives a list of supported commands:
https://docs.khadas.com/vim2/UBootUsage.html
2020-03-28T17:26:41
CC-MAIN-2020-16
1585370492125.18
[]
docs.khadas.com
MongoDB query editor The MongoDB query editor is Martini's dedicated editor for MongoDB queries. This editor is displayed when you open a MongoDB query file ( .mongo). It features content-assist and validation to help you create your queries faster and with ease. When creating a MongoDB query, you must provide a value for each of these fields: - Connection, which indicates the MongoDB database connection to query; - Database, which indicates the database in the connection to query; - Collection, which indicates which collection in the database to query; and - Query Type, which indicates the type of query to execute. Format your MongoDB query Martini can also format your query. To do this, right click on the query editor, and select Format. Formatting can also be triggered by using the shortcut . Creating a new MongoDB query To create a new MongoDB query, right-click on the queries directory of the server which contains the connection you would like to query against, and choose New > MongoDB Query. If you're in the Database perspective, the queries directory will appear under the server itself in the Database Navigator; otherwise it will appear in the core package of the server in the Navigator. Export a MongoDB query to a Gloop MongoDB service You can create a Gloop MongoDB service from a MongoDB query file by right clicking the latter from the navigator and then choosing Export > Gloop MongoDB Service from the appearing context menu. Content assist The MongoDB query editor lets you write queries. Editor hotkeys Pressing while the editor has focus will execute the query in the editor. Exporting a MongoDB collection structure to a Gloop model You can also export existing collection structures into Gloop models from the Database perspective. In order to do that, follow the steps below: - Open the Database perspective. - In the Database Navigator tree, expand the MongoDB connection that with the collection you would like to export. - Expand a database of your choice. - Right click the collection of your choice and select Export > MongoDB Collection to Gloop Model. - In the dialog that appears, specify the location and name of your model. The Location and Name fields are pre-populated by default, and set to the codedirectory and the name of the schema table respectively. - Click Finish. The Gloop model created from the collection is created by getting a sample of the collection to determine the structure.
https://docs.torocloud.com/martini/latest/setup-and-administration/data/database/query/mongodb/
2020-09-18T20:23:55
CC-MAIN-2020-40
1600400188841.7
[array(['../../../../../placeholders/img/coder-studio/compressed/mongodb-query-editor.png', 'MongoDB query editor'], dtype=object) array(['../../../../../placeholders/img/coder-studio/new-mongodb-query.png', 'Creating a new MongoDB query'], dtype=object) array(['../../../../../placeholders/img/coder-studio/export-mongodb-query.png', 'Martini Desktop Export MongoDB to Gloop Service'], dtype=object) array(['../../../../../placeholders/img/common/compressed/export-to-model-mongodb-collection.png', 'Exporting a MongoDB collection to Gloop model'], dtype=object) ]
docs.torocloud.com
Spatial Dataset Services API¶ Last Updated: December 2019 Spatial dataset services are web services that can be used to store and publish file-based spatial datasets (e.g.: Shapefile, GeoTiff, NetCDF). The spatial datasets published using spatial dataset services are made available in a variety of formats, many of which or more web friendly than the native format (e.g.: PNG, JPEG, GeoJSON, OGC Services). One example of a spatial dataset service is GeoServer, which is capable of storing and serving vector and raster datasets in several popular formats including Shapefiles, GeoTiff, ArcGrid and others. GeoServer serves the data in a variety of formats via the Open Geospatial Consortium (OGC) standards including Web Feature Service (WFS), Web Map Service (WMS), and Web Coverage Service (WCS). Another supported spatial dataset service is the THREDDS Data Server (TDS), which is a web server that specializes in serving gridded datasets using common protocols including OPeNDAP, OGC WMS, OGC WCS, and HTTP. Examples of data formats supported by THREDDS include NetCDF, HDF5, GRIB, and NEXRAD. Tethys app developers can use this Spatial Dataset Services API to store and access :term:` spatial datasets` for use in their apps and publish any resulting datasets their apps may produce. Spatial Dataset Engine References¶ The engines for some spatial dataset service engines in Tethys implement the SpatialDatasetEngine interface, which means they implement a common set of base methods for interacting with the service. The GeoServerSpatialDatasetEngine is an example of this pattern. Other engines are powered by excellent 3rd-party libraries, such as Siphon for THREDDS spatial dataset services. Refer to the following references for the APIs that are available for each spatial dataset service supported by Tethys. Spatial Dataset Service Settings¶ Using dataset services in your app is accomplished by adding the spatial_dataset_service_settings() method to your app class, which is located in your app configuration file ( app.py). This method should return a list or tuple of SpatialDatasetServiceSetting objects. For example: from tethys_sdk.app_settings import SpatialDatasetServiceSetting class MyFirstApp(TethysAppBase): """ Tethys App Class for My First App. """ ... def spatial_dataset_service_settings(self): """ Example spatial_dataset_service_settings method. """ sds_settings = ( SpatialDatasetServiceSetting( name='primary_geoserver', description='GeoServer service for app to use.', engine=SpatialDatasetServiceSetting.GEOSERVER, required=True, ), SpatialDatasetServiceSetting( name='primary_thredds', description='THREDDS service for the app to use.', engine=SpatialDatasetServiceSetting.THREDDS, required=True ), ) return sds_settings Caution The ellipsis in the code block above indicates code that is not shown for brevity. DO NOT COPY VERBATIM. Assign Spatial Dataset Service¶ The SpatialDatasetServiceSetting can be thought of as a socket for a connection to a SpatialDatasetService. Before we can do anything with the SpatialDatasetServiceSetting we need to "plug in" or assign a SpatialDatasetService to the setting. The SpatialDatasetService contains the connection information and can be used by multiple apps. Assigning a SpatialDatasetService is done through the Admin Interface of Tethys Portal as follows: Create SpatialDatasetServiceif one does not already exist Access the Admin interface of Tethys Portal by clicking on the drop down menu next to your user name and selecting the "Site Admin" option. Scroll down to the Tethys Services section of the Admin Interface and select the link titled Spatial Dataset Services. Click on the Add Spatial Dataset Service button. Fill in the connection information to the database server. Press the Save button to save the new SpatialDatasetService. Tip You do not need to create a new SpatialDatasetServicefor each SpatialDatasetServiceSettingor each app. Apps and SpatialDatasetServiceSettingscan share DatasetS link. Select the link for your app from the list of installed apps. Assign SpatialDatasetServiceto the appropriate SpatialDatasetServiceSetting Scroll to the Spatial Dataset Services Settings section and locate the SpatialDatasetServiceSetting. Note If you don't see the SpatialDatasetServiceSettingin the list, uninstall the app and reinstall it again. Assign the appropriate SpatialDatasetServiceto your SpatialDatasetServiceSettingusing the drop down menu in the Spatial Dataset Service column. Press the Save button at the bottom of the page to save your changes. Note During development you will assign the SpatialDatasetService setting yourself. However, when the app is installed in production, this steps is performed by the portal administrator upon installing your app, which may or may not be yourself. Working with Spatial Dataset Services¶ After spatial dataset services have been properly configured, you can use the services to store, publish, and retrieve data for your apps. This process typically involves the following steps: 1. Get an Engine for the Spatial Dataset Service¶ Call the get_spatial_dataset_service() method of the app class to get the engine for the Spatial Dataset Service: from my_first_app.app import MyFirstApp as app geoserver_engine = app.get_spatial_dataset_service('primary_geoserver', as_engine=True) You can also create a SpatialDatasetEngine object directly. This can be useful if you want to vary the credentials for dataset access frequently (e.g.: using user specific credentials): from tethys_dataset_services.engines import GeoServerSpatialDatasetEngine spatial_dataset_engine = GeoServerSpatialDatasetEngine(endpoint='', username='admin', password='geoserver') Caution Take care not to store API keys, usernames, or passwords in the source files of your app--especially if the source code is made public. This could compromise the security of your app and the spatial dataset service. 2. Use the Spatial Dataset Engine¶ After you have an engine object, simply call the desired methods on it. Consider the following example for uploading a shapefile to a GeoServer spatial dataset service: from my_first_app.app import MyFirstApp as app # First get an engine engine = app.get_spatial_dataset_service('primary_geoserver', as_engine=True) # Create a workspace named after our app engine.create_workspace(workspace_id='my_app', uri='') # Path to shapefile base for foo.shp, side cars files (e.g.: .shx, .dbf) will be # gathered in addition to the .shp file. shapefile_base = '/path/to/foo' # Notice the workspace in the store_id parameter result = dataset_engine.create_shapefile_resource(store_id='my_app:foo', shapefile_base=shapefile_base) # Check if it was successful if not result['success']: raise Note The type of engine object returned and the methods available vary depending on the type of spatial dataset service.
http://docs.tethysplatform.org/en/latest/tethys_sdk/tethys_services/spatial_dataset_services.html
2020-09-18T19:41:18
CC-MAIN-2020-40
1600400188841.7
[]
docs.tethysplatform.org
21. Battery Lifetime Estimator¶ Battery Lifetime Estimator tool is available for DA14580/581/583 and DA14585/6 families. The tool can be loaded by selecting the “Power Monitor” under Layout in the Toolbar or the “Battery Lifetime Estimator” under Tools. Figure 100 Battery Lifetime Estimator The user can specify the input values for the lifetime estimation. The tool is loaded with the default input values. On the bottom we can see the estimated lifetime in days and a table with results, such as adverting and connection current and charge. The calculations are repeated every time an input value changes. When the validation of an input value fails, a respective message appears and user is expected to correct the value in order to proceed with the lifetime estimation. Figure 101 Connection and Advertising Interval validation
http://lpccs-docs.dialog-semiconductor.com/SmartSnippetsToolbox5.0.8_UM/tools/lifetime_estimator.html
2020-09-18T19:45:34
CC-MAIN-2020-40
1600400188841.7
[array(['../_images/batteryLifeEstimator.png', '../_images/batteryLifeEstimator.png'], dtype=object) array(['../_images/invalidRangeError.png', '../_images/invalidRangeError.png'], dtype=object)]
lpccs-docs.dialog-semiconductor.com
Level Files There are a number of files that are generated when you create a new level in Sandbox. Here's a run down of the common files. For these examples, we'll be referring to the files for Forest, the sample map that is shipped with the SDK. Layers When you create Layers inside Sandbox, a "Layers" sub-folder is created in your level folder. Each layer gets its own .lyr file which allows for collaborative editing. If a layer inside Sandbox has sub-layers within it, another sub-folder is created. Backup Files Backup files are automatically generated by Sandbox each time you save your level. When you save over your .cry file, your existing .cry file is renamed to .bak and if in the event a .bak file already exists, .bak2 is created as a secondary backup. These .bak files are exactly the same as the .cry file, only with renamed extensions. If you wish to use or restore from a backup file, there's two ways in which you can do this (using Forest as an example): - Rename the Forest.bak file to Forest.cry. When prompted about changing the extension, select 'Yes'. - Open the Forest.bak file in Sandbox (you'll need to view "All Files") then "Save As" the Forest.cry file.
https://docs.cryengine.com/pages/viewpage.action?pageId=1606395&navigatingVersions=true
2020-09-18T20:06:49
CC-MAIN-2020-40
1600400188841.7
[]
docs.cryengine.com
- Reference > mongoShell Methods > - Role Management Methods > - db.createRole() db.createRole()¶ On this page Definition¶ db. createRole(role, writeConcern)¶¶ In the roles field, you can specify both built-in roles and user-defined roles. To specify a role that exists in the same database where db.createR Replica set¶ If run on a replica set, db.createRole() is executed using majority write concern by default. Scope¶. The db.createRole() method returns a duplicate role error if the role already exists in the database. Required Access¶:
https://docs.mongodb.com/v3.6/reference/method/db.createRole/
2020-09-18T20:38:35
CC-MAIN-2020-40
1600400188841.7
[]
docs.mongodb.com
Current Series Release Notes¶ 8.3.0-39¶ New Features¶ The first IPv4 address of the network_interfaceis now used for ironic and ironic-inspector API URLs in clouds.yamlin openrcinstead of localhost. Use ironic_api_urland ironic_inspector_api_urlto override. Supports TLS configuration by setting enable_tls=trueand, optionally, generate_tls=true. The corresponding bifrost-cliargument is --enable-tls(auto-generated certificates only). The bifrost-ironic-installrole now validates that the services have been started successfully, use skip_validationto disable. Known Issues¶ Because of Ansible dependencies Bifrost only works on virtual environments created with --system-site-packages. When using Keystone for authentication, it may not be possible to disable TLS after enabling it if the certificate is in a non-standard location. Due to upgrade limitations, it may not be possible to enable TLS on upgrading from a previous version. Do an upgrade first, then enable TLS in a separate installation step. Upgrade Notes¶ The use_public_urlsparameter is no longer supported, just provide public_ipinstead. Bifrost no longer adds ironic and ironic-inspector endpoints to the public firewalld zone, the operator has to do it explicitly if external access is expected. Support for the legacy CSV inventory format has been removed, only JSON and YAML are supported now. Support for installing and using RabbitMQ has been removed. Support for storing introspection data in nginx has been removed. It was useful before ironic-inspector started supporting storing data in the database, which is the default nowadays. Support for the OpenStack MetaData version 2012-08-10 has been removed from the bifrost-configdrives-dynamicrole. The newest supported metadata version is now 2015-10-15. The deprecated parameter node_network_infohas been removed, use node_network_datainstead. Adds the explicit setting of file access permissions to get_url calls in bifrost ansible playbooks to ensure that the contents of “/httpboot” are world-readable independently of which Ansible version is in use. Packaged iPXE ROMs are now used by default on openSUSE, set download_ipxe=trueto override. Bifrost will no longer kill all running dnsmasq processes for you. If you have dnsmasq processes that are not managed by systemd, you have to stop them yourself. No longer supports installation outside of a virtual environment. The parameter enable_venvhas been removed. Bug Fixes¶ Fixes an issue where the bifrost-create-dib-image role overrides any existing ELEMENTS_PATH environment variable value. This fix appends any existing ELEMENTS_PATH value to the path set in the role. Changes to keystone endpoint configuration are now automatically reflected on existing endpoints. Correctly updates repositories copied with copy_from_local_path. When copying repositories using copy_from_local_path, make sure they are consistently owned by the local user. Previously some repositories could end up owned by root. Correctly updates IPA images checksums on a major upgrade. Automatically enables DHCP and TFTP services in firewalld on CentOS/RHEL. Instead of modifying the publicfirewalld zone, creates a new zone bifrostand puts the network_interfacein it. Set firewalld_internal_zone=publicto revert to the previous behavior. Makes /var/lib/ironicand its images subdirectories readable by nginx. This is required for using the images cache. Fixes ACL of PXE and iPXE boot files to make sure they are world-readable. Resolves the issue with ansible versions 2.9.12 and 2.8.14 where implicit setting of file permissions on files downloaded with get_url calls results in overly restrictive permissions. This leads to access denied while attempting to read the contents of “/httpboot” and results in failed deployments. Removes the test_vm_network_enable_dhcpoption and disables DHCP on the libvirt network instead of unconditionally killing all dnsmasq processes on the machine. Adds correct SELinux context for /tftpboot. Other Notes¶ The file env-varshas been removed. It contains variables that only work for no-auth mode and only for ironic itself (not inspector). Use the generated clouds.yamlor openrcin the home directory. Ironic JSON RPC is now always authenticated, even in no-auth mode. Removes the no longer used transform_boot_imagevariable. 8.3.0¶ New Features¶ Adds support for configuring credential-less deploy via the new agentpower interface and the manual-managementhardware type. Extra parameters for ansible can now be passed to bifrost-clivia the -e/ --extra-varsflag. The format is the same as for ansible-playbook. Metadata cleaning is now enabled by default, set cleaningto falseto disable completely. To enable full disk cleaning, set cleaning_disk_eraseto true. The new parameter default_boot_modeallows specifying the default boot mode: uefior bios. Set the new parameter developer_modeto trueto make all packages installed from source to be installed with the --editableflag. The corresponding bifrost-cliargument is --develop. The new variable git_url_rootallows overriding the root URL for all repositories (e.g. changing the default a local path). HTTP basic authentication for API services is now supported in addition to no authentication and Keystone. It is triggered by setting noauth_mode=falsewith enable_keystone=false. Installations with bifrost-clinow use HTTP basic authentication if Keystone is disabled. The ramdisk logs for inspection are now stored by default in /var/log/ironic-inspector/ramdisk. If keystone_lockout_security_attemptsis enabled, the amount of time the account stays locked is now regulated by the new parameter keystone_lockout_duration(defaulting to 1800 seconds). Deploy/cleaning ramdisk logs are now always stored by default, use ironic_store_ramdisk_logsto override. Added creation of a symbolic link from $VENV/collections directory which contains ansible collections to the playbooks subdirectory of bifrost. This is done in the env-setup.sh script. The bifrost-create-vm-nodesrole now supports redfish emulation, set test_vm_node_driver=redfish(or --driver=redfishfor bifrost-cli testenv) to use. The new parameter default_boot_modeallows specifying the default boot mode: uefior bios. Upgrade Notes¶ The variable ci_testingis no longer taken into account by the roles. Use the existing copy_from_local_pathif you need Bifrost to copy repositories from their pre-cached locations. If you use cleaning=trueto enable full disk cleaning, you need to also set cleaning_disk_erase=truenow. Omitting it will result in only metadata cleaning enabled. All services now use journald logging by default, ironic-api.logand ironic-conductor.logare no longer populated. Use ironic_log_dirand inspector_log_dirto override. The ramdisk logs for deploy/cleaning are now by default stored in /var/log/ironic/deploy. The inspector_useruser is not created by default any more. Use bifrost_userinstead. If you’re relying on default passwords (e.g. for the database or keystone passwords), they will be changed on upgrade. Please use explicit values if you want to avoid it. OpenStackSDK is now installed from PyPI by default, set openstacksdk_source_install=trueto override. Previously installation used to be skipped completely if the skip_installvariable is defined, independent of its value. This has been fixed, and now installation is only skipped if skip_installis defined and equals true. Deprecation Notes¶ Deprecates providing inspector discovery parameters via inspector[discovery], use explicit variables instead. Bifrost will switch to HTTP basic authentication by default in the future. If you want to avoid it, please set noauth_modeto falseexplicitly. The ironic_db_passwordparameter is deprecated, please use service_passwordto set a password to use between services or override the whole ironicand keystoneobjects. Security Issues¶ Uses mode 0700 for the inspector log directories to prevent them from being world readable. When using Keystone, no longer locks users out of their accounts on 3 unsuccessful attempts to log in. This creates a very trivially exploitable denial-of-service issue. Use keystone_lockout_security_attemptsto re-enable (not recommended). Uses mode 0700 for the ironic log directories to prevent them from being world readable. Random passwords are now generated by default instead of using a constant. The same parameters as before can be used to override them. Bug Fixes¶ No longer clones repositories with corresponding *_source_installvariables set to false. Ironic Staging Drivers are now installed from source by default since they are released very infrequently (usually once per cycle). The addition of the symbolic link makes bifrost playbooks independent of the ANSIBLE_COLLECTIONS_PATHS environment variable which wasn’t reliably set in some environments. Removing dependency on libselinux-python for Fedora OS family. This package is no longer present in Fedora 32 and was causing installation failures. It is safe to remove as it is used with python2 only. On systems with SELinux enforcing, enables nginx to read symbolic links. Fixes network boot of instances. Other Notes¶ The role bifrost-openstack-ci-prephas been removed. It was only used in the upstream CI context and is no longer required. The variable ci_testing_zuulis no longer used or set. The version of cirros used by default is now 0.5.1 (instead of 0.4.0). Bifrost now uses the equivalent modules from the openstack.cloud collection. The change on modules is listed below. os_client_config is config os_ironic is baremetal_node os_ironic_inspect is baremetal_inspect os_ironic_node is baremetal_node_action os_keystone_role is identity_role os_keystone_service is catalog_service os_user is identity_user os_user_role is role_assignment 8.2.0¶ New Features¶ It is now possible to use the bifrostcloud with introspection commands even in no-auth mode. Debian Buster is now supported as a base operating system. Configures the default deploy and rescue kernel/ramdisk, setting them in driver_infois now optional. Ubuntu Focal (20.04) is now supported as a base operating system. The values of enabled_bios_interfaces, enabled_boot_interfaces, enabled_management_interfacesand enabled_power_interfacesare now derived from the enabled_hardware_typesif left empty (the default). Adds a new parameter internal_ipspecifying which IP address to use for nodes to reach ironic and the HTTP server, and for cross-service interactions when keystone is disabled. By default the IPv4 address of the network_interfaceis used. The manual-managementhardware type is now enabled by default. It can be used with hardware that does not feature a supported BMC. The noopmanagement interface can now be used out-of-box with ipmiand redfishnodes to prevent ironic from changing the boot device and order. MetalSmith is now installed by default. A normal ironic nodes.json(suitable for the baremetal createcommand) is now generated when creating testing VMs. The default location is /tmp/nodes.json. Sets the default resource class for newly enrolled nodes without an explicit resource class. Defaults to baremetal, can be changed via the default_resource_classparameter. Fedora 30 is now supported as a base operating system. Adds two new parameters for controlling how existing git checkouts are handled: update_reposcan be set to falseto prevent the repositories from being updated. force_update_reposcan be set to falseto prevent Bifrost from overwriting local changes. Changes the default version of Ansible to version 2.9. The new variable use_tinyipa(defaulting to true) defines whether to use the pre-built tinyIPA images or production-ready CentOS images built with DIB. Upgrade Notes¶ Explicit support for Fedora versions precedent to 30 has been removed. Explicit support for Debian Jessie has been removed. OpenStackClient is no longer installed when keystone is not enabled. Use the ironic native baremetalcommand instead. For example, instead of openstack baremetal node list use just baremetal node list The shade library is no longer used, nor installed by default. The default version of Ansible used for this release of bifrost is version 2.9. Operators may wish to upgrade if they are directly invoking playbooks or roles. All packages are now installed in a virtual environment in /opt/stack/bifrostby default instead of system-wide. Deprecation Notes¶ The bifrost-inspectorcloud in clouds.yamlis now deprecated, use the main bifrostcloud for all commands. The os_ironic_factsmodule is deprecated. Please use os_ironic_node_infothat returns information in the “node” parameter. Support for system-wide installation of packages is deprecated, untested and may be removed in a future release. Bug Fixes¶ Fixes installing Keystone under CentOS 8. Fixes failure to install on systems with a local resolved by setting disable_dnsmasq_dnsto Trueby default. Fixes fast-track deployment after inspection/discovery by providing the correct ironic API URL to the ramdisk. Fixes deployment in a testing environment on CentOS 8 by using firewalld instead of iptables to enable access from nodes to ironic. An ironic-python-agent image is now updated every time the installation playbooks are run. This is done to avoid discrepancy between ironic and the ramdisk on updates. Set update_ipato falseto prevent the ramdisk update (not recommended) or update_reposto falseto disable any updates.
https://docs.openstack.org/releasenotes/bifrost/unreleased.html
2020-09-18T20:10:14
CC-MAIN-2020-40
1600400188841.7
[]
docs.openstack.org
Planning a Data Load¶ This topic provides best practices, general guidelines, and important considerations for planning a data load. In this Topic: Dedicating Separate Warehouses to Load and Query Operations¶ Loading large data sets can affect query performance. We recommend dedicating separate warehouses for loading and querying operations to optimize performance for each. The number of data files that can be processed in parallel is determined by the number and capacity of servers in a warehouse. If you follow the file sizing guidelines described in Preparing Your Data Files, a data load requires minimal resources. Splitting larger data files allows the load to scale linearly. Unless you are bulk loading a large number of files concurrently (i.e. hundreds or thousands of files), a smaller warehouse (Small, Medium, Large) is generally sufficient. Using a larger warehouse (X-Large, 2X-Large, etc.) will consume more credits and may not result in any performance increase.
https://docs.snowflake.com/en/user-guide/data-load-considerations-plan.html
2020-09-18T20:34:00
CC-MAIN-2020-40
1600400188841.7
[]
docs.snowflake.com
. fill((red, green, blue)[, gcolor=(r, g, b)])¶ Fill the screen with a solid color. New in version 1.3: If gcoloris given then fill with a gradient, from colorat the top of the screen to gcolorat the bottom. blit(image, (left, top))¶ Draw the image to the screen at the given position. blit()accepts either a Surface or a string as its imageparameter. If imageis a strthen the named image will be loaded from the images/directory. draw. circle(pos, radius, (r, g, b), width=1)¶ Draw the outline of a circle with a certain line width.. Tip All of the colours can be specified as (r, g, b) tuples, or by name, using one of Pygame’s colour names The resource loader caches loaded images and sounds. To clear the cache (for instance, if you are running into memory issues), use the unload() and unload_all() functions. Example: cow = Actor('cow') loader.images.unload('cow') # clears the cache of cow.png loader.images.unload_all() # clears all cached image files.get and methods as Rect, including methods like .colliderect() which can be used to test whether two actors have collided. Positioning Actors¶ If you assign a new value to one of the position: This can be done during creation or by assigning a pair of x, y co-ordinates. For example: WIDTH = 200 HEIGHT = 200 alien = Actor('alien', center=(100,100)) def draw(): screen.clear() alien.draw() Changing center=(100, 100) to midbottom=(100, 200) gives you: If you don’t specify an initial position, the actor will initially be positioned in the top-left corner (equivalent to topleft=(0, 0)). Anchor point¶. Rotation¶ New in version 1.2. The .angle attribute of an Actor controls the rotation of the sprite, in degrees, anticlockwise (counterclockwise). The centre of rotation is the Actor’s anchor point. Note that this will change the width and height of the Actor. For example, to make an asteroid sprite spinning slowly anticlockwise in space: asteroid = Actor('asteroid', center=(300, 300)) def update(): asteroid.angle += 1 To have it spin clockwise, we’d change update() to: def update(): asteroid.angle -= 1 As a different example, we could make an actor ship always face the mouse pointer. Because angle_to() returns 0 for “right”, the sprite we use for “ship” should face right: ship = Actor('ship') def on_mouse_move(pos): ship.angle = ship.angle_to(pos) Remember that angles loop round, so 0 degrees == 360 degrees == 720 degrees. Likewise -180 degrees == 180 degrees. Distance and angle to¶ New in version 1.2. Actors have convenient methods for calculating their distance or angle to other Actors or (x, y) coordinate pairs. Transparency¶ New in version 1.3. In some cases it is useful to make an Actor object partially transparent. This can be used to fade it in or out, or to indicate that it is “disabled”. The .opacity attribute of an Actor controls how transparent or opaque it is. - When an actor is not at all transparent, we say it is “opaque” and it has opacityof 1.0, and you can’t see through it at all. - When an actor is completely transparent, it has an opacityof 0.0. This will make it completely invisible. To make an actor that is half-transparent (like a ghost), you could write: ghost = Actor('ghost') ghost.opacity = 0.5 This diagram shows the scale; the grey checkerboard is used to give the sense of transparency: Tip The order in which you draw overlapping transparent objects still matters. A ghost seen through a window looks slightly different to a window seen through a ghost., on_finished=None, *. Tone Generator¶ New in version 1.2. Pygame Zero can play tones using a built-in synthesizer. tone. play(pitch, duration)¶ Play a note at the given pitch for the given duration. Duration is in seconds. The pitch can be specified as a number in which case it is the frequency of the note in hertz. Alternatively, the pitch can be specified as a string representing a note name and octave. For example: 'E4'would be E in octave 4. 'A#5'would be A-sharp in octave 5. 'Bb3'would be B-flat in octave 3. Creating notes, particularly long notes, takes time - up to several milliseconds. You can create your notes ahead of time so that this doesn’t slow your game down while it is running: tone. create(pitch, duration)¶ Create and return a Sound object. The arguments are as for play(), above. This could be used in a Pygame Zero program like this: beep = tone.create('A3', 0.5) def on_mouse_down(): beep.play() Data Storage¶ The storage object behaves just like a Python dictionary but its contents are preserved across game sessions. The values you assign to storage will be saved as JSON, which means you can only store certain types of objects in it: list/ tuple, dict, str, float/ int, bool, and None. The storage for a game is initially empty. Your code will need to handle the case that values are loaded as well as the case that no values are found. A tip is to use setdefault(), which inserts a default if there is no value for the key, but does nothing if there is. For example, we could write: storage.setdefault('highscore', 0) After this line is executed, storage['highscore'] will contain a value - 0 if there was no value loaded, or the loaded value otherwise. You could add all of your setdefault lines towards the top of your game, before anything else looks at storage: storage.setdefault('level', 1) storage.setdefault('player_name', 'Anonymous') storage.setdefault('inventory', []) Now, during gameplay we can update some values: if player.colliderect(mushroom): score += 5 if score > storage['highscore']: storage['highscore'] = score You can read them back at any time: def draw(): ... screen.draw.text('Highscore: ' + storage['highscore'], ...) …and of course, they’ll be preserved when the game next launches. These are some of the most useful methods of storage: - class Storage(dict)¶ storage[key] = value Set a value in the storage. storage[key] Get a value from the storage. Raise KeyError if there is no such key in the storage. setdefault(key, default)¶ Insert a default value into the storage, only if no value already exists for this key. get(key, default=None)¶ Get a value from the storage. If there is no such key, return default, or None if no default was given. save()¶ Saves the data to disk now. You don’t usually need to call this, unless you’re planning on using load()to reload a checkpoint, for example. load()¶ Reload the contents of the storage with data from the save file. This will replace any existing data in the storage. Caution As you make changes to your game, storage could contain values that don’t work with your current code. You can either check for this, or call .clear() to remove all old values, or delete the save game file. Tip Remember to check that your game still works if the storage is empty!
https://pygame-zero.readthedocs.io/en/latest/builtins.html
2020-09-18T21:19:21
CC-MAIN-2020-40
1600400188841.7
[array(['_images/anchor_points.png', '_images/anchor_points.png'], dtype=object) array(['_images/alien_center.png', '_images/alien_center.png'], dtype=object) array(['_images/alien_midbottom.png', '_images/alien_midbottom.png'], dtype=object) ]
pygame-zero.readthedocs.io
Quick select buttons are programmable buttons at the Point of Sale that you can set to be any product that you would like. These can be set to be the same at all locations, or difference by location. All registers at any one location will have the same quick select buttons. Under the RETAIL CHAIN module, go to Edit POS quick selections. Click on the location you will be adjusting the quick selection feature for, or select ‘Change general quick selection’ if all of your locations are using the same set of quick select buttons. In BerlinPOS the order of the quick select items can be arranged using the up/down arrows on the right hand side of the Quick Selection tab.
http://docs-eng.nimi24.com/the-back-office/retail-chain/locations/quickselect
2020-09-18T21:11:07
CC-MAIN-2020-40
1600400188841.7
[]
docs-eng.nimi24.com
TOPICS× Overlay details Overlay details are shown when you hover on top of a link overlay. Overlay details display the following values that are tracked for that link: - Metric - Raw value - Rank - Percentage value - Link ID - Region - Show in Links On Page report
https://docs.adobe.com/content/help/en/analytics/analyze/activity-map/activitymap-overlay-details.html
2020-09-18T20:42:08
CC-MAIN-2020-40
1600400188841.7
[array(['/content/dam/help/analytics.en/help/analyze/activity-map/assets/overlay_details.png', None], dtype=object) ]
docs.adobe.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the DescribeLogPattern operation. Describe a specific log pattern from a LogPatternSet. Namespace: Amazon.ApplicationInsights.Model Assembly: AWSSDK.ApplicationInsights.dll Version: 3.x.y.z The DescribeLogPatternRequest type exposes the following members .NET Core App: Supported in: 3.1 .NET Standard: Supported in: 2.0, 1.3 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ApplicationInsights/TDescribeLogPatternRequest.html
2020-09-18T21:38:33
CC-MAIN-2020-40
1600400188841.7
[]
docs.aws.amazon.com
Storm plugin This plugin monitors, visualizes and alert on your Apache Storm environment. Prerequisites Meter 4.5.0-778 or later must be installed. The Apache Storm plugin 0.9.3 or later supports the following Operating System. Events Generated An Event will be generated whenever the value of lasterror is not nil and will be displayed in the situation room. Once an error occurs an event will be triggered during each polling interval. We will see only one Event for the same error in the situation room with an occurrence count. A new Event will be generated for a different error. Plugin metrics For STORM_BOLT_LASTERROR and STORM_SPOUT_LASTERROR, 1 is used if there was an error, and the value for the errorLapsedSecs is less than the polling interval.
https://docs.bmc.com/docs/intelligence/storm-plugin-736714562.html
2020-09-18T20:26:00
CC-MAIN-2020-40
1600400188841.7
[]
docs.bmc.com
The global preferences dialog window specifies default user preferences for FlexSim. These preferences are saved across several models and remembered when FlexSim is closed and reopened. The dialog window is split into seven tabs. These settings are accessed from the File menu. The Fonts and Colors tab specifies syntax highlighting and other settings used in FlexSim's implementation of the Scintilla code editor. You can also specify settings and colors for the template editor, as well as for unformatted text (which is used in multi-line unformatted text controls like the user description portion of the Properties window). For more information on the Scintilla code editor, go to. The Scintilla text editor is under copyright 1998-2003 by Neil Hodgson <[email protected]>. All Rights Reserved. In this tab page you can specify various settings such as whether you want code to be C++ or FlexScript by default, various grid settings in the ortho and perspective view, excel DDE settings, etc. AutoSave will automatically save a backup model of your currently open model. This backup model will be named [modelname]_autosave.fsm and will be saved in the same directory as your model. AutoSave will only save your model if it is reset and not running. You can disable AutoSave for a specific model in the Model Settings window. For a list of valid time and date format options, see Model Settings. This tab page lets you control libraries that are loaded and displayed in the Drag-Drop Library. The paths specified in the User Libraries to Open on Startup are relative to your libraries folder of the install directory (for example, C:/Program Files/FlexSim/libraries). Enabled Libraries allow you to specify which libraries are visible and which order they appear in. Once a User Library is loaded, it will be displayed in this list. In this tab page you can specify various settings that send or pull content from the FlexSim server in order to give you a more dynamic experience or to help FlexSim better understand how to improve the software. In this tab page you can customize which menu commands are accessible easily through the customizable section of the top toolbar. This tab is used for customizing 3D graphics settings so that FlexSim will run the best on your hardware. Several of these options provide suggestions or hints to the graphics card as to the quality you would like. If the graphics card does not support the feature requested, such as Quad Buffered Stereo 3D or a 32 Bit Depth Buffer, it will fall back to using the next best option that it supports. If no graphics acceleration is available with your system configuration, FlexSim will automatically fall back to using a generic software OpenGL implementation and ignore the settings defined here. This option controls which type of OpenGL rendering context FlexSim will request from the graphics driver on your computer. If checked, FlexSim will display a counter showing how quickly the 3D view is rendering in frames per second. Choose the font and size for the object names and stats rendered in the 3D view. This option controls whether shadows will be rendered in the 3D environment. This specifies whether shadows should be rendered with hard or soft edges. This specifies the resolution of the shadow map. A higher resolution will improve shadow quality, but will require more graphics processing. This specifies the number of cascaded shadow maps to use. More cascade splits will improve shadow quality, but will require more graphics processing. This affects how much shadows are blurred around the edges when using soft shadows. If checked, FlexSim will generate multiple resized "thumbnails" of each texture it loads. This allows for faster and better texture rendering when zoomed out, but requires approximately double the graphics card memory to store the texture. Select how the graphics card should resolve the color of each pixel when you are zoomed in close to a texture. Select how the graphics card should resolve the color of each pixel when you are zoomed out from a texture. Choose the format in which you want the graphics to store color data. Choose the format in which you want the graphics to store depth buffer data. Depth buffer values span from a 3D view's near plane to its far plane, so the depth buffer bit depth defines the granularity by which the graphics card can distinguish which objects are in front of or behind other objects. This option controls how stereoscopic 3D should be rendered. Once your computer is properly configured with a 3D display, this mode will automatically render 3D views in stereoscopic 3D. If you are using an Nvidia Geforce graphics card, the view must be full-screen (F11) to trigger the effect. On Nvidia Quadro or AMD FirePro graphics cards, the effect will work with all 3D views. This mode only works with certain configurations of hardware and software. The graphics card, 3D display monitor, cable/port (DVI-D, DisplayPort, HDMI 1.4a), graphics driver, and operating system must all be compatible for this mode to work correctly. If any of those pieces are not compatible, then FlexSim will automatically fall back to rendering without stereoscopic 3D. This controls the difference in horizontal position between the right and left images, changing the depth of the stereo effect. This controls the focal point of the stereo effect. Increasing the convergence makes near objects appear to pop out of the screen. These settings are applied when enabling RTX Mode. After modifying these settings, restart RTX Mode to see the changes. This tab is used for configuring Visual Studio to compile or debug C++ code.
https://docs.flexsim.com/en/20.1/Reference/GeneralModelSettings/GlobalPreferences/GlobalPreferences.html
2020-09-18T20:14:27
CC-MAIN-2020-40
1600400188841.7
[]
docs.flexsim.com
Satpy’s Documentation¶ Satpy is a python library for reading, manipulating, and writing data from remote-sensing earth-observing meteorological satellite instruments. Satpy provides users with readers that convert geophysical parameters from various file formats to the common Xarray DataArray and Dataset classes for easier interoperability with other scientific python libraries. Satpy also provides interfaces for creating RGB (Red/Green/Blue) images and other composite types by combining data from multiple instrument bands or products. Various atmospheric corrections and visual enhancements are provided for improving the usefulness and quality of output images. Output data can be written to multiple output file formats such as PNG, GeoTIFF, and CF standard NetCDF files. Satpy also allows users to resample data to geographic projected grids (areas). Satpy is maintained by the open source Pytroll group. The Satpy library acts as a high-level abstraction layer on top of other libraries maintained by the Pytroll group including: Go to the Satpy project page for source code and downloads. Satpy is designed to be easily extendable to support any meteorological satellite by the creation of plugins (readers, compositors, writers, etc). The table at the bottom of this page shows the input formats supported by the base Satpy installation. Note Satpy’s interfaces are not guaranteed stable and may change until version 1.0 when backwards compatibility will be a main focus. Changed in version 0.20.0: Dropped Python 2 support. - Overview - Installation Instructions - Downloading Data - Examples - Quickstart - Readers - Composites - Resampling - Enhancements - Writers - MultiScene (Experimental) - Developer’s Guide
https://satpy.readthedocs.io/en/latest/?badge=latest
2020-09-18T19:49:17
CC-MAIN-2020-40
1600400188841.7
[]
satpy.readthedocs.io
Symptom Some recovery kits monitor protected resources by performing query operations that simulate user and/or client activity. This provides SIOS Protection Suite with accurate status information about a protected application or service. It also requires that a valid user account ID and password with login privileges be provided during resource object creation. If the user account does not have login privileges on a particular system, the following error message will be recorded in the Windows Application Event Log: Error Number 1385 – “Logon failure: the user has not been granted the requested logon type at this computer. Solution Have the domain administrator provide login privileges for the user account. Also, most recovery kits that require an ID and password have a resource properties or configuration tab available for administrators to change the account information for the resource. Right-click on the resource object and select the appropriate properties or configuration tab. If the resource does not have an account update feature, the resource object must be deleted and a new one created with updated account information. Post your comment on this topic.
http://docs.us.sios.com/sps/8.7.0/en/topic/restore-and-health-check-account-failures
2020-09-18T21:06:19
CC-MAIN-2020-40
1600400188841.7
[]
docs.us.sios.com
Overview - Customer Service Suite Empower your team to deliver the highest quality customer service in online sales and support. Communicate with customers in their preferred channel, increase efficiency, and find precise answers for complex questions more easily. The Customer Service Suite (CSS) extends your website, web application, native mobile application with an integrated customer service, customer support and online sales platform. Our toolset helps agents to resolve customer issues in record time, as well as sell and consult better. Core FeaturesCore Features - Co-Browsing - Live Chat - Webcalling - Chatbots
https://docs.chatvisor.com/docs/setup05_css_overview/
2020-09-18T19:59:17
CC-MAIN-2020-40
1600400188841.7
[]
docs.chatvisor.com
User Defined Types Cassandra 2.1 introduced user-defined types (UDTs). You can create a new type through CREATE TYPE statements in CQL: CREATE TYPE address (street text, zip int); Version 2.1 of the Python driver adds support for user-defined types. Registering a Class to Map to a UDT You can tell the Python driver to return columns of a specific UDT as instances of a class by registering them with your Cluster instance through Cluster.register_user_type(): cluster = Cluster(protocol_version=3) session = cluster.connect() session.set_keyspace('mykeyspace') session.execute("CREATE TYPE address (street text, zipcode int)") session.execute("CREATE TABLE users (id int PRIMARY KEY, location frozen Using UDTs Without Registering Them Although it is recommended to register your types with Cluster.register_user_type(), the driver gives you some options for working with unregistered UDTS. When you use prepared statements, the driver knows what data types to expect for each placeholder. This allows you to pass any object you want for a UDT, as long as it has attributes that match the field names for the UDT: cluster = Cluster(protocol_version=3) session = cluster.connect() session.set_keyspace('mykeyspace') session.execute("CREATE TYPE address (street text, zipcode int)") session.execute("CREATE TABLE users (id int PRIMARY KEY, location frozen<address>)") class Foo(object): def __init__(self, street, zipcode, otherstuff): self.street = street self.zipcode = zipcode self.otherstuff = otherstuff insert_statement = session.prepare("INSERT INTO users (id, location) VALUES (?, ?)") # since we're using a prepared statement, we don't *have* to register # a class to map to the UDT to insert data. The object just needs to have # "street" and "zipcode" attributes (which Foo does): session.execute(insert_statement, [0, Foo("123 Main St.", 78723, "some other stuff")]) # when we query data, UDT columns that don't have a class registered # will be returned as namedtuples: results = session.execute("SELECT * FROM users") first_row = results[0] address = first_row.location print address # prints "Address(street='123 Main St.', zipcode=78723)" street = address.street zipcode = address.street As shown in the code example, inserting data for UDT columns without registering a class works fine for prepared statements. However, you must register a class to insert UDT columns with unprepared statements.* You can still query UDT columns without registered classes using unprepared statements, they will simply return namedtuple instances (just like prepared statements do). * this applies to parameterized unprepared statements, in which the driver will be formatting parameters – not statements with interpolated UDT literals.
https://docs.datastax.com/en/developer/python-driver/3.18/user_defined_types/
2020-09-18T20:41:51
CC-MAIN-2020-40
1600400188841.7
[]
docs.datastax.com
This plugin allows GitHub users to sign-up and log in SonarQube. At, create a Developer application : "Homepage URL" is the public URL to your SonarQube server, for example "". For security reasons HTTP is not supported. HTTPS must be used. The public URL is configured in SonarQube at "Administration" -> "General" -> "Server base URL"" To provide feedback (request a feature, report a bug) use the Community Forums. Please do not forget to specify plugin and SonarQube versions if it relates to a bug.
https://docs.sonarqube.org/plugins/viewsource/viewpagesrc.action?pageId=6953125
2020-09-18T20:38:12
CC-MAIN-2020-40
1600400188841.7
[]
docs.sonarqube.org
Puppet Enterprise EOL Notice As of December 2018, Puppet Enterprise versions 2017.x and 2016.x are EOL. Refer to Puppet’s lifecycle page Quick Start Process - Define Puppet Master Configuration - Define Puppet Agent Configuration - Apply a Puppet Master and Puppet Agent vRA Property to a blueprint or other component Custom Property - Provision!
http://docs.sovlabs.com/latest/vmware-vra7x-plugin/modules/configuration-mgmt/puppet-enterprise/
2020-09-18T20:02:28
CC-MAIN-2020-40
1600400188841.7
[]
docs.sovlabs.com
How to use the Visual Page Builder Learn how to use the Visual Composer Page Builder plugin. If you haven’t already installed the Visual Composer, please install by following these step. Goto the WordPress Admin > Appearance > Install Plugins > Locate Visual Composer > Click Install If you’re creating a new page, clicking the Backed Editor button will display a screen similar to the one in the diagram above.. Once you’ve added an Element has been added to a page, to edit that Element, simply rollover it and icons will appear for you to select. See the diagram above for reference:: If you wish to load a Visual Composer Template you’ve created or load one of the default Templates, create a new post or page and follow these steps:
http://you.docs.acoda.com/tag/visual-composer-not-showing/
2020-09-18T20:44:35
CC-MAIN-2020-40
1600400188841.7
[]
you.docs.acoda.com
The eLearnCommerce_profile_authentication shortcode will display a login form for your students. If the student is already logged in, it will display the logged in user profile page. How to open the eLearnCommerce Authentication Shortcode Builder? How does the Authentication ShortCode Builder work? You can change the parameters available in the builder to adjust the shortcode to meet your needs. Once you are ready, click Use Shortcode for it to be inserted in the editor. To know more about eLearnCommerce's Visual Shortcode Builder, click here.
https://docs.elearncommerce.com/en/articles/2745690-shortcode-elearncommerce_profile_authentication
2020-09-18T19:19:20
CC-MAIN-2020-40
1600400188841.7
[array(['https://downloads.intercomcdn.com/i/o/175816762/5e6d44fdf410915e28c7f94a/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/197767116/b4e22f04706ef1cab5032917/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/174852621/48c7446039a7100cd567e40f/image.png', None], dtype=object) ]
docs.elearncommerce.com
Creating a contact page with the Customizr theme and the Formidable Forms plugin If you have a WordPress website, be it a blog or a static website, a contact form is a valuable addition to the site. Instead of asking your readers to contact you via mail, you can provide a contact form page and ask your users to give feedback or send queries via the form. The Formidable Forms plugin Formidable Forms is an excellent free contact form plugin for WordPress. It's beautiful and easy to use and inegrates seamlessly with the Customizr theme. The plugin is a pleasure to use with a simple drag-and-drop interface to create custom forms. You can even generate forms from a template. There is a contact form template that the plugin comes with, but you can also create your own template. The free version has all the features you'll need to create a nice contact form : - 7 field types in the free version, - unlimited email notifications, - the visual form styler, - ability to view form submissions from the back-end. And numerous other features, this plugin is a useful addition to every WordPress site. Installation of Formidable Forms Installation of the Formidable Forms plugins follows the usual procedure. From the WordPress admin dashboard, click on Plugins > Add New and enter Formidable Forms in the Search box. From the results, click on the Install Now button against the plugin. After Installation, activate it and you are done with the installation. Creating a Contact Form To create a Contact Form, click on Formidable > Forms from the admin dashboard. There is a listing of the forms available which is none as of now. Click on Add New. You get a screen with the form editor as below. Now, the plugin comes with a predefined template for Contact Form. You can either use this or build a form from scratch. Creation from a template In the form editor, type in the title of the Form. In the body of the form, you will find a dropdown list with the text Or load fields from a template. From this dropdown list, choose Contact us and click on Load Template. Your contact form gets created from the template with a set of pre-defined fields. Click on Create at the bottom of the screen to finalize the creation of the form. You may note that the captcha needs some setup. Either click on the link against the text next to Captcha or access it through Formidable > Global Settings from the admin dashboard. You will have to signup for a reCAPTCHA key. After getting the reCAPTCHA key, enter the Site Key and Private/Secret Key to enable Captcha in your form. Manual Creation Alternately, you can create a contact form with your choice of fields from the available fields. Go to Formidable > Forms from the admin dashboard and click on Add New. In the Form Editor, type in the title. Instead of loading the fields from the template, drag fields from the panel on the right. You may need a Single Line Text for Name. Drag it and drop in the body of the form. Click on Single Line Text to change the label to Name. In a similar fashion, add Email Address and Paragraph Text for email and Message respectively. Click on the asterisk (*) in front of the labels to make the field a required field. Click on Create to finalize the creation of the form. That is a bare minimum Contact Form that you have created. From the Settings tab (just next to the Build tab) of the Form editor, you can set up Form Actions like notifying by email of any new messages submitted. Click on Form Actions in the left panel and expand Email Notification. Substitute [sitename] with the Name field by placing the cursor there and click on the Name Key on the right side panel. Similarly, you can change the From email address. Click on Update to add this Email Notification Action to your form. Creating a Contact Page Now that the form is created, you have to place this form on a page on your site. So, go ahead and create a new page. Click on Pages > Add New from the admin dashboard. In the Page editor, type a title. Click on the Formidable button above the editor. You will see a popup screen with Insert a Form. From the dropdown list against Select a form:, choose the contact form you created (from scratch or from template). Click on Insert into Post. You will find a shortcode for your form pasted in the body of the page. If you have set up any options when you inserted the form, those options are also present in the shortcode. Publish the page. Adding Contact Page to Main Menu You created a contact form and then placed it in a page. There is one last step. You will have to place it in your menu so that your users can actually use it. From the admin dashboard, go to Appearance > Menus, choose the header menu and add the page you created to this menu. Check against the page and click on Add to Menu. Click on Save Menu. Visit your site to see if your Contact Page is present on your menu. Click on the menu item to see your Contact Page. You can see your Contact Page in action. Try using it to verify if it works. Enter a name, email and a message. On submission, if you get a message like " Your responses were successfully submitted. Thank you!" (The default message, which can be changed in the settings of the form), then your form works. You can further verify by checking the entries for the form. Click on Formidable > Entries, from the admin dashboard. You will see a list of entries which are the message submitted. Click on View below an entry to view the message. That is great! Your work is appreciated. Your reader is able to communicate with you through the form. Adding Contact Page to custom Footer Menu In case you do not want the Contact Page to appear in the header menu, but prefer to have it in a footer menu, what should you do? Just a few steps. - Create a menu, say newmenu by clicking on Appearance > Menus from the admin dashboard. - Add the following bit of code to your child theme' ) ).'' : '' ); } - From the customizer, Click on Menus > newmenu. Please note that instead of newmenu, you will choose the name that you gave your new menu. - Click on Add Items and the add the pages you want to your menu. - Check against Footer menu under Menu locations. Save and Publish. Visit your site to see if your footer menu can be seen. You will see something like this. That's it. What a breeze it is to customize your site using Customizr theme. Creation of a contact form and placing it in a standard menu or even a custom menu cannot get simpler than this. Can it? Try it!
https://docs.presscustomizr.com/article/181-creating-a-contact-page-with-the-customizr-theme-the-formidable-forms-plugin
2020-09-18T20:29:11
CC-MAIN-2020-40
1600400188841.7
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/56334203c697910ae05ef774/file-JxJBO8im72.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5631ca7ac69791452ed4faa5/file-mJqfLUVFOt.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5631cacc903360610fc6c517/file-rQzMbnARnw.png', None], dtype=object) ]
docs.presscustomizr.com
Contemporary visions for the wedding being a moral grey area, whilst the most useful in the field with a graduate pupil this season. Education delays the age from which a lady marries. Filipina girls are raised of their old tradition and countries that has great family members values. Spot into the culture: Asian women can be maybe perhaps not individualists; they truly are afraid to be alone, this is the reason family members and the– that is collective, family relations, neighbors etc. Such mail purchase brides catalogs are designed, being a guideline, by using specialized publishing businesses, purchasing print that is rather large.. Fundamentally, yes, although the cost of mail purchase bride s comprises of a checklist that is prolonged could appear a small amount of pricey too The cost of mail purchase brides is clearly nothing at all when compared to joy that is invaluable delight which you gain in revenue. Twenty-one percent of women (20-24 years of age) around the globe were child brides. Find brides that are foreign connect to and let love find you. With countless resources, additionally the many information that is up-to-date on our partner web sites, civil culture will come together and work, locally and globally, to speed up a conclusion to youngster wedding and empower ladies and girls to end up being the motorists of modification. Share your chosen lifestyle, reputation for your nation, music, and films while having a good time learning exactly the same but Slavic. Dating or being hitched to a us girl is like driving a beat-up Ford Escort. With this generation of kids and people that are young it really is getting more socially appropriate to obtain hitched later on and also to wait childbirth, for financial and wellness reasons. As a result, a shadow industry” of application advisors has arisen to simply help potential Russian brides streamline their bids that are online matrimony, Mr. Rowlson writes. Therefore , in reality, difficult anodized cookware ship purchase brides to be should be girls that need to find out their own hubby by in foreign nations. Today’s era that is modern includes a person fulfilling another one on one and investing quite a long time in a relationship before marriage. In a period whenever many people didn’t understand their precise birth date into the place that is first many states went by British typical legislation, which allowed girls to marry at 12 and men at 14. But as perceptions of wedding and youth shifted during the early twentieth century, some Americans started to view marriages between teenagers and grownups as strange or improper. The mail purchase bride sites have an interest in causing you to pleased. Korean society generally speaking, however, nevertheless has a tendency to think about the influx of foreign partners “a crisis,” as opposed to an” that is“irreversible making South Korean families more diverse, said Yang Sung Eun, a teacher in household studies at Chosun University in Southern Korea. Southern Korea happens to be grappling with moving demographics which have kept numerous middle-aged men – particularly in the countryside – cut adrift amid a potential-wife deficit in a country that prizes the rosy image of wedding. People provide sites with their data that are personal papers, and then make transactions online. However, males whom got stunning brides and spouses regarding the online dating sites notice how chatturbate their surrounding modifications if they come. Let’s begin with some “what-to-do” guidelines about dating a Russian or mail order bride that is ukrainian. At precisely the same time, the share of the delighted relationship that ultimately ends up with wedding is higher if in comparison to free relationship apps. In certain parts of asia here nevertheless be physical physical physical violence in a family group. The Hispanic mail purchase bride agencies not just inflate the academic standard of their females, they hide the usability of this training outside of Colombia.
https://docs.securtime.in/index.php/2019/12/09/overseas-dating-site-mail-purchase-brides-first/
2020-09-18T21:00:42
CC-MAIN-2020-40
1600400188841.7
[]
docs.securtime.in
Disallowed Minor Code Changes¶ There are a few types of code changes that have been proposed recently that have been rejected by the Glance team, so we want to point them out and explain our reasoning. If you feel an exception should be made for some particular change, please put it on the agenda for the Glance weekly meeting so it can be discussed. Database migration scripts¶ Once a database script has been included in a release, spelling or grammar corrections in comments are forbidden unless you are fixing them as a part of another stronger bug on the migration script itself. Modifying migration scripts confuses operators and administrators – we only want them to notice serious problems. Their preference must take precedence over fixing spell errors. Typographical errors in comments¶ errors only muddies the history of that part of the code, making git blame arguably less useful. So such changes are likely to be rejected. (This prohibition, of course, does not apply to corrections of misleading or unclear comments, or for example, an incorrect reference to a standards document.) Misspellings in code¶ Misspellings in function names are unlikely to be corrected for the “historical clarity” reasons outlined above for comments. Plus, if a function is named mispelled() and a later developer tries to call misspelled(), the latter will result in a NameError when it’s called, so the later developer will know to use the incorrectly spelled function name. Misspellings in variable names are more problematic, because if you have a variable named mispelled and a later developer puts up a patch where an updated value is assigned to misspelled, Python won’t complain. The “real” variable won’t be updated, and the patch won’t have its intended effect. Whether such a change is allowed will depend upon the age of the code, how widely used the variable is, whether it’s spelled correctly in other functions, what the current test coverage is like, and so on. We tend to be very conservative about making changes that could cause regressions. So whether a patch that corrects the spelling of a variable name is accepted is a judgment (or is that “judgement”?) call by reviewers. In proposing your patch, however, be aware that your reviewers will have these concerns in mind. Tests¶ Occasionally someone proposes a patch that converts instances of assertEqual(True, whatever) to assertTrue(whatever), or instances of assertEqual(False, w) to assertFalse(w) in tests. Note that these are not type safe changes and they weaken the tests. (See the Python unittest docs for details.) We tend to be very conservative about our tests and don’t like weakening changes. We’re not saying that such changes can never be made, we’re just saying that each change must be accompanied by an explanation of why the weaker test is adequate for what’s being tested. Just to make this a bit clearer it can be shown using the following example, comment out the lines in the runTest method alternatively: import unittest class MyTestCase(unittest.TestCase): def setUp(self): pass class Tests(MyTestCase): def runTest(self): self.assertTrue('True') self.assertTrue(True) self.assertEqual(True, 'True') To run this use: python -m testtools.run test.py Also mentioned within the unittests documentation. LOG.warn to LOG.warning¶ Consistently there are proposed changes that will change all {LOG,logging}. warn to {LOG,logging}.warning across the codebase due to the deprecation in Python 3. While the deprecation is real, Glance uses oslo_log that provides alias warn and solves the issue in single place for all projects using it. These changes are not accepted due to the huge amount of refactoring they cause for no reason. Gratuitious use of oslo libraries¶ We are big fans of the oslo libraries and all the hard work the Oslo team does to keep common code reusable and easily consumable. But that doesn’t mean that it’s a bug if Glance isn’t using an oslo library everywhere you could possibly use one. We are all for using oslo if it provides any level of benefit for us and makes sense, but please let’s not have these bugs/patches of “Let’s use oslo because it exists”.
https://docs.openstack.org/glance/latest/contributor/minor-code-changes.html
2020-09-18T19:52:12
CC-MAIN-2020-40
1600400188841.7
[]
docs.openstack.org
CBS’ye Giriş¶ Genel Bakış¶ Tıpkı doküman oluşturmak için kelime işlemci programlar kullandığımız gibi mekansal veriler ile bilgisayar üzerinde işlem yapmak için CBS uygulamaları” kullanırız. CBS **’Coğrafi Bilgi Sistemleri’ için kullanılan bir kısaltmadır. CBS şu bileşenlerden oluşur: - Sayısal Veri –– bilgisayar kullanarak görüntüleyeceğiniz ve analiz edeceğiniz coğrafi veri. Donanım –– veri işlemek, görüntülemek ve depolamak için kullanılan bilgisayarlar. Yazılım –– donanım üzerinde çalışan ve sayısal veri ile çalışmanıza imkan tanıyan bilgisayar programları. CBS’nin bir parçası olan bilgisayar programlarına CBS Uygulaması adı verilir. CBS uygulaması ile sayısal haritaları bilgisayarınızda açabilir, yeni mekansal bilgi üreterek haritaya ekleyebilir, ihtiyaçlarınıza uygun haritaları yazdırabilir ve mekansal analizler gerçekleştirebilirsiniz. Let’s look at a little example of how GIS can be useful. Imagine you are a health worker and you make a note of the date and place of residence of every patient you treat. Yukarıdaki tabloya bakacak olursanız Ocak ve Şubat aylarında çok sayıda kızamık vakası olduğunu hemen farkedersiniz. Sağlık personelimiz her hastanın evinin koordinatlarını tabloya işlemiş. CBS uygulaması ile bu veriyi görüntülersek hastalığın örüntüsü(dağılımı) hakkında daha fazla bilgi sahibi oluruz. CBS Hakkında Daha Fazlası¶ CBS görece yeni bir alandır — 1970’lerde başlamıştır. Bilgisayarlı CBS ilk zamanlar yalnızca pahalı bilgisayarlara sahip şirketler ve üniversiteler tarafından kullanılabiliyordu. Bugunse kişisel bilgisayar veya dizüstü bilgisayar sahibi herkes CBS yazılımı kullanabilir. Zaman içerisinde CBS uygulamalarının kullanımı kolaylaştı –– önceleri bu programları kullanabilmek için yoğun bir eğitim gerekirken bugün amatör ve sıradan kullanıcılar bile kolayca kullanabilmektedir. Yukarıda da belirttiğimiz üzere CBS yazılımdan daha fazlasıdır, sayısal coğrafi verinin yönetimi ve kullanımına ilişkin tüm boyutları içerir. Takip eden rehberlerde CBS Yazılımlarına(Uygulamalarına) yoğunlaşacağız. CBS Yazılımı / Uygulaması Nedir?¶ You can see an example of what a GIS Application looks like figure_gis_application. figure_map_view_towns, figure_map_view_schools, figure_map_view_railways and figure_map_view_rivers figure_vector_data shows different types of vector data being viewed in a GIS application. In the tutorials that follow we will be exploring vector data in more detail. block:. Now you try. Daha Fazla Kaynak¶ Kitap: Desktop GIS: Mapping the Planet with Open Source Tools. Author: Gary Sherman. ISBN: 9781934356067 The QGIS User Guide also has more detailed information on working with QGIS.
https://docs.qgis.org/3.4/tr/docs/gentle_gis_introduction/introducing_gis.html
2020-09-18T21:14:26
CC-MAIN-2020-40
1600400188841.7
[array(['../../_images/menus.png', '../../_images/menus.png'], dtype=object) array(['../../_images/toolbars.png', '../../_images/toolbars.png'], dtype=object) array(['../../_images/map_legend_before.png', '../../_images/map_legend_before.png'], dtype=object) array(['../../_images/vector_data.png', '../../_images/vector_data.png'], dtype=object) ]
docs.qgis.org
The Content Stream is a powerful tool for creating Learning Journeys on your eLearning Platform. how you create a Content Stream: - Click on Personalize Learning 2. Click on Add Learning Channel 3. Name your Content Stream 4. On the same page scroll down and under Content Stream Manager, set status to Active. 5. One by one add the courses, ebooks, videos you published with eLearnCommerce by clicking Add New Type and find courses you published on your Platform under course name and set delivery rules You can choose from the following Delivery Rules: - Instant Access - Dripped - Specific Date - Advanced: Dripped by Seconds Learn More about the Content Stream Timing Functionality 6. Click on Publish at the top right. This will add a Content Timeline as in the example below
https://docs.elearncommerce.com/en/articles/3617871-how-to-create-a-content-stream-in-the-personalized-learning-module
2020-09-18T20:04:29
CC-MAIN-2020-40
1600400188841.7
[array(['https://downloads.intercomcdn.com/i/o/184263081/16a5927b1ca5e2051b3b793f/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/184263877/15e733877d94810f928d57b1/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/184264295/5c4ce04fd794dbf6d20c079b/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/184584384/4c35949f136c79fec8cf0ecb/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/184271246/cc68d1736a1905ccae3ee3b5/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/184274313/2c6e5992488d7f6aa6c3b1b5/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/184278071/1d21c6d60ae2f8bcae002286/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/184321856/356730224533fff4fc5002e4/CS-ipad+%282%29.png', None], dtype=object) ]
docs.elearncommerce.com
330cm Wide Clothesline Check out our ultimate guide for 330cm wide clothesline products that are all Aussie designed or built right here! Built to withstand the Australian climate as well as usage scenarios of varying degrees, these mighty workhorses that we recommend might just be what you are looking for. Before we proceed, we would like to highlight that Lifestyle Clotheslines has been one of the most trusted online clothesline retailer in Australia with over 9,900 verified reviews and counting, rest assured that we will do our best to make sure that you hard-earned money will be well spent. If you want a clothesline that can handle the toughest job, survive the harsh winters and scorching summer heat we experience throughout the year, help you save on utility bills -- all while perfectly blending with your home's aesthetics, then, we've got you covered! We have created a video below where you'll see our most preferred 330cm wide clothesline brands and models, the colours available for each one, the mounting options/freestanding conversion kits, installation services, and more! Links to 330 330cm wide clothesline video on YouTube
https://docs.lifestyleclotheslines.com.au/article/4443-330cm-wide-clothesline
2020-09-18T19:51:56
CC-MAIN-2020-40
1600400188841.7
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555c1c9ce4b027e1978e16f8/images/5f1716b02c7d3a10cbab114a/file-nTjXqWw3jg.jpg', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555c1c9ce4b027e1978e16f8/images/5b03b61e0428635ba8b28a65/file-qLOA0jZSUC.png', 'Get help and support for 330cm clothesline'], dtype=object) ]
docs.lifestyleclotheslines.com.au
I have installed Xamarin and vs2017 on windows server 2016. Getting the below error as below. Please let me know how to resolve this. Note: The current Os version of windows server 2016 is 1607 and Hyper is also enabled. "launching the android emulator android_accelerated_x86_oreo on hyper-v needs windows spring creators update (Redstone 4) or newer installed."
https://docs.microsoft.com/en-us/answers/questions/36654/installingrunning-xamarin-app-windows-server-2016.html
2020-09-18T20:20:50
CC-MAIN-2020-40
1600400188841.7
[]
docs.microsoft.com
In this tutorial you will learn: Equalization implies mapping one distribution (the given histogram) to another distribution (a wider and more uniform distribution of intensity values) so the intensity values are spreaded over the whole range. To accomplish the equalization effect, the remapping should be the cumulative distribution function (cdf) (more details, refer to Learning OpenCV). For the histogram , its cumulative distribution is: To use this as a remapping function, we have to normalize such that the maximum value is 255 ( or the maximum value for the intensity of the image ). From the example above, the cumulative function is: Finally, we use a simple remapping procedure to obtain the intensity values of the equalized image: What does this program do? Downloadable code: Click here Code at glance: #include "opencv2/highgui.hpp" #include "opencv2/imgproc.hpp" #include <iostream> #include <stdio.h> using namespace cv; using namespace std; /** @function main */ int main( int argc, char** argv ) { Mat src, dst; char* source_window = "Source image"; char* equalized_window = "Equalized Image"; /// Load image src = imread( argv[1], 1 ); if( !src.data ) { cout<<"Usage: ./Histogram_Demo <path_to_image>"<<endl; return -1;} /// Convert to grayscale cvtColor( src, src, CV_BGR2GRAY ); /// Apply Histogram Equalization equalizeHist( src, dst ); /// Display results namedWindow( source_window, CV_WINDOW_AUTOSIZE ); namedWindow( equalized_window, CV_WINDOW_AUTOSIZE ); imshow( source_window, src ); imshow( equalized_window, dst ); /// Wait until user exits the program waitKey(0); return 0; } Declare the source and destination images as well as the windows names: Mat src, dst; char* source_window = "Source image"; char* equalized_window = "Equalized Image"; Load the source image: src = imread( argv[1], 1 ); if( !src.data ) { cout<<"Usage: ./Histogram_Demo <path_to_image>"<<endl; return -1;} Convert it to grayscale: cvtColor( src, src, CV_BGR2GRAY ); Apply histogram equalization with the function equalizeHist : equalizeHist( src, dst ); As it can be easily seen, the only arguments are the original image and the output (equalized) image. Display both images (original and equalized) : namedWindow( source_window, CV_WINDOW_AUTOSIZE ); namedWindow( equalized_window, CV_WINDOW_AUTOSIZE ); imshow( source_window, src ); imshow( equalized_window, dst ); Wait until user exists the program waitKey(0); return 0;. Note Are you wondering how did we draw the Histogram figures shown above? Check out the following tutorial!
https://docs.opencv.org/3.0-alpha/doc/tutorials/imgproc/histograms/histogram_equalization/histogram_equalization.html
2020-09-18T20:30:47
CC-MAIN-2020-40
1600400188841.7
[]
docs.opencv.org
502 Bad Gateway Your internet service provider's proxy server is trying to fulfill the request by your browser to access your Website. It receives an invalid response from your website's server. When this happens you will see a 502 Bad Gateway error. This happens due to poor IP communication between back-end servers, possibly including the Web server at your site. Troubleshooting - Clear your browser cache completely and re-start your browser before trying to access your web site. - Try visiting other web site and see if this problem exists for all web sites you try to visit. - If problem exist for all the sites you try to visit, then it's probably your Internet Service Provider has a major equipment failure or overload. Check with your Internet Service Provider to see if they are having any issues. - Another possibility could be something went wrong with your Internet connection. Try using alternative Internet access to determine if there is a problem with your equipment. - If the problem exist only when you try to visit your website, then it is likely that your Web Hosting server is having equipment failure or overloading. You will need to contact your Web Hosting company for details. Basically, if it's a server issue, there is nothing you can do but to contact the relevant parties and wait for them to fix it.
https://docs.presscustomizr.com/article/184-502-bad-gateway
2020-09-18T20:05:48
CC-MAIN-2020-40
1600400188841.7
[]
docs.presscustomizr.com
A. Please join the Aave community Discord server; the Aave team and members of the community look forward to helping you understand and use Aave. Aave Protocol has been audited and secured. The protocol is completely open source, which allows anyone to interact with an user interface client, API or directly with the smart contracts on the Ethereum network. Being open source means that you are able to build any third-party service or application to interact with the protocol and enrich your product. In order to use to interact with Aave protocol, you simply deposit your preferred asset and amount. After depositing, you will earn passive income based on the market borrowing demand. Additionally, depositing assets allows you to borrow by using your deposited assets as a collateral. Any interest you earn by depositing funds helps offset the interest rate you accumulate by borrowing. Interacting with the protocol requires transactions and so transaction fees for Ethereum Blockchain usage, which depend on the network status and transaction complexity. Your funds are allocated in a smart contract. The code of the smart contract is public, open source, formally verified and audited by third party auditors. You can withdraw your funds from the pool on-demand or export a tokenised (aTokens) version of your lender position. aTokens can be moved and traded as any other cryptographic asset on Ethereum. No platform can be considered entirely risk free. The risks related to the Aave platform are the smart contract risk (risk of a bug within the protocol code) and liquidation risk (risk on the collateral liquidation process). Every possible step has been taken to minimise the risk as much as possible-- the protocol code is public and open source and it has been audited. Additionally, there is an ongoing bug bounty campaign live and running. You can find additional risk and security related information in the risk framework and security and audits sections. AAVE is used as the centre of gravity of Aave Protocol governance. AAVE is used to vote and decide on the outcome of Aave Improvement Proposals (AIPs). Apart from this, AAVE can be staked within the protocol Safety Module to provide security/insurance to the protocol/depositors. Stakers earn staking rewards and fees from the protocol. Documentation on tokenomics and governance is available in the flash paper and with further detail in the full documentation. Feel free to join the discussion in the governance forum. If you are unsure about any specific terms feel free to check the Glossary. Feel free to refer to the White Paper for a deeper dive into Aave Protocol mechanisms. Developers can access the Documentation for a technical description of the Aave decentralised lending pool protocol. For a detailed risk analysis, please visit the risk framework. If you still have any questions or issues, feel free to reach the Aave team over the live chat within the app or in the discord or telegram channel
https://docs.aave.com/faq/
2021-07-24T06:54:30
CC-MAIN-2021-31
1627046150134.86
[]
docs.aave.com
Duplicate a Message or Journey You can duplicate any message. - A/B tests and In-App Automation messages duplicate to their composer of origin then open for editing. - For messages created with the Message or Automation composers, you will first have the option to duplicate to the Message or Automation composer. Recurring messages will duplicate to the Message composer. - For journeys: - After duplication, the next screen will be the Journey ManagerA preview of the messages in a journey, with options for editing and testing, and for running experiments. for the new journey. - The new journey name is the original journey name, with " - Copy" appended. - If the original journey was Started at the time of duplication, the new journey status is Paused until you start it. - You cannot duplicate a message within a journey. - Go to Messages » Messages Overview. - Click for the message or journey you want to duplicate. Feedback Was this page helpful? Thank you Thanks for your feedback!Tell Us More Thank you We will try harder!Tell Us More Categories
https://docs.airship.com/guides/messaging/user-guide/manage/duplicate/
2021-07-24T07:42:26
CC-MAIN-2021-31
1627046150134.86
[]
docs.airship.com
expo-web-browserprovides access to the system's web browser and supports handling redirects. On iOS, it uses SFSafariViewControlleror SFAuthenticationSession, depending on the method you call, and on Android it uses ChromeCustomTabs. As of iOS 11, SFSafariViewControllerno longer shares cookies with Safari, so if you are using WebBrowserfor authentication you will want to use WebBrowser.openAuthSessionAsync, and if you just want to open a webpage (such as your app privacy policy), then use WebBrowser.openBrowserAsync. expo install expo-web-browser If you're installing this in a bare React Native app, you should also follow these additional installation instructions. import React, { Component } from 'react'; import { Button, Text, View } from 'react-native'; import * as WebBrowser from 'expo-web-browser'; export default class App extends Component { state = { result: null, }; render() { return ( <View> <Button title="Open WebBrowser" onPress={this._handlePressButtonAsync} /> <Text>{this.state.result && JSON.stringify(this.state.result)}</Text> </View> ); } _handlePressButtonAsync = async () => { let result = await WebBrowser.openBrowserAsync(''); this.setState({ result }); }; } WebBrowserwindow for authentication or another use case where you would like to pass information back into your app through a deep link, be sure to add a handler with Linking.addEventListenerbefore opening the browser. When the listener fires, you should call dismissBrowser -- it will not automatically dismiss when a deep link is handled. Aside from that, redirects from WebBrowserwork the same as other deep links. Read more about it in the Linking guide. import * as WebBrowser from 'expo-web-browser'; SFSafariViewController, and Chrome in a new custom tab on Android. On iOS, the modal Safari will not share cookies with the system Safari. If you need this, use openAuthSessionAsync. #AARRGGBBor #RRGGBBformat. done, close, or cancel. #AARRGGBBor #RRGGBBformat. false #AARRGGBBor #RRGGBBformat. {type: 'opened'}if we were able to open browser. { type: 'cancel' }. dismissBrowser, the Promise resolves with { type: 'dismiss' }. SFAuthenticationSession. warmUpmethod on CustomTabsClient for specified package. { package: string } mayLaunchUrlmethod for browser specified by the package. { package: string }. warmUpAsyncor mayInitWithUrlAsync. You should call this method once you don't need them to avoid potential memory leaks. However, those binding would be cleared once your application is destroyed, which might be sufficient in most cases. { package: string }when cooling is performed, or an empty object when there was no connection to be dismissed. { type: 'dismiss' }. PackageManager.getResolvingActivitiesunder the hood. (For example, some browsers might not be present in browserPackageslist once another browser is set to defult.) { browserPackages: string[], defaultBrowserPackage: string, servicePackages: string[], preferredBrowserPackage: string } PackageManageras capable of handling Custom Tabs. Empty array means there is no supporting browsers on device. PackageManageras capable of handling Custom Tabs Service. This service is used by warmUpAsync, mayInitWithUrlAsyncand coolDownAsync. CustomTabsClientto be used to handle Custom Tabs. It favors browser chosen by user as default, as long as it is present on both browserPackagesand servicePackageslists. Only such browsers are considered as fully supporting Custom Tabs. It might be nullwhen there is no such browser installed or when default browser is not in servicePackageslist. servicePackagelist, it should be capable of performing warmUpAsync, mayInitWithUrlAsyncand coolDownAsync. For opening an actual web page, browser must be in browserPackageslist. A browser has to be present in both lists to be considered as fully supporting Custom Tabs. window.open()method was invoked too long after a user input was fired. expo start:web --httpsor expo start --web --httpsto open your web page in a secure development environment.
https://docs.expo.io/versions/v39.0.0/sdk/webbrowser/
2021-07-24T08:34:57
CC-MAIN-2021-31
1627046150134.86
[]
docs.expo.io
We recommend the Key management service (barbican) for storing encryption keys used by the OpenStack volume encryption feature. It can be enabled by updating cinder.conf and nova.conf. Configuration changes need to be made to any nodes running the cinder-api or nova-compute server. Steps to update cinder-api servers: Edit the /etc/cinder/cinder.conf file to use Key management service as follows: Look for the [key_manager] section. Enter a new line directly below [key_manager] with the following: backend = barbican Restart cinder-api. Update nova-compute servers: Ensure the cryptsetup utility is installed, and install the python-barbicanclient Python package. Set up the Key Manager service by editing /etc/nova/nova.conf: [key_manager] backend = barbican Note Use a ‘#’ prefix to comment out the line in this section that begins with ‘fixed_key’. Restart nova-compute. Special privileges can be assigned on behalf of an end user to allow them to manage their own encryption keys, which are required when creating the encrypted volumes. The Barbican Default Policy for access control specifies that only users with an admin or creator role can create keys. The policy is very flexible and can be modified. To assign the creator role, the admin must know the user ID, project ID, and creator role ID. See Assign a role for more information. An admin can list existing roles and associated IDs using the openstack role list command. If the creator role does not exist, the admin can create the role. Block Storage volume type assignment provides scheduling to a specific back-end, and can be used to specify actionable information for a back-end storage device. This example creates a volume type called LUKS and provides configuration information for the storage system to encrypt or decrypt the volume. Source your admin credentials: $ . admin-openrc.sh Create the volume type, marking the volume type as encrypted and providing the necessary details. Use --encryption-control-location to specify where encryption is performed: front-end (default) or back-end. $ openstack volume type create --encryption-provider luks \ --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LUKS +-------------+----------------------------------------------------------------+ | Field | Value | +-------------+----------------------------------------------------------------+ | description | None | | encryption | cipher='aes-xts-plain64', control_location='front-end', | | | encryption_id='8584c43f-1666-43d1-a348-45cfcef72898', | | | key_size='256', | | | provider='luks' | | id | b9a8cff5-2f60-40d1-8562-d33f3bf18312 | | is_public | True | | name | LUKS | +-------------+----------------------------------------------------------------+ The OpenStack dashboard (horizon) supports creating the encrypted volume type as of the Kilo release. For instructions, see Create an encrypted volume type. Use the OpenStack dashboard (horizon), or openstack volume create command to create volumes just as you normally would. For an encrypted volume, pass the --type LUKS flag, which specifies that the volume type will be LUKS (Linux Unified Key Setup). If that argument is left out, the default volume type, unencrypted, is used. Source your admin credentials: $ . admin-openrc.sh Create an unencrypted 1GB test volume: $ openstack volume create --size 1 'unencrypted volume' Create an encrypted 1GB test volume: $ openstack volume create --size 1 --type LUKS 'encrypted volume' Notice the encrypted parameter; it will show True or False. The option volume_type is also shown for easy review. Non-admin users need the creator role to store secrets in Barbican and to create encrypted volumes. As an administrator, you can give a user the creator role in the following way: $ openstack role add --project PROJECT --user USER creator For details, see the Barbican Access Control page. Note Due to the issue that some of the volume drivers do not set encrypted flag, attaching of encrypted volumes to a virtual guest will fail, because OpenStack Compute service will not run encryption providers. This is a simple test scenario to help validate your encryption. It assumes an LVM based Block Storage server. Perform these steps after completing the volume encryption setup and creating the volume-type for LUKS as described in the preceding sections. Create a VM: $ openstack server create --image cirros-0.3.1-x86_64-disk --flavor m1.tiny TESTVM Create two volumes, one encrypted and one not encrypted then attach them to your VM: $ openstack volume create --size 1 'unencrypted volume' $ openstack volume create --size 1 --type LUKS 'encrypted volume' $ openstack volume list $ openstack server add volume --device /dev/vdb TESTVM 'unencrypted volume' $ openstack server add volume --device /dev/vdc TESTVM 'encrypted volume' Note The --device option to specify the mountpoint for the attached volume may not be where the block device is actually attached in the guest VM, it is used here for illustration purposes. On the VM, send some text to the newly attached volumes and synchronize them: # echo "Hello, world (unencrypted /dev/vdb)" >> /dev/vdb # echo "Hello, world (encrypted /dev/vdc)" >> /dev/vdc # sync && sleep 2 # sync && sleep 2 On the system hosting cinder volume services, synchronize to flush the I/O cache then test to see if your strings can be found: # sync && sleep 2 # sync && sleep 2 # strings /dev/stack-volumes/volume-* | grep "Hello" Hello, world (unencrypted /dev/vdb) In the above example you see that the search returns the string written to the unencrypted volume, but not the encrypted one. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/cinder/rocky/configuration/block-storage/volume-encryption.html
2021-07-24T07:12:22
CC-MAIN-2021-31
1627046150134.86
[]
docs.openstack.org
cdfBinomialInv¶ Examples¶ What is a reasonable range of wins for a basketball team playing 82 games in a season, with a 60% chance of winning any game? For our example we will define a reasonable range as falling between the top and bottom deciles. // Probability range range = { .10, .9 }; // Number of trials trials = 82; // Probabiliy of success prob = 0.6 // Call cdfBinomialInv s = cdfBinomialInv(range, trials, prob); print "s = " s; After above code, s = 43 55 This means that a team with a 60% chance of winning any one game would win between 43 and 55 games in 80% of seasons. Remarks¶ For invalid inputs, cdf(), pdfBinomial(), cdfNegBinomial(), cdfNegBinomialInv()
https://docs.aptech.com/gauss/cdfbinomialinv.html
2021-07-24T07:27:20
CC-MAIN-2021-31
1627046150134.86
[]
docs.aptech.com
The command bar is designed to enable you to navigate across the devtron dashboard without having to click around the screen. Top-level categories (eg. app, chart, security, global-config) are auto-filled depending upon your location on the Devtron dashboard. You can clear the top-level category to navigate within other category locations. Open command bar by clicking the 🔍 search icon on left navbar or pressing Cmd/Ctrl + / Start typing the app name you're looking for. Navigate using ↓ ↑ between the matching results and press → to view nested options. Note: Pressing Enter on a highlighted option will navigate to the selected page location. In this case, app / dashboard / configure / workflow-editor will navigate to the Workflow editor in dashboard application. Similarly, you can use the command bar to navigate around the Devtron dashboard without a click. We would love to know your experience with the command bar. Jump in to the Devtron Discord Community
https://docs.devtron.ai/user-guide/command-bar
2021-07-24T07:28:39
CC-MAIN-2021-31
1627046150134.86
[]
docs.devtron.ai
This error message has come up and your customers are asking themselves as well as yourself why? And you have no idea what exactly is going on? Well, there are several possible causes for this to be showing up and we're here to shed some light on them. 🤔 Among the possible causes, we'll be inserting some possible workarounds - and how to bring the portal closer to your customers so they know precisely which information needs to be entered - and where. ✅ 💬 Entered data needs to match 100% 🔥 The data entered by the customer needs to match 100%. This means no spaces, no hyphens, nothing that would turn the info entered into an incorrect one. 💬 Customers do not know their order or where to find it? 💡 Yes, this happens often and it happens to the best of us. Customers simply forgot their order number and do not know where to find it. 💬 Customers forgot the exact e-mail address they used? 🤯 Does it happen often? You guessed it, it does! In the era where technology is taking over, we have way too many email addresses. And we forget, we're human after all. 💬 The exact format of the order number is not followed through? ℹ️ It can happen indeed that the customers do not know the exact format of the order number (e.g. including the pre-fix) which needs to be entered. This is crucial information and without this, the error message will keep showing up. 💬 Typos ❌ In the modern era, especially in the post-covid one, where we communicate mostly by text, typos can be a man's biggest enemy. Watch out for them! They could be found in e-mail addresses, order numbers, practically anywhere. How to advise your customers as well as reduce the number of issues similar to this one? Here's some advice on that: Customize the text fields in advance to warn or advise your customers about the information they need to enter. E.g. the right format of the order. Customize the notification that shows up when an order cannot be found. Insert a helpful note that may solve the issue your customer is facing instantly! Using their ZIP code instead of email address could also help reduce the cases you get. ⚡ Pro-Tip: You can customize your text fields when you head over from your Dashboard ➡️ Translations ➡️ Customize. Selecting a language from the dropdown menu will leave you with a whole new world to explore, and customize!
https://docs.richcommerce.co/common-questions/order-not-found
2021-07-24T08:50:05
CC-MAIN-2021-31
1627046150134.86
[]
docs.richcommerce.co
Setup This page will walk you through the setup process required to install and configure Sofi on your ServiceNow instance in under 5-minutes. We recommend using a non-production instance of ServiceNow that is a recent clone of your production environment. Sofi supports all current versions of ServiceNow. If you require any assistance during the setup or trial period we are here to help at [email protected] After successfully installing Sofi, we need to configure Sofi to learn using your historical data to provide Intelligent Call Classification and Routing predictions. While Sofi provides default configuration as part of the base install, every customer tends to have a different data model. It is important to understand your data and how Sofi can use your data to provide Intelligent Call Classification and Routing predictions. In addition to Service Agent Assistant, Sofi ships with an express version of Sofi Virtual Assistant. Sofi Virtual Assistant provides a number of out the box features, allowing you to experience the benefits of deploying an Intelligent Virtual Agent to your Service Portal. 1. Request Sofi Instance Before installing Sofi on your ServiceNow instance, you will need your dedicated Sofi instance to be provisioned. If you have not been provided an instance, you securely connect to your dedicated Sofi instance. Note: Please contact us at [email protected] if you require a new API key, as we do not store this key. 2. Installing Sofi on your ServiceNow instance Installing Sofi on your ServiceNow instance is through a ServiceNow Update Set and associated XML configuration file. The installation of Sofi does not affect exisiting ServiceNow configurations, a complete list of dependancies is available upon request. You will be provided with the Sofi files Sofi x.x.x.zip file,
https://docs.sofi.ai/docs/setup
2021-07-24T08:33:01
CC-MAIN-2021-31
1627046150134.86
[]
docs.sofi.ai
distance (in texels) between separate UV tiles in lightmaps. (Editor only). The range is 2 to 100. The default value is 2. This setting can be useful when scaling lightmaps down to support lower-end hardware, which can lead to bilinear filtering issues between neighbouring object's tiles. Adjust this setting to increase the space between tiles, and avoid this issue. Note that adjusting this setting does not affect bilinear filtering within the tile itself. This setting affects lightmaps generated by the Baked Global Illumination system. When Unity serializes this LightingSettings object as a Lighting Settings Asset, this property corresponds to the Lightmap Padding property in the Lighting Settings Asset Inspector. See Also: Lighting Settings Asset.
https://docs.unity3d.com/ScriptReference/LightingSettings-lightmapPadding.html
2021-07-24T09:08:07
CC-MAIN-2021-31
1627046150134.86
[]
docs.unity3d.com
Set Up VAT Rates To set up a new VAT rate 1.On the Codejig ERP Main menu, click the VAT module, and then select VAT Rate. A listing page of the VAT Rate directory opens. 2.On the listing page, click + Add new. You are taken to a page for setting up a new VAT rate. 3.Define the VAT rate: - Provide a name for the VAT rate. - Select a VAT type from the list of pre-defined types. - Specify a country where the VAT rate is applicable (information that enables Codejig ERP to suggest correct VAT rates for use in sales and purchase documents after the system has established what type of VAT transaction is relevant in a particular business situation. For example, if you trade with other countries, VAT treatment of cross-border transactions must be implemented. However, if you purchase or sell goods/services within your country of residence, domestic VAT is applied. For information how the system figures what type of VAT transaction it has to deal with, see VAT Registration.For detailed information about VAT rates suggested for use in each specific VAT situation, see the explication of the VAT field for any sales process document. ). - Select a VAT account for accumulating input VAT charged at this VAT rate. - Specify a VAT account for accumulating output VAT you charge at this rate. - In the Tax rate history section, enter the relevant rate in % along with the date on which it becomes effective. 4.Click Save. The page for defining new VAT rates consists of the following sections: - General area - Accounting section - Tax rate history section To be able to save new VAT rates, you have to fill in the required fields which are marked with an asterisk (*). More information VAT Rate: Accounting Section VAT Rate: Tax Rate History Section Manage VAT Rate Changes Modify Data Display Data Delete Data
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427395976
2021-07-24T08:03:20
CC-MAIN-2021-31
1627046150134.86
[]
docs.codejig.com
You can quickly navigate to the objects that you visited during your vSphere Web Client session. You can switch between objects you last visited without having to search for the objects in the object navigator or in the inventory tree. In the Recent Objects pane, you can see a history of the most recent objects that you visited in your environment. You can see the most recent objects that you visited and the latest objects that you created. The recent objects list is persistent between vSphere Web Client sessions, but the new objects list is not persistent between vSphere Web Client sessions. Procedure - In the Recent Objects pane, select the tab that you want to view.Objects are listed in two tabs depending on whether you visited or created the object. - Click the object that you want to view.The object displays in the center pane of the vSphere Web Client Results You have navigated to the object that you selected in the Recent Objects pane.
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-0AEF3222-68CA-4BCA-9710-6FA72D0E3194.html
2021-07-24T08:46:32
CC-MAIN-2021-31
1627046150134.86
[]
docs.vmware.com
You need to activate your license to receive updates for the theme and install/update the premium plugins included in your purchase. Go to your WordPress admin dashboard and find the theme name under the Appearance menu to activate your license. Where can I find my purchase code? Login to your ThemeForest account and navigate to the Downloads page. Your purchase code is in the TXT file that is available to download under the “Download” button.
https://docs.rtthemes.com/document/license-activation/
2021-07-24T09:02:12
CC-MAIN-2021-31
1627046150134.86
[]
docs.rtthemes.com
#include <wx/propgrid/property.h> Base class for wxPropertyGrid cell renderers. Flags for Render(), PreDrawCell() and PostDrawCell(). Paints property category selection rectangle. Utility to draw editor's value, or vertically aligned text if editor is NULL. Utility to draw vertically centered text. Returns size of the image in front of the editable area. Reimplemented in wxPGDefaultRenderer. Utility to be called after drawing is done, to revert whatever changes PreDrawCell() did. Utility to render cell bitmap and set text colour plus bg brush colour. Returns true if rendered something in the foreground (text or bitmap). Implemented in wxPGDefaultRenderer.
https://docs.wxwidgets.org/3.1.5/classwx_p_g_cell_renderer.html
2021-07-24T08:38:50
CC-MAIN-2021-31
1627046150134.86
[]
docs.wxwidgets.org
Zextras Suite 3.1.9 Backup Issue ID: BCK-24 Title: Fixed and added more info to restore blobs command for notifications Description: Now, after a restore blobs operation has been performed, the list of parameters is correctly displayed. Issue ID: BCK-437 Title: Backup’s CLI doItemRestore accepts parameter in different forms Description: Now the CLI accepts the account’s name or the account’s id as parameter Issue ID: BCK-439 Title: Changed backupChatEnabled attribute default value to false Description: Now, by default, the backup has been disabled. Issue ID: BCK-446 Title: Added a new parameter to undelete command Description: Now, with the undelete command is possible to restore deleted items in their original folder Issue ID: BCK-447 Title: NullPointerException during purge with third party backup fixed Description: Fixed a bug that prevented to complete the purge operation if third party backup on S3 is enabled Issue ID: BCK-461 Title: doRestoreOnNewAccount command has been fixed Description: Fixed issue that prevented restoring older deleted account when backup contains multiple accounts with the same name. Issue ID: BCK-493 Title: Backup on external volume has been fixed Description: Fixed the check if backup is migrated on a new bucket with the same credentials Fixed the creation of backup volume directly from migrate/set command General Issue ID: COR-515 Title: doDeployClientZimlet download fixed Description: Fixed a bug that was downloading the zimlet from a wrong path Issue ID: COR-516 Title: The doDeploy command’s error message has been improved with explanation Description: Now, when doDeploy command fails, a message is displayed containing the error and the explanation Issue ID: COR-572 Title: GetAllOperations command has been fixed Description: Fixed a bug that prevented the correct output from being displayed if no operations were running Issue ID: COR-584 Title: The display of the config status has been improved Description: Now, in the case of out-of-sync nodes the config status reports the errors and the related causes Docs Issue ID: DOCS-113 Title: Popup error if user clicks on preferences before complete zimlet load Description: Popup error if a user clicks on preferences before complete Docs zimlet load Drive Issue ID: DRIV-974 Title: Special characters in Drive file name fixed Description: Now it is possible to save into Drive attachments that have a single quote in the name Mobile Issue ID: MOB-306 Title: Shared folders file download fixed Description: Fixed a bug that prevented attachments to be downloaded from emails in shared folders. Issue ID: MOB-308 Title: EAS autocomplete honor zimbra contacts autocomplete settings Description: When composing a new mail via EAS device, autocomplete on recipient address will search in local contact, GAL, or shared contacts, honoring Zimbra contacts settings (zimbraPrefSharedAddrBookAutoCompleteEnabled, A_zimbraPrefGalAutoCompleteEnabled) Powerstore Issue ID: PS-286 Title: Fixed logs for mailbox purge command Description: Fixed a bug that wrongly displayed the logs for mailbox purge command Issue ID: PS-291 Title: DoMailboxMove now returns an error for missing parameters Description: Added an error that is shown when a user tries to perform a mailbox move operation by CLI without specifying any parameters or specifying the wrong ones. Issue ID: PS-297 Title: BulkDelete service fixed Description: Fixed a bug that doesn’t retry failed deletions on the local file system Team Issue ID: TEAMS-2054 Title: Improved Instant Meeting UI Description: Such a big UI improvement for Instant Meeting, with a new layout with grid-mode or cinema-mode, both with fullscreen available; see the list of who is speaking, use the push to talk feature or mute/unmute someone with microphone issues. Participants list is cleaner and more organized, you can cycle it if your meeting is large. Resizing the window will adjust automatically every ui component of the instant meeting. Instant Metting will remain open if the owner is still in there, otherwise if owner left and you’re the last user in there the Instant Meeting will close automatically. Issue ID: TEAMS-2128 Title: Mute notifications button added Description: Added mute notifications button in one to one conversations, groups, spaces. Issue ID: TEAMS-2167 Title: Mute feature for conversations added Description: It is now possible to mute the conversations, groups and spaces to avoid notifications. Notes: this is very useful for noisy groups! Issue ID: TEAMS-2353 Title: Meeting views on grid mode have been improved Description: Now, during a meeting, a user can see if other users are talking, via the green border that appears in their panel Issue ID: TEAMS-2356 Title: Little tiles separation Description: Stream components are more visible thanks to its margin Issue ID: TEAMS-2357 Title: Writing notification fixed in conversations Description: If the connection with the server is lost while writing, the "is writing" notification will remain until logout. Now this has been fixed. Issue ID: TEAMS-2376 Title: Chat list filter has been improved Description: Now, when the user clicks on the "chats" tab after filtering the chat list, the filter is reset Issue ID: TEAMS-2380 Title: Removed notifications for messages of join, left and kicked type on channels and spaces Description: Removed notifications for messages from badge for channels and spaces in case someone joins, left or has been kicked out from a channel or space, only if these messages were received during the session Issue ID: TEAMS-2382 Title: A new button has been added to mini-chat for calls Description: Added a new button on mini-chat header that allows you to call the other member/members who are part of the conversation Issue ID: TEAMS-2384 Title: A new button has been added to switch from the Team tab to the related mini-chat Description: Added a new button on conversation header that allows you to direct to the related mini-chat Issue ID: TEAMS-2385 Title: A new button has been added to switch from the mini-chat to the Team tab Description: Added a new button on mini-chat header that allows you to direct to the related conversation on the Team tab Issue ID: TEAMS-2392 Title: Team user search does not performs too many searches Description: Check if team search do not performs too many searches Issue ID: TEAMS-2433 Title: Added copy in message menu Description: Added copy functionality on bubble contextual menu Issue ID: TEAMS-2477 Title: Mailbox move must handle mute Description: When mailbox move is performed, even mute conversation info should be moved Issue ID: TEAMS-2491 Title: GetHistory doesn’t show deleted messages Description: GetHistory doesn’t show deleted messages Issue ID: TEAMS-2498 Title: Add papyrous as conversation background image Description: Add papyrous as conversation background image Issue ID: TEAMS-2615 Title: Video Server installer differentiates Zimbra NE and OSE installation Description: Now the Video Server installer provides the command to run to configure it both on Zimbra Network Edition and Zimbra Open Source Edition. Issue ID: TEAMS-2620 Title: Error with multi version cluster Description: Exception is thrown when a user on a server using APIv9 create a conversation with a user on a server using APIv10 Issue ID: TEAMS-2621 Title: Fixed janus calls bug on rooms Description: Fixed a bug that prevented the calls in rooms to be started if these rooms are on a different server than the user’s.
https://docs.zextras.com/zextras-suite-documentation/3.2.0/changelogs/3.1.9.html
2021-07-24T08:01:06
CC-MAIN-2021-31
1627046150134.86
[]
docs.zextras.com
AWS Glue API Permissions: Actions and Resources Reference Use the following table as a reference when you're setting up Authentication and Access Control for AWS Glue and writing a permissions policy to attach to an IAM identity (identity-based policy) or to a resource (resource policy). The table lists each AWS Glue API operation, the corresponding actions for which you can grant permissions to perform the action, and the AWS resource for which you can grant the permissions. You specify the actions in the policy's Action field, and you specify the resource value in the policy's Resource field. Actions on some AWS Glue resources require that ancestor and child resource ARNs are also included in the policy's Resource field. For more information, see Data Catalog Amazon Resource Names (ARNs). Generally, you can replace ARN segments with wildcards. For more information see IAM JSON Policy Elements in the IAM User Guide. Condition keys for IAM policies are listed by API operation. You can use AWS-wide condition keys in your AWS Glue policies to express conditions. For a complete list of AWS-wide keys, see AWS Global Condition Keys in the IAM User Guide. Note To specify an action, use the glue: prefix followed by the API operation name (for example, glue:GetTable). If you see an expand arrow (↗) in the upper-right corner of the table, you can open the table in a new window. To close the window, choose the close button (X) in the lower-right corner.
https://docs.aws.amazon.com/glue/latest/dg/api-permissions-reference.html
2019-05-19T11:25:34
CC-MAIN-2019-22
1558232254751.58
[]
docs.aws.amazon.com
Motors can be used to control whether the conveyor systems are on or off at a given time. The motor can also be used to sync dog gaps on a power and free chain loop when simulating a power and free conveyor system. For information on events, see the Event Listening page. The motor has the following events: Explain the event. When does the event occur? It has the following parameters: Return value (if applicable) The motor does not implement any states. The motor does not track any statistics. Motor properties can be edited in Quick Properties or the Motor property window. The following sections explain the available properties in each tool. The following image shows the motor properties that are available in Quick Properties: It has the following properties: You can type a custom name here if needed. Use the color selector to change the color of the merge controller. Changes the position of the merge controller in the 3D model: The motor object properties window has three tabs with various properties. The last two tabs are the standard tabs that are common to all conveyor objects. For more information about the properties on those tabs, see: Only the Motor tab is unique to the motor. The properties on this tab will be explained in more detail in the next section. The Motor tab has the following properties:.
https://docs.flexsim.com/en/19.1/Reference/3DObjects/Conveyors/Motor/Motor.html
2019-05-19T10:16:31
CC-MAIN-2019-22
1558232254751.58
[]
docs.flexsim.com
Implementing Continuous Querying Use continuous querying in your clients to receive continuous updates to queries run on the servers. CQs are only run by a client on its servers. Before you begin, you should be familiar with Querying and have your client/server system configured. Configure the client pools you will use for CQs with subscription-enabledset to true. To have CQ and interest subscription events arrive as closely together as possible, use a single pool for everything. Different pools might use different servers, which can lead to greater differences in event delivery time. Write your OQL query to retrieve the data you need from the server. The query must satisfy these CQ requirements in addition to the standard GemFire querying specifications: - The FROM clause must contain only a single region specification, with optional iterator variable. - The query must be a SELECT expression only, preceded by zero or more IMPORT statements. This means the query cannot be a statement such as "/tradeOrder.name"or "(SELECT * from /tradeOrder).size". - The CQ query cannot use: - Cross region joins - Drill-downs into nested collections - DISTINCT - Projections - Bind parameters - The CQ query must be created on a partitioned or replicated region. See Region Type Restrictions for CQs. The basic syntax for the CQ query is: SELECT * FROM /fullRegionPath [iterator] [WHERE clause] This example query could be used to get all trade orders where the price is over $100: SELECT * FROM /tradeOrder t WHERE t.price > 100.00 Write your CQ listeners to handle CQ events from the server. Implement org.apache.geode.cache.query.CqListenerin each event handler you need. In addition to your main CQ listeners, you might have listeners that you use for all CQs to track statistics or other general information. Note: Be especially careful if you choose to update your cache from your CqListener. If your listener updates the region that is queried in its own CQ and that region has a Poolnamed, the update will be forwarded to the server. If the update on the server satisfies the same CQ, it may be returned to the same listener that did the update, which could put your application into an infinite loop. This same scenario could be played out with multiple regions and multiple CQs, if the listeners are programmed to update each other’s regions. This example outlines a CqListenerthat might be used to update a display screen with current data from the server. The listener gets the queryOperationand entry key and value from the CqEventand then updates the screen according to the type of queryOperation. // CqListener class public class TradeEventListener implements Cq . . . } } When you install the listener and run the query, your listener will handle all of the CQ results. If you need your CQs to detect whether they are connected to any of the servers that host its subscription queues, implement a CqStatusListenerinstead of a CqListener. CqStatusListenerextends the current CqListener, allowing a client to detect when a CQ is connected and/or disconnected from the server(s). The onCqConnected()method will be invoked when the CQ is connected, and when the CQ has been reconnected after being disconnected. The onCqDisconnected()method will be invoked when the CQ is no longer connected to any servers. Taking the example from step 3, we can instead implement a CqStatusListener: public class TradeEventListener implements CqStatus . . . } public void onCqConnected() { //Display connected symbol } public void onCqDisconnected() { //Display disconnected symbol } } When you install the CqStatusListener, your listener will be able to detect its connection status to the servers that it is querying. Program your client to run the CQ: - Create a CqAttributesFactoryand use it to set your CqListeners and CqStatusListener. - Pass the attributes factory and the CQ query and its unique name to the QueryServiceto create a new CqQuery. - Start the query running by calling one of the execute methods on the CqQueryobject. You can execute with or without an initial result set. - When you are done with the CQ, close it. Continuous Query Implementation // Get cache and queryService - refs to local cache and QueryService // Create client /tradeOrder region configured to talk to the server // Create CqAttribute using CqAttributeFactory CqAttributesFactory cqf = new CqAttributesFactory(); // Create a listener and add it to the CQ attributes callback defined below CqListener tradeEventListener = new TradeEventListener(); cqf.addCqListener(tradeEventListener); CqAttributes cqa = cqf.create(); // Name of the CQ and its query String cqName = "priceTracker"; String queryStr = "SELECT * FROM /tradeOrder t where t.price > 100.00"; // Create the CqQuery CqQuery priceTracker = queryService.newCq(cqName, queryStr, cqa); try { // Execute CQ, getting the optional initial result set // Without the initial result set, the call is priceTracker.execute(); SelectResults sResults = priceTracker.executeWithInitialResults(); for (Object o : sResults) { Struct s = (Struct) o; TradeOrder to = (TradeOrder) s.get("value"); System.out.println("Intial result includes: " + to); } } catch (Exception ex) { ex.printStackTrace(); } // Now the CQ is running on the server, sending CqEvents to the listener . . . // End of life for the CQ - clear up resources by closing priceTracker.close(); With continuous queries, you can optionally implement: - Highly available CQs by configuring your servers for high availability. - Durable CQs by configuring your clients for durable messaging and indicating which CQs are durable at creation.
https://gemfire.docs.pivotal.io/93/geode/developing/continuous_querying/implementing_continuous_querying.html
2019-05-19T10:22:12
CC-MAIN-2019-22
1558232254751.58
[]
gemfire.docs.pivotal.io
History enabled The Symmetry command mirrors curves and surfaces, makes the mirrored half tangent to the original, and then when the original object is edited, the mirrored half updates to match the original. History recording must be on when the command is run. Store the connection between a command's input geometry and the result, so that when the input geometry changes, the result updates accordingly. Rhinoceros 6 © 2010-2019 Robert McNeel & Associates. 16-May-2019
http://docs.mcneel.com/rhino/6/help/en-us/commands/symmetry.htm
2019-05-19T10:44:46
CC-MAIN-2019-22
1558232254751.58
[]
docs.mcneel.com
Rhino automatically downloads service releases to your computer and notifies you when they are ready to install. You can control when updates are downloaded. Updates and Statistics New version notification. Last checked date and time. Downloads the release version of the latest service release. Downloads the pre-release builds that the development team believes are stable, reliable, and are ready for broader testing. Usage statistics contain information such as preferences, feature use, file types used, operating system version, and memory use. These statistics help us prioritize the features and improvements we should work on. Learn more about usage statistics Enter your e-mail address to send it with statistics from your computer. McNeel may e-mail you to learn more about how you use Rhino. Rhinoceros 6 © 2010-2019 Robert McNeel & Associates. 16-May-2019
http://docs.mcneel.com/rhino/6/help/en-us/options/updates_and_statistics.htm
2019-05-19T10:57:48
CC-MAIN-2019-22
1558232254751.58
[]
docs.mcneel.com
PlasmaPy Community Code of Conduct¶ The PlasmaPy community strives to follow the best practices in open source software development. New contributors are encouraged to join the team and contribute to the codebase. We anticipate/encourage a global participation from people with diverse backgrounds, skills, interests, and opinions. We believe that such diversity is critical in ensuring a growth of ideas in our community. We as a community pledge to abide by the following guidelines: - We pledge to treat all people with respect and provide a harassment- and bullying-free environment, regardless of age, sex, sexual orientation and/or gender identity, disability, physical appearance, body size, race, nationality, ethnicity, religion, and level of experience. in forums, especially in discussion threads work from the very beginning of this project to make PlasmaPy accessible to people with disabilities. - We pledge to help the entire community follow these guidelines, and to not remain silent when we see violations of them. We will take action when members of our community violate these guidelines. Members of the PlasmaPy community may contact any member of the Coordinating Committee to report violations. Members of the Coordinating Committee will treat these reports in the strictest confidence. The Coordinating Committee will develop formal procedures for how to handle reported violations. Nick Murphy at [email protected] or any member of the Coordinating Committee. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Project team members should recuse themselves from enforcement of the code of conduct for a given incident if they have an actual or apparent conflict of interest. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership. Attribution¶ Parts of these guidelines have been adapted from the Contributor Covenant (version 1.4), the Astropy Community Code of Conduct, and the Python Software Foundation code of conduct.
http://docs.plasmapy.org/en/stable/CODE_OF_CONDUCT.html
2019-05-19T11:04:33
CC-MAIN-2019-22
1558232254751.58
[]
docs.plasmapy.org
PlasmaPy’s Vision Statement¶ About PlasmaPy¶ PlasmaPy is a community-developed and community-driven free and open source Python package that provides common functionality required for plasma physics in a single, reliable codebase. Motivation¶ In recent years, researchers in many different scientific disciplines have worked together to develop core Python packages such as Astropy, SunPy, and SpacePy. These packages provide core functionality, common frameworks for data visualization and analysis, and educational tools for their respective scientific disciplines. We believe that a similar cooperatively developed package for plasma physics will greatly benefit our field. In this document, we lay out our vision for PlasmaPy: a community-developed and community-driven open source core Python software package for plasma physics. There is considerable need in plasma physics for open, general purpose software framework using modern best practices for scientific computing. As most scientific programmers are largely self-taught, software often does not take advantage of these practices and is instead written in a rush to produce results for the next research paper. The resulting code is often difficult to read and maintain, the documentation is usually inadequate, and tests are typically implemented late in the development process if at all. Legacy code is often written in low level languages such as Fortran, which typically makes compiling and installing packages difficult and frustrating, especially if it calls external libraries. It is also unusual to share code, and access to major software is often restricted in some way, resulting in many different programs and packages which do essentially the same thing but with little or no interoperability. These factors lead to research that is difficult to reproduce, and present a significant barrier to entry for new users. The plasma physics community is slowly moving in the open source direction. Several different types of packages and software have been released under open source licences, including the UCLA PIC codes, PICCANTE, EPOCH, VPIC, PIConGPU, WARP, the FLASH framework, Athena, and PENCIL. These projects are built as individual packages, are written in different programming languages, and often have many dependencies on specific packages. Python packages such as Astropy, SunPy, and SpacePy have had notable success providing open source alternatives to legacy code in related fields. We are grateful to these communities for their hard work, and hope to build upon their accomplishments for the field of plasma physics. An end user might not always be interested in a complicated powerpack to perform one specific task on supercomputers. She might also be interested in performing some basic plasma physics calculations, running small desktop scale simulations to test preliminary ideas (e.g., 1D MHD/PIC or test particles), or even comparing data from two different sources (simulations vs. spacecraft). Such tasks require a central platform. This is where PlasmaPy comes in. PlasmaPy Community Code of Conduct¶ Please see the attached PlasmaPy Community Code of Conduct. Organizational Structure¶ The Coordinating Committee (CC) will oversee the PlasmaPy project and code development. The CC will ensure that roles are being filled, facilitate community-wide communication, coordinate and delegate tasks, manage the project repository, oversee the code review process, regulate intercompatibility between different subpackages, seek funding mechanisms, facilitate compromises and cooperation, enforce the code of conduct, and foster a culture of appreciation. The Community Engagement Committee (CEC) will be responsible for organizing conferences, trainings, and workshops; maintaining and moderating social media groups and accounts; overseeing PlasmaPy’s website; and communicating with the PlasmaPy and plasma physics communities. The CEC will facilitate partnerships with groups such as Software Carpentry. Each subpackage will have lead and deputy coordinators who will guide and oversee the development of that subpackage. The Accessibility Coordinator will work to ensure that the PlasmaPy codebase, documentation, and practices are accessible to disabled students and scientists. Additional roles include the Webpage Maintainer, the Release Coordinator, and the Testing Coordinator. The work undertaken by each of these groups and coordinators should be done openly and transparently, except where confidentiality is needed. We will strive to have multiple subfields from plasma physics in each committee. Major decisions should ideally be made by general consensus among the PlasmaPy community, but when consensus is not possible then the committees may decide via majority vote. Much of this section is following the organizational structure of Astropy. Development Procedure¶ The initial developers of PlasmaPy will create a flexible development roadmap that outlines and prioritizes subpackages to be developed. The developers will survey existing open source Python packages in plasma physics. Priority will be given to determining how data will be stored and structured. Developers will break up into small groups to work on different subpackages. These small groups will communicate regularly and work towards interoperability and common coding practices. Because Python is new to many plasma physicists, community engagement is vital. The CEC will arrange occasional informal trainings early in the project that are director towards the initial developers. New code and edits should be submitted as a pull request to the development branch of the PlasmaPy repository on GitHub. The pull request will undergo a code review by the subpackage maintainers and/or the CC, who will provide suggestions on how the contributor may update the pull request. Subpackage maintainers will generally be responsible for deciding on pull requests with minor changes, while pull requests with major changes should be decided jointly by the subpackage maintainers and the CC. The CC and CEC will develop a friendly guide on how users may contribute new code to PlasmaPy. New code should conform to the PEP 8 style guide for Python code and the established coding style within PlasmaPy. New code should be submitted with documentation and tests. Documentation should be written primarily in docstrings and follow the numpydoc documentation style guide. Every new module, class and function should have an appropriate docstring. The documentation should describe the interface and the purpose for the method, but generally not the implementation. The code itself should be readable enough to be able to explain how it works. Documentation should be updated when the code is edited. The tests should cover new functionality (especially methods with complex logic), but the tests should also be readable and easy to maintain. Existing tests should be updated when necessary [e.g., during the initial development of a new feature when the application program interface (API) is not yet stable], but with caution since this may imply loss of backwards compatibility. Members of the PlasmaPy community may submit PlasmaPy Enhancement Proposals (PLEPs) to suggest changes such as major reorganization of a subpackage, creation of a new subpackage, non-backwards compatible changes to a stable package, or significant changes to policies and procedures related to the organization of this project. The issues list on GitHub will generally be more appropriate for changes that do not require community discussion. The CC shall maintain a GitHub repository of PLEPs. PLEPs will be made openly available for community discussion and transparency for a period of at least four weeks, during which time the proposal may be updated and revised by the proposers. The CC shall approve or decline these proposals after seeking community input. The rationale behind the decision and a summary of the community discussion shall be recorded along with the PLEP. Programming Guidelines¶ Choice of Languages¶ PlasmaPy shall be written using Python 3. PlasmaPy shall initially guarantee compatibility with Python 3.6 and above. Python 3 is continually growing, so we will proceed on the general principle that future updates to PlasmaPy remain compatible with releases of Python that are up to two years old. Python 2.7 and below will not be supported as these versions will no longer be updated past 2020. The core package will initially be written solely in Python. Code readability is more important than optimization, except when performance is critical. Code should be optimized only after getting it to work, and primarily for where there is a performance bottleneck. Performance-critical parts of the core package will preferably be written using Cython or Numba to achieve compiled speeds while maintaining the significant advantages of using a high level language. Versioning¶ PlasmaPy will use Semantic Versioning. Releases will be given version numbers of the form MAJOR.MINOR.PATCH, where MAJOR, MINOR, and PATCH are nonnegative integers. Starting with version 1.0, MAJOR will be incremented when backwards incompatible changes are made, MINOR will be incremented when new backwards-compatible functionality is added, and PATCH will be incremented when backwards-compatible bug fixes are made. Development releases will have MAJOR equal to zero and start at version 0.1. The API should not be considered stable during the development phase. PlasmaPy will release version 1.0 once it has a stable public API that users are depending on for production code. All releases will be provided with release notes and change log entries, and a table will be provided that describes the stability of the public API for each PlasmaPy subpackage. Dependencies¶ Dependencies have the advantage of providing capabilities that will enhance PlasmaPy and speed up its development, but the disadvantage that they can make manual installation more difficult and potentially frustrating. Package managers such as Anaconda and Homebrew greatly simplify installation of Python packages, but there will be situations where manual installation is necessary (e.g., on some supercomputers without package managers). The core package should be able to be imported using a minimal number of packages (e.g., NumPy, SciPy, and matplotlib) without getting an import error. Additional packages may be included as dependencies of the core package if there is a strong need for it, and if these packages are easily installed with currently available package managers. Subpackages may use additional dependencies when appropriate. Affiliated Packages¶ We will follow the practice of Astropy by having a core package and affiliated packages. The core package will contain common tools and base functionality that most plasma physicists will need. The affiliated packages contained in separate repositories will include more specialized functionality that is needed for subfields of plasma physics. This approach will reduce the likelihood of scope creep for the core package while maintaining avenues for broader development. Units¶ Multiple sets of units are used by plasma physicists. There exist some peculiarities with how units are used within plasma physics, such as how an electron volt is typically used as a measurement of temperature. Code will be most readable and maintainable if written assuming a particular set of units, but there should be enough flexibility for people in different subfields to choose their preferred set of units. As the generally most common accepted international standard, SI base units will be utilized. We will use an existing Python module (e.g., astropy.units or pint) to assign units to variables and allow straightforward conversion between different systems of units.
http://docs.plasmapy.org/en/stable/about/vision_statement.html
2019-05-19T10:17:27
CC-MAIN-2019-22
1558232254751.58
[]
docs.plasmapy.org
WebhookFilterRule The event criteria that specify when a webhook notification is sent to your URL. Contents - jsonPath A JsonPath expression that will be applied to the body/payload of the webhook. The value selected by the JsonPath expression must match the value specified in the MatchEqualsfield, otherwise the request will be ignored. For more information about JsonPath expressions, see Java JsonPath implementation in GitHub. Type: String Length Constraints: Minimum length of 1. Maximum length of 150. Required: Yes - matchEquals The value selected by the JsonPathexpression must match what is supplied in the MatchEqualsfield, otherwise the request will be ignored. Properties from the target action configuration can be included as placeholders in this value by surrounding the action configuration key with curly braces. For example, if the value supplied here is "refs/heads/{Branch}" and the target action has an action configuration property called "Branch" with a value of "master", the MatchEqualsvalue will be evaluated as "refs/heads/master". For a list of action configuration properties for built-in action types, see Pipeline Structure Reference Action Requirements. Type: String Length Constraints: Minimum length of 1. Maximum length of 150. Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_WebhookFilterRule.html
2019-05-19T10:49:44
CC-MAIN-2019-22
1558232254751.58
[]
docs.aws.amazon.com
- - - - - System requirements - Pre install Citrix Application Delivery Management (ADM), you must understand the software requirements, browser requirements, port information, license information, and limitations. Requirements for Citrix ADM Requirements for Citrix ADM on-prem agent Minimum Citrix ADC versions required for Citrix ADM features *. Note If you have configured Citrix ADCs in High Availability mode, Citrix ADM uses the Citrix ADC’s subnet IP (Management SNIP) address to communicate with Citrix ADC. For communication using SNIP with Citrix ADM, the following ports remain the same. Limitations From Citrix ADM 12.1 or later, - Requirements for Citrix ADM on-prem agent - Minimum Citrix ADC versions required for Citrix ADM features - Requirements for Citrix SD-WAN instance management - Requirements for Citrix ADM analytics - Supported hypervisors - Supported operating systems and receiver versions - Supported browsers - Supported ports
https://docs.citrix.com/en-us/citrix-application-delivery-management-software/13/system-requirements.html
2019-05-19T11:57:46
CC-MAIN-2019-22
1558232254751.58
[]
docs.citrix.com
Tutorial: use the prepared data and automatically generate a regression model to predict taxi fare prices. By using the automated machine learning capabilities of the service, you define your machine learning goals and constraints. You launch the automated machine learning process. Then allow the algorithm selection and hyperparameter tuning to happen for you. The automated machine learning technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion. In this tutorial, you learn the following tasks: - Set up a Python environment and import the SDK packages. - Configure an Azure Machine Learning service workspace. - Autotrain a regression model. - Run the model locally with custom parameters. - Explore the results. If you don’t have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning service today. Note Code in this article was tested with Azure Machine Learning SDK version 1.0.0. Prerequisites Skip to Set up your development environment to read through the notebook steps, or use the instructions below to get the notebook and run it on Azure Notebooks or your own notebook server. To run the notebook you will need: - Run the data preparation tutorial. - A Python 3.6 notebook server with the following installed: - The Azure Machine Learning SDK for Python with automland notebooksextras matplotlib - The tutorial notebook - A machine learning workspace - The configuration file for the workspace in the same directory as the notebook Get all these prerequisites from either of the sections below. Use a cloud notebook server in your workspace It's easy to get started with your own cloud-based notebook server. The Azure Machine Learning SDK for Python is already installed and configured for you once you create this cloud resource. - Complete the Quickstart: Use a cloud-based notebook server to get started with Azure Machine Learning to create a workspace and launch the notebook webpage. - After you launch the notebook webpage, run the tutorials/regression-part2-automated-ml.ipynb notebook. Use your own Jupyter notebook server Use these steps to create a local Jupyter Notebook server on your computer. Make sure that you install matplotlib and the automl and notebooks extras in your environment. Use the instructions at Create a Azure Machine Learning service workspace to: - Create a Miniconda environment - Install the Azure Machine Learning SDK for Python - Create a workspace - Write a workspace configuration file (aml_config/config.json). Clone the GitHub repository. git clone Add a workspace configuration file using any of these methods: Copy the aml_config/config.json file you created in step 1 into the cloned directory. In the Azure portal, select Download config.json from the Overview section of your workspace. - Create a new workspace using code in the configuration.ipynb. Start the notebook server from your cloned directory. jupyter notebook After you complete the steps, run the tutorials/regression-part2-automated-ml.ipynb notebook. Set up your development environment All the setup for your development work can be accomplished in a Python notebook. Setup includes the following actions: - Install the SDK - Import Python packages - Configure your workspace Install and import packages If you are following the tutorial in your own Python environment, use the following to install necessary packages. pip install azureml-sdk[automl,notebooks] matplotlib Import the Python packages you need in this tutorial: import azureml.core import pandas as pd from azureml.core.workspace import Workspace import logging import os Configure workspace Create a workspace object from the existing workspace. A Workspace is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs. Workspace.from_config() reads the file config.json and loads the details into an object named ws. ws is used throughout the rest of the code in this tutorial. After you have a workspace object, specify a name for the experiment. Create and register a local directory with the workspace. The history of all runs is recorded under the specified experiment and in Use the data flow object created in the previous tutorial. To summarize, part 1 of this tutorial cleaned the NYC Taxi data so it could be used in a machine learning model. Now, you use various features from the data set and allow an automated model to build relationships between the features and the price of a taxi trip. Open and run the data flow and review the results: import azureml.dataprep as dprep file_path = os.path.join(os.getcwd(), "dflows.dprep") dflow_prepared = dprep.Dataflow.open(file_path) the data into train and test sets Now you split the data into training and test sets by using the train_test_split function in the sklearn library. This function segregates the data into the x, features, dataset for model training and the y, values to predict, dataset=223) # flatten y_train to 1d array y_train.values.flatten() The purpose of this step is to have data points to test the finished model that haven't been used to train the model, in order to measure true accuracy. In other words, a well-trained model should be able to accurately make predictions from data it hasn't already seen. You now have the necessary packages and data ready for autotraining your model. Automatically train a model To automatically train a model, take the following steps: - Define settings for the experiment run. Attach your training data to the configuration, and modify settings that control the training process. - Submit the experiment for model tuning. After submitting the experiment, the process iterates through different machine learning algorithms and hyperparameter settings, adhering to your defined constraints. It chooses the best-fit model by optimizing an accuracy metric. Define settings for autogeneration and tuning Define the experiment parameter and model settings for autogeneration and tuning. View the full list of settings. Submitting the experiment with these default settings will take approximately 10-15 min, but if you want a shorter run time, reduce either iterations or iteration_timeout_minutes. automl_settings = { "iteration_timeout_minutes" : 10, "iterations" : 30, "primary_metric" : 'spearman_correlation', "preprocess" : True, "verbosity" : logging.INFO, "n_cross_validations": 5 } Use your defined training settings as a parameter to an AutoMLConfig object. Additionally, specify your training data and the type of model, which is regression in this case.. Set the output to True to view progress during the experiment: from azureml.core.experiment import Experiment experiment=Experiment(ws, experiment_name) local_run = experiment.submit(automated_ml_config, show_output=True) The output shown updates live as the experiment runs. For each iteration, you see the model type, the run duration, and the training accuracy. The field BEST tracks the best running training score based on your metric use a Jupy You can also retrieve the history of each experiment and explore the individual metrics for each iteration run. By examining RMSE (root_mean_squared_error) for each individual model run, you see that most iterations are predicting the taxi fair cost within a reasonable margin ($3-4). children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics. By using the overloads on get_output, you can retrieve the best run and fitted model for any logged metric or a particular iteration: best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) Test the best model accuracy Use the best model to run predictions on the test dataset to predict taxi fares. The function predict uses the best model and predicts the values of y, trip cost, from the x_test dataset. Print the first 10 predicted cost values from y_predict: y_predict = fitted_model.predict(x_test.values) print(y_predict[:10]) Create a scatter plot to visualize the predicted cost values compared to the actual cost values. The following code uses the distance feature as the x-axis and trip cost as the y-axis. To compare the variance of predicted cost at each trip distance value, the first 100 predicted and actual cost values are created as separate series. Examining the plot shows that the distance/cost relationship is nearly linear, and the predicted cost values are in most cases very close to the actual cost values for the same trip distance. import matplotlib.pyplot as plt fig = plt.figure(figsize=(14, 10)) ax1 = fig.add_subplot(111) distance_vals = [x[4] for x in x_test.values] y_actual = y_test.values.flatten().tolist() ax1.scatter(distance_vals[:100], y_predict[:100], s=18, c='b', marker="s", label='Predicted') ax1.scatter(distance_vals[:100], y_actual[:100], s=18, c='r', marker="o", label='Actual') ax1.set_xlabel('distance (mi)') ax1.set_title('Predicted and Actual Cost/Distance') ax1.set_ylabel('Cost ($)') plt.legend(loc='upper left', prop={'size': 12}) plt.rcParams.update({'font.size': 14}) plt.show() Calculate the root mean squared error of the results. Use the y_test dataframe. Convert it to a list to compare to the predicted values. The function mean_squared_error takes two arrays of values and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable, cost. It indicates roughly how far the taxi fare predictions are from the actual fares: from sklearn.metrics import mean_squared_error from math import sqrt rmse = sqrt(mean_squared_error(y_actual, y_predict)) rmse 3.2204936862688798 Run the following code to calculate mean absolute percent error (MAPE) by using the full y_actual and y_predict datasets. This metric calculates an absolute difference between each predicted and actual value and sums all the differences. Then it.10545153869569586 Model Accuracy: 0.8945484613043041 From the final prediction accuracy metrics, you see that the model is fairly good at predicting taxi fares from the data set's features, typically within +- $3.00. The traditional machine learning model development process is highly resource-intensive, and requires significant domain knowledge and time investment to run and compare the results of dozens of models. Using automated machine learning is a great way to rapidly test many different models for your scenario. Clean up resources Important The resources you created can be used as prerequisites to other Azure Machine Learning service In this automated machine learning tutorial, you did the following tasks: - Configured a workspace and prepared data for an experiment. - Trained by using an automated regression model locally with custom parameters. - Explored and reviewed training results. Deploy your model with Azure Machine Learning. Feedback Send feedback about:
https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-auto-train-models
2019-05-19T11:34:06
CC-MAIN-2019-22
1558232254751.58
[array(['media/tutorial-auto-train-models/flow2.png', 'Flow diagram'], dtype=object) array(['media/tutorial-auto-train-models/automl-dash-output.png', 'Jupyter widget run details'], dtype=object) array(['media/tutorial-auto-train-models/automl-chart-output.png', 'Jupyter widget plot'], dtype=object) array(['media/tutorial-auto-train-models/automl-scatter-plot.png', 'Prediction scatter plot'], dtype=object) ]
docs.microsoft.com
ActiveState::PerlCritic::UserProfile - Edit a perlcriticrc file NAME ActiveState::PerlCritic::UserProfile - Edit a perlcriticrc file SYNOPSIS my $profile = ActiveState::PerlCritic::UserProfile->new( $filename ); my $policy = $profile->policy("RegularExpressions::RequireExtendedFormatting"); $policy->state("enabled"); $policy->severity(2); $policy->param("foo" => 42); $profile->save( $filename ); DESCRIPTION ActiveState::PerlCritic::UserProfile objects holds a perlcriticrc file where policy state and parameters can be queried/modified and the whole configuration file written back to disk. The following methods are provided: - $profile = ActiveState::PerlCritic::UserProfile->new - - $profile = ActiveState::PerlCritic::UserProfile->new( $filename ) Creates a new profile object and optinally initialize its state from the given filename. If a filename is passed it's also saved so that the calling the save method without a filename saves back to the same file. - $profile = ActiveState::PerlCritic::UserProfile->new_default Open up the user default perlcriticrc file; usually found at ~/.perlcriticrc. The file name is saved so that invoking the save method without a filename saves the state back to the file. - $profile->save - - $profile->save( $filename ) Write the current state of the userprofile object back to the given file. If no filename is given try to save back to the filename that the profile object was initialized from. - $profile->filename Returns the filename that the state was initialized from or last saved to. - $profile->dirname Returns the name of the directory where the profile file resides. - $profile->content Returns the content that would be written if the profile had been saved now. - $profile->revert Revert to the stored version of the configuration file. - $profile->clear Empty the configuration file. - $profile->param( $name ) - - $profile->param( $name => $new_svalue ) Gets or sets the specified global parameter value - $profile->policies Lists all the policies (both configured or unconfigured). - $profile->policy( $name ) Look up the given policy object. The returned object provide the following methods: - $policy->name Returns the name of the policy; it's a string like "RegularExpressions::RequireExtendedFormatting". - $policy->config_name Returns the name used in the configuration file. This will often be the same as $policy->name, but not always. There should not really be a reason to expose this name to users. - $policy->state - - $policy->state( $new_state ) Gets or sets the state of the policy. The state is one of the following values: unconfigured enabled disabled - $policy->severity - - $policy->severity( $int ) Gets or sets the severity for the policy. It's a number in the range 1 to 5. - $policy->param( $name ) - - $policy->param( $name => $new_value ) Gets or sets policy specific parameter values SEE ALSO Perl::Critic, ActiveState::Config::INI
http://docs.activestate.com/activeperl/5.24/perl/lib/ActiveState/PerlCritic/UserProfile.html
2019-05-19T10:26:51
CC-MAIN-2019-22
1558232254751.58
[]
docs.activestate.com
Module::Build::API - API Reference for Module Authors - NAME - DESCRIPTION - MODULE METADATA - AUTHOR - SEE ALSO NAME Module::Build::API - API Reference for Module Authors DESCRIPTION. CONSTRUCTORS - current() [version 0.20] This method returns a reasonable facsimile of the currently-executing Module::Buildobjectobjectwas invoked from. - new() [version 0.03]. - [version 0.28]is executed, but you've listed in "build_requires" so that they should be available when ./Buildis executed. - build_requires [version 0.07]. - configure_requires [version 0.30] Modules listed in this section must be installed before configuring this distribution (i.e. before running the Build.PL script). This might be a specific minimum version of Module::Buildor any other module the Build.PL needs in order to do its stuff. Clients like CPAN.pmor CPANPLUSwill be expected to pick configure_requiresout of the META.yml file and install these items before running the Build.PL. Module::Build may automatically add itself to configure_requires. See "auto_configure_requires" for details. See the documentation for "PREREQUISITES" in Module::Build::Authoring for the details of how requirements can be specified. - [version 0.07]. - [version 0.11". - dynamic_config [version 0.07]doesn't actually do anything with this flag - it's up to higher-level tools like CPAN.pmto do something useful with it. It can potentially bring lots of security, packaging, and convenience improvements. - extra_compiler_flags - - extra_linker_flags [version 0.19]`, ); - extra_manify_args [version 0.4006] Any extra arguments to pass to Pod::Man->new()when building man pages. One common choice might be utf8 => 1to get Unicode support. - get_options [version 0.26 A default value for the option. If no default value is specified and no option is passed, then the option key will not exist in the hash returned by args().. Use capitalized option names to avoid unintended conflicts with future Module::Build options. Consult the Getopt::Long documentation for details on its usage. - The distribution is licensed under a license that is not approved by but that allows distribution without restrictions.might create the file lib/Foo/Bar.pm. The files are specified with the .PLfilesscripts' => [], } [version 0.08]. -instead [version 0.16code) [version 0.26]. -) [version 0.26 is performed. Among the files created in _build/is a _build/prereqs file containing the set of prerequisites for this distribution, as a hash of hashes. This file may be eval()-ed to obtain the authoritative set of prerequisites,. - current_action() [version 0.28]. - depends_on(@actions) [version 0.28]. - dir_contains($first_dir, $second_dir) [version 0.28] Returns true if the first directory logically contains the second directory. This is just a convenience function because File::Specdoesn't really provide an easy way to figure this out (but Path::Classdoes...). - dispatch($action, %args) [version 0.03]. -) [version 0.21] system's shell, and any special characters will do their special things. If you supply multiple arguments, no shell will get involved and the command will be executed directly. -) [version 0.26will any installable element. This is useful if you want to set the relative install path for custom build elements.. -) [version 0.28]. The supplied $pathshould be an absolute path to install elements of $type. The return value is $path. Assigning the value undefto an element causes it to be removed. - install_types() [version 0.28]htmland binhtml. Other user-defined types may also exist. - invoked. - notes() - - notes($key) - - notes($key => () [version. - prepare_metadata() [deprecated] [version 0.36] [Deprecated] As of 0.36, authors should use get_metadatainstead.action is just a thin wrapper around the prereq_report()method. - prompt($message, $default) [version 0.12]. -) [version 0.28]or any execution of ./Buildhaddata) [version 0.20]. - y_n($message, $default) [version 0.12].
http://docs.activestate.com/activeperl/5.24/perl/lib/Module/Build/API.html
2019-05-19T10:23:29
CC-MAIN-2019-22
1558232254751.58
[]
docs.activestate.com
class Regex String pattern is Method A regex is a kind of pattern that describes a set of strings. The process of finding out whether a given string is in the set is called matching. The result of such a matching is a Match object, which evaluates to True in boolean context if the string is in the set. A regex is typically constructed by a regex literal rx/ ^ab /; # describes all strings starting with 'ab'/ ^ ab /; # samerx/ \d ** 2/; # describes all strings containing at least two digits A named regex can be defined with the regex declarator followed by its definition in curly braces. Since any regex does Callable introspection requires referencing via &-sigil. my ;say .^name; # OUTPUT: «Regex␤» To match a string against a regex, you can use the smartmatch operator: my = 'abc' ~~ rx/ ^ab /;say .Bool; # OUTPUT: «True␤»say .orig; # OUTPUT: «abc␤»say .Str; # OUTPUT: «ab␤»say .from; # OUTPUT: «0␤»say .to; # OUTPUT: «2␤» Or you can evaluate the regex in boolean context, in which case it matches against the $_ variable = 'abc';if / ^ab /else Methods method ACCEPTS multi method ACCEPTS(Regex: Mu --> Match)multi method ACCEPTS(Regex: @)multi method ACCEPTS(Regex: %) Matches the regex against the argument passed in. If the argument is Positional, it returns the first successful match of any list item. If the argument is Associative, it returns the first successful match of any key. Otherwise it interprets the argument as a Str and matches against it. In the case of Positional and Associative matches, Nil is returned on failure. method Bool multi method Bool(Regex: --> Bool) Matches against the caller's $_ variable, and returns True for a match or False for no match. Type Graph Regex Routines supplied by class Method Regex inherits from class Method, which provides the following routines: Routines supplied by class Routine Regex Regex Regex.
http://docs.perl6.org/type/Regex
2019-05-19T10:37:53
CC-MAIN-2019-22
1558232254751.58
[]
docs.perl6.org
ERPNext Version 12 is almost fully accessible using keyboard shortcuts. The keyboard shortcuts for any page can be viewed by opening the keyboard shortcuts dialog from the Help dropdown on the navigation bar. The dialog displays a list of global shortcuts that can be used over the entire app. It also displays a list of shortcuts that are specific to the current page. Similar to the pattern used by Windows, Alt is an important key for keyboard shortcuts on ERPNext. Pressing Alt on any page underlines a single character on the Page Menu buttons as well as the buttons on the sidebar. Pressing the underlined character while keeping Alt pressed triggers the button click. Here, N on the New button is underlined, so pressing Alt + N opens the New Customer dialog: For the Menu drop-down, a single character on each drop-down item is underlined. So when the Menu is open, the items in the Menu can be clicked by pressing the underlined character while keeping Alt pressed: As specified in the Keyboard Shortcuts Dialog, List View is navigable using the Up and Down arrow keys. Enter opens the list item and Space selects the list item. Shift + Down or Shift + Up can be used to select multiple list items. Next: Comments ERPNext is used by more than 5000 companies across the world
https://docs.erpnext.com/docs/user/manual/en/using-erpnext/articles/keyboard-shortcuts
2020-03-28T18:13:49
CC-MAIN-2020-16
1585370492125.18
[]
docs.erpnext.com
{"title":"Jul 2-Jul 8, 2016","slug":"jul-2-jul-8-2016","body":"## July 8\n\n**Frame UI**\n+ Resolved an issue where upgrading from a trial account seemed to fail with an error although it actually worked correctly.\n\n**Frame Terminal**\n+ Resolved an issue where the Frame Terminal Session Status Bar momentarily appeared at full width then resized when started with a fixed resolution.\n+ Resolved an issue where uploading files with drag and drop did not work on Firefox or Safari.","_id":"5863f7a7ec13242d00edf0e4","createdAt":"2016-12-28T17:34:31.168Z","__v":0,"changelog":[],"project":"55d535ca988e130d000b3f5c","user":{"name":"Bill Glover","username":"","_id":"56461e119f3f550d00fa3da2"},"metadata":{"title":"","description":"","image":[]}}
https://docs.fra.me/v1.0/blog/jul-2-jul-8-2016
2020-03-28T18:27:43
CC-MAIN-2020-16
1585370492125.18
[]
docs.fra.me
This document explains how you define a work schedule view to be used for the work schedule in M3 Maintenance. Work schedule views place high demands on system resources. For optimal system performance, it is recommended that you do not create more work schedule views than you need and only assign status 20 (Active) to the views that are actually used. A work schedule view is defined for the work schedule for work requests in 'Work Request. Open Toolbox' (MOS197) or for work orders in 'Work Schedule. Open Toolbox'(MOS195). The view determines which operations or operation elements are selected and displayed when making searches in the work schedule in (MOS195) or (MOS197), as well as how they are displayed. In order to use the work schedule view, you need to define an sorting order for the work schedule. While doing so, you specify the work schedule view field as one of the key fields in the sorting option. It is recommended that you specify the view as the first key field. Sorting orders are defined in 'Sorting order. Open' (CRS022). Once this is done, the work schedule view is ready to be used. Whenever you start the work schedule in (MOS195) or (MOS197), you select the sorting order and then enter the work schedule view ID. The work schedule view is saved in the MMOWSV table. If a selection table is used, this selection table must be defined for the (MMOWSO) table in 'General Selection Table. Open' (CRS023). Start 'Work Schedule View. Open' (MOS152). Enter an ID for the work schedule view. Select Create. On panel E, enter a name and description for the work schedule view. Enter a status for the work schedule view. Specify the facilities for which the work schedule will apply. Select one of the following alternatives: Specify a selection table with criteria that the operations must meet in order to be displayed in the work schedule, if applicable. If you want the operations to be displayed on all valid site structure levels, select the 'Include for each site structure level' check box. If you want each operation element in an operation to be displayed, select the 'Include for each operation element' check box. Click Enter.
https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.maintmgmths_16.x/c100345.html
2020-03-28T18:04:22
CC-MAIN-2020-16
1585370492125.18
[]
docs.infor.com
This document explains item hierarchy structure. There are several requirements on user defined item hierarchy and/or product catalog functionality. The main requirements are from two areas: Product catalog is not the same as item hierarchy. One difference is that item hierarchy is focusing on companies' internal business structures, while product catalog is a way to present information between customers and suppliers. The scope for this design is item hierarchy structure, a flexible way to search items and new term for statistics and control. One reason why the product catalog is excluded from this design is that the product catalog logic most likely will be placed on the web server not within the ERP application it self. The main use is to give possibility to search items in a structured way and to have statistics on other terms than we have today. Item hierarchy fields be used as a path for user-defined tables, and also to create the contents of user-defined files: The following tables are updated: Items are registered as described in Create and Connect Item to a Warehouse Structure. The main use is to allow to search items in a structured way and to have statistics on other terms what we have today. Structure Item Group Structure This diagram shows an example of item group hierarchy. Note the difference between horizontal (search groups) and vertical (hierarchy ID) search. The vertical search is based on the fact that each hierarchy level (Hier lvl) is represented as an iterative identity field that is repeated for each level. The approach is a drill down approach. Each hierarchy level could also be unique. This makes it possible to do a horizontal search. Since the item hierarchies and search groups are stored on the item master can this information be used in views from different programs like item toolbox (MMS200), item statistics (MMS090) etc. Item Hierarchy = A hierarchical search patch that reflect a companies business in a structure way. A drill down approach, called hierarchy id in the data base. Search group = Could be used as a complement to do a horizontal search. Hierarchy level = Defines on which level a hierarchy entity is defined. Value 1 to 5 is permitted. One is the top level. Upper level = Defines the level above, used on keys for drill down purposes. Upper level identity = Defines the hierarchy identity for the level above. Field group Program MM200 Item toolbox (MMS200) MMKV1 Item toolbox (MMS200) MMIT1 Views supply chain (MWS051) MMPV5 Stock transactions (MWS070) MWPV2 Balance id (MWS060) In following programs can item hierarchy fields be used as a path for user-defined tables, and also to create the contents of user-defined files: Field group Program This diagram shows the files and programs involved in the described workflow.
https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.scexechs_16.x/c000848.html
2020-03-28T18:12:56
CC-MAIN-2020-16
1585370492125.18
[]
docs.infor.com
Introduction This quickstart guide describes how to create a new KNIME Extension, i.e. write a new node implementation to be used in KNIME Analytics Platform. You’ll learn how to set up a KNIME SDK, how to create a new KNIME Extension project, how to implement a simple manipulation node, how to test the node, and how to easily deploy the node in order to make it available for others. For this purpose, we created a reference extension you can use as orientation. This KNIME Extension project can be found in the org.knime.examples.numberformatter folder of the knime-examples GitHub repository. It contains all required project and configuration files and an implementation of a simple Number Formatter example node, which performs number formatting of numeric values of the input table. We will use this example implementation to guide you through all necessary steps that are involved in the creation of a new KNIME Extension. Set up a KNIME SDK In order to start developing KNIME source code, you need to set up a KNIME SDK. A KNIME SDK is a configured Eclipse installation which contains KNIME Analytics Platform dependencies. This is necessary as Eclipse is the underlying base of KNIME Analytics Platform i.e. KNIME Analytics Platform is a set of plug-ins that are put on top of Eclipse and the Eclipse infrastructure. Furthermore, Eclipse is an IDE, which you will use to write the actual source code of your new node implementation. To set up your KNIME SDK, we start with an "Eclipse IDE for RCP and RAP Developers" installation (this version of Eclipse provides tools for plug-in development) and add all KNIME Analytics Platform dependencies. In order to do that, please follow the SDK Setup instructions. Apart from giving instructions on how to set up a KNIME SDK, the SDK Setup will give some background about the Eclipse infrastructure, its plug-in mechanism, and further useful topics like how to explore KNIME source code. Create a New KNIME Extension Project After Eclipse is set up and configured, create a new KNIME Extension project. A KNIME Extension project is an Eclipse plug-in project and contains the implementation of one or more nodes and some KNIME Analytics Platform specific configuration. The easiest way to create a KNIME Extension project, is by using the KNIME Node Wizard, which will automatically generate the project structure, the plug in manifest and all required Java classes. Furthermore, the wizard will take care of embedding the generated files in the KNIME framework. The KNIME Node Wizard Install the KNIME Node Wizard Open the Eclipse installation wizard at Help → Install New Software…, enter the following update site location: the location box labelled Work with:. Hit the Enter key, and put KNIME Node Wizardin the search box. Tick the KNIME Node Wizardunder the category KNIME Node Development Tools, click the Next button and follow the instructions. Finally, restart Eclipse.Figure 1. The KNIME Node Wizard installation dialog. Start the KNIME Node Wizard After Eclipse has restarted, start the KNIME Node Wizard at File → New → Other…, select Create a new KNIME Node-Extension(can be found in the category Other), and hit the Next button.Figure 2. The KNIME Node Wizard start dialogs. Create a KNIME Extension Project In the Create new KNIME Node-Extensiondialog window enter the following values: New Project Name: org.knime.examples.numberformatter Node class name: NumberFormatter Package name: org.knime.examples.numberformatter Node vendor: <your_name> Node type: Select Manipulatorin the drop down menu. Replace <your_name>with the name that you like to be the author of the created extension. Leave all other options as is and click Finish.Figure 3. The KNIME Node Wizard dialog. After some processing, a new project will be displayed in the Package Explorer view of Eclipse with the project name you gave it in the wizard dialog.Figure 4. A view of Eclipse after the KNIME Node Wizard has run. Test the Example Extension At this point, all parts that are required for a new KNIME Extension are contained in your Eclipse workspace and are ready to run. To test your node, follow the instructions provided in the Launch KNIME Analytics Platform Section of the SDK Setup. After you started KNIME Analytics Platform from Eclipse, the Number Formatter node will be available at the root level of the node repository. Create a new workflow using the node (see Figure below), inspect the input and output tables, and play around with the node. The node will perform simple rounding of numbers from the input table. To change the number of decimal places the node should round to, change the digit contained in the format String that can be entered in the node configuration (e.g. %.2f will round to two decimal places,the default value is %.3f). After you are done, close KNIME Analytics Platform. Project Structure Next, let’s review the important parts of the extension project you’ve just created. First, we’ll have a look at the files located in org.knime.examples.numberformatter. The files contained in this folder correspond to the actual node implementation. There are four Java classes implementing what the node should do, how the dialog and the view looks like, one XML file that contains the node description, and an image which is used as the node icon (in this case a default icon) displayed in the workflow view of KNIME Analytics Platform. Generally, a node implementation comprises of the following classes: NodeFactory, NodeModel, NodeDialog, NodeView. In our case, these classes are prefixed with the name you gave the node in the KNIME Node Wizard, i.e. NumberFormatter. NumberFormatterNodeFactory.java The NodeFactorybundles all parts that make up a node. Thus, the factory provides creation methods for the NodeModel, NodeDialog, and NodeView. Furthermore, the factory will be registered via a KNIME extension point such that the node is discoverable by the framework and will be displayed in the node repository view of KNIME Analytics Platform. The registration of this file happens in the plugin.xml(see description of the plugin.xmlfile below). NumberFormatterNodeModel.java The NodeModelcontains the actual implementation of what the node is supposed to do. Furthermore, it specifies the number of inputs and outputs of a node. In this case the node model implements the actual number formatting. NumberFormatterNodeDialog.java(optional) The NodeDialogprovides the dialog window that opens when you configure (double click) a node in KNIME Analytics Platform. It provides the user with a GUI to adjust node specific configuration settings. In the case of the Number Formatter node this is just a simple text box where the user can enter a format String. Another example would be the file path for a file reader node. NumberFormatterNodeView.java(optional) The NodeView provides a view of the output of the node. In the case of the Number Formatter node there will be no view as the output is a simple table. Generally, an example for a view could be a tree view of a node creating a decision tree model. NumberFormatterNodeFactory.xml This XML file contains the node description and some metadata of the node. The root element must be a <knimeNode> … </knimeNode>tag. The attributes of this tag further specify the location of the node icon ( icon=”…”) and the type of the node ( type=”…”). Note that this is the type you selected in the dialog of the Node Wizard earlier. The most common types are Source, Manipulator, Predictor, Learner, Sink, Viewer, and Loop. The description of the node is specified in the children of the root tag. Have a look at the contents of the file for some examples. The .xmlmust be located in the same package as the NodeFactoryand it has to have the same name (only the file ending differs). default.png This is the icon of the node displayed in the workflow editor. The path to the node icon is specified in the NumberFormatterNodeFactory.xml( iconattribute of the knimeNodetag). In this case the icon is just a placeholder displaying a question mark. For your own node, replace it with an appropriate image representative of what the node does. It should have a resolution of 16x16 pixels. Apart from the Java classes and the factory .xml, which define the node implementation, there are two files that specify the project configuration: plugin.xmland META-INF/MANIFEST.MF These files contain important configuration data about the extension project, like dependencies to other plug-ins and the aforementioned extension points. You can double click on the plugin.xmlto open an Eclipse overview and review some of the configuration options (e.g. the values we entered in KNIME Node Wizard are shown on the overview page under General Informationon the left). However, you do not have to change any values at the moment. plugin.xml and MANIFEST.MF. Number Formatter Node Implementation Once you have reviewed the project structure, we have a look at some implementation details. We will cover the most important parts as the example code in the project you created earlier already contains detailed comments in the code of the implemented methods (also have a look at the reference implementation in the org.knime.examples.numberformatter folder of the knime-examples repository). Generally, the Number Formatter node takes a data table as input and applies a user specified format String to each Double column of the input table. For simplicity, the output table only contains the formatted numeric columns as String columns. This basically wraps the functionality of the Java String.format(…) function applied to a list of Double values into a node usable in KNIME Analytics Platform. Let’s work through the most important methods that each node has to implement. The functionality of the node is implemented in the NumberFormatterNodeModel.java class: protected NumberFormatterNodeModel() { super(1, 1); } The super(1, 1) call in the constructor of the node model specifies the number of output and input tables the node should have. In this case it is one input and one output table. BufferedDataTable[] execute(final BufferedDataTable[] inData, final ExecutionContext exec) The actual algorithm of the node is implemented in the execute method. The method is invoked only after all preceding nodes have been successfully executed and all data is therefore available at the input ports. The input table will be available in the given array inData which contains as many data tables as specified in the constructor. Hence, the index of the array corresponds to the port index of the node. The type of the input is BufferedDataTable, which is the standard type of all tabular data in KNIME Analytics Platform. The persistence of the table (e.g. when the workflow is saved) is automatically handled by the framework. Furthermore, a BufferedDataTable is able to handle data larger than the size of the main memory as the data will be automatically flushed to disk if necessary. A table contains DataRow objects, which in turn contain DataCell objects. DataCells provide the actual access to the data. There are a lot of DataCell implementation for all types of data, e.g. a DoubleCell containing a floating point number in double precision (for a list of implementations have a look at the type hierarchy of the DataCell class). Additionally, each DataCell implements one or multiple DataValue interfaces. These define which access methods the cell has i.e. which types it can be represented as. For example, a BooleanCell implements IntValue as a Boolean can be easily represented as 0 and 1. Hence, for each DataValue there could be several compatible DataCell classes. The second argument exec of the method is the ExecutionContext which provides means to create/modify BufferedDataTable objects and report the execution status to the user. The most straightforward way to create a new DataTable is via the createDataContainer(final DataTableSpec spec) method of the ExecutionContext. This will create an empty container where you can add rows to. The added rows must comply with the DataTableSpec the data container was created with. E.g. if the container was created with a table specification containing two Double columns, each row that is added to the container must contain two DoubleCells. After you are finished adding rows to the container close it via the close() method and retrieve the BufferedDataTable with getTable(). This way of creating tables is also used in the example code (see NumberFormatterNodeModel.java). Apart from creating a new data container, there are more powerful ways to modify already existing input tables. However, these are not in the scope of this quickstart guide, but you can have a look at the methods of the ExecutionContext. The execute method should return an array of output BufferedDataTable objects with the length of the number of tables as specified in the constructor. These tables contain the output of the node. DataTableSpec[] configure(final DataTableSpec[] inSpecs) The configure method has two responsibilities. First, it has to check if the incoming data table specification is suitable for the node to execute with respect to the user supplied settings. For example, a user may disallow a certain column type in the node dialog, then we need to check if there are still applicable columns in the input table according to this setting. Second, to calculate the table specification of the output of the node based on the inputs. For example: imagine the Number Formatter node gets a table containing two Double columns and one String column as input. Then this method should return a DataTableSpec (do not forget to wrap it in an array) containing two DataColumnSpec of type String (the Double columns will be formatted to String, all other columns are ignored). Analogously to the execute method, the configure method is called with an array of input DataTableSpec objects and outputs an array of output DataTableSpec objects containing the calculated table specification. If the incoming table specification is not suitable for the node to execute or does not fit the user provided configuration, throw an InvalidSettingsException with an informative message for the user. saveSettingsTo(final NodeSettingsWO settings) and loadValidatedSettingsFrom(final NodeSettingsRO settings) These methods handle the loading and saving of settings that control the behaviour of the node, i.e. the settings entered by the user in the node dialog. This is used for communication between the node model and the node dialog and to persist the user settings when the workflow is saved. Both methods are called with a NodeSettings object (in a read only (RO) and write only (WO) version) that stores the settings and manages writing or reading them to or from a file. The NodeSettings object is a key-value storage, hence it is easy to write or read to or from the settings object. Just have a look at the provided methods of the NodeSettings object in your Eclipse editor. In our example, we do not write settings directly to the NodeSettings object as we are using a SettingsModel object to store the user defined format String. SettingsModel objects already know how to write and read settings from the NodeSettings (via methods that accept NodeSettings) and help to keep settings synchronization between the model and dialog simple. Furthermore, they can be used to create simple dialogs where the loading and saving of settings is already taken care of. You can find the actual algorithm of the Number Formatter node in the execute method in the NumberFormatterNodeModel.java class. We encourage you to read through the code of the above mentioned classes to get a deeper understanding of all parts of a node. For a more thorough explanation about how a node should behave consult the KNIME Noding Guidelines. Deploy your Extension This section describes how to manually deploy your Extension after you have finished the implementation using the Number Formatter Extension as example. There are two options: Option 1: Local Update Site (recommended) The first option is to create a local Update Site build, which can be installed using the standard KNIME Analytics Platform update mechanism. To create a local Update Site build, you need to create a Feature project that includes your extension. A Feature is used to package a group of plug-ins together into a single installable and updatable unit. To do so, go to File → New → Other… , open the Plug-in Development category, select Feature Project and click the Next button. Enter the following values in the Feature Properties dialog window: Project ID: org.knime.examples.numberformatter.feature Feature Name: Number Formatter Feature Version: leave as is Feature Vendor: <your_name> Install Handler Library: leave empty Replace <your_name> with the name that you like to be the author of the created extension. Additionally, choose a location for the new Feature Project (e.g. next to the Number Formatter Extension) and click the Next button. On the next dialog choose Initialize from the plug-ins list: and select the org.knime.examples.numberformatter plug-in (you can use the search bar to easily find the plug-in). The plug-ins selected here are the ones that will be bundled into an installable unit by the Feature. Of course, you can edit that list later on. Finally, hit the Finish button. After the wizard has finished, you will see a new project in your Eclipse Package Explorer with the Project ID you gave it earlier and Eclipse will automatically open an overview of the feature.xml (you can also open this view by double clicking on the feature.xml file located in the Feature Project). The Feature overview looks similar to the plugin.xml overview, be careful not to confuse them. You can view/modify the list of included plug-ins by selecting the Included Plug-ins tab at the bottom of the overview dialog. feature.xml. The link to create an Update Site Project is marked in red. Next, you need to publish the Feature on a local Update Site. For this, first create an Update Site Project by clicking on the Update Site Project link on bottom right corner of the Eclipse overview dialog of the feature.xml (see figure above). This will start the Update Site Project Wizard. On the shown dialog, enter the following: Project name: org.knime.examples.numberformatter.update Again, choose a location for the new Update Site Project and click the Finish button. Similar to the Feature Project Wizard, you will see a new project in your Eclipse Package Explorer with the Project name you gave it in the wizard dialog and Eclipse will automatically open an overview of the site.xml called Update Site Map. Again similar to a Feature, an Update Site bundles one or several Features that can be installed by the Eclipse update mechanism. site.xml. The Update Site in this image already contains one category called number_formattingwhere the org.knime.examples.numberformatter.featurewas added to. This way the Number Formatter Extension will be listed under this category during installation. On the Eclipse overview of the site.xml, first create a new category by clicking on the New Category button. This will create a new default category shown in the tree view on the left. On the right, enter an ID like number_formatting and a Name like Number Formatting. This name will be displayed as a category and used for searching when the Feature is installed. Also, provide a short description of the category. Second, select the newly created category from the tree view and click the Add Feature… button. On the shown dialog, search for org.knime.examples.numberformatter.feature and click the Add button. At last, click the Build All button. This will build all Features added to the update site and create an installable unit that can be used to install the Number Formatter Extension into an KNIME Analytics Platform instance. After building has finished, you can now point KNIME Analytics Platform to this folder (which now contains a local Update Site) to install the Extension. To do so, in KNIME Analytics Platform open the Install New Software… dialog, click on the Add button next to the update site location, on the opening dialog click on Local…, and choose the folder containing the Update Site. At last, give the local Update Site a name and click OK. Now, you can install the Number Formatter Extension like any other plug-in. The above description shows how to manually deploy a new Extension using the Eclipse update site mechanism. However, in a real world scenario this process should be done automatically. If you think your new node or extension could be valuable for others, KNIME provides the infrastructure to host and automatically deploy your Extension by becoming a Community Contributor and providing a Community Extension. This way, your extension will be installable via the Community Extension update site. For more information about Community Extensions and how to become a contributor please see the Community section on our website or contact us. Option 2: dropin The second option is to create a dropin using the Deployable plug-ins and fragments Wizard from within Eclipse. A dropin is just a .jar file containing your Extension that is simply put into the Eclipse dropins folder to install it. To create a dropin containing your Extension, go to File → Export → Plug-in Development → Deployable plug-ins and fragments and click Next. The dialog that opens will show a list of deployable plug-ins from your workspace. Check the checkbox next to org.knime.examples.numberformatter. At the bottom of the dialog you are able to select the export method. Choose Directory and supply a path to a folder where you want to export your plugin to. At last click Finish. After the export has finished, the selected folder will contain a .jar file containing your plugin. To install it into any Eclipse or KNIME Analytics Platform installation, place the .jar file in the dropins folder of the KNIME/Eclipse installation folder. Note that you have to restart KNIME/Eclipse for the new plugin to be discovered. In this example, the node is then displayed at the top level of the node repository in KNIME Analytics Platform. Further Reading For more information on development see the Developers Section of the KNIME website. - If you have questions regarding development, reach out to us in the KNIME Development category of our forum.
https://docs.knime.com/2019-12/analytics_platform_new_node_quickstart_guide/index.html
2020-03-28T18:04:41
CC-MAIN-2020-16
1585370492125.18
[array(['./img/knime.png', 'knime'], dtype=object) array(['./img/plugin_xml_qualifier.png', 'plugin xml qualifier'], dtype=object) array(['./img/start_feature_wizard.png', 'start feature wizard'], dtype=object) array(['./img/run_feature_wizard.png', 'run feature wizard'], dtype=object) array(['./img/feature_overview_marked.png', 'feature overview marked'], dtype=object) array(['./img/run_update_wizard.png', 'run update wizard'], dtype=object) array(['./img/update_site_overview.png', 'update site overview'], dtype=object) array(['./img/update_site_feature_selection.png', 'update site feature selection'], dtype=object) array(['./img/export.png', 'export'], dtype=object)]
docs.knime.com
New tools for Windows Azure On February 1st we released the latest version of the Windows Azure Tools and SDK Version 1.1. This release adds support for Windows Azure mountable Drives as well as numerous bug fixes. The new Drives feature is designed to support moving applications that currently address the NTFS file system into the cloud. With this feature you can upload or create a special Windows Azure blob that is mountable like a VHD such that a drive letter can be assigned and used by a Windows Azure program. This feature is in addition to the use of standard persistent Windows Azure Blob, Table and Queue storage as well as the local non-persistent storage that can be used for caching and other local functions on every Windows Azure Server. This SDK also implements the Windows Azure OS version support which will allow a Windows Azure application to choose the appropriate guest OS version that it wants to run with. Technorati Tags: Windows Azure,Visual Studio,SDK
https://docs.microsoft.com/en-us/archive/blogs/innov8showcase/new-tools-for-windows-azure
2020-03-28T18:16:52
CC-MAIN-2020-16
1585370492125.18
[]
docs.microsoft.com
- Security > - Authentication > - Authentication Mechanisms > - x.509 x.509¶ New in version 2.6. MongoDB supports x.509 certificate authentication for client authentication and internal authentication of the members of replica sets and sharded clusters. x.509 certificate authentication requires a secure TLS/SSL connection.:. MongoDB authenticate using x.509 client certificate, connect to MongoDB over TLS/SSL connection; i.e. include the --ssl and --sslPEMKeyFile command line options. Then in the $external database, use db.auth() to authenticate the user corresponding to the client certificate. For an example, see Use x.509 Certificates to Authenticate Clients Member x.509 Certificates¶ New in version 3.0. For internal authentication, members of sharded clusters and replica sets can use x.509 certificates instead of keyfiles, which use the SCRAM authentication mechanism. certificates.
https://docs.mongodb.com/v3.6/core/security-x.509/
2020-03-28T18:12:07
CC-MAIN-2020-16
1585370492125.18
[]
docs.mongodb.com
@Documented @Retention(value=RUNTIME) @Target(value={METHOD,FIELD,PARAMETER,ANNOTATION_TYPE}) public @interface NumberFormat Supports formatting by style or custom pattern string. Can be applied to any JDK Number type such as Double and Long. for either number of currency, depending on the annotated field or method parameter type. NumberFormat public abstract NumberFormat.Style style Defaults to NumberFormat.Style.DEFAULT for general-purpose number formatting for most annotated types, except for money types which default to currency formatting. Set this attribute when you wish to format your field in accordance with a common style other than the default style. public abstract String pattern Defaults to empty String, indicating no custom pattern String has been specified. Set this attribute when you wish to format your field in accordance with a custom number pattern not represented by a style.
https://docs.spring.io/spring-framework/docs/5.2.0.BUILD-SNAPSHOT/javadoc-api/org/springframework/format/annotation/NumberFormat.html
2020-03-28T18:16:52
CC-MAIN-2020-16
1585370492125.18
[]
docs.spring.io
Android Components integration. The Android To show MOLPay online banking Component in your payment form, you need to: Specify in your /paymentMethods request: - Deserialize the response from the /paymentMethods call and get the object with type: molpay_ebanking_fxp_MY, molpay_ebanking_TH, or molpay_ebanking_VN. Add the MOLPay online banking Component: a. Import the MOLPay online banking Component to your build.gradlefile. implementation "com.adyen.checkout:molpay-ui:<latest-version>" Check the latest version on GitHub. b. Create an molpayConfigurationobject: val molpayConfiguration = MolpayConfiguration.Builder(Locale.getDefault(), resources.displayMetrics, Environment.TEST) .build() c. Initialize the MOLPay Component. Pass the payment method object and the molpayConfigurationobject. val molpayComponent = MolpayComponent.PROVIDER.get(this@YourActivity, paymentMethod, molpayConfiguration) d. Add the MOLPay Component view to your layout. <com.adyen.checkout.molpay.MolpaySpinnerView android: e. Attach the Component to the view to start getting your shopper's payment details. MolpaySpinnerView.attach(molpayComponent, this@YourActivity) f. When shoppers enter their payment details, you start receiving updates. If isValidis true and the shopper proceeds to pay, pass the paymentComponentState.data.paymentMethodto your server and make a payment request. molpayComponent.observe(this@coreActivity, Observer { if (it?.isValid == true) { // When the proceeds to pay, pass the `it.data` to your server to send a /payments request } }) Make a payment When the shopper proceeds to pay, the Component returns the paymentComponentState.data.paymentMethod. - Pass the paymentComponentState.data.paymentMethodto your server. - From your server, make a /payments request, specifying: paymentMethod.type: The paymentComponentState.data.paymentMethodfrom your client app. returnURL: URL to where the shopper should be redirected back to after they complete the payment. This URL can have a maximum of 1024 characters.. You need this to initialize the Redirect Component. Handle the redirect - Use the Redirect Component to redirect the shopper to the issuing bank's app or website..
https://docs.adyen.com/payment-methods/molpay/android-component
2020-03-28T18:15:10
CC-MAIN-2020-16
1585370492125.18
[]
docs.adyen.com
Configuring External Authentication with LDAP and SAML Cloudera Data Science Workbench supports user authentication against its internal local database, and against external services such as Active Directory, OpenLDAP-compatible directory services, and SAML 2.0 Identity Providers. By default, Cloudera Data Science Workbench. User Signup Process The first time you visit the Cloudera Data Science Workbench Data Science Workbench account. This will prevent the user from logging into Cloudera Data Science Workbench Data Science Workbench will extract user attributes such as username, email address and full name from the authentication responses received from the LDAP server or SAML 2.0 Identity Provider and use them to create the user accounts. Configuring LDAP/Active Directory Authentication Cloudera Data Science Workbench Data Science Workbench to use external authentication methods by clicking the Admin link on the left sidebar and selecting the Security tab. Select LDAP from the list to start configuring LDAP properties.. LDAP Over SSL (LDAPS) To support secure communication between Cloudera Data Science Workbench and the LDAP/Active Directory server, Cloudera Data Science Workbench might require a CA certificate to be able to validate the identity of the LDAP/Active Directory service. - CA Certificate: If the certificate of your LDAP/Active Directory service was signed by a trusted or commercial Certificate Authority (CA), it is not necessary to upload the CA certificate here. However, if your LDAP/Active Directory certificate was signed by a self-signed CA, you must upload the self-signed CA certificate to Cloudera Data Science Workbench in order to use LDAP over SSL (LDAPS). LDAP Group Settings LDAP Group Search Base: The base distinguished name (DN) where Cloudera Data Science Workbench will search for groups. LDAP Group Search Filter: The LDAP filter that Cloudera Data Science Workbench Data Science Workbench will automatically substitute the {0} placeholder for the DN of the authenticated user. LDAP User Groups: A list of LDAP groups whose users have access to Cloudera Data Science Workbench. When this property is set, only users that successfully authenticate themselves AND are affiliated to at least one of the groups listed here, will be able to access Cloudera Data Science Workbench. If this property is left empty, all users that can successfully authenticate themselves to LDAP will be able to access Cloudera Data Science Workbench. LDAP Full Administrator Groups: A list of LDAP groups whose users are automatically granted the site administrator role on Cloudera Data Science Workbench. The groups listed under LDAP Full Administrator Groups do not need to be listed again under the LDAP User Groups property.Example If you want to restrict access to Cloudera Data Science Workbench to members of a group whose DN is: CN=CDSWUsers,OU=Groups,DC=company,DC=comAnd automatically grant site administrator privileges to members of a group whose DN is: CN=CDSWAdmins,OU=Groups,DC=company,DC=comAdd the CNs of both groups to the following settings in Cloudera Data Science Workbench: - LDAP User Groups: CDSWUsers - LDAP Full Administrator Groups: CDSWAdmins How Login Works with LDAP Group Settings Enabled Authentication with LDAP When an unauthenticated user first accesses Cloudera Data Science Workbench, they are sent to the login page where they can login by providing a username and password. Cloudera Data Science Workbench will search for the user by binding to the LDAP Bind DN and verify the username/password credentials provided by the user. Authorization Check for Access to Cloudera Data Science Workbench If the user is authenticated successfully, Cloudera Data Science Workbench Data Science Workbench as a regular user. If there is a match with a group listed under LDAP Full Administrator Groups, this user will be allowed to access Cloudera Data Science Workbench as a site administrator. Test LDAP Configuration. Configuring SAML Authentication Cloudera Data Science Workbench supports the Security Assertion Markup Language (SAML) for Single Sign-on (SSO) authentication; with - The first name of the user. Valid attributes are: - givenName - urn:oid:2.5.4.42 - The last name of the user. Valid attributes are: - sn - urn:oid:2.5.4.4 Configuration Options Use the following properties to configure SAML authentication and authorization in Cloudera Data Science Workbench. For an overview of the login process, see How Login Works with SAML Group Settings Enabled. Cloudera Data Science Workbench Settings Entity ID: Required. A globally unique name for Cloudera Data Science Workbench as a Service Provider. This is typically the URI. NameID Format: Optional. The name identifier format for both Cloudera Data Science Workbench Data Science Workbench. The uploaded certificate is made available at the endpoint. Logout URL: Optional. When this URL is provided, and the Enable SAML Logout checkbox is enabled, a user clicking the Sign Out button on CDSW will also be logged out of the identity provider. Identity Provider Signing Certificate: Optional. Administrators can upload the X.509 certificate of the Identity Provider for Cloudera Data Science Workbench to validate the incoming SAML responses. Cloudera Data Science Workbench Data Science Workbench to establish secure communication with the Identity Provider. Enable SAML Logout: Optional. When this checkbox is enabled, and the Identity Provider Logout URL is provided, a user clicking the Sign Out button on CDSW will also be logged out of the identity provider. As a result of this, the user might also be logged out from any other services that they authenticate to using the same identity provider. For this feature to work, the identity provider must support Single Logout Service with HTTP-Redirect binding. Authorization SAML Attribute Identifier for User Role: The Object Identifier (OID) of the user attribute that will be provided by your identity provider for identifying a user’s role/affiliation. You can use this field in combination with the following SAML User Groups property to restrict access to Cloudera Data Science Workbench to only members of certain groups. For example, if your identity provider returns the OrganizationalUnitName user attribute, you would specify the OID of the OrganizationalUnitName, which is urn:oid:2.5.4.11, as the value for this property. SAML User Groups: A list of groups whose users have access to Cloudera Data Science Workbench. When this property is set, only users that are successfully authenticated AND are affiliated to at least one of the groups listed here, will be able to access Cloudera Data Science Workbench. For example, if your identity provider returns the OrganizationalUnitName user attribute, add the value of this attribute to the SAML User Groups list to restrict access to Cloudera Data Science Workbench to that group. If this property is left empty, all users that can successfully authenticate themselves will be able to access Cloudera Data Science Workbench. SAML Full Administrator Groups: A list of groups whose users are automatically granted the site administrator role on Cloudera Data Science Workbench. The groups listed under SAML Full Administrator Groups do not need to be listed again under the SAML User Groups property. How Login Works with SAML Group Settings Enabled Authentication by Identity Provider When an unauthenticated user accesses Cloudera Data Science Workbench, they are first sent to the identity provider’s login page, where the user can login as usual. Once successfully authenticated by the identity provider, the user is sent back to Cloudera Data Science Workbench along with a SAML assertion that includes, amongst other things, a list of the user's attributes. Authorization Check for Access to Cloudera Data Science Workbench Cloudera Data Science Workbench Data Science Workbench as a regular user. If there is a match with a group listed under SAML Full Administrator Groups, this user will be allowed to access Cloudera Data Science Workbench as a site administrator. Debug Login URL When using external authentication, such as LDAP, Active Directory or SAML 2.0, even a small mistake in authentication configurations in either Cloudera Data Science Workbench or the Identity Provider could potentially block all users from logging in. Cloudera Data Science Workbench Data Science Workbench via local database when using external authentication. In case of external authentication failures, when the debug login route is disabled, root access to the master host is required to re-enable the debug login route. Contact Cloudera Support for more information.
https://docs.cloudera.com/documentation/data-science-workbench/1-7-x/topics/cdsw_external_authentication.html
2020-03-28T18:35:36
CC-MAIN-2020-16
1585370492125.18
[]
docs.cloudera.com
This topic describes how you can use the DevExpress Localization Service to obtain and edit translated resources (strings) for the DevExpress UI Controls. You should log into the DevExpress website to get started. Click Add Translation. Specify the DevExpress release version and the required language in the opened popup dialog. For every translation you create, the following actions are available: Modify translated strings Pre-defined translations are available for many languages. The DevExpress Localization Service displays the total number of strings, the strings that have been translated, and your changes. Click the Modify link associated with the language and release version to review each individual translation and make the necessary changes. Delete a translation from your language set Click Delete to remove a language from your set at any time. When you delete a translation, all modifications are lost. Copy a translation You can copy your existing translations and apply them to a different release version to simplify the transition from one version to another. When you click the Copy link for a given translation, you are prompted to specify the target version for copied translations. If you choose to Modify a translation, you are directed to the Customize Localization Resources page. This page allows you to do the following: Filter the values in the translation table Use the filter bar to restrict the set of resources to those you wish to view within the translation table. You can restrict values by platform, module, and translation state. Modify and Save individual translations The translation table includes the English version of a resource, the suggested translation (if translated), and an empty field for your custom translation. Click Save to record your changes if you make changes to a translation. The suggested translation is not included into the downloaded localization resources. To include it, copy the Suggested Translation to the Your Translation field. Download your translations When you made the changes to a translation, click the Download button to download all modified localization resources. Once the build process is completed, the Localization Service sends you a link to a signed self-extracting archive to your email. To unpack a self-extracting archive on your Mac/Linux PC, you can use the same CLI as with regular archives: 7z x windowsfile.exe OR unzip windowsfile.exe When you unpack the downloaded executable file, you find a folder with the localization resources (satellite assemblies and JSON files). To navigate to the list of languages you selected in Step 2, click Return to Your Translations List. If you use an older version of DevExpress .NET assemblies (12.1 or earlier), see the following instructions to download all available localization resources: The collection of localized DevExpress assemblies.
https://docs.devexpress.com/LocalizationService/16235/localization-service
2020-03-28T17:57:17
CC-MAIN-2020-16
1585370492125.18
[]
docs.devexpress.com
Use this process to perform a search and update personal data per data subject. Create data subject run The default status for newly created data subject run is 20-'New'. You can specify one or several records of data subject run that you wish to process. Add data subject run lines Define search data Run search function The status is updated based on the activity being processed. It is updated to 25-'Search in progress' when the search process is in progress. Once it is finished, it updates to 30-'Search finished'. You can interrupt or stop the search process using related option 21='Stop Search' and the status will revert to 20-'New'. The lowest and highest status depend on the line level that is being processed. The search start date and end date are updated at the time you perform the data search function. When the batch job is completed, you can view the search result in 'Data Subject Run. Open Search Result' (CMS212) and the recommended actions for the personal data in 'Data Subject Run. Review Action' (CMS213). View search results The search result will display the equivalent data found from the system and the method used during the search that can be done through Enterprise search and manual table reading. The result only displays active entries from (CMS075). You can export the result with the tool 'Export to Excel'. A corresponding browse program can be opened to make sure the correct record is found using related option 20='View Data Subject'. Print search results Review action Run 'Execute' function The status is updated to 85='Update in progress' when the update is still in process. Once the update is completed, it is updated to 90-'Done'. The execute start date and end date are updated at the time you perform the 'Execute' function. You can mark the record as done in (CMS213) using related option 21='Mark as done' when the action for manual update has been applied. The status should be raised to 90-'Done'. If the action is performed by the system, the status is automatically updated to 90. The records with anonymous key as assigned action are saved in 'Anonymous Data Subject. Open' (CMS220) once the update of personal data has been completed.
https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.appfoundhs_16.x/qzx1567533138225.html
2020-03-28T17:48:08
CC-MAIN-2020-16
1585370492125.18
[]
docs.infor.com
Introduction We try to ensure that breaking changes aren't introduced by utilising various automated code testing, syntax testing and unit testing along with manual code review. However bugs can and do get introduced as well as major refactoring to improve the quality of the code base. We have two branches available for you to use. The default is the master branch. Development branch Our master branch is our dev branch, this is actively commited to and it's not uncommon for multiple commits to be merged in daily. As such sometimes changes will be introduced which will cause unintended issues. If this happens we are usually quick to fix or revert those changes. We appreciate everyone that runs this branch as you are in essence secondary testers to the automation and manually testing that is done during the merge stages. You can configure your install (this is the default) to use this branch by setting $config['update_channel'] = 'master'; in config.php and ensuring you switch to the master branch with: cd /opt/librenms && git checkout master Stable branch With this in mind, we provide a monthly stable release which is released on or around the last Sunday of the month. Code pull requests (aside from Bug fixes) are stopped days leading up to the release to ensure that we have a clean working branch at that point. The changelog is also updated and will reference the release number and date so you can see what changes have been made since the last release. To switch to using stable branches you can set $config['update_channel'] = 'release'; in config.php and then switch to the latest release branch with: cd /opt/librenms && git fetch --tags && git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
https://docs.librenms.org/General/Releases/
2020-03-28T18:21:08
CC-MAIN-2020-16
1585370492125.18
[]
docs.librenms.org
Core SPTK API¶ All functionality in pysptk.sptk (the core API) is directly accesible from the top-level pysptk.* namespace. For convenience, vector-to-vector functions ( pysptk.mcep, pysptk.mc2b, etc) that takes an input vector as the first argment, can also accept matrix. As for matrix inputs, vector-to-vector functions are applied along with the last axis internally; e.g. mc = pysptk.mcep(frames) # frames.shape == (num_frames, frame_len) is equivalent to: mc = np.apply_along_axis(pysptk.mcep, -1, frames) Warning The core APIs in pysptk.sptk package are based on the SPTK’s internal APIs (e.g. code in _mgc2sp.c), so the functionalities are not exactly same as SPTK’s CLI. If you find any inconsistency that should be addressed, please file an issue. Note Almost all of pysptk functions assume that the input array is C-contiguous and has float64 element type. For vector-to-vector functions, the input array is automatically converted to float64-typed one, the function is executed on it, and then the output array is converted to have the same type with the input you provided.
https://pysptk.readthedocs.io/en/v0.1.12/sptk.html
2020-03-28T17:03:39
CC-MAIN-2020-16
1585370492125.18
[]
pysptk.readthedocs.io
Cloudera Data Science Workbench 1.7.2 -.7. Cloudera Data Science Workbench publishes placeholder parcels for other operating systems as well. However, note that these do not work and have only been included to support mixed-OS clusters.. -. The wildcard DNS hostname configured for Cloudera Data Science Workbench must)..17 and Python 3.6.9. 1.3 (and higher) include..+ - IE's Compatibility View mode is not supported..
https://docs.cloudera.com/documentation/data-science-workbench/1-7-x/topics/cdsw_requirements_supported_versions.html
2020-03-28T18:57:15
CC-MAIN-2020-16
1585370492125.18
[]
docs.cloudera.com
The “Save Image” dialog. Since GIMP-2.8, the file is automatically saved in the XCF format and you can't save in another file format (for this, you have to export the file). The Save as dialog allows you to save with another name and/or to another folder.command displays the You can access this command from the image menubar through→ , or by using the keyboard shortcut Shift+Ctrl+S.. File size, resolution and image composition are displayed below the preview window. If your image has been modified by another program, click on the preview to update it. Enter the filename of the new image file here. can select a compressed format for your XCF file.
https://docs.gimp.org/2.10/en/gimp-file-save-as.html
2020-03-28T18:44:32
CC-MAIN-2020-16
1585370492125.18
[]
docs.gimp.org
oocgcm : out of core analysis of general circulation models in python.¶ This project provides tools for processing and analysing output of general circulation models and gridded satellite data in the field of Earth system science. Our aim is to simplify the analysis of very large datasets of model output (~1-100Tb) like those produced by basin-to-global scale sub-mesoscale permitting ocean models and ensemble simulations of eddying ocean models by leveraging the potential of xarray and dask python packages. The main ambition of this project is to provide simple tools for performing out-of-core computations with model output and gridded data, namely processing data that is too large to fit into memory at one time. The project is so far mostly targeting NEMO ocean model and gridded ocean satellite data (AVISO, SST, ocean color...) but our aim is to build a framework that can be used for a variety of models based on the Arakawa C-grid. The framework can in principle also be used for atmospheric general circulation models. We are trying to develop a framework flexible enough in order not to impose too strictly a specific workflow to the end user. oocgcm is a pure Python package and we try to keep the list of dependencies as small as possible in order to simplify the deployment on a number of platforms. oocgcm is not intended to provide advanced visualization functionalities for gridded geographical data as several powerful tools already exist in the python ecosystem (see in particular cartopy and basemap). We rather focus on building a framework that simplifies the design and production of advanced statistical and dynamical analyses of large datasets of model output and gridded data. Note oocgcm is at the pre-alpha stage. The API is therefore unstable and likely to change without notice.
https://oocgcm.readthedocs.io/en/latest/
2020-03-28T16:43:36
CC-MAIN-2020-16
1585370492125.18
[]
oocgcm.readthedocs.io
Now we need to specify what our crop will be. A crop summarizes all that is necessary before creatng our first crop cycle or planting. Agriculture > Crop . Agriculture > Crop . On the desk, click on the Crop icon. A list will show any existing Crops. On the top right, click on 'New' to create the first Crop. A new Crop document will open, and we will enter basic information. The basic information should be entered as such: Click Save We will skip the Materials Required, Byproducts and Produce sections. In the Ideal Agriculture Task List we enter some planned tasks such as planting, watering, and harvesting. (Please note, our activity list will be intentionally abbreviated for illustraion purposes. For this example we will prepare our field, plant the next day, water only once, add a cover after germination on day 12, remove weeds at day 19, and harvest at day 90. The first row will look like this: When done, you can click 'Save' to prevent any work from being lost. We are not done yet, we simply have saved the minimum required information! Continue filling the next rows with Task Name, Start Day, End Day and Holiday Management. Click 'Save' You form should now look something like this Repeat step 2 for as many crops as you need. You can save some time by duplicating existing crops and modifying only the necessary items ERPNext is used by more than 5000 companies across the world
https://docs.erpnext.com/docs/user/manual/en/agriculture/crop
2020-03-28T18:01:37
CC-MAIN-2020-16
1585370492125.18
[]
docs.erpnext.com
Before we do anything, we need to define some details about where our crops will be planted. We will first create our farm as a parent land unit, and then we will add one or more fields as children or nodes, belonging to the parent. Agriculture > Land Unit Agriculture > Land Unit On the desk, click on the Land Unit icon. assets/img/new-land-unit-icon.png A list will show any existing Land Units. On the top right, click on New to create the first Land Unit. Click Save It should look something like this With the farm created, we can now create our first Carrot Field! Click on New Repeat step 3 for as many fields as you need ERPNext is used by more than 5000 companies across the world
https://docs.erpnext.com/docs/user/manual/en/agriculture/land_unit
2020-03-28T18:31:12
CC-MAIN-2020-16
1585370492125.18
[]
docs.erpnext.com
Sharing Folders¶ Sharing folders works much like on any other computer running Syncthing, with a couple of caveats. Synology Permissions¶ Syncthing runs as a system user called syncthing.net, which by default will not have permission to access any of your file shares. This is intentional for your security. Hence, in order to share a folder using Syncthing, the first step is to grant Syncthing access to the file share. In the Control Panel, select Shared Folders, the folder you want to share, and Edit. In the Synology folder editor, click the Permissions tab. You should be here, seeing a list of oridnary users and their access permissions: Now select “System internal user” in the dropdown that by default says “Local users”. You will see a different set of users, where syncthing.net will be one among them. Grant syncthing.net Read/Write permissions to the shared folder: Click OK to save these settings. Syncthing Setup & Ignores¶ You can now add the folder in Syncthing. Syncthing sees the folder at its physical path, which includes the Synology volume name. If there is only one volume this will be the default path which Syncthing suggests, but you should double check: It is also a good idea to set up some initial ignore patterns at the same time. Synology by default creates a directory @aeDir that contains some metadata, and depending on your setup also a recycle bin in #recycle. These should generally not be synced and ignoring them at folder creating time will avoid issues down the line: You can now save the folder, adjust sharing, and so on as usual. For more information about Syncthing please refer to the Syncthing documentation or the Syncthing forum.
https://docs.kastelo.net/synology/sharing/
2020-03-28T17:28:55
CC-MAIN-2020-16
1585370492125.18
[array(['../../_images/permissions11.png', '../../_images/permissions11.png'], dtype=object) array(['../../_images/permissions21.png', '../../_images/permissions21.png'], dtype=object) array(['../../_images/syncthing11.png', '../../_images/syncthing11.png'], dtype=object) array(['../../_images/syncthing21.png', '../../_images/syncthing21.png'], dtype=object) ]
docs.kastelo.net
In this guide, you'll learn how to use your own Font assets in Widget Blueprints that contain Text. Steps Follow the steps below to see how to assign your own Fonts to be used with the UMG UI Designer. For this how-to guide, we are using the Blank Template project, using no Starter Content, with default Target Hardware and Project Settings. To create a new Widget Blueprint , from the Content Browser click on the Add New button, then hover over User Interface, and then click on Widget Blueprint selection. This will create a new Widget Blueprint. Make sure to give it a name that you can easily locate later. Go back to the Content Browser where you saved your Widget Blueprint and double-click on it to open it up. In the Widget Blueprint's Palette, select a Text widget and drag it onto the Then grab the corner and scale it to a larger size. Now that you've created your Text Widget, you can click on it and access the Details panel, under Appearance, you'll see a Font option where you can change the font type, it's styling (regular, bold, itallic, etc.), and its size. By default, the Engine uses the Roboto font, however, if you click the dropdown menu, any Composite Font assets created can be selected and used instead. You can also choose to create a Composite Font from this menu and specify where the new asset should be saved (it will be blank by default, requiring you to fill it out). Once you select your Composite Font, the second dropdown menu will allow you to select a font to use from the Default Font Family. You can also specify the size of the font in the input box. Currently, UMG only supports Runtime cached font assets. Also, if you have assigned fonts using the old method, none of your existing file-based font settings will be lost, however going forward, you will need to create a Composite Font asset in order to use custom fonts with UMG. End Result Now that you've successfully used your fonts in UMG, you can head over to learn how to style your Fonts by setting colors, Materials, and outline properties (as well as by using shadows).
https://docs.unrealengine.com/en-US/Engine/UMG/UserGuide/Fonts/HowTo/FontsWithUMG/index.html
2020-03-28T16:49:02
CC-MAIN-2020-16
1585370492125.18
[array(['./../../../../../../../Images/Engine/UMG/UserGuide/Fonts/HowTo/FontsWithUMG/FontWithUMG_Hero.jpg', 'FontWithUMG_Hero.png'], dtype=object) ]
docs.unrealengine.com
A number of payroll extracts will be available on the CS TimeClock. You are able to use more than one type of payroll extract. To add a new payroll setup, click on “Insert”. Payroll balancing will be applied on the employee payroll hours if the payroll extract has been assigned to that employee i.e. no payroll balancing will be applied if the employee payroll extract type is Unassigned. Say an employee has the following hours at the end of his 7-day payroll period: Normal: 08:00 Overtime: 02:00 Saturday: 06:30 Holiday: 08:00 Paid: 16:00 According to the setup in the picture above: Payroll Hours Total = 08:00 (Normal) + 08:00 (Holiday) + 16:00 (Paid Leave) = 32:00 The employee is short 8 hours. During payroll balancing any hours found in Overtime is moved to Normal. Since the Total is still less than 40:00 after doing so, it will move any hours found in Saturday to Normal. Now you have: Payroll hours Total = 32:00 + 02:00 + 06:30 = 40:30 This exceeds 40:00 so 00:30 will be moved into Overtime. The end result after payroll balancing is: Normal: 08:00 Overtime: 00:30 Saturday: 00:00 Holiday: 08:00 Paid: 16:00 Click on “Save” to save the payroll extract. Permalink: {{ web.link(web.text(uri.build("", _, { url: page.uri }))) }} Viewing Details:
http://docs.cstimeclocks.com/The_Web_Interface/The_Setup_Menu/Payroll_Extracts
2020-03-28T18:58:36
CC-MAIN-2020-16
1585370492125.18
[]
docs.cstimeclocks.com
Slack Integration Castle users can send notifications to a Channel in Slack when the risk score exceeds the specified threshold. Setup Configuration is per environment in Castle. Head to the Notification settings page for a project: Dashboard > Settings > Notifications > Slack. Click the Add to Slack button. Select the Slack team you want to integrate with, and authorize the permission request. Select the channel or group you wish to receive messages for. The Slack integration will automatically be enabled. Adjust the risk score threshold to your liking. Enabling and disabling You can disable the Slack integration at any time by flicking the switch and clicking Save. Change Slack Channel If you want to change the Slack Channel to send notifications to, simply click Disconnect Slack and click Add to Slack again.
https://docs.castle.io/slack/
2020-03-28T17:52:09
CC-MAIN-2020-16
1585370492125.18
[]
docs.castle.io
Create Expectations¶ This tutorial covers the workflow of creating and editing expectations. The tutorial assumes that you have created a new Data Context (project), as covered here: Run great_expectations init. Creating expectations is an opportunity to blend contextual knowledge from subject-matter experts and insights from profiling and performing exploratory analysis on your dataset. Once the initial setup of Great. For a broader understanding of the typical workflow read this article: typical_workflow. Expectations are grouped into Expectations Suites. An Expectation Suite combines multiple expectations into an overall description of a dataset. For example, a team can group all the expectations about the rating table in the movie ratings database into an Expectation Suite and call it “movieratings.table.expectations”. Each Expectation Suite is saved as a JSON file in the great_expectations/expectations subdirectory of the Data Context. Users check these files into version control each time they are updated, in the same way they treat their source files. The lifecycle of an Expectation Suite starts with creating it. Then it goes through a loop of Review and Edit as the team’s understanding of the data described by the suite evolves. We will describe the Create, Review and Edit steps in brief: Create an Expectation Suite¶ Expectation Suites are saved as JSON files, so you could create a new suite by writing a file directly. However the preferred way is to let the CLI save you time and typos. If you cannot use the CLI in your environment (e.g., in a Databricks cluster), you can create and edit an Expectation Suite in a notebook. Jump to this section for details: Jupyter Notebook for Creating and Editing Expectation Suites. To continue with the CLI, run this command in the root directory of your project (where the init command created the great_expectations subdirectory: great_expectations suite new This command prompts you to name your new Expectation Suite and to select a sample batch of the dataset the suite will describe. Then it profiles the selected sample and adds some initial expectations to the suite. The purpose of these expectations is to provide examples of what properties of data can be described using Great Expectations. They are only a starting point that the user builds on. The command concludes by saving the newly generated Expectation Suite as a JSON file and rendering the expectation suite into an HTML page in the Data Docs website of the Data Context. Review an Expectation Suite. Reviewing expectations is best done in Data Docs: Edit an Expectation Suite¶ The best interface for editing an Expectation Suite is a Jupyter notebook. Editing an Expectation Suite means adding expectations, removing expectations, and modifying the arguments of existing expectations. For every expectation type there is a Python method that sets its arguments, evaluates this expectation against a sample batch of data and adds it to the Expectation Suite. Take a look at the screenshot below. It shows the HTML view and the Python method for the same expectation ( expect_column_distinct_values_to_be_in_set) side by side: The CLI provides a command that, given an Expectation Suite, generates a Jupyter notebook to edit it. It takes care of generating a cell for every expectation in the suite and of getting a sample batch of data. The HTML page for each Expectation Suite has the CLI command syntax in order to make it easier for users. The generated Jupyter notebook can be discarded, since it is auto-generated. To understand this auto-generated notebook in more depth, jump to this section: Jupyter Notebook for Creating and Editing Expectation Suites. Jupyter Notebook for Creating and Editing Expectation Suites¶ If you used the CLI suite new command to create an Expectation Suite and then the suite edit command to edit it, then the CLI generated a notebook in the great_expectations/uncommitted/ folder for you. There is no need to check this notebook in to version control. Next time you decide to edit this Expectation Suite, use the CLI again to generate a new notebook that reflects the expectations in the suite at that time. If you do not use the CLI, create a new notebook in the``great_expectations/notebooks/`` folder in your project. 1. Setup¶ from datetime import datetime import great_expectations as ge import great_expectations.jupyter_ux from great_expectations.data_context.types.resource_identifiers import ValidationResultIdentifier # Data Context is a GE object that represents your project. # Your project's great_expectations.yml contains all the config # options for the project's GE Data Context. context = ge.data_context.DataContext() # Create a new empty Expectation Suite # and give it a name expectation_suite_name = "ratings.table.warning" # this is just an example context.create_expectation_suite( expectation_suite_name) If an expectation suite with this name already exists for this data_asset, you will get an error. If you would like to overwrite this expectation suite, set overwrite_existing=True. 2. Load a batch of data to create Expectations¶ Select a sample batch of the dataset the suite will describe. batch_kwargs provide detailed instructions for the datasource how to construct a batch. Each datasource accepts different types of batch_kwargs: pandas A pandas Pandas read_csv. batch_kwargs might look like the following: { "path": "/data/npidata/npidata_pfile_20190902-20190908.csv", "reader_options": { "sep": "|" } } If you already loaded the data into a Pandas DataFrame called df, you could use following batch_kwargs to instruct the datasource to use your DataFrame as a batch: batch_kwargs = {'dataset': df} pyspark A pyspark Spark DataFrameReader SQLAlchemy A SQLAlchemy datasource can accept batch_kwargs that DataContext’s get_batch method is used to load a batch of a data asset: batch = context.get_batch(batch_kwargs, expectation_suite_name) batch.head() Calling this method asks the Context to get a batch of data and attach the expectation suite expectation_suite_name to it. The batch_kwargs argument specifies which batch of the data asset should be loaded. 4. Finalize: # save the Expectation Suite (by default to a JSON file in great_expectations/expectations folder batch.save_expectation_suite(discard_failed_expectations=False) - # This step is optional, but useful - evaluate the expectations against the current batch of data run_id = datetime.datetime.utcnow().strftime(“%Y%m%dT%H%M%S.%fZ”) results = context.run_validation_operator(“action_list_operator”, assets_to_validate=[batch], run_id=run_id) expectation_suite_identifier = list(results[“details”].keys())[0] validation_result_identifier = ValidationResultIdentifier( expectation_suite_identifier=expectation_suite_identifier, batch_identifier=batch.batch_kwargs.to_id(), run_id=run_id ) # Update the Data Docs site to display the new Expectation Suite # and open the site in the browser context.build_data_docs() context.open_data_docs(validation_result_identifier) last updated: Feb 18, 2020
https://docs.greatexpectations.io/en/0.9.0/tutorials/create_expectations.html
2020-03-28T16:56:34
CC-MAIN-2020-16
1585370492125.18
[array(['../_images/sample_e_s_view1.png', '../_images/sample_e_s_view1.png'], dtype=object) array(['../_images/exp_html_python_side_by_side1.png', '../_images/exp_html_python_side_by_side1.png'], dtype=object) array(['../_images/edit_e_s_popup1.png', '../_images/edit_e_s_popup1.png'], dtype=object) ]
docs.greatexpectations.io
. Q. Moreover, the QGIS Server project provides the ‘Publish to Web’ plugin, a plugin for QGIS desktop that the option to introduce more complex cartographic visualization rules. For now, we recommend to read one of the following URLs to get more information: At this point, we will give a short and simple sample installation how-to for Debian Squeeze. Many other OSs provide packages for QGIS Server, too. If you have to build it all from source, please refer to the URLs above. Apart from QGIS and QGIS Server, you need a web server, in our case apache2. You can install all packages with aptitude or apt-get install together with other necessary dependency packages. After installation, you should test to confirm that the web server and QGIS Server work as expected. Make sure the apache server is running with /etc/init.d/apache2 start. Open a web browser and type URL:. If apache is up, you should see the message ‘It works!’. Now we test the QGIS Server installation. The qgis_mapserv.fcgi is available at /usr/lib/cgi-bin/qgis_mapserv.fcgi and provides a standard WMS that shows the state boundaries of Alaska. Add the WMS with the URL as described in Selecting WMS/WMTS Servers. Figure Server 1:. Figure Server 2: Definitions for a QGIS Server WMS/WFS/WCS project (KDE). WMS capabilitiesS from the Coordinate Reference System Selector, or click Used to add the CRS used in the QGIS project to the list. If you have print composers defined in your project, they will be listed in the GetCapabilities at once. You can receive requested GetFeatureInfo as plain text, XML and GML. Default is XML, text or GML format depends the output format choosen. WFS capabilities In the WFS capabilities area, you can select the layers that you want to publish as WFS, and specify if they will allow the update, insert and delete operations. If you enter a URL in the Advertised URL field of the WFS capabilities section, QGIS Server will advertise this specific URL in the WFS GetCapabilities response. WCS capabilities.: For vector layers, the Fields menu of the Layer ‣ Properties dialog allows you to define for each attribute if it will be published or not. By default, all the attributes are published by your WMS and WFS. If you want a specific attribute not to be published, uncheck the corresponding checkbox in the WMS or WFS column. You can overlay watermarks over the maps produced by your WMS by adding text annotations or SVG annotations to the project file. See section Annotation Tools in General Tools for instructions on creating annotations. For annotations to be displayed as watermarks on the WMS output, the Fixed map position check box in a way that it represents a valid relative path. In the WMS GetMap request, QGIS Server accepts a couple of extra parameters in addition to the standard parameters according to the OCG WMS 1.3.0 specification:&... DPI parameter: The DPI parameter can be used to specify the requested output resolution. Example:... OPACITIES parameter: Opacity can be set on layer or group level. Allowed values range from 0 (fully transparent) to 255 (fully opaque). Example:?\ REQUEST=GetMap&LAYERS=mylayer1,mylayer2&OPACITIES=125,200&...
https://docs.qgis.org/2.2/en/docs/user_manual/working_with_ogc/ogc_server_support.html
2020-03-28T17:06:38
CC-MAIN-2020-16
1585370492125.18
[]
docs.qgis.org
CSS Skin File Selectors The following table lists the CSS selectors and descriptions for RadPanelBar style sheets. Controls / PanelBar / Appearance and Styling The following table lists the CSS selectors and descriptions for RadPanelBar style sheets. Understanding the Skin CSS File Tutorial: Creating A Custom Skin Setting the CSS Class of Items.
https://docs.telerik.com/devtools/aspnet-ajax/controls/panelbar/appearance-and-styling/css-skin-file-selectors
2020-03-28T18:20:47
CC-MAIN-2020-16
1585370492125.18
[]
docs.telerik.com