Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars)
The FeedHenry platform offers app developers a number of code-free app generators that can be used to quickly generate working apps from a few configuration details. Once these apps have been generated, they can be deployed directly to a device. You also have full access to the app source code should you wish to customise the app or integrate it into another app.
http://docs.feedhenry.com/v2/generate_an_app.html
2013-05-18T21:28:47
CC-MAIN-2013-20
1368696382892
[]
docs.feedhenry.com
Lacking universally accepted terminology to talk about classes, I'll make occasional use of Smalltalk and C++ terms. (I'd use Modula-3 terms, since its object-oriented semantics are closer to those of Python than C++, but I expect that few readers have heard of it.) Passing an object is cheap since only a pointer is passed by the implementation, and if a function modifies an object passed as an argument, the caller will see the change; this obviates the need for two different argument passing mechanisms as in Pascal.
http://docs.python.org/release/1.5/tut/node57.html
2013-05-18T21:07:55
CC-MAIN-2013-20
1368696382892
[]
docs.python.org
/**
 * @class Function
 *
 * Every function in JavaScript is actually a `Function` object.
 *
 * <div class="notice">
 * Documentation for this class comes from <a href="">MDN</a>
 * and is available under <a href="">Creative Commons: Attribution-Sharealike license</a>.
 * </div>
 */

/**
 * @method constructor
 * Creates a new Function object.
 *
 * @param {String...} args
 * Names to be used by the function as formal argument names. Each must be a
 * string that corresponds to a valid JavaScript identifier or a list of such
 * strings separated with a comma; for example "`x`", "`theValue`", or "`a,b`".
 * @param {String} functionBody
 * A string containing the JavaScript statements comprising the function
 * definition.
 */

// Properties

/**
 * @property {Number} length
 * Specifies the number of arguments expected by the function.
 */

// Methods

/**
 * @method apply
 * Calls the function with a given `this` value and arguments provided as an array.
 * @param {Object} thisArg The value of `this` provided for the call to the function.
 * @param {Array} argsArray An array-like object specifying the arguments with which the function should be
 * called, or null or undefined if no arguments should be provided to the function.
 * @return {Object} Returns what the function returns.
 */

/**
 * @method call
 * Calls the function with a given `this` value and arguments provided individually.
 * @param {Object} thisArg The value of `this` provided for the call to the function.
 * @param {Object...} args Arguments for the function.
 * @return {Object} Returns what the function returns.
 */

/**
 * @method toString
 * Returns a string representing the source code of the function. Overrides the
 * `Object.toString` method.
 *
 * @return {String} The function as a string.
 */

// ECMAScript 5 methods

/**
 * @method bind
 * Creates a new function that, when called, has its `this` value set to `thisArg`,
 * with a given sequence of arguments preceding any provided when the bound
 * function is called. `bind` can also be used to create partially applied
 * functions, with some leading arguments fixed in advance.
 *
 * @param {Object} thisArg The value to be passed as the `this`
 * parameter to the target function when the bound function is
 * called. The value is ignored if the bound function is constructed
 * using the new operator.
 *
 * @param {Mixed...} [args] Arguments to prepend to arguments provided
 * to the bound function when invoking the target function.
 *
 * @return {Function} The bound function.
 */
http://docs.sencha.com/extjs/4.1.3/source/Function.html
2013-05-18T21:55:06
CC-MAIN-2013-20
1368696382892
[]
docs.sencha.com
Upgrading Through Backup Files In addition to using Plesk Migrator, you can back up data on a source server, transfer the resulting archive file to the destination server, and restore the data there. Backing up and restoring can be performed through the Plesk user interface or by means of command-line utilities pleskbackup and pleskrestore. The utilities are located in the /usr/local/psa/bin/ directory on Linux systems, and %plesk_cli% on Windows systems. Important: The backup and restore utilities and the corresponding functions in Plesk are available only if you install the optional Plesk components that are not included in typical installations. You can install these components by using the web-based installation and update wizard: in Server Administration Panel, go to Tools & Settings > Updates and Upgrades > Add Components, and select Plesk Backup Manager in the Server backup solutions group. The backup format changes from version to version, so it may be impossible to restore data from a backup through command line due to compatibility problems. Therefore, if you want to transfer data from earlier Plesk versions, do it through the Plesk user interface: In this case, the destination Plesk will automatically convert the backup file to the appropriate format. To back up data on the source server using Plesk user interface: Follow the instructions in the Administrator's Guide for your Plesk version: - Plesk 12.5:. - Plesk 12.0:. - Plesk 11.5:. If you have a Plesk version earlier than 8.6, use the Help link in the navigation pane to access the Administrator's Guide. To back up all data on the source server using command-line utilities: - On a Linux-based server - /usr/local/psa/bin/pleskbackup server <backup_file_name>. - On a Windows-based server running Plesk 9 and later - "%plesk_cli%\pleskbackup.exe" --server. - On a Windows-based server running Plesk 8.6 and earlier - "%plesk_cli%\pleskbackup.exe" --all <backup_file_name>. If you want to save backup files to an FTP server, specify a URL like ftp://[<login>[:<password>]@]<server>/<file_path> instead of <backup_file_name>. If you want to improve backup security, encrypt the backup by adding the -backup-password <your_password> option. Learn more about password-protected backups in the Administrator's Guide, section Backup and Restoration. If you want to perform a selective backup by means of command-line tools, follow the instructions for your Plesk version: - Plesk 10 and later: (for Linux) and (for Windows). After the data you want to transfer are backed up, upload the backup file to the server and restore the data it contains. To upload backup file to the destination server and restore data by means of the GUI: - Log in to Server Administration Panel on the destination server. - Go to Tools & Settings > Backup Manager (in the Tools & Resources group). - Click Upload. - Click Browse and select the backup file you want to upload. - If the backup was encrypted, specify the password that you used for encryption. - Click OK. The file is uploaded to the server storage. - On the Server Storage tab, click the link corresponding to the backup file you have just uploaded. - Select the types of data you want to restore and specify restoring options. - Click Restore and follow the on-screen instructions to complete restoring. To restore all data on the destination server by means of the pleskrestore command-line utility: - Upload a backup file to the server. 
- Prepare a mapping file, so that you can specify which IP addresses should be used on the server: - To create a mapping file, issue the following command. - On a Linux-based server - /usr/local/psa/bin/pleskrestore --create-map <path to backup file> -map <path to mapping file> - On a Windows-based server - "%plesk_cli%\pleskrestore.exe" --create-map <path to backup file> -map <path to mapping file> - Open the created mapping file with a text editor. - Locate the section starting with [ip-map]. It should contain entries like in the following example: [ip-map] # Unchanged IP addresses: # Please review default IP addresses mapping below: 10.52.30.170 shared -> 10.52.30.170 shared # ip address does not exist 10.52.30.170 10.52.120.243 exclusive -> 10.52.120.243 exclusive # ip address does not exist 10.52.120.243 - In the right part of each line after the -> characters, replace the present IP addresses with those that should be used on the destination server, and make sure that the allocation scheme for the new addresses is correctly indicated by the words shared and exclusive. Shared indicates a shared IP address, and exclusive, a dedicated IP address. - Save the file. - Restore the data from backup by issuing the following command: - On a Linux-based server - /usr/local/psa/bin/pleskrestore --restore <path_to_backup_file> -level server -map <path to mapping file> - On a Windows-based server - "%plesk_cli%\pleskrestore.exe" --restore <path_to_backup_file> -level server -map <path to mapping file> If the backup was protected by password, use the -backup-password option to specify the password that you used for encryption. If the restoration fails with the error message Unable to resolve all conflicts, refer to the section Troubleshooting Migration & Transfer Issues. Note: If you want to perform a selective restoration by means of command-line tools, follow the instructions in the Advanced Administration Guide for Linux and for Windows. After data are restored, each migrated website is associated with a separate hosting service subscription not linked to any particular hosting plan. To simplify further maintenance, you can now review the properties of all new subscriptions and associate them with hosting plans. Important: If you upgrade from Plesk 9 or earlier, you should complete the transfer according to the instructions provided in Completing Upgrade from Plesk 9 and Earlier.
https://docs.plesk.com/en-US/12.5/deployment-guide/upgrading-plesk/upgrading-plesk-by-transfer/upgrading-through-backup-files.66868/
2017-10-17T07:38:07
CC-MAIN-2017-43
1508187820930.11
[]
docs.plesk.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Lists categories for all event source types, or, if specified, for a specified source type. You can see a list of the event categories and source types in Working with Events and Notifications in the AWS Database Migration Service User Guide. For PCL this operation is only available in asynchronous form. Please refer to DescribeEventCategoriesAsync. Namespace: Amazon.DatabaseMigrationService Assembly: AWSSDK.DatabaseMigrationService.dll Version: 3.x.y.z Container for the necessary parameters to execute the DescribeEventCategories service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
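The same operation is also exposed in other AWS SDKs. As a rough sketch only (using the boto3 Python SDK rather than the .NET SDK documented here, with region and credentials assumed to come from your standard AWS configuration), listing event categories might look like this:

```python
import boto3

# DMS client; region and credentials are taken from the usual AWS configuration.
dms = boto3.client("dms")

# List event categories for one source type; omit SourceType to list all of them.
response = dms.describe_event_categories(SourceType="replication-instance")

for group in response["EventCategoryGroupList"]:
    print(group["SourceType"], group["EventCategories"])
```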
http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/DMS/MDMSDMSDescribeEventCategoriesDescribeEventCategoriesRequest.html
2017-10-17T07:59:20
CC-MAIN-2017-43
1508187820930.11
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. This is the response object from the DescribeTargetHealth operation. Namespace: Amazon.ElasticLoadBalancingV2.Model Assembly: AWSSDK.ElasticLoadBalancingV2.dll Version: 3.x.y.z The DescribeTargetHealthResponse type exposes the following members. This example describes the health of the targets for the specified target group. One target is healthy but the other is not specified in an action, so it can't receive traffic from the load balancer. var response = client.DescribeTargetHealth(new DescribeTargetHealthRequest { TargetGroupArn = "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067" }); List<TargetHealthDescription> targetHealthDescriptions = response.TargetHealthDescriptions; This example describes the health of the specified target. This target is healthy. var response = client.DescribeTargetHealth(new DescribeTargetHealthRequest { TargetGroupArn = "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067", Targets = new List<TargetDescription> { new TargetDescription { Id = "i-0f76fade", Port = 80 } } }); List<TargetHealthDescription> targetHealthDescriptions = response.TargetHealthDescriptions;
http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ElasticLoadBalancingV2/TELBV2DescribeTargetHealthResponse.html
2017-10-17T07:59:47
CC-MAIN-2017-43
1508187820930.11
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Creates an AWS IoT policy. The created policy is the default version for the policy. This operation creates a policy version with a version identifier of 1 and sets 1 as the policy's default version. For PCL this operation is only available in asynchronous form. Please refer to CreatePolicyAsync. Namespace: Amazon.IoT Assembly: AWSSDK.IoT.dll Version: 3.x.y.z Container for the necessary parameters to execute the CreatePolicy service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
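For comparison only, here is a minimal sketch of the same operation using the boto3 Python SDK instead of the .NET SDK described above; the policy name and policy document are invented for the example:

```python
import json
import boto3

iot = boto3.client("iot")

# A made-up, minimal policy document that only allows connecting.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "iot:Connect", "Resource": "*"}],
}

# The first version created becomes version 1, which is also the default version.
response = iot.create_policy(
    policyName="ExamplePolicy",
    policyDocument=json.dumps(policy_document),
)
print(response["policyArn"], response["policyVersionId"])
```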
http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IoT/MIoTIoTCreatePolicyCreatePolicyRequest.html
2017-10-17T07:59:59
CC-MAIN-2017-43
1508187820930.11
[]
docs.aws.amazon.com
Security Overview It goes without saying – security is of the utmost importance in your production infrastructure. Nanobox was built with strict security protocols in place. The following measures have been put in place to reduce your app's attack surface. The Initial Bootstrap Each and every server provisioned through Nanobox uses a simple Ubuntu bootstrap script (feel free to view the source). This bootstrap installs and configures Docker and the Nanobox agent as well as a brutal, default-deny firewall via iptables and a custom overlay network. It also ensures that the core software is up-to-date. Once a host is bootstrapped, you essentially have a machine that is fully locked-down, running only Docker, the Nanobox agent, and the virtual network. At this point, not even the other machines within the same network can talk to the machine. Updates via the Dashboard After the initial bootstrap, the only thing that can communicate with the machine is nanobox.io. Nanobox injects a set of security credentials during the bootstrap which it uses to communicate with the Nanobox agent. Nanobox sends commands to the agent over a secure socket layer, using credentials that only it knows about. Only authorized requests from nanobox.io and the Nanobox CLI (which go through nanobox.io) can interact with your servers' internals directly. From the dashboard, you can run an update process that will connect to each host's agent and ensure the agent and installed software are up-to-date. Firewall "Hole-Punching" After new servers come online and are bootstrapped, Nanobox sends a series of commands to the agent in each machine (if you're using a multi-node infrastructure), informing them to punch an explicit hole in the firewall through which they can communicate. Once they're able to communicate, all machines add each other to the virtual network, which allows containers on each host to communicate. That process repeats each time a new host comes online or gets removed. Custom Overlay Network Part of the bootstrap installs a custom, virtual network driver. This essentially provides a fully-encapsulated layer 2 network over a peer-to-peer cluster of all of the host nodes of an app. Nanobox-Specific Images & Packages In order to reduce the attack surface, Nanobox provides slimmed-down versions of Docker images with extra or unnecessary packages stripped out. We also provide custom-built packages to: - Ensure the binaries are safe. - Eliminate unnecessary cruft, reducing the attack surface. Non-Privileged Users To reduce potential harm, your app and all processes run with a non-privileged user. Read-Only by Default To eliminate the potential of uploading and running malicious files, all apps' filesystems are read-only by default. You can enable writable permissions with Network Directories and Writable Directories, but these should be used sparingly. Access Control & Remote Access Access control for all services as well as server console access is managed by Nanobox. Every access request must be authorized through Nanobox before access is allowed. Remote access is handled through the Nanobox CLI. There are two ways to connect remotely: nanobox console nanobox console drops you into a remote console inside a specific container. This acts like a traditional SSH console but doesn't actually use SSH.
# console into a host server on DigitalOcean
nanobox console do.1

# console into a specific component in your app
nanobox console web.main

nanobox tunnel
nanobox tunnel establishes a secure port forward from your localhost to your live container. This is mainly used to manage production data with a local client.

# tunnel into a database component
nanobox tunnel data.db

Provider Access
Apps deployed with Nanobox are deployed to your own server(s). You own them and still have all the access granted by your hosting provider. Any means for remote access provided by your service provider are still available. Because of this, it is a good idea to familiarize yourself with your provider's security policies and protocols. Reach out to [email protected] and we'll try to help.
https://docs.nanobox.io/security/
2017-10-17T07:43:52
CC-MAIN-2017-43
1508187820930.11
[]
docs.nanobox.io
Product Overview Welcome to Aspose.Email for C++. Product Description Aspose.Email for C++ is a native C++ library that can be used in both Windows and Linux applications. Its capabilities include: - Creating emails by mail merges from different types of data sources - Verifying email addresses - Working with Outlook media types including messages, tasks, contacts, calendar and journal items - Creating and manipulating Outlook PST and OST files
https://docs.aspose.com/email/cpp/product-overview/
2021-01-16T05:51:25
CC-MAIN-2021-04
1610703500028.5
[]
docs.aspose.com
Configurations API Creating, updating and fetching configurations from the Froomle platform is done through the Configurations API. Operations:

| Route | Method | Description |
|---|---|---|
| /configurations | POST | Store a configuration. |
| /configurations | GET | Get all configuration IDs. |
| /configurations/{configuration_id} | GET | Get the configuration associated with configuration_id. |
| /configurations/{configuration_id} | PUT | Update a configuration. |
| /configurations/{configuration_id} | DELETE | Delete a configuration. |
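As an illustration only, calling these routes from Python might look like the sketch below; the base URL and the authentication header are placeholders and will differ for your Froomle environment:

```python
import requests

# Hypothetical values; replace with your own Froomle endpoint and credentials.
BASE_URL = "https://api.example.froomle.com"
HEADERS = {"Authorization": "Bearer <your-token>", "Content-Type": "application/json"}

# Store a configuration (POST /configurations).
config = {"configuration_id": "example_config", "settings": {"key": "value"}}
requests.post(f"{BASE_URL}/configurations", json=config, headers=HEADERS).raise_for_status()

# Get all configuration IDs (GET /configurations).
ids = requests.get(f"{BASE_URL}/configurations", headers=HEADERS).json()

# Get, update and delete a single configuration by ID.
one = requests.get(f"{BASE_URL}/configurations/example_config", headers=HEADERS).json()
requests.put(f"{BASE_URL}/configurations/example_config", json=config, headers=HEADERS)
requests.delete(f"{BASE_URL}/configurations/example_config", headers=HEADERS)
```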
https://docs.froomle.com/froomle-doc-main/reference/api_reference/configurations/index.html
2021-01-16T05:20:03
CC-MAIN-2021-04
1610703500028.5
[]
docs.froomle.com
OPENSEQ Synopsis OPENSEQ filename TO filevar [LOCKED statements] [ON ERROR statements] [THEN statements] [ELSE statements] Arguments Description The OPENSEQ statement is used to open a file for sequential access. This can be an existing file or a new file. It assigns the file to filevar. You can optionally specify a LOCKED clause, which is executed if OPENSEQ could not open the specified file due to lock contention. The LOCKED clause is optional, but strongly recommended; if no LOCKED clause is specified, program execution waits indefinitely for the conflicting lock to be released. The statements argument can be the NULL placeholder keyword, a single statement, or a block of statements terminated by the END keyword. A block of statements has specific line break requirements: each statement must be on its own line; there must be a line break between the LOCKED keyword and the first line. You can optionally specify an ON ERROR clause, which is executed if the file could not be opened. If no ON ERROR clause is present, the ELSE clause is taken for this type of error condition. You can optionally specify a THEN clause, an ELSE clause, or both a THEN and an ELSE clause. If the file open is successful (the specified file exists), the THEN clause is executed. If the file open fails (the specified file does not exist), the ELSE clause is executed. You can use the STATUS function to determine the status of the sequential file open operation, as follows: 0=success; -1=file does not exist. To create a file, you must first issue an OPENSEQ statement, giving the fully-qualified pathname for the file you wish to create. Because the file does not yet exist, the OPENSEQ appears to fail, taking its ELSE clause and setting the value returned by the STATUS function to -1. However, the OPENSEQ sets its filevar to an identifier for the specified file. You then supply this filevar to CREATE to create the new file. The filename must be a fully-qualified pathname. The directories specified in filename must exist for a file create to be successful. Pathnames are not case-sensitive; however, case is preserved when you specify a filename to create a sequential file. After opening a file, you can use the STATUS statement to obtain file status information. You can use READBLK, READSEQ, WRITEBLK, and WRITESEQ to perform sequential read and write operations. You can use CLOSESEQ to release an open file, making it available to other processes. File Locking Issuing OPENSEQ gives the process exclusive access to the specified file. An OPENSEQ locks the file against an OPENSEQ issued by any other process. This lock persists until the process that opened the file releases the lock, by issuing a CLOSE, a CLOSESEQ, or a RELEASE statement. Issuing an OPENSEQ for a non-existent file also performs an exclusive file lock, so that your process can issue a CREATE to create this file. A CLOSE or CLOSESEQ releases this file lock, whether or not the file has been successfully created. If an OPENSEQ without a LOCKED clause attempts to open a file already opened by another process, the OPENSEQ waits until the first process closes (or releases) the desired file. If an OPENSEQ with a LOCKED clause attempts to open a file already opened by another process, the OPENSEQ concludes by executing the LOCKED clause statements. The ELSE clause is not invoked because of lock contention. FILEINFO and @FILENAME You can use the FILEINFO function to return sequential file information, including whether a specified filevar has been defined (key=0) and the filename specified in OPENSEQ for that filevar (key=2).
The @FILENAME system variable also contains the filename specified in the most recent OPENSEQ. In both cases, the file does not have to exist; if OPENSEQ specifies a non-existent file, both FILEINFO and @FILENAME return the specified pathname as a directory path. Subsequently creating this file does not change the FILEINFO and @FILENAME pathname values. If the file does not exist, the FILEINFO file type (key=3) is 0. Creating the file changes this FILEINFO file type to 5. Sequential File I/O Buffering By default, sequential file I/O is performed using I/O buffering. This buffer is automatically assigned as part of the OPENSEQ operation. I/O buffering significantly improves overall performance, but means that write operations are not immediately applied to the sequential file. Caché MVBasic provides two statements that override I/O buffering. The FLUSH statement immediately writes the current contents of the I/O buffer to the sequential file. The NOBUF statement disables the I/O buffer for the duration of the sequential file open. That is, all subsequent I/O write operations are immediately executed on the sequential file. Emulation For jBASE emulation, the filename argument can be specified with a two-part path,filename syntax. When executed, the two parts are concatenated together, with a delimiter added to the end of path, when necessary. For example, OPENSEQ 'c:\temp\','mytest.txt' TO FD or OPENSEQ 'c:\temp','mytest.txt' TO FD. For other emulation modes, the filename argument can be specified with a two-part file,itemID syntax. The file part is a dir-type file defined in the VOC master dictionary, and the itemID part is an operating system file within that directory. Examples The following example opens a sequential file on a Windows system and writes a line to it. If the file does not exist, it creates the file:

filename='c:\myfiles\test1'
OPENSEQ filename TO mytest ELSE STOP 201,filename
IF STATUS()=0 THEN
   WRITESEQ "John Doe" TO mytest
   CLOSESEQ mytest
END ELSE
   CREATE mytest
   IF STATUS()=0 THEN
      WRITESEQ "John Doe" TO mytest
      CLOSESEQ mytest
   END ELSE
      PRINT "File create failed"
   END
END
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RVBS_COPENSEQ
2021-01-16T05:45:43
CC-MAIN-2021-04
1610703500028.5
[]
docs.intersystems.com
Users can play audio in their repls without creating a website. To make it as easy as possible for users, we based audio on a request system. This means that users just need to request that a file be played, rather than reading it directly. Although they aren't directly reading the files, users can still control: Another built-in feature is the ability for users to play as many files as they would like. Supported files are .wav, .aiff, and .mp3 files (detailed more later). Currently, we have JavaScript and Python libraries for audio. An example of the JS library is shown below, and generated docs can be found here. An example using the Python library is shown below. Generated docs can be found with the Python replit package documented here. Since not everyone uses Python or JS, we decided to document how to make a library. Currently, supported file types are .wav, .aiff, and .mp3 files. Files are played in mono / single channel mode; files with multiple channels will be read and converted into single channel data. To make this as light as possible on your repl's resources, audio files are played via a request system. To make a request, simply write to a named pipe, /tmp/audio. An example request might look like: { "Paused": false, "Name": "My tone", "Type": "tone", "Volume": 1, "DoesLoop": false, "LoopCount": 0, "Args": { "Pitch": 400, "Seconds": 5, "Type": 1, "Path": "" } } What these fields mean (and the currently supported types): An example status for audio is shown below. { "Sources": [ { "Name": "1", "Type": "tone", "Volume": 1, "Duration": 2000, "Remaining": 1995, "Paused": false, "Loop": 0, "ID": 1, "EndTime": "2020-08-20T18:15:27.763933471Z", "StartTime": "2020-08-20T18:15:25.763933471Z", "Request": { "ID": 0, "Paused": false, "Name": "1", "Type": "tone", "Volume": 1, "DoesLoop": false, "LoopCount": 1, "Args": { "Pitch": 400, "Seconds": 2, "Type": 1 } } } ], "Disabled": false, "Running": true } What this is: Sources- A list of playing sources. Name- The name of the source Type- The type of the source (types documented above) Volume- The volume of the source ( float64) Duration- The (estimated) duration of the source (in milliseconds) ( int64) Remaining- The (estimated) time remaining for the source (in milliseconds) ( int64) Paused- Whether the source is paused or not ( bool) Loop- How many times the source will play itself again. Negative values are infinite. ( int64) ID- The ID of the source used for updating it. ( int64) EndTime- The estimated time when the source will be done playing. StartTime- When the source started playing. Request- The request used to create the source. (documented above) Disabled- Whether the pid1 audio player is disabled or not - useful for debugging. Running- Whether pid1 is sending audio or not - useful for debugging. Note: the estimated end time is based on the current loop, and does not factor in the fact that the source may repeat itself. Note: Timestamps are formatted like so: yyyy-MM-dd'T'HH:mm:ssZ In order to read the data from the sources, you need to read /tmp/audioStatus.json. The file is formatted as shown below: Note: After a source finishes playing it is removed from the known sources. In order to pause or edit a playing source, you first need its ID. You can get its ID by reading /tmp/audioStatus.json, as detailed above. Edit requests are formatted as shown below: { "ID": 1, // The id of the source "Volume": 1, // The volume for the source to be played at "Paused": false, // Whether the file is paused or not.
"DoesLoop": false, // Whether the file should be repeated or not. "LoopCount": -1 // How many times to repeat the file. Set to a negative value to create an endless loop. } All fields must be provided, with the exception of LoopCount when DoesLoop is false. For editing a source, I would just do something like the following, or the equivelent in other langs: import json class NoSuchPlayerException(Exception): pass def update_source(id, **changes): player_data = read_status() # Assume read_status reads /tmp/audioStatus.json for s in player_data['Sources']: if s['ID'] == id: data = s break if not data: raise NoSuchPlayerException(f'No player with id "{id}" found!') data.update({key.title(): changes[key] for key in changes}) with open('/tmp/audio', 'w') as f: f.write(json.dumps(data)) There is also a simple demo created in python available here
https://docs.repl.it/repls/audio
2021-01-16T05:10:27
CC-MAIN-2021-04
1610703500028.5
[]
docs.repl.it
Ungrouping Layers If needed you can ungroup layers you have previously grouped together. NOTE For ungrouping layers in the Node view, see Ungrouping Nodes - In the Timeline view, select the group you want to ungroup. Right-click on the group and select Ungroup. All the group's layers will be placed outside of the group, and the group will be removed.
https://docs.toonboom.com/help/harmony-17/premium/layers/ungroup-layers.html
2021-01-16T05:36:58
CC-MAIN-2021-04
1610703500028.5
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
The administrator who has the USER_ADMINISTRATION role can access this page and all the operations that belong to users: list and search users; create, edit and delete users; synchronize remote users and groups; resolve user conflicts. The list of users from the local DB and from external authentication sources is displayed on the main page. New users (only local) can be added, and existing local user details can be found and modified here. Filtering of the list of all types of users is possible based on username or e-mail address. In the Source column the source of the given user’s authentication is visible: it can be a locally created user or a remote one (LDAP, Active Directory or SAML). By default the local authentication is used, but AD/LDAP authentication can be configured on the Authentication Providers page. In the Active column the status of the given user (active/inactive) is visible. Please find more information about deactivating/activating a user here: Deactivate/Activate a user. To access the Groups page, the Administrator user should have the same USER_ADMINISTRATION role as in the case of the Users page. Groups include sets of roles (each group has one set of roles). Roles are sets of actions (such as registration, amendment, etc.) that the users to whom a given role has been granted are allowed to execute. On this page Administrator users can: List and search groups by name or role; Create, edit and delete groups; Associate roles and users to a group; Synchronize groups with remote authentication providers. The Source column indicates the origin of the group, which can be Local (DB), LDAP, Active Directory or SAML. AD/LDAP authentication can be configured on the Authentication Providers page. Please find more details about SAML here: Authentication. Using Projects you can control who can access the compounds that belong to a certain project. Projects are usually defined in order to apply project-based access to the Registration system. By default, this functionality is turned off, and accordingly, the project field can be used to store data, but no data filtering or data access will be controlled based on the user and the project info. In order to have project-based access in your system you need to turn on this functionality. More about project-based access can be found here. On this page you can: manage the list of projects and assign users to different projects (for full control you need to have the ROLE_ACL_READ_PROJECTDETAILS, ROLE_ACL_MODIFY_PROJECTDETAILS and ROLE_ACL_MODIFY_PROJECTS roles); change project-related settings (the MODIFY_PARAMETERS role is necessary to be able to modify these settings). When creating a project, users can be associated with it. User(s) can have different permission(s) within projects. The following types of permissions are available: Read / write all submissions (1); Read all, write own submissions (2); Read all submissions (3); Read / write own submissions (4). You can configure Compound Registration to use an external service to authenticate users. At the moment Compound Registration supports LDAP, Active Directory and SAML. Currently only LDAP and Active Directory configuration is exposed on the Administration UI (Administration > Access Control > Authentication Providers). To configure SAML please visit this page.
https://docs.chemaxon.com/display/lts-fermium/access-control.md
2021-01-16T06:05:51
CC-MAIN-2021-04
1610703500028.5
[]
docs.chemaxon.com
Stream Security Introduction Security is usually one of the more challenging technical aspects of any software system. However, diving into technical details before understanding the principles of the solution can lead to false assumptions about the level of security in place. Therefore, this section provides a high-level overview of Opencast's stream security functionality. Content Security in Opencast In many settings, some or even all content published by an Opencast installation must not be accessible by everyone. Instead, access should be restricted to those users with corresponding permissions. So, if access control already ensures that each user only has access to the recordings he or she is allowed to see, what does stream security add to the mix? Looking more closely at what it means to serve recordings to a viewer reveals that a distinction needs to be made between: - the presentation of the video player and the recording metadata - the serving of the video streams, preview images etc. to that player. The former is protected by the engage part of Opencast. The latter may be served by download and streaming servers. Those distribution servers are independent of Opencast and have no knowledge about the current user and its permissions with regard to the requested video asset. To summarize: Opencast is capable of assessing a recording’s access control list and the current user’s permissions to decide if a user is allowed to access the recording’s metadata and the player. External download and streaming servers, serving the actual video files, are not aware of these permissions. As a result, nothing prevents an authorized user from passing on the actual video and image URLs to the public, thereby circumventing the restrictions implied on the presentation layer earlier on. Securing the Streams Since the download and streaming servers do not (and should not) have access to security related information about the user, its roles nor its permissions with regard to the media files, there is no way to perform authorization checks the same way Opencast is performing them while serving up recording metadata. The only way to decide if a given request should be served or not is to leave authorization to Opencast and agree on a secure protocol that defines whether a request is meant to be granted by Opencast or not. Stream security solves the problem exactly as described: Each request that is sent to any of the download or streaming servers must contain a validation policy, indicating for how long access should be granted and optionally even from which IP address. Signing of the policy ensures that potential changes to the policy will be detected. On the other end, the server must be enabled to verify the signature and extract the policy to verify whether it should comply with the request or not. What is secured and what is not? Even with stream security enabled, some loopholes exist where unauthorized viewers might be able to get access to protected resources, even though for a limited time only. The following section describes in detail what is and what is not secured. URL hacking Executive summary: Accessing a resource with an unsigned or incorrectly signed URL is impossible. Resources distributed by Opencast are organized in a file structure that is built upon a resource’s series identifier as well as the identifier of the recording itself. Since those identifiers usually are based on UUIDs, guessing the URL is hard but not impossible.
In addition, a malicious user might get hold of a valid identifier through network sniffing, social hacking or by other means. With Stream Security enabled, a user cannot access that resource, since the URL for accessing the resource would either be lacking the policy and signature completely or would contain a broken signature due to an identifier mismatch in the policy. It is important to note that, if stream security is enabled, all resources will be signed and protected, even ones that do not have any access restrictions defined in their access control lists. Accessing resources with unsigned URLs will not be possible. Revoking access rights Executive summary: Access is revoked once the digital signature expires. If a user has the rights to access a resource, it does not automatically mean that permission has been granted for a lifetime. After a signed URL’s policy has expired, the URL must receive an updated policy and be signed again in order to provide continuous access to the corresponding resource. In the case of revoked access rights, the user in question will therefore be able to keep access to the resource only as long as the initially signed URL is valid. After that, Opencast will not provide a signed URL anymore due to the change in permissions. On the other hand, there is no way to revoke access to that resource for that particular user unless the URL expires. The only way would be to completely remove the resource from the distribution server. It is therefore important to choose reasonable expiration times for signed URLs. Unauthorized sharing of URLs Executive summary: Leaked signed URLs are only accessible for the duration of the validity of the signature. A signed URL shared by an authorized user with a non-authorized third party will expire (as explained above). The expiration time can be set as low as a few seconds but will then require even authorized users to obtain newly signed URLs as they continue to access protected content (e.g. the user takes a quick break watching a recording by hitting “pause”, then hits “play” again to resume). This risk can be lowered further by restricting a resource to a client’s IP address so that it can only be played by someone with the same IP. Downloading or ripping content Executive summary: Content protected by stream security is not protected against unauthorized publication through authorized users. Since stream security does not implement digital rights management (DRM), authorized users may download content while in possession of correctly signed URLs. When that content is republished on systems that are not under the control of the original owner (i.e. are not protected by stream security or any other means), it is publicly available. Most institutions will have a policy in place that legally prevents circumventing protection and sharing of protected media, and as a result, the above scenario will be treated as piracy. Technical Overview Stream security consists of several components, and each of these components must be installed and configured properly, otherwise the system may not behave as expected. This part of the documentation describes how each of the components needs to be installed and holds information on which configuration options are available. Terms For the understanding of this document it is important to have the following terms clearly defined. Policy A policy defines the timeframe and (optionally) from which addresses a specified resource may be accessed.
In order to exchange the policy between system components, the involved components must agree on a serialization specification. Signature The signature expresses the validity of a policy. As with the policy, the system’s signature components must follow a predefined signing algorithm. Only then is it possible to verify if the signature was issued for a specific policy, or if either the signature or the policy was modified. Key Using keys is a common way to protect information that is being shared between two or more systems. In stream security, keys are used to prevent signature forgery. A key consists of an identifier (ID) and a secret value. The keys need to be kept private, otherwise anyone could create signatures and thereby gain unlimited access to all resources protected by that key. The combination of a policy specification and a signature algorithm forms the signing protocol, where the policy contains the rules to be applied and the signature ensures that the rules remain unaltered. Components that implement the same signing protocol are compatible and can be used in combination. Components A typical signing infrastructure consists of two main components: a signing service and a verification component. While the signing service is used to sign arbitrary URLs, the verification component is located on the distribution servers to protect the resources and only serve requests that have been properly signed. All signing providers and verification components developed by the Opencast community implement the Opencast signing protocol as documented in the developer guide and are therefore compatible. URL Signing Service The URL signing service is designed to support one or more signing implementations called signing providers. With this concept, different signing protocols, and therefore different verification components, are supported. The resource is presented to each signing provider in turn, where it is either signed or passed on. This process continues until a signature is obtained. Out of the box, Opencast provides the following implementation: - Generic Signing Provider: This provider may be used in combination with HTTP servers. It appends the necessary information (policy, signature and key ID) to the URL. The URL signing service makes it straightforward to provide additional implementations to handle third-party distribution servers' URL signatures. This becomes important in situations where files are served by a server that is currently not supported or if files are served by a CDN that implements its own proprietary signing protocol. Verification components In order to take advantage of the signed URLs, a verification component needs to reside on the distribution servers to verify the validity of the signature (i.e. check that the URL has not been altered after it was signed) and then grant or deny access to the resource, based on the policy associated with the URL. In addition to these external verification components there is also an Opencast verification component called the UrlSigningFilter that is used to protect files that Opencast itself provides. Verification components have the option of strict or non-strict checking. Strict verification of resources means the entire URL will be considered when comparing the incoming request for a resource against the policy, including the scheme (http, https, etc.), hostname and port. If using non-strict checking, only the path to the resource will be considered.
So, for example, only the /the/full/path/video.mp4 part of a requested URL will be checked against the policy’s path. This is useful when using a load balancer so that the requested hostname does not have to match the actual hostname, or if a video player is rewriting requests, e.g. by inserting the port number. Further Information For further technical information like installation instructions, configuration guides, server plugins and the signing specification, please have a look at these documents: - Stream Security Configuration & Testing - The Opencast Signing Protocol is defined in the subsection Stream Security in the modules section of the developer guide.
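To make the signing idea concrete, the following is a generic sketch in Python. It is not the Opencast signing protocol (that is specified in the developer guide); it only illustrates how a policy can be serialized, signed with a shared key, and appended to a URL. All names, fields and values are invented for the example:

```python
import base64
import hashlib
import hmac
import json
import time
from urllib.parse import urlencode

# Hypothetical shared key (ID plus secret) known to the signing service and
# to the verification component on the distribution server.
KEY_ID = "demoKeyOne"
KEY_SECRET = b"6EDB5EDDCF994B7432C371D7C274F"

def sign_url(url, valid_seconds=60, client_ip=None):
    # The policy: which resource, until when, and (optionally) from which IP.
    policy = {"Resource": url, "ExpiresAt": int(time.time()) + valid_seconds}
    if client_ip:
        policy["IpAddress"] = client_ip
    encoded_policy = base64.urlsafe_b64encode(json.dumps(policy).encode()).decode()

    # The signature proves the policy was issued by a holder of the secret key.
    signature = hmac.new(KEY_SECRET, encoded_policy.encode(), hashlib.sha256).hexdigest()

    # Policy, signature and key ID travel with the URL; the verifier recomputes
    # the signature, compares it, and then enforces the policy.
    query = urlencode({"policy": encoded_policy, "signature": signature, "keyId": KEY_ID})
    return url + "?" + query

print(sign_url("http://downloads.example.org/the/full/path/video.mp4", valid_seconds=300))
```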
https://docs.opencast.org/r/5.x/admin/modules/stream-security/
2021-01-16T06:29:16
CC-MAIN-2021-04
1610703500028.5
[]
docs.opencast.org
Ellipse Tool Properties The Ellipse tool allows you to quickly draw an ellipse or a circle. - In the Tools toolbar, select the Ellipse tool.
https://docs.toonboom.com/help/harmony-17/paint/reference/tool-properties/ellipse-tool-properties.html
2021-01-16T05:22:39
CC-MAIN-2021-04
1610703500028.5
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Reference/tool-properties/ellipse-tool-prop-ADV.png', 'Shape Tool Properties View - Rectangle Shape Tool Properties View - Rectangle'], dtype=object) array(['../../Resources/Images/HAR/Stage/Character_Design/Pencil_Tool/pencil-tool-properties-dialog-button.png', 'Stroke Preview Stroke Preview'], dtype=object) array(['../../Resources/Images/HAR/Reference/tool-properties/Pencil-tool-size-smoothness.png', None], dtype=object) array(['../../Resources/Images/HAR/Reference/tool-properties/pencil-tool-texture-tab.png', None], dtype=object) ]
docs.toonboom.com
A. Blocking and non-blocking transports There are two main types of transports: blocking and non-blocking. In a blocking transport, the I/O threads are blocked because the same worker thread that sends the request to the server remains open to receive the response. These threads are blocked until messages are completely processed by the underlying Axis2 engine. In non-blocking transports, the worker thread that sends the request will not wait for the response, and another thread will receive the response. Therefore, non-blocking transports increase the performance of the server. For more information on transports, see the following topics:
https://docs.wso2.com/display/ESB490/Working+with+Transports
2021-01-16T05:39:21
CC-MAIN-2021-04
1610703500028.5
[]
docs.wso2.com
First, we will verify that the error in the computed fields decreases monotonically with decreasing tolerance of the iterative solver. And then, we will demonstrate qualitative agreement with the frequency-domain fields computed using a different method: Fourier transforming the time-domain fields in response to a narrowband Gaussian-pulse source.

import meep as mp
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt

n = 3.4
w = 1
r = 1
pad = 4
dpml = 2
sxy = 2*(r+w+pad+dpml)

c1 = mp.Cylinder(radius=r+w, material=mp.Medium(index=n))
c2 = mp.Cylinder(radius=r)

fcen = 0.118
df = 0.08
src = mp.Source(mp.ContinuousSource(fcen,fwidth=df), mp.Ez, mp.Vector3(r+0.1))

sim = mp.Simulation(cell_size=mp.Vector3(sxy,sxy),
                    geometry=[c1,c2],
                    sources=[src],
                    resolution=10,
                    force_complex_fields=True,
                    symmetries=[mp.Mirror(mp.Y)],
                    boundary_layers=[mp.PML(dpml)])

num_tols = 5
tols = np.power(10, np.arange(-8.0,-8.0-num_tols,-1.0))
ez_dat = np.zeros((122,122,num_tols), dtype=np.complex_)

for i in range(num_tols):
    sim.init_sim()
    sim.solve_cw(tols[i], 10000, 10)
    ez_dat[:,:,i] = sim.get_array(center=mp.Vector3(), size=mp.Vector3(sxy-2*dpml,sxy-2*dpml), component=mp.Ez)

err_dat = np.zeros(num_tols-1)
for i in range(num_tols-1):
    err_dat[i] = LA.norm(ez_dat[:,:,i]-ez_dat[:,:,num_tols-1])

plt.figure(dpi=100)
plt.loglog(tols[:num_tols-1], err_dat, 'bo-')
plt.xlabel("frequency-domain solver tolerance")
plt.ylabel("L2 norm of error in fields")
plt.show()

eps_data = sim.get_array(center=mp.Vector3(), size=mp.Vector3(sxy-2*dpml,sxy-2*dpml), component=mp.Dielectric)
ez_data = np.absolute(ez_dat[:,:,num_tols-1])

if np.all(np.diff(err_dat) < 0):
    print("PASSED solve_cw test: error in the fields is decreasing with decreasing solver tolerance")
else:
    print("FAILED solve_cw test: error in the fields is NOT decreasing with decreasing solver tolerance")

The results are shown in the figure below. The error in the fields decreases monotonically with decreasing tolerance of the frequency-domain solver. As a further validation of the frequency-domain solver, we will compare its fields with those computed using time-stepping. This involves taking the Fourier transform of Ez via the add_dft_fields routine. At the end of the time stepping, these frequency-domain fields are then output to an HDF5 file via output_dft. The script is extended as follows.

sim.reset_meep()

src = mp.Source(mp.GaussianSource(fcen,fwidth=df), mp.Ez, mp.Vector3(r+0.1))

sim = mp.Simulation(cell_size=mp.Vector3(sxy,sxy),
                    geometry=[c1,c2],
                    sources=[src],
                    resolution=10,
                    symmetries=[mp.Mirror(mp.Y)],
                    boundary_layers=[mp.PML(dpml)])

where = mp.Volume(center=mp.Vector3(), size=mp.Vector3(sxy-2*dpml,sxy-2*dpml))
dfts = sim.add_dft_fields([mp.Ez], fcen, fcen, 1, where=where)
sim.run(until_after_sources=100)
sim.output_dft(dfts, "dft_fields")

import h5py
f = h5py.File("dft_fields.h5", 'r')
ezi = f["ez_0.i"].value
ezr = f["ez_0.r"].value
ez_dat = ezr+1j*ezi

eps_data = sim.get_array(center=mp.Vector3(), size=mp.Vector3(sxy-2*dpml,sxy-2*dpml), component=mp.Dielectric)
ez_data = np.absolute(ez_dat)

The left inset of the figure above shows the magnitude of the scalar Ez field, computed using the frequency-domain solver with a tolerance of 10^-12, superimposed on the ring-resonator geometry. Note the three-fold mirror symmetry of the field pattern (fundamental mode) and faint presence of the point source. The right inset is for the Fourier-transformed fields of the time-domain calculation. The results are qualitatively similar.
https://meep.readthedocs.io/en/latest/Python_Tutorials/Frequency_Domain_Solver/
2018-10-15T11:40:34
CC-MAIN-2018-43
1539583509170.2
[array(['../../images/CWsolver-python.png', None], dtype=object)]
meep.readthedocs.io
API Credentials API credentials allow secure access to the appbase.io APIs. They offer a variety of rules to granularly control access to the APIs. Defaults When creating an app in appbase.io, you have access to two types of API Credentials. A Read-only API key offers access to read based endpoints of the API (you can get a document, search for documents, but not create or update a document) while an Admin API key offers access to both read and write based endpoints (you can create, update and even delete documents). How are the credentials authenticated? An appbase.io credential consists of a username:password format and is authenticated using the HTTP Basic Authentication method. When making the API request, the credentials are passed using the Authorization header with a base64 encoded value of the actual credentials. If you are using the appbase-js or ReactiveSearch libraries, you don’t have to worry about the base64 conversion, these libraries do that for you. If you are using a server-side language, then you will have to add the Authorization header with the correct base64 encoded value of the API key (see the sketch at the end of this section). You can read more about it in the REST API Reference. Adding and Updating Credentials You can also create new credentials or modify the existing defaults. When doing so, you can set one or more of the following restrictions. Let’s go over each kind of authorization constraint you can apply to a key: Key Type determines the main type of operations this key will be responsible for. There are three key types: - Read-only key, - Write-only key, - Admin key (aka read + write). ACLs determine granularly what type of actions are allowed for the API key in addition to the broad Key Type. - Index (index) allows indexing and update actions. - Get (get) allows retrieving documents and data mappings. - Search (search) allows searching for documents in an app. - Settings (settings) allows access to the settings endpoints. - Stream (stream) allows access to the streaming endpoints for realtime data updates. - Bulk (bulk) allows access to the bulk endpoints. - Delete (delete) allows access to all the deletion related endpoints. - Analytics (analytics) allows access to the Analytics APIs programmatically. [only available for growth plan users] Security constraints allow authorizing API access based on the selected HTTP Referers and IP Source values.
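For example, when calling the REST API directly from Python instead of through appbase-js or ReactiveSearch, the Authorization header can be built as follows; the credential and app endpoint below are placeholders, not real values:

```python
import base64
import requests

# Placeholder appbase.io credential in username:password form.
credential = "read_key_user:read_key_password"

# Base64-encode the credential for HTTP Basic Authentication.
auth_header = "Basic " + base64.b64encode(credential.encode()).decode()

# Example read request against a placeholder app endpoint.
response = requests.get(
    "https://scalr.api.appbase.io/your-app/_search",
    headers={"Authorization": auth_header},
)
print(response.status_code)
```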
https://docs.appbase.io/concepts/api-credentials.html
2018-10-15T11:10:45
CC-MAIN-2018-43
1539583509170.2
[array(['https://i.imgur.com/hkMdS7u.png', None], dtype=object) array(['https://i.imgur.com/UlF6rv8.png', None], dtype=object) array(['https://i.imgur.com/vt8NUmx.png', None], dtype=object) array(['https://i.imgur.com/QXpdEhH.png', None], dtype=object)]
docs.appbase.io
All content with label datagrid.
https://docs.jboss.org/author/labels/viewlabel.action?ids=4456478&startIndex=30
2018-10-15T10:57:42
CC-MAIN-2018-43
1539583509170.2
[]
docs.jboss.org
SC Categories widget The SC Categories widget displays Service Catalog categories. The system renders the categories available in this widget from the Categories table in Service Catalog [sc_category]. Figure 1. SC Categories widget. Figure 2. Instance options.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/build/service-portal/concept/sc-categories-widget.html
2018-10-15T11:05:26
CC-MAIN-2018-43
1539583509170.2
[]
docs.servicenow.com
Ensemble Virtual Documents Virtual Documents This chapter explains what virtual documents are, why they are useful, and how they are different from standard messages. It also briefly introduces tools that Ensemble provides so that virtual documents can be used in all the same ways as standard messages. This chapter contains the following sections: Introduction Kinds of Virtual Documents Access to Contents of Virtual Documents Support for Filter and Search Introduction A virtual document is a kind of message that Ensemble parses only partially. To understand the purpose of virtual documents, it is useful to examine Ensemble messages a little more closely. Every Ensemble message consists of two parts: The message header contains the data needed to route the message within Ensemble. The message header is always the same type of object. This is a persistent object, meaning that it is stored within a table in the Ensemble database. For raw document content, Ensemble provides an alternative type of message body called a virtual document. A virtual document allows you to send raw document content as the body of an Ensemble message. Ensemble can handle the following kinds of documents as virtual documents:

| Kind of Document | See |
|---|---|
| ASTM documents | Ensemble ASTM Development Guide |
| EDIFACT documents | Ensemble EDIFACT Development Guide |
| HL7 version 2 messages | Ensemble HL7 Version 2 Development Guide |
| X12 documents | Ensemble X12 Development Guide |
| XML documents | Ensemble XML Virtual Document Development Guide |

You can also handle XML documents as standard messages. To do so, you can generate classes from the corresponding XML schema. For information, see Using Caché XML Tools. Other books in the Interoperability set do not describe virtual documents. Access to Contents of Virtual Documents To work with data in a virtual document, you must be able to identify a specific data item within it. The data item is called a virtual property. A virtual property path is the syntax that Ensemble uses to identify a specific virtual property. Ensemble indexes these properties as if they were properties in a standard message body. Users can then use these properties directly without having to know the property paths that they use. For details on defining search tables, see Defining Search Tables, later in this book.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EEDI_intro
2018-10-15T10:34:10
CC-MAIN-2018-43
1539583509170.2
[]
docs.intersystems.com
Prerequisites Please note Using an object container as a file system can impact the performance of your operations. Create your file system - Install S3QL: admin@serveur1:~$ sudo apt-get install s3ql The latest version should be available in the Debian 8 repository - Create a file containing the login information: admin@serveur1:~$ sudo vim s3qlcredentials.txt [swift] backend-login: TENANT_NAME:USERNAME backend-password: PASSWORD storage-url: swiftks://auth.cloud.ovh.net/REGION_NAME:CT_NAME fs-passphrase: PASSPHRASE Information such as TENANT_NAME and USERNAME can be found in your OpenRC file. You can follow the guide below in order to retrieve it: The REGION_NAME and CT_NAME arguments can be adapted according to the name and location of your object container. - Change authentication file access permissions: admin@serveur1:~$ sudo chmod 600 s3qlcredentials.txt - Object container formatting: admin@serveur1:~$ sudo mkfs.s3ql --authfile s3qlcredentials.txt swiftks://auth.cloud.ovh.net/GRA1:CT_S3QL You then have to add the passphrase to your authentication file. If you do not want to configure it, you have to delete the "fs-passphrase: PASSPHRASE" line from your file. Configure your file system - Create the mounting point: admin@serveur1:~$ sudo mkdir /mnt/container - Mount the object container: admin@serveur1:~$ sudo mount.s3ql --authfile s3qlcredentials.txt swiftks://auth.cloud.ovh.net/GRA1:CT_S3QL /mnt/container/ - Check the mount: admin@serveur1:~$ sudo df -h Filesystem Size Used Avail Use% Mounted on /dev/vda1 9.8G 927M 8.5G swiftks://auth.cloud.ovh.net/GRA1:CT_S3QL 1.0T 0 1.0T 0% /mnt/container Because you cannot use S3QL in offline mode, you should not configure persistence via the /etc/fstab file but by using a script which will run when your server starts up. Please do not hesitate to view the S3QL FAQ.
https://docs.ovh.com/gb/en/storage/use_s3ql_to_mount_object_storage_containers/
2018-10-15T11:19:21
CC-MAIN-2018-43
1539583509170.2
[]
docs.ovh.com
The Python Script We begin by importing the meep and argparse library modules: import meep as mp import argparse def main(args): We then define the parameters of the problem with exactly the same values as in the 2d simulation: n = 3.4 # index of waveguide w = 1 # width of waveguide r = 1 # inner radius of ring pad = 4 # padding between waveguide and edge of PML dpml = 2 # thickness of PML Now, we'll define the dimensions and size of the computational cell: sr = r + w + pad + dpml # radial size (cell is from 0 to sr) dimensions = mp.CYLINDRICAL cell = mp.Vector3(sr, 0, 0) The z size is 0 because the simulation is 2d, and the φ size is also 0. The angular dependence m is taken from a command-line argument that is 3 by default. The script is then run with a command such as: % python ring-cyl.py -m 3 -fcen 0.118 -df 0.01
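The argument-parsing boilerplate that originally accompanied this script was lost in this extract, so the sketch below shows how a script like ring-cyl.py is typically wired up with argparse. The option names follow the -m, -fcen and -df flags used in the command above; the defaults and help strings are illustrative assumptions rather than a verbatim copy of the original script.

# Hypothetical argparse wiring for a script such as ring-cyl.py.
# Option names mirror the flags in the command shown above; defaults are assumptions.
import argparse

def main(args):
    # The cell, ring geometry and sources would be set up here using args.m,
    # args.fcen and args.df, followed by the actual Meep simulation run.
    print(f"running with m={args.m}, fcen={args.fcen}, df={args.df}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", type=int, default=3, help="angular dependence exp(i*m*phi)")
    parser.add_argument("-fcen", type=float, default=0.118, help="pulse center frequency")
    parser.add_argument("-df", type=float, default=0.01, help="pulse frequency width")
    main(parser.parse_args())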
https://meep.readthedocs.io/en/latest/Python_Tutorials/Ring_Resonator_in_Cylindrical_Coordinates/
2018-10-15T11:39:41
CC-MAIN-2018-43
1539583509170.2
[array(['../../images/Ring-cyl-ez-0.118.png', None], dtype=object) array(['../../images/Ring-cyl-ez-0.148.png', None], dtype=object) array(['../../images/Ring-cyl-ez-0.176.png', None], dtype=object)]
meep.readthedocs.io
Hardware and Software Requirements for BizTalk Server 2013 and 2013 R2 This topic lists the BizTalk Server 2013 and BizTalk Server 2013 R2 requirements. Hardware Requirements The following table shows the minimum hardware requirements for your BizTalk Server computer. In a production environment, the volume of traffic may dictate greater hardware requirements for your servers. Community Addition: Recommendations for Installing, Sizing, Deploying, and Maintaining a BizTalk Server Solution Software requirements & supported versions This table lists the minimum software required for running BizTalk Server. You’ll be guided through installation steps for all of these prerequisites in a later section. Service Pack and Cumulative Update Support. Refer to Support Lifecycle Index for BizTalk Server, SQL Server, Visual Studio, and other Microsoft programs. Considerations Before Installing BizTalk Server 2013 and 2013 R2 See Also Installation Overview for BizTalk Server 2013 and 2013 R2 Appendix A: Silent Installation Appendix B: Install the Microsoft SharePoint Adapter Appendix C: Redistributable CAB Files Appendix D: Create the SMTP Server
https://docs.microsoft.com/en-us/biztalk/install-and-config-guides/hardware-and-software-requirements-for-biztalk-server-2013-and-2013-r2?redirectedfrom=MSDN
2018-10-15T11:38:43
CC-MAIN-2018-43
1539583509170.2
[]
docs.microsoft.com
Supported browsers for Connect The system supports Connect Chat and Connect Support on most modern browsers. The latest public release of Firefox or Firefox ESR The latest public release of Chrome Safari version 6.1 and later Internet Explorer version 10 and later Edge mode is supported. Compatibility mode is not supported. Setting Security Mode to High (via the Internet Options > Security tab) is not supported. Internet Explorer 11 is susceptible to memory leaks, which may impact performance, especially in Windows 7.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/use/collaboration/reference/r_SupportedBrowsersForConnect.html
2018-10-15T11:19:12
CC-MAIN-2018-43
1539583509170.2
[]
docs.servicenow.com
Series Items. Each chart series item encapsulates a single data point. The key properties are shown in the figure below in the property category "Basic value set". For simple charts along a single axis, populate the YValue property. Use the XValue property to add a second data dimension. For example, the Y values might be "Sales Volume" and the X values might be time periods or geographic regions. XValue2 and YValue2 are used by the Gantt chart type to indicate a period of time and by the Bubble chart type to show amplitude of data. Set the Empty property to true to have RadChart approximate the value. The example below has a third item with the Empty property set to True, causing the item to display by default as an unfilled dotted line with a label of 30.5 (the average of the values that come before and after, 5 and 56, respectively). The look of the empty value is controlled by the EmptyValue property for the series Appearance. Other significant properties for the ChartSeriesItem are: ActiveRegion: Contains HTML Attributes, ToolTip and URL. These properties support making image maps. Appearance: This contains common visual properties Border, Corners, FillStyle and Visible. In addition, the property Exploded is specific to the Pie chart type. When true, Exploded displays a chart series item (a pie slice in this context) as slightly separated from the rest of the pie. Label: Use this property to override the default item label. By default the numeric values of each data point are displayed on the chart. Here you can use the Label.TextBlock.Text to add a more specific description of the data point. You have full control over each label's HTML characteristics with the Label.ActiveRegion. Control visual display and layout using the Label.Appearance property.
https://docs.telerik.com/devtools/aspnet-ajax/controls/chart/understanding-radchart-elements/series-items
2018-10-15T10:53:41
CC-MAIN-2018-43
1539583509170.2
[array(['images/radchart-understandingelements031.png', 'Empty Property'], dtype=object) array(['images/radchart-understandingelements018.png', 'ChartSeriesItem Collection'], dtype=object) ]
docs.telerik.com
Active Directory Replication Concepts Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 Before designing site topology, become familiar with some Active Directory replication concepts. Connection object A connection object is an Active Directory object that represents a replication connection from one domain controller to another. A domain controller is a member of a single site and is represented in the site by a server object in Active Directory. Connection objects created by the KCC appear as <automatically generated> and are considered adequate under normal operating conditions. Connection objects created by an administrator are manually created connection objects. A manually created connection object is identified by the name assigned by the administrator when it was created. When you modify an <automatically generated> connection object, you convert it into an administratively modified connection object and the object appears in the form of a GUID. The KCC does not make changes to manual or modified connection objects. KCC The KCC (Knowledge Consistency Checker) is a built-in process that runs on each domain controller and generates the replication topology. Within a site, the connections between domain controllers are created automatically by the KCC and replicate in both directions; for more information, see the Windows Server 2003 Technical Reference on the Microsoft Web site. For replication to occur between Seattle and Los Angeles, one domain controller in each site has a replication agreement with a domain controller in the other site. Between sites, these replication partners replicate in both directions according to configuration settings for a site link object that represents the physical WAN connecting the two sites. In Figure 3.3, domain controllers DC-3 and DC-4 are replication partners between the Seattle and Los Angeles sites. Figure 3.3 Intersite and Intrasite Replication Connections Failover functionality Sites ensure that replication is routed around network failures and offline domain controllers. The KCC runs at specified intervals to adjust the replication topology for changes that occur in Active Directory. In Microsoft® Windows® 2000, intersite replication of the directory partitions (domain, configuration, and schema) between domain controllers in different sites is performed by the bridgehead servers (one per directory partition) in those sites. In Windows Server 2003, bridgehead servers are selected automatically. However, you can run the Active Directory Load Balancing tool (Adlb.exe) to rebalance the load each time a change occurs in the site topology or in the number of domain controllers in the site. In addition, Adlb can stagger schedules so that the outbound replication load for each domain controller is spread out evenly across time. Consider using Adlb to balance replication traffic between the Windows Server 2003–based domain controllers when they are replicating to more than 20 other sites hosting the same domain. For more information about using Adlb.exe and managing environments that have 100 or more branch sites, see the Windows Server 2003 Active Directory Branch Office Guide on the Microsoft Web site. Global catalog server A global catalog server is a domain controller that stores information about all objects in the forest so that applications can search Active Directory; global catalog servers are discussed further later in this chapter.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc756899(v=ws.10)
2018-10-15T10:47:33
CC-MAIN-2018-43
1539583509170.2
[array(['images%5ccc756899.d4c3a58b-c04c-4c45-9581-b1cc338f9271(ws.10', 'Intersite and Intrasite Replication Connections Intersite and Intrasite Replication Connections'], dtype=object) ]
docs.microsoft.com
UI properties You can customize the following theming properties by navigating to System Properties > UI Properties: icons used in the activity formatter; background colors for Additional Comments and Work Notes; button placement on forms; icons used in the Task Activity formatter; background colors for Incident Additional Comments and Work Notes. Related Tasks: Configure logo, colors, and system defaults for UI16; Configure logo, colors, and system defaults; Change survey question header colors. Related Concepts: Menu categories; Business service map properties; CSS properties; CSS theme support. Related Reference: Helsinki CSS class support.
https://docs.servicenow.com/bundle/helsinki-platform-user-interface/page/administer/navigation-and-ui/concept/c_UIProperties.html
2018-10-15T11:02:19
CC-MAIN-2018-43
1539583509170.2
[]
docs.servicenow.com
Amazon SWF Metrics and Dimensions Amazon SWF sends data points to CloudWatch for several metrics. Some of the Amazon SWF metrics for CloudWatch are time intervals, always measured in milliseconds. These metrics generally correspond to stages of your workflow execution for which you can set workflow and activity timeouts, and have similar names. For example, the DecisionTaskStartToCloseTime metric measures the time it took for the decision task to complete after it began executing, which is the same time period for which you can set a DecisionTaskStartToCloseTimeout value. Other Amazon SWF metrics report results as a count. For example, WorkflowsCanceled, records a result as either one or zero, indicating whether or not the workflow was canceled. A value of zero does not indicate that the metric was not reported, only that the condition described by the metric did not occur. For count metrics, minimum and maximum will always be either zero or one, but average will be a value ranging from zero to one. For more information, see Viewing Amazon SWF Metrics for CloudWatch using the AWS Management Console; in the Amazon Simple Workflow Service Developer Guide. Workflow Metrics The AWS/SWF namespace includes the following metrics for Amazon SWF workflows: Dimensions for Amazon SWF Workflow Metrics Activity Metrics The AWS/SWF namespace includes the following metrics for Amazon SWF activities:
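The metric and dimension tables from the original page are not reproduced in this extract, but the same values can be pulled programmatically from CloudWatch. The sketch below uses boto3 to fetch one SWF time-interval metric; the dimension names used here (Domain, WorkflowTypeName, WorkflowTypeVersion) are typical for workflow metrics but are assumptions to verify against the current AWS documentation.

# Sketch: fetch the average DecisionTaskStartToCloseTime (milliseconds) for one workflow type.
# Dimension names are assumptions; check them against the AWS/SWF metric tables.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=24)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/SWF",
    MetricName="DecisionTaskStartToCloseTime",
    Dimensions=[
        {"Name": "Domain", "Value": "my-domain"},
        {"Name": "WorkflowTypeName", "Value": "MyWorkflow"},
        {"Name": "WorkflowTypeVersion", "Value": "1.0"},
    ],
    StartTime=start,
    EndTime=end,
    Period=3600,            # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])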
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/swf-metricscollected.html
2017-08-16T17:35:30
CC-MAIN-2017-34
1502886102309.55
[]
docs.aws.amazon.com
Arrange Cards to Build a Flow (Design Center) Flows are composed of components that are chained together, where each component sends its output to the next component. Each of these components is represented in Design Center as a card. Each flow must have a trigger that executes the flow. The flow can be triggered from an outside event (like an HTTP request) or by a Scheduler that schedules the flow to execute at a certain time. As an exception to this rule, a flow that is meant to be referenced by another doesn’t need to have its own trigger. Add a component To add a first component to your flow, select from the menu that appears when you first create the flow. To add a component in the next position of your flow, click on the plus sign that’s at the end , then select a component. To add a component between two existing ones, hover over the arrow in between to reveal a plus sign and click it, then select a component. Some tips for navigating the list of components: You can use the search box to find a component by name. A dropdown menu on the top lets you filter through categories of components. Click the information icon for a component. This icon is displayed when hovering over a component when applicable. It links to the Exchange page for the connector, where you can learn more about it. Working with Scopes Scopes like Try and For Each can be added to your flow like any other component. These scopes do not perform any actions on their own, they’re buckets in which you can place other components. By adding components inside a scope, their behavior is affected by the scope. To add a component inside a scope, click the plus sign that appears inside the scope. You can add as many components as you want inside a single scope, you can also nest scopes inside other scopes. Reorder Components You can change the order in which the components in your flow are executed without losing its configurations. To change the order, you drag the components and drop them onto another position in the flow.
https://docs.mulesoft.com/design-center/v/1.0/arrange-cards-flow-design-center
2017-08-16T17:27:51
CC-MAIN-2017-34
1502886102309.55
[]
docs.mulesoft.com
In addition to the provided use case code snippets, you can expand your options for working with the vRealize Automation REST API by using related tools and documentation. You can use the vRealize CloudClient to simplify your interaction with the vRealize Automation REST API. You can also use third party tools such as Chrome Developer Tools or Firebug to further expand your vRealize Automation REST API programming options. For a complete list and description of available vRealize Automation REST API service calls and their usage, see the Swagger documentation for the product.
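As a small illustration of the kind of exploration described above, the sketch below uses Python's requests library to call the vRealize Automation REST API directly. The endpoint paths, payload fields, and header formats shown here are assumptions for illustration only; confirm them against the Swagger documentation for your product version.

# Hypothetical sketch of calling the vRealize Automation REST API with requests.
# Endpoint paths and payload fields are assumptions; check the Swagger documentation.
import requests

VRA_HOST = "https://vra.example.com"  # placeholder appliance address

# Request a bearer token (path and fields assumed for illustration).
token_resp = requests.post(
    f"{VRA_HOST}/identity/api/tokens",
    json={"username": "user@example.com", "password": "secret", "tenant": "vsphere.local"},
    verify=False,  # only for lab setups with self-signed certificates
)
token_resp.raise_for_status()
token = token_resp.json()["id"]

# Call a service endpoint with the token (path assumed for illustration).
headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
resp = requests.get(
    f"{VRA_HOST}/catalog-service/api/consumer/catalogItems",
    headers=headers,
    verify=False,
)
resp.raise_for_status()
print(resp.json())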
https://docs.vmware.com/de/vRealize-Automation/7.2/com.vmware.vra.programming.doc/GUID-6EAF35EE-15A7-4DA5-9A5D-D1C4FC9FA11A.html
2017-08-16T17:41:17
CC-MAIN-2017-34
1502886102309.55
[]
docs.vmware.com
When ownership is being enforced (that is, with the user and/or group arguments), the if_missing argument no longer has any connection to which path(s) have ownership enforced. Instead, the paths are determined using either the newly-added archive.list function or the newly-added enforce_ownership_on argument. The new enforce_toplevel check can be turned off by setting enforce_toplevel to False. The tar_options and zip_options arguments have been deprecated in favor of a single options argument. The archive_format argument is now optional: the ending of the source argument is used to guess whether it is a tar, zip or rar file. If the archive_format cannot be guessed, then it will need to be specified, but in many cases it can now be omitted. A number of new arguments were also added; see the documentation for the archive.extracted state for more information. Additionally, the following changes have been made to the archive execution module: A new function (archive.list) has been added. This function lists the files/directories in an archive file, and supports a verbose argument that gives a more detailed breakdown of which paths are files, which are directories, and which paths are at the top level of the archive. A new function (archive.is_encrypted) has been added. This function will return True if the archive is a password-protected ZIP file, False if not. If the archive is not a ZIP file, an error will be raised. archive.cmd_unzip now supports passing a password, bringing it to feature parity with archive.unzip. Note that this is still not considered to be secure, and archive.unzip is recommended for dealing with password-protected ZIP archives. The extract_perms argument to archive.unzip has changed from False to True. Default config value changes: git_pillar_ssl_verify: Changed from False to True. winrepo_ssl_verify: Changed from False to True. The loadavg beacon now outputs averages as integers instead of strings. (Via issue #31124.) Utility modules (accessed via __utils__ and placed in salt://_utils/) are now able to be synced to the master, making it easier to use them in custom runners. A saltutil.sync_utils function has been added to the saltutil runner to facilitate the syncing of utility modules to the master. With the saltutil.sync_utils runner, it is now easier to get utility modules synced to the correct location on the Master so that they are available in execution modules called from Pillar SLS files. The connection is established via the NAPALM proxy. In the current release, the following modules were included: NAPALM grains - Select network devices based on their characteristics; NET execution module - Networking basic features; NTP execution module; BGP execution module; Routes execution module; SNMP execution module; Users execution module; Probes execution module; NTP peers management state; SNMP configuration management state; Users management state. Beginning with 2016.11.0, there is a proxy minion that can be used to configure Cisco NX-OS devices over SSH. Proxy Minion Execution Module State Module Beginning with 2016.11.0, there is a proxy minion to use the Cisco Network Services Orchestrator as a proxy minion.
Proxy Minion Execution Module State Module New module documentation in this release: salt.beacons.haproxy, salt.beacons.status, salt.cloud.clouds.azurearm, salt.modules.boto_cloudwatch_event, salt.modules.celery, salt.modules.ceph, salt.modules.influx08, salt.modules.inspectlib.entities, salt.modules.inspectlib.fsdb, salt.modules.inspectlib.kiwiproc, salt.modules.inspector, salt.modules.libcloud_dns, salt.modules.openstack_mng, salt.modules.servicenow, salt.modules.testinframod, salt.modules.win_lgpo, salt.modules.win_pki, salt.modules.win_psget, salt.modules.win_snmp, salt.modules.xbpspkg, salt.output.pony, salt.returners.zabbix_return, salt.sdb.env, salt.states.boto_cloudwatch_event, salt.states.csf, salt.states.ethtool, salt.states.influxdb08_database, salt.states.influxdb08_user, salt.states.libcloud_dns, salt.states.snapper, salt.states.testinframod, salt.states.win_lgpo, salt.states.win_pki, salt.states.win_snmp. The salt.minion.parse_args_and_kwargs function has been removed. Please use the salt.minion.load_args_and_kwargs function instead. The vsphere cloud driver has been removed. Please use the vmware cloud driver instead. The private_ip option in the linode cloud driver is deprecated and has been removed. Use the assign_private_ip option instead. The create_dns_record and delete_dns_record functions are deprecated and have been removed from the digital_ocean driver. Use the post_dns_record function instead. The blockdev execution module had four functions removed. The contains_regex_multiline function was removed; use file.search instead. Additional options passed to file.grep should be passed one at a time. Please do not pass more than one in a single argument. The lxc execution module has the following changes: The run_cmd function was removed. Use lxc.run instead. The nic argument was removed from the lxc.init function. Use network_profile instead. The clone argument was removed from the lxc.init function. Use clone_from instead. Passwords passed to the lxc.init function will be assumed to be hashed, unless password_encrypted=False. The restart argument for lxc.start was removed. Use lxc.restart instead. Use the following functions instead: for read_key use read_value; for set_key use set_value; for create_key use set_value with no vname and no vdata; delete_key is likewise deprecated. The compact outputter has been removed. Set state_verbose to False instead. The grains.cache runner no longer accepts outputter or minion as keyword arguments. Users will need to specify an outputter using the --out option. tgt is replacing the minion kwarg. The fileserver runner no longer accepts the outputter keyword argument. Users will need to specify an outputter using the --out option. The jobs runner no longer accepts the outputter keyword argument. Users will need to specify an outputter using the --out option. In the virt runner module: The hyper kwarg was removed from the init, list, and query functions; use the host option instead. The next_hyper function was removed; use the next_host function instead. The hyper_info function was removed; use the host_info function instead. started: replaced by the running state. cloned: replaced by the present state (use the clone_from argument). jid_dir and jid_load were removed from salt.utils.jid. The jid_dir functionality for job_cache management was moved to the local_cache returner. jid_load data is now retrieved from the master_job_cache. The ip_in_subnet function in salt.utils.network.py has been removed. Use the in_subnet function instead. The iam utils module had two functions removed: salt.utils.iam.get_iam_region and salt.utils.iam.get_iam_metadata, in favor of the aws utils functions salt.utils.aws.get_region_from_metadata and salt.utils.aws.creds, respectively.
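As a quick illustration of the new archive.list execution function described above, the sketch below calls it from the master through Salt's Python client API. The minion ID and archive path are placeholders, and this is only an illustrative sketch rather than text from the original release notes.

# Sketch: call the archive.list function (added in 2016.11.0) from the master.
# 'myminion' and the archive path are placeholders.
import salt.client

local = salt.client.LocalClient()
result = local.cmd(
    "myminion",
    "archive.list",
    ["/tmp/example.tar.gz"],
    kwarg={"verbose": True},  # break results down into files, dirs and top-level paths
)
print(result)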
https://docs.saltstack.com/en/latest/topics/releases/2016.11.0.html
2018-12-10T05:08:44
CC-MAIN-2018-51
1544376823303.28
[]
docs.saltstack.com
Step 2 - Prepare the Page In this step you'll prepare and configure your front page, together with its layout and templates. Create Pages. Click Edit and you will see that the Home Page has only one zone with the block. The design for the website you are making needs a layout with two zones: a main column and a narrower sidebar. eZ Enterprise provides only a one-zone default layout, so you need to create a new one. A zone must have the data-ez-zone-id attribute (lines 2 and 19), and a block must have the data-ez-block-id attribute (lines 7 and 20). With these three elements: configuration, thumbnail and template, the new layout is ready to use. Change Home Page layout Now you can change the Home Page to use the new layout. In Page mode edit Home, open the options menu and select Switch layout. Choose the new layout called "Main section with sidebar on the right". The empty zones you defined in the template will be visible in the editor. Tip: If the new layout is not available when editing the Page, you may need to clear the cache (using php app/console cache:clear) and/or reload the app. You can also remove the block with "eZ Studio". Hover over it and select the trash icon from the menu. Publish the Home Page. You will notice that it still has some additional text information. This is because the looks of a Page are controlled by two separate template files, and you have only prepared one of those. The sidebar.html.twig file defines how zones are organized and how content is displayed in them. But you also need a general template file that will be used for every Page, regardless of its layout. Add this new template, app/Resources/views/full/landing_page.html.twig: This template simply renders the page content. If there is any additional content or formatting you would like to apply to every Page, it should be placed in this template. Now you need to tell the app to use this template to render Pages. Edit the app/config/views.yml file and add the following code under the full: key: After adding this template you can check the new Page. The part between menu and footer should be empty, because you have not added any content to it yet.
https://ez-systems-developer-documentation.readthedocs-hosted.com/en/latest/tutorials/enterprise_beginner/2_prepare_the_landing_page/
2018-12-10T04:33:07
CC-MAIN-2018-51
1544376823303.28
[array(['../img/enterprise_tut_starting_point.png', "It's a Dog's World - Starting point It's a Dog's World - Starting point"], dtype=object) array(['../img/enterprise_tut_home_is_an_lp.png', 'Home Content item is a Landing Page'], dtype=object) array(['../img/enterprise_tut_empty_single_block.png', 'Empty Page with default layout'], dtype=object) array(['../img/enterprise_tut_select_layout.png', 'Select layout window'], dtype=object) array(['../img/enterprise_tut_new_layout.png', 'Empty page with new layout'], dtype=object) array(['../img/enterprise_tut_empty_page.png', 'Empty Page'], dtype=object)]
ez-systems-developer-documentation.readthedocs-hosted.com
Specify the package version (e.g. ‘1.0.1’) in designatedashboard/__init__.py. Copy panel plugin files into your Horizon config. These files can be found in designatedashboard/enabled and should be copied to /usr/share/openstack-dashboard/openstack_dashboard/local/enabled or the equivalent directory for your openstack-dashboard install. Make sure your keystone catalog contains endpoints for service type ‘dns’. If no such endpoints are found, the designatedashboard panels will not render. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/designate-dashboard/latest/readme.html
2018-12-10T04:34:35
CC-MAIN-2018-51
1544376823303.28
[]
docs.openstack.org
Enabling and disabling chat Conference participants using a variety of clients can chat and share links with other participants who are in the same Virtual Meeting Room or Virtual Auditorium. Supported clients include Microsoft Lync, Skype, and Skype for Business. Chat is also natively supported in Pexip's own Infinity Connect suite. Microsoft Skype for Business / Lync and Infinity Connect clients can also chat when calling each other directly via the Pexip Distributed Gateway. You can enable or disable chat on a platform-wide basis, and it is enabled by default. To disable or re-enable this feature, go to the global platform settings and, from within the relevant section, select or deselect Enable chat. You can also override the global setting on a per-conference basis if required. To do this, go to the Virtual Meeting Room or Virtual Auditorium configuration and choose one of the Enable chat options: Use global chat setting (as per the global configuration setting), Yes (chat is enabled), or No (chat is disabled). Default: Use global chat setting. When chat is disabled, Infinity Connect clients do not show the chat window. Providing chat to participants using unsupported clients Conference participants who are not using one of the supported clients will not be able to read or send chat messages. However, if they have access to a web browser they can use the Infinity Connect web app to join the conference without video or audio. This will give them access to the chat room, as well as the ability to view and share presentations, view the participant list, and (if they are Host participants) control aspects of the conference. For more information on using and administering the Infinity Connect suite of clients, see Introduction to Infinity Connect.
https://docs.pexip.com/admin/enabling_chat.htm
2018-12-10T04:38:52
CC-MAIN-2018-51
1544376823303.28
[array(['../Resources/Images/admin_guide/chat_options.png', None], dtype=object) ]
docs.pexip.com
Step 3 - Creating a content list¶ The next thing you will extend in this tutorial is the top menu. You will add a "Content list" item under "Content". It will list all Content items existing in the Repository. You will be able to filter the list by Content Types using a drop-down menu. Add an event listener¶ The first step is to add an event listener. To register the listener as a service, add the following block to src/EzSystems/ExtendingTutorialBundle/Resources/config/services.yml. Place the block indented, under the services key: Then create a MyMenuListener.php file in src/EzSystems/ExtendingTutorialBundle/EventListener: This listener subscribes to the ConfigureMenuEvent::MAIN_MENU event (see line 14). Line 28 points to the new route that you need to add to the routing file. Add routing¶ Add the following block to src/EzSystems/ExtendingTutorialBundle/Resources/config/routing.yml: Create a controller¶ As you can see in the code above, the next step is creating a controller that will take care of the article list view. First, ensure that the controller is configured in services.yml. Add the following block (indented, under the services key) to that file: Then, in src/EzSystems/ExtendingTutorialBundle/Controller create a AllContentListController.php file: The highlighted line 41 indicates the template that will be used to display the list. Add a template¶ Finally, create an all_content_list.html.twig file in src/EzSystems/ExtendingTutorialBundle/Resources/views/list: Check results¶ Tip If you cannot see the results, clear the cache and reload the application. At this point you can go to the Back Office and under "Content" you will see the new "Content list" item. Select it and you will see the list of all Content items in the Repository.
https://ez-systems-developer-documentation.readthedocs-hosted.com/en/latest/tutorials/extending_admin_ui/3_creating_a_content_list/
2018-12-10T04:24:10
CC-MAIN-2018-51
1544376823303.28
[array(['../img/top_menu.png', 'Top menu'], dtype=object) array(['../img/content_list_unfiltered.png', 'Content list with unfiltered results Content list with unfiltered results'], dtype=object) ]
ez-systems-developer-documentation.readthedocs-hosted.com
Tips and tricks mSupply Client: Connecting to a different server It is unusual, but you may be in a situation where your mSupply client application needs to switch between more than one mSupply Server. Launch the mSupply client application by double-clicking on the mSupply client icon, then immediately hold down the Alt key until the mSupply Server connection dialog box appears: If you tick the Display this dialog at next startup tick-box, this dialog box will automatically be displayed on startup of the mSupply client application, and you will not need to do the double-click, Alt routine described above. There are three ways of selecting the mSupply server to connect to. These are accessed via the tabs: Recent, Available and Custom: Recent tab The Recent tab retains a list of all mSupply servers recently used. The list is sorted by alphabetical order. To connect to a server from this list, double-click on its name or select it and click the OK button. Available tab The mSupply Server includes a built-in TCP/IP broadcasting system that publishes by default the name of the mSupply Server databases available over the network. These names are listed on the Available tab of the connection dialog box. This list is sorted by order of appearance and is updated dynamically. To connect to a server from this list, double-click on its name or select it and click the OK button. Computer networks can be configured to stop dynamic publication of the database name on the network. In this case, you will need to manually configure the the connection on the Custom tab. Custom tab The Custom tab allows assigning a published server on the network using its IP address and database name. Database name: allows defining the name of the mSupply Server database. Network address: allows entering the IP address of the machine where the mSupply Server was launched. If two servers are operating simultaneously on the same machine, the IP address must be followed a colon and port number, for example: 192.168.92.104:19814. By default, the publishing port of a mSupply Server is 19813. This number can be modified in the Database Settings… under Client-server tab of mSupply server. If a database was selected in the Recent or Available tabs when you clicked on the Custom tab, these two fields display the corresponding information from that tab. Once the server details have been entered, clicking the OK button will connect you to the server. If you tick the Force the update of the local resources tick-box, it allows you systematic updating of the local resources on the client machine when it connects. As a rule, updating of the local resources is automatic on the remote machine each time it connects, when the structure of the database has been modified between two connections. Most of the time, this option is unnecessary. Nevertheless, in certain specific cases, it may be necessary to force the update. Emptying out a store and starting again... Sometimes, the actual stock situation in a store becomes so out of step with mSupply's records that you want to start again from scratch with a brand new stocktake. To do this, you will need to: - Confirm or delete all customer invoices with status = nw(New) or sg(Suggested). This should remove from stock all lines that have been made unavailable, but are still showing as in stock - see figure below. 
- Click on Customer - Click on Show Customer Invoices - Click on By Status - Click on All new and suggested - Double-click on the first displayed invoice - Click on Confirm (or, if you want to delete the CI, delete each of the lines, and then Click on Delete) - Click on OK & Next - Repeat steps 6 & 7 until all invoices are confirmed (or deleted). - Empty the mSupply store of all stock by means of an Inventory Adjustment based on an mSupply stocktake of all stock in the store - Create the stocktake - see figure below. - Click on Item - Click on Stocktakes - Click on New stocktake - Click OK (accept default filter values which includes all stock) - Set all actual quantities to zero ( 0) - see figure below. - Click once on the first stock line. The whole line will become highlighted. - Click once on the value in the 'Enter Quantity' column. The Enter Quantity value will become highlighted - Type 0, then press Tab. Repeat this for each line of the whole stocktake. You should be able to be faster than 2 stock lines per second, maybe a lot faster. At 2 stock lines per second, you can get through 1000 stock lines in less than 10 minutes… - Click on Create Inventory adjustments If you've got a lot of stock lines, this could take a while… - Import new stock as described in Importing items & stock Creating customers that are not visible to all the stores In a multi-store system, you will often be wanting to create new customers that are visible in only one store. By default, when new customers are created, they will be made visible to all stores in the system. If you actually want these new customers to be visible to only one store, you will need to edit the store visibility for that customer, and un-tick the visibility for all the other stores - a tedious exercise in a system with scores or hundreds of stores! For this reason, there is a solution: There is a store preference called Names created in this store not visible in other stores. To use it: - Edit the store preferences for the store that needs to be able to see the new customers and turn on the Names created in this store not visible in other stores store preference. - Make sure that you have user permission to create customers in this store. - Log in to the store - Create the new customers Previous: FAQ: Imprest Work Flow Back to beginning: Why mSupply?
http://docs.msupply.org.nz/faq:tipsandtricks
2018-12-10T04:27:27
CC-MAIN-2018-51
1544376823303.28
[]
docs.msupply.org.nz
Electronic invoices mSupply has a system that allows users to send electronic invoices to other users of mSupply. Possible uses include: - If you have a manufacturing unit and a separate warehouse, you can run two copies of mSupply and move stock from one location to another using electronic invoices. - If you have customers using mSupply, they can import a 100 line invoice in a few seconds, where entering manually might take thirty minutes to an hour. Setting up electronic invoices: Supplier: - In the Preferences of the copy of mSupply that is sending invoices, enter the supplier code that your customers will use for you. - If your customer(s) have an email address and you want to send the electronic invoices via email, then enter their email address in the customer details window. Customer - For each item that will be received from a particular supplier, enter a quotation for the item. This is most easily done from the quotes tab of the supplier details window. - For each quotation, enter the supplier code for that item. This means your own code for the item does not need to match the supplier code. (Note that you do not have to enter quotation prices for the electronic invoice system- just the item code). Steps to use electronic invoices - Supplier creates an invoice - Supplier chooses customer | export invoice to create an invoice - Supplier sends the invoice to the customer (If the customer has an e-mail address entered, the invoice can be automatically attached to an email, or, the file produced can be attached to an e-mail using your normal e-mail client. Alternatively it may be transferred on removable medium (floppy, Zip, CD etc..) - Customer receives electronic invoice - Customer chooses Supplier | import invoice to import the invoice. - Customer checks the supplier invoice that is created against other documentation and against actual goods received. Note: - The invoice can still be edited after import - The standard rules for calculating selling prices are used. - If you want your suppliers to send you electronic invoices, persuade them to buy mSupply! Alternatively, we can supply the mSupply invoice format to their software vendor for inclusion in their own software. - We recommend you perform a trial of the system on a backup data file before using in a production situation. Previous: Backorders Next: Transferring goods to another Store
http://docs.msupply.org.nz/issuing_goods:electronic_invoices
2018-12-10T03:44:59
CC-MAIN-2018-51
1544376823303.28
[]
docs.msupply.org.nz
Introduction OpenShift Online is a container-based platform that can be accessed as a public cloud service. It is designed for individuals and teams that want to deploy a functional Kubernetes cluster on their local computers and use remote services as if they were running locally. Some of the Bitnami containers have been developed as non-root containers, following the Red Hat guidelines for deploying containers on OpenShift. You can find a wide selection of Bitnami containers in the Red Hat Container Catalog. This tutorial will walk you through the process of using OpenShift Online to deploy the Bitnami Docker image for PHP-FPM on an OpenShift cluster. Since OpenShift Online offers a free starter plan, you can learn and experiment with OpenShift and Bitnami containers without worrying about being billed. Overview PHP-FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation with some additional features that are useful for sites of any size, especially for busier sites. This tutorial shows you how to obtain the Bitnami Docker image for PHP-FPM from the Red Hat Container Catalog and how to push it to and deploy it on an OpenShift cluster using OpenShift Online. But PHP-FPM it is just an example: you can find more Bitnami containers in the Red Hat Container Catalog. They are available to run in the OpenShift deployment model of your choice. Here are the steps to follow in this tutorial: - Step 1: Register with OpenShift Online - Step 2: Create a new project - Step 3: Obtain the Bitnami PHP-FPM image from the Red Hat Container Catalog and pull it to your OpenShift project - Step 4: Deploy the Bitnami Docker PHP-FPM on OpenShift - Step 5: Scale up (and down) and perform rolling updates (and rollbacks) This tutorial assumes that you have basic understanding of Docker containers and Kubernetes. Step 1: Register with OpenShift Online - At the end of this step, you will have signed up for OpenShift Online with a “Starter Plan” free account. If you already have a Red Hat or an OpenShift account, you may skip this step. - Begin by creating an OpenShift account by browsing and clicking “Learn More” in the Red Hat OpenShift Online deployment option. You will be redirected to the “Plans & Pricing” page. Select the “Sign up for free” option of the “Starter Plan”: In the resulting screen, click the “Sign up for OpenShift Online” link: If you already have an OpenShift account, enter your email address or Red Hat login ID and your password. If you don’t, click the “Create one now” link: To register for OpenShift Online, enter the required information in the form, accept the terms and conditions, and click “Create my account” to proceed. The next step is to verify your email address. Once you have done so, to confirm your selection and to finish the registration, click “Confirm Subscription”. Congratulations! Your subscription is now active and ready to operate on OpenShift Online! Step 2: Create a new project - At the end of this step, you will have created a new project on OpenShift Online. To start managing containers and creating deployments, you should have, at least, one project created. (Remember that if you chose the “Starter Plan”, the maximum number of projects you can create is just one). Follow these steps to create a new project on OpenShift Online: Log in to OpenShift Online. Then click “Open Web Console”: To create a new project, click the “+ Create Project” button located in the right-side menu of the screen. 
This opens a window in which you have to enter a name and a description of the project: Once the project is created, you will see it in the list of available projects: If you click on the recently created project, you will see the “Overview” screen that shows the different options you have. In this case, “Deploy Image” will be the option selected. To do so it is necessary to push an image to our project. Check the step 3 to learn how to provision your project with a Bitnami PHP-FPM image. Step 3: Obtain the Bitnami PHP-FPM image from the Red Hat Container Catalog and pull it to your OpenShift project - At the end of this step, you will have the command that allows you to pull a Bitnami PHP-FPM image to your OpenShift project and the image ready to deploy from the OpenShift Web Console. To deploy an image in your project, first you need to pull it to OpenShift. In the Red Hat Container Catalog you can search for the container image you want to deploy and find the specific command you need to pull it to OpenShift Online. Follow these instructions to learn how to obtain a Bitnami Docker image for PHP-FPM. - In the Red Hat Container Catalog main page, enter “Bitnami PHP-FPM” in the search box to find the Bitnami PHP-FPM images available. Once you have selected the image you want to deploy, in the resulting screen select the “Get Latest Image” tab and choose “Red Hat OpenShift” from the “Choose your platform” dropdown menu. This displays the oc import-image command for this image: Return to the OpenShift Web Console, click your username and select the “Copy Login Command”: Open a terminal window on your local system and paste the login command. If you have more than one project, execute oc project PROJECTNAME. (Remember that PROJECTNAME is a placeholder for the project where you want to deploy the PHP-FPM image). Now it is time to add the image to your project “Image Streams”. The image will be available to use in your deployments. To do so: - Execute the oc import-image IMAGE/TAG –confirm command you have obtained in the Red Hat Container Catalog. The image will be pulled to your OpenShift project. TIP: The output of this command will show you the attributes and variables defined for the image. Step 4: Deploy the Bitnami Docker PHP-FPM on OpenShift - At the end of this step, you will have a Bitnami PHP-FPM image deployed and running on a OpenShift cluster using OpenShift Online. Once you have imported the image to OpenShift Online, you can manage and deploy it directly from the Web Console. Follow these steps: - On the OpenShift Online Web Console, navigate to your project and click “Builds -> Images”. In the resulting screen, you will see the “Image Streams” screen with a list of all the images that have been pulled to your project. Click “php-fpm-redhat” to see more details: To deploy the image, back to the “Overview” screen, click “Deploy Image” to start the deployment of PHP-FPM: In the “Deploy Image” screen, select “Image Stream Tag”, your project, the image stream, and tag: To set environment variables just scroll down. For demo purposes, this deployment is set to allow an empty password. This is not recommended in production environments. Click “Deploy” to proceed. Once the cluster is successfully deployed, you can see it in the “Overview” screen. Click on it to see the details of the deployment configuration such as the number of pods, network parameters or memory usage: Congratulations! You have a OpenShift cluster up and running with one PHP-FPM container running in one pod. 
Step 5: Scale up (and down) and perform rolling updates (and rollbacks) - At the end of this step, you will have explored some of the possibilities that OpenShift Online offers you to manage and automate your clusters. In this example, you will learn how to scale up and down and how to perform updates on your deployments. Now that your PHP-FPM cluster is deployed, it is time to explore some of the infinite possibilities that OpenShift offers to manage your Kubernetes clusters easily and quickly. First let’s see how to scale up and down your running deployment. Scale up (and down) You can scale up and down your deployment from the OpenShift Online Web Console at any moment without the need of a terminal. Scale up - To scale up your deployment, in the “Overview” screen, click on your deployment to access its information. In the resulting screen, click the “Configuration” tab and you will see the deployment configuration. Edit the number of replicas (pods) you want to add to the cluster: TIP: In the “Configuration” tab, you can select the option “Add Autoscaler” on the right side of the screen. Use this to automate the deployment scaling. You can specify both the minimum and the maximum number of replicas that your deployment can add depending on the CPU usage. If you return to the “Overview” screen, you see that your deployment has two replicas now. Click on it for more information: Scroll down to see the “Pods” section: Scale down You can scroll down from the same “Deployments” screen. To do so: - Edit the number of replicas or use the arrows to decrease the number of available pods in your cluster. - Once you confirm your selection, the deployment will be updated automatically. This process can take a few minutes. Perform rolling updates (and rollbacks) To show you how to perform rolling updates and rollbacks, we will add a new environment variable so we will have two different versions of the deployment. Follow these steps: Rolling updates Navigate to “Applications -> Deployments” and click the deployment you want to update. In the resulting screen, select the “Environment” tab: Scroll down to see the “Environment Variables” section. Click the “Add Value” link and enter the BITNAMI_APP_NAME variable. Add “php-fpm-cluster” as the value for that variable. Click “Save” to make the changes take effect. Your deployment will be automatically updated. To check the deployment versions, navigate to the left-side menu and click “Applications -> Deployments”. Click the deployment name to see all the versions available and a summary of the changes made: To see the new BITNAMI_APP_NAME variable, you can check its YAML file. To do so, select the latest version of the deployment and click “Actions”. Then select “Edit YAML”. Navigate to the containers environment variable specification. You will see the environment variables added from the Web Console: Rollbacks Rollbacks are equally simple. Just need to click on the version you want to return to and in the deployment details screen, click “Roll Back”. This displays a menu with different setting options. Once you have selected the settings, click “Roll Back” to create a new deployment with the same settings as the selected version: When you view the YAML file to check the environment variables of the new deployment, you will see that revision #2 will have been superseded by a copy of revision #1, but this time it is labelled as revision #3. Useful links To learn more about the topics discussed in this guide, use the links below:
https://docs.bitnami.com/containers/how-to/get-started-openshift/
2018-12-10T05:39:34
CC-MAIN-2018-51
1544376823303.28
[]
docs.bitnami.com
Regular expressions for finding text You can perform sophisticated find and replace operations in Microsoft Expression Web by using regular expressions. Regular expressions are useful when you do not know the exact text or code you are looking for, or when you are looking for all occurrences of strings of text or code with one or more similarities. A regular expression is a pattern of text that describes one or more variations of text or code that you want to find. A regular expression consists of specific characters — for example, the letters "a" through "z" — and special characters that describe the pattern of text — for example, an asterisk (*). For example, to find all variations of "page" in your website, you can search for "page*." If you do so, Expression Web finds all instances of "page," "pages," "pager," and any other words that begin with "page" in your website. When you use regular expressions in your searches, there are specific rules that control which combination of characters perform specific matches. Each regular expression or combination of regular expressions is referred to as syntax. You can use multiple regular expressions in one syntax to precisely target your search. To use regular expressions, see Find and replace text and code. Regular expressions syntax See also Concepts Find and replace text and code Set HTML rules for finding text
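For readers who want to experiment with the same idea outside Expression Web, Python's re module can be used; note that standard regular-expression syntax differs in some details from the wildcard-style example above (a bare * is a quantifier, not a wildcard), so this is only an illustrative sketch.

# Illustrative sketch: find all words beginning with "page" using standard regex syntax.
# In standard regexes "page*" would mean "pag followed by zero or more e's", so the
# pattern below uses \w* to reproduce the "page, pages, pager" behaviour described above.
import re

text = "This page lists pages and a pager utility on the pageant's pagoda."
matches = re.findall(r"\bpage\w*", text)
print(matches)  # ['page', 'pages', 'pager', 'pageant']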
https://docs.microsoft.com/en-us/previous-versions/visualstudio/design-tools/expression-studio-3/cc295435(v=expression.10)
2018-12-10T04:33:46
CC-MAIN-2018-51
1544376823303.28
[]
docs.microsoft.com
This tutorial demonstrates using the various HTTP modules available in Salt. These modules wrap the Python tornado, urllib2, and requests libraries, extending them in a manner that is more consistent with Salt workflows. salt.utils.httpLibrary¶ This library forms the core of the HTTP modules. Since it is designed to be used from the minion as an execution module, in addition to the master as a runner, it was abstracted into this multi-use library. This library can also be imported by 3rd-party programs wishing to take advantage of its extended functionality. Core functionality of the execution, state, and runner modules is derived from this library, so common usages between them are described here. Documentation specific to each module is described below. This library can be imported with: import salt.utils.http This library can make use of either tornado, which is required by Salt, urllib2, which ships with Python, or requests, which can be installed separately. By default, tornado will be used. In order to switch to urllib2, set the following variable: backend: urllib2 In order to switch to requests, set the following variable: backend: requests This can be set in the master or minion configuration file, or passed as an option directly to any http.query() functions. salt.utils.http.query()¶ This function forms a basic query, but with some add-ons not present in the tornado, urllib2, and requests libraries. Not all functionality currently available in these libraries has been added, but can be in future iterations. A basic query can be performed by calling this function with no more than a single URL: salt.utils.http.query('') By default the query will be performed with a GET method. The method can be overridden with the method argument: salt.utils.http.query('', 'DELETE') When using the POST method (and others, such as PUT), extra data is usually sent as well. This data can be sent directly (would be URL encoded when necessary), or in whatever format is required by the remote server (XML, JSON, plain text, etc). salt.utils.http.query( '', method='POST', data=json.dumps(mydict) ) Bear in mind that the data must be sent pre-formatted; this function will not format it for you. However, a templated file stored on the local system may be passed through, along with variables to populate it with. To pass through only the file (untemplated): salt.utils.http.query( '', method='POST', data_file='/srv/salt/somefile.xml' ) To pass through a file that contains jinja + yaml templating (the default): salt.utils.http.query( '', method='POST', data_file='/srv/salt/somefile.jinja', data_render=True, template_dict={'key1': 'value1', 'key2': 'value2'} ) To pass through a file that contains mako templating: salt.utils.http.query( '', method='POST', data_file='/srv/salt/somefile.mako', data_render=True, data_renderer='mako', template_dict={'key1': 'value1', 'key2': 'value2'} ) Because this function uses Salt's own rendering system, any Salt renderer can be used. Because Salt's renderer requires __opts__ to be set, an opts dictionary should be passed in. If it is not, then the default __opts__ values for the node type (master or minion) will be used. Because this library is intended primarily for use by minions, the default node type is minion. However, this can be changed to master if necessary. 
salt.utils.http.query( '', method='POST', data_file='/srv/salt/somefile.jinja', data_render=True, template_dict={'key1': 'value1', 'key2': 'value2'}, opts=__opts__ ) salt.utils.http.query( '', method='POST', data_file='/srv/salt/somefile.jinja', data_render=True, template_dict={'key1': 'value1', 'key2': 'value2'}, node='master' ) Headers may also be passed through, either as a header_list, a header_dict, or as a header_file. As with the data_file, the header_file may also be templated. Take note that because HTTP headers are normally syntactically-correct YAML, they will automatically be imported as an a Python dict. salt.utils.http.query( '', method='POST', header_file='/srv/salt/headers.jinja', header_render=True, header_renderer='jinja', template_dict={'key1': 'value1', 'key2': 'value2'} ) Because much of the data that would be templated between headers and data may be the same, the template_dict is the same for both. Correcting possible variable name collisions is up to the user. The query() function supports basic HTTP authentication. A username and password may be passed in as username and password, respectively. salt.utils.http.query( '', username='larry', password=`5700g3543v4r`, ) If the tornado backend is used ( tornado is the default), proxy information configured in proxy_host, proxy_port, proxy_username, proxy_password and no_proxy from the __opts__ dictionary will be used. Normally these are set in the minion configuration file. proxy_host: proxy.my-domain proxy_port: 31337 proxy_username: charon proxy_password: obolus no_proxy: ['127.0.0.1', 'localhost'] salt.utils.http.query( '', opts=__opts__, backend='tornado' ) Note Return data encoding If decode is set to True, query() will attempt to decode the return data. decode_type defaults to auto. Set it to a specific encoding, xml, for example, to override autodetection. Because Salt's http library was designed to be used with REST interfaces, query() will attempt to decode the data received from the remote server when decode is set to True. First it will check the Content-type header to try and find references to XML. If it does not find any, it will look for references to JSON. If it does not find any, it will fall back to plain text, which will not be decoded. JSON data is translated into a dict using Python's built-in json library. XML is translated using salt.utils.xml_util, which will use Python's built-in XML libraries to attempt to convert the XML into a dict. In order to force either JSON or XML decoding, the decode_type may be set: salt.utils.http.query( '', decode_type='xml' ) Once translated, the return dict from query() will include a dict called dict. If the data is not to be translated using one of these methods, decoding may be turned off. salt.utils.http.query( '', decode=False ) If decoding is turned on, and references to JSON or XML cannot be found, then this module will default to plain text, and return the undecoded data as text (even if text is set to False; see below). The query() function can return the HTTP status code, headers, and/or text as required. However, each must individually be turned on. salt.utils.http.query( '', status=True, headers=True, text=True ) The return from these will be found in the return dict as status, headers and text, respectively. It is possible to write either the return data or headers to files, as soon as the response is received from the server, but specifying file locations via the text_out or headers_out arguments. 
text and headers do not need to be returned to the user in order to do this. salt.utils.http.query( '', text=False, headers=False, text_out='/path/to/url_download.txt', headers_out='/path/to/headers_download.txt', ) By default, this function will verify SSL certificates. However, for testing or debugging purposes, SSL verification can be turned off. salt.utils.http.query( '', verify_ssl=False, ) The requests library has its own method of detecting which CA (certificate authority) bundle file to use. Usually this is implemented by the packager for the specific operating system distribution that you are using. However, urllib2 requires a little more work under the hood. By default, Salt will try to auto-detect the location of this file. However, if it is not in an expected location, or a different path needs to be specified, it may be done so using the ca_bundle variable. salt.utils.http.query( '', ca_bundle='/path/to/ca_bundle.pem', ) The update_ca_bundle() function can be used to update the bundle file at a specified location. If the target location is not specified, then it will attempt to auto-detect the location of the bundle file. If the URL to download the bundle from does not exist, a bundle will be downloaded from the cURL website. CAUTION: The target and the source should always be specified! Failure to specify the target may result in the file being written to the wrong location on the local system. Failure to specify the source may cause the upstream URL to receive excess unnecessary traffic, and may cause a file to be download which is hazardous or does not meet the needs of the user. salt.utils.http.update_ca_bundle( target='/path/to/ca-bundle.crt', source='', opts=__opts__, ) The opts parameter should also always be specified. If it is, then the target and the source may be specified in the relevant configuration file (master or minion) as ca_bundle and ca_bundle_url, respectively. ca_bundle: /path/to/ca-bundle.crt ca_bundle_url: If Salt is unable to auto-detect the location of the CA bundle, it will raise an error. The update_ca_bundle() function can also be passed a string or a list of strings which represent files on the local system, which should be appended (in the specified order) to the end of the CA bundle file. This is useful in environments where private certs need to be made available, and are not otherwise reasonable to add to the bundle file. salt.utils.http.update_ca_bundle( opts=__opts__, merge_files=[ '/etc/ssl/private_cert_1.pem', '/etc/ssl/private_cert_2.pem', '/etc/ssl/private_cert_3.pem', ] ) This function may be run in test mode. This mode will perform all work up until the actual HTTP request. By default, instead of performing the request, an empty dict will be returned. Using this function with TRACE logging turned on will reveal the contents of the headers and POST data to be sent. Rather than returning an empty dict, an alternate test_url may be passed in. If this is detected, then test mode will replace the url with the test_url, set test to True in the return data, and perform the rest of the requested operations as usual. This allows a custom, non-destructive URL to be used for testing when necessary. The http execution module is a very thin wrapper around the salt.utils.http library. The opts can be passed through as well, but if they are not specified, the minion defaults will be used as necessary. 
Because passing complete data structures from the command line can be tricky at best and dangerous (in terms of execution injection attacks) at worse, the data_file, and header_file are likely to see more use here. All methods for the library are available in the execution module, as kwargs. salt myminion http.query method=POST \ username='larry' password='5700g3543v4r' headers=True text=True \ status=True decode_type=xml data_render=True \ header_file=/tmp/headers.txt data_file=/tmp/data.txt \ header_render=True cookies=True persist_session=True Like the execution module, the http runner module is a very thin wrapper around the salt.utils.http library. The only significant difference is that because runners execute on the master instead of a minion, a target is not required, and default opts will be derived from the master config, rather than the minion config. All methods for the library are available in the runner module, as kwargs. salt-run http.query method=POST \ username='larry' password='5700g3543v4r' headers=True text=True \ status=True decode_type=xml data_render=True \ header_file=/tmp/headers.txt data_file=/tmp/data.txt \ header_render=True cookies=True persist_session=True The state module is a wrapper around the runner module, which applies stateful logic to a query. All kwargs as listed above are specified as usual in state files, but two more kwargs are available to apply stateful logic. A required parameter is match, which specifies a pattern to look for in the return text. By default, this will perform a string comparison of looking for the value of match in the return text. In Python terms this looks like: if match in html_text: return True If more complex pattern matching is required, a regular expression can be used by specifying a match_type. By default this is set to string, but it can be manually set to pcre instead. Please note that despite the name, this will use Python's re.search() rather than re.match(). Therefore, the following states are valid:: http.query: - match: 'SUCCESS' - username: 'larry' - password: '5700g3543v4r' - data_render: True - header_file: /tmp/headers.txt - data_file: /tmp/data.txt - header_render: True - cookies: True - persist_session: True: http.query: - match_type: pcre - match: '(?i)succe[ss|ed]' - username: 'larry' - password: '5700g3543v4r' - data_render: True - header_file: /tmp/headers.txt - data_file: /tmp/data.txt - header_render: True - cookies: True - persist_session: True In addition to, or instead of a match pattern, the status code for a URL can be checked. This is done using the status argument:: http.query: - status: '200' If both are specified, both will be checked, but if only one is True and the other is False, then False will be returned. In this case, the comments in the return data will contain information for troubleshooting. Because this is a monitoring state, it will return extra data to code that expects it. This data will always include text and status. Optionally, headers and dict may also be requested by setting the headers and decode arguments to True, respectively.
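To tie the pieces of this tutorial together, here is a minimal sketch of a helper built on salt.utils.http.query() that performs a decoded request and reads back the translated result. It assumes it runs somewhere a minion's __opts__ dictionary is in scope (for example, inside a custom execution module); the URL and the response keys of the remote service are invented for illustration and are not part of the Salt API.

import salt.utils.http

def check_remote_service(url='https://api.example.com/status'):
    # Ask for the status code, headers and raw text alongside the body;
    # with decode=True, JSON or XML responses are translated into result['dict'].
    result = salt.utils.http.query(
        url,
        method='GET',
        decode=True,
        status=True,
        headers=True,
        text=True,
        opts=__opts__,  # assumes minion/master opts are available in this scope
    )

    # The status code may come back as an int or a string depending on the backend.
    if result.get('status') not in (200, '200'):
        return {'ok': False, 'status': result.get('status')}

    # 'dict' is only present when decoding succeeded; fall back to the raw text.
    body = result.get('dict', result.get('text'))
    return {'ok': True, 'body': body}

Only the pieces requested via status, headers and text (plus dict when decoding succeeds) appear in the returned dictionary, which is why the sketch falls back to the undecoded text.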
https://docs.saltstack.com/en/develop/topics/tutorials/http.html
2018-12-10T05:20:32
CC-MAIN-2018-51
1544376823303.28
[]
docs.saltstack.com
Manage SQS Queues New in version 2014.7.0. Create and destroy SQS queues. Be aware that this interacts with Amazon's services, and so may incur charges. This module uses boto, which can be installed via package, or pip. This module accepts explicit SQS credentials, which can be specified in a pillar file or in the minion's config file: sqs.keyid: GKTADJGHEIQSXMKKRBJ08H sqs.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs myqueue: boto_sqs.present: - region: us-east-1 - keyid: GKTADJGHEIQSXMKKRBJ08H - key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs - attributes: ReceiveMessageWaitTimeSeconds: 20 # Using a profile from pillars myqueue: boto_sqs.present: - region: us-east-1 - profile: mysqsprofile # Passing in a profile myqueue: boto_sqs.present: - region: us-east-1 - profile: {keyid: GKTADJGHEIQSXMKKRBJ08H, key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs} salt.states.boto_sqs.absent(name, region=None, key=None, keyid=None, profile=None) Ensure the named sqs queue is deleted. salt.states.boto_sqs.present(name, attributes=None, region=None, key=None, keyid=None, profile=None) Ensure the SQS queue exists.
https://docs.saltstack.com/en/latest/ref/states/all/salt.states.boto_sqs.html
2018-12-10T05:10:15
CC-MAIN-2018-51
1544376823303.28
[]
docs.saltstack.com
- - Read-Only Values - Copy on Write - Magic Variables - Assigning Magic - Magic Virtual Tables - Finding Magic - Understanding the Magic of Tied Hashes and Arrays - Localizing changes - Subroutines - Memory Allocation - PerlIO - Compiled code - Examining internal data structures with the dump functions - How multiple interpreters and concurrency are supported - Internal Functions - Unicode Support - Custom Operators - AUTHORS - SEE ALSO NAME perlguts - Introduction to the Perl API DESCRIPTION This document attempts to describe how to use the Perl API, as well as to provide some info on the basic workings of the Perl core. signed integer type that is guaranteed to be large enough to hold a pointer (as well as an integer). Additionally, there is the UV, which is simply an unsigned IV.. Working with SVs An SV can be created and loaded with one command. There are five types of values that can be loaded: an integer value (IV), an unsigned integer value (UV), a double (NV), a string (PV), and another scalar (SV). ("PV" stands for "Pointer Value". You might think that it is misnamed because it is described as pointing only to strings. However, it is possible to have it point to other things. For example, it could point to an array of UVs. But, using it for non-strings requires care, as the underlying assumption of much of the internals is that PVs are just for strings. Often, for example, a trailing NUL is tacked on automatically. The non-string use is documented only in this paragraph.)UV(SV*) SvNV(SV*) SvPV(SV*, STRLEN len) SvPV_nolen(SV*) which will automatically coerce the actual scalar type into an IV, UV, double, or string. In the SvPV macro, the length of the string returned is placed into the variable len (this is a macro, so you do not use &len). If you do not care what the length of the data is, use the SvPV_nolen macro. Historically the SvPV macro with the global variable PL_na has been used in this case. But that: SV *s; STRLEN len; char *ptr; ptr = SvPV(s, moves the PV pointer (called SvPVX) forward by the number of bytes chopped off, and adjusts SvCUR and SvLEN accordingly. (A portion of the space between the old and new PV pointers is used to store the count of chopped bytes.)); it: SV** hv_store(HV*, const char* key, U32 klen, SV* val, U32 hash); SV** hv_fetch(HV*, const. The first of these two functions checks if a hash table entry exists, and the second deletes it. bool hv_exists(HV*, const char* key, U32 klen); SV* hv_delete(HV*, const* get_hv("package::varname", 0); This returns NULL if the variable does not exist.. perlapi for detailed descriptions. The following macros must always be used to access the contents of hash entries. Note that the arguments to these macros must be simple variables, since they may get evaluated more than once. See perlapi. ); References References are a special type of scalar that point to other data types (including other_PVAV Scalar SVt_PVAV Array SVt_PVHV Hash SVt_PVCV Code SVt_PVGV Glob (possibly a file handle) See "svtype" in perlapi for more details. Blessed References and Class Objects References are also used to support object-oriented programming. In perl. variables that are defined(const char* name, I32. = get_sv("dberror", GV_ADD);. Read-Only Values Copy on Write Perl implements a copy-on-write (COW) mechanism for scalars, in which string copies are not immediately made when requested, but are deferred until made necessary by one or the other scalar changing. 
This is mostly transparent, but one must take care not to modify string buffers that are shared by multiple SVs.. Assigning Magic Perl adds magic to an SV using the sv_magic function:); Magic Virtual Tables The mg_virtual field in the MAGIC structure is a pointer to; ... }. See perlapi for a description of these functions. For example, calls to the sv_cat*() functions typically need to be followed by SvSETMAGIC(), but they don't need a prior SvGETMAGIC() since their implementation handles 'get' magic. Finding. int mg_copy(SV* sv, SV* nsv, const PERL_MAGIC_tied", GV_ADD); sv_bless(tie, stash); hv_magic(hash, (GV*)tie, PERL_MAGIC_tied);mort.(DESTRUCTORFUNC_NOCONTEXT_t f, void *p) At the end of pseudo-block the function fis called with the only argument p. SAVEDESTRUCTOR_X(DESTRUCTORFUNC_t f, void *p) At the end of pseudo-block the function fis called with the implicit context argument (if any), and.. Autoloading with XSUBs If an AUTOLOAD routine is an XSUB, as with Perl subroutines, Perl puts the fully-qualified name of the autoloaded subroutine in the $AUTOLOAD variable of the XSUB's package.. Calling Perl Routines from within C Programs There are four routines that can be used to call a Perl subroutine from within a C program. These four are: I32 call_sv(SV*, I32); I32 call_pv(const char*, I32); I32 call_method(const char*, I32); I32 call_argv(const char*, I32, char**); The routine most often used is... Memory Allocation.DEBUGGINGOPs.. Compile pass 1: check routines; Pluggable runops The compile tree is executed in a runops function. There are two runops functions, in run.c and in dump.c. Perl_runops_debug is used with DEBUGGING and Perl_runops_standard is used otherwise.. Compile-time scope hooksand pre/post_endwill match. Anything pushed onto the save stack by this hook will be popped just before the scope ends (between the pre_and post_endhooks,to nest, if there is something on the save stack that calls string eval. void bhk_eval(pTHX_ OP *const o) This is called just before starting to compile an eval STRING, do FILE, requireor use, after the eval has been set up. o is the OP that requested the eval, and will normally be an OP_ENTEREVAL, OP_DOFILE. Examining internal data structures with the. How multiple interpreters and concurrency are supported Background and PERL_IMPLICIT_CONTEXT The Perl interpreter can be regarded as a closed box: it has an API for feeding it code or otherwise making it do things, but it also has functions for its own use. This smells a lot like an object, and there are ways for you to build Perl so that you can have multiple interpreters, with one interpreter represented either as a C very different ways of building the interpreter, the Perl source (as it does in so many other situations) makes heavy use of macros and subroutine naming conventions. First problem: deciding which functions will be public API functions and which will be private. All functions whose names begin S_ are private (think "S" for "secret" or "static"). All other functions begin with "Perl_", but just because a function begins with "Perl_" does not mean it is part of the API. . Second problem: there must be a syntax so that the same subroutine declarations and calls can pass a structure as their first argument, or pass nothing. To solve this, the subroutines are named and declared in a particular way. 
Here's a typical start of a static function used within the Perl guts: STATIC void S_incline(pTHX_ char *s) STATIC becomes "static" in C, and may be #define'd to nothing in some configurations in the future. A public function (i.e. part of the internal API, but not necessarily sanctioned for use in extensions) begins like this: void Perl_sv_setiv(pTHX_ SV* dsv, IV num) pTHX_ is one of a number of macros (in perl.h) that hide the details of the interpreter's context. THX stands for "thread", "this", or "thingy", as the case may be. (And no, George Lucas is not involved. :-) The first character could be 'p' for a prototype, 'a' for argument, or 'd' for declaration, so we have pTHX, aTHX and dTHX, and their variants. When Perl is built without options that set PERL_IMPLICIT_CONTEXT, there is no first argument containing the interpreter's context. The trailing underscore in the pTHX_ macro indicates that the macro expansion needs a comma after the context argument because other arguments follow it. If PERL_IMPLICIT_CONTEXT is not defined, pTHX_ will be ignored, and the subroutine is not prototyped to take the extra argument. The form of the macro without the trailing underscore is used when there are no additional explicit arguments. When a core function calls another, it must pass the context. This is normally hidden via macros. Consider sv. This doesn't work so cleanly for varargs functions, though, as macros imply that the number of arguments is known in advance. Instead we either need to spell them out fully, passing aTHX_ as the first argument (the Perl core tends to do this with functions like Perl_warner), or use a context-free version. The context-free version of Perl_warner is called Perl_warner_nocontext, and does not take the extra argument. Instead it does dTHX; to get the context from thread-local storage. We #define warner Perl_warner_nocontext so that extensions get source compatibility at the expense of performance. (Passing an arg is cheaper than grabbing it from thread-local storage.) You can ignore [pad]THXx when browsing the Perl headers/sources. Those are strictly for use within the core. Extensions and embedders need only be aware of [pad]THX. So what happened to dTH. How do I use all this in extensions? When Perl is built with PERL_IMPLICIT_CONTEXT, extensions that call any functions in the Perl API will need to pass the initial context argument somehow. The kicker is that you will need to write it in such a way that the extension still compiles when Perl hasn't been built with PERL_IMPLICIT_CONTEXT enabled. There are three ways to do this. First, the easy but inefficient way, which is also the default, in order to maintain source compatibility with extensions: whenever XSUB.h is #included, it redefines the aTHX and aTHX_ macros to call a function that will return the context. Thus, something like: sv_setiv(sv, num); in your extension will translate to this when PERL_IMPLICIT_CONTEXT is in effect: Perl_sv_setiv(Perl_get_context(), sv, num); or to this otherwise: Perl_sv_setiv(sv, num); You don't have to do anything new in your extension to get this; since the Perl library provides Perl_get_context(), it will all just work. The second, more efficient way is to use the following template for your Foo.xs: #define PERL_NO_GET_CONTEXT /* we want efficiency */ #include "EXTERN.h" #include "perl.h" #include "XSUB.h" STATIC void my_private_function(int arg1, int arg2); STATIC void my_private_function(int arg1, int arg2) { dTHX; /* fetch context */ ... 
call many Perl API functions ... } [... etc ...] MODULE = Foo PACKAGE = Foo /* typical XSUB */ void my_xsub(arg) int arg CODE: my_private_function(arg, 10); Note that the only two changes from the normal way of writing an extension is the addition of a #define PERL_NO_GET_CONTEXT before including the Perl headers, followed by a dTHX; declaration at the start of every function that will call the Perl API. (You'll know which functions need this, because the C compiler will complain that there's an undeclared identifier in those functions.) No changes are needed for the XSUBs themselves, because the XS() macro is correctly defined to pass in the implicit context if needed. */ void my_xsub(arg) int arg CODE: my_private_function(aTHX_ arg, 10); This implementation never has to fetch the context using a function call, since it is always passed as an extra argument. Depending on your needs for simplicity or efficiency, you may mix the previous two approaches freely. Never add a comma after pTHX yourself--always use the form of the macro with the underscore for functions that take explicit arguments, or the form without the argument for functions with no explicit arguments. ... Future Plans and PERL_IMPLICIT_SYS Just as PERL_IMPLICIT_CONTEXT provides a way to bundle up everything that the interpreter knows about itself and pass it around, so too are there plans to allow the interpreter to bundle up everything it knows about the environment it's running on. This is enabled with the PERL_IMPLICIT_SYS macro. Currently it only works with USE_ITHREADS on Windows. This allows the ability to provide an extra pointer (called the "host" environment) for all the system calls. This makes it possible for all the system stuff to maintain their own state, broken down into seven C structures. These are thin wrappers around the usual system calls (see win32/perllib.c) for the default perl executable, but for a more ambitious host (like the one that would do fork() emulation) all the extra work needed to pretend that different interpreters are actually different "processes", would be done here. The Perl engine/interpreter and the host are orthogonal entities. There could be one or more interpreters in a process, and one or more "hosts", with free association between them. Internal Functions an interpreter context, so the definition has no pTHX, and it follows that callers don't use aTHX. (See "Background and PERL_IMPLICIT_CONTEXT".) - force a rebuild of embed.h and other auto-generated files. Formatted Printing of IVs, UVs, and NVs. Pointer-To-Integer and Integer-To-Pointer); Exception Handling. Backwards compatibility. How does UTF-8 represent Unicode characters?; character 192 is v195.128. And so it goes on, moving to three bytes at character 2048. "Unicode Encodings" in perlunicode has pictures of how this works..) How does Perl store UTF-8 strings? Currently, Perl deals with UTF-8 strings and non-UTF-8 strings slightly differently. A flag in the SV, SVf_UTF8, indicates that the string is internally encoded as UTF-8. Without it, the byte value is the codepoint number and vice versa. This flag is only meaningful if the SV is SvPOK or immediately after stringification via SvPV or a similar macro. You can check and manipulate this flag with the following macros:" in perlapi. How do I convert a string to UTF-8? If you're mixing UTF-8 and non-UTF-8 strings, it is necessary to upgrade the non-UTF-8. How do I compare strings? 
"sv_cmp" in perlapi and "sv_cmp_flags" in perlapi do a lexigraphic comparison of two SV's, and handle UTF-8ness properly. Note, however, that Unicode specifies a much fancier mechanism for collation, available via the Unicode::Collate module.). Is there anything else I need to know? Not really. Just remember these things: There's no way to tell if a char *or U8 *string is UTF-8 or not. But you can tell if an SV is to be treated as UTF-8 by calling DO_UTF8on it, after stringifying it with SvPVor a similar macro. And, you can tell if SV is actually UTF-8 (even if it is not to be treated as such) by looking at its SvUTF8flag (again after stringifying it).chr_bufto get at the value, unless UTF8_IS_INVARIANT(*s)in which case you can use *s. When writing a character UV to a UTF-8 string, always use uvchr_to_utf8, unless UVCHR_IS_INVARIANT(uv))in which case you can use *s = uv. Mixing UTF-8 and non-UTF-8 strings is tricky. Use bytes_to_utf8to get a new string which is UTF-8 encoded, and then combine them. Custom Operators Custom operator support is an: - xop_name A short name for your op. This will be included in some error messages, and will also be returned as $op->nameby the B module, so it will appear in the output of module like B::Concise. - xop_desc A short description of the function of the op. - xop_class Which of the various *OPstructures this op uses. This should be one of the OA_*constants from op.h, namely - OA_BASEOP - - OA_UNOP - - OA_BINOP - - OA_LOGOP - - OA_LISTOP - - OA_PMOP - - OA_SVOP - - OA_PADOP - - OA_PVOP_OR_SVOP This should be interpreted as ' PVOP' only. The _OR_SVOPis because the only core PVOP, OP_TRANS, can sometimes be a SVOPinstead. - OA_LOOP - - OA_COP - The other OA_*constants should not be used. - xop_peep This member is of type Perl_cpeep_t, which expands to void (*Perl_cpeep_t)(aTHX_ OP *o, OP *oldop). If it is set, this function will be called from Perl_rpeepwhen ops of this type are encountered by the peephole optimizer. o is the OP that needs optimizing; oldop is the previous OP optimized, whose op_nextpoints to o. B::Generate directly supports the creation of custom ops by name. AUTHORS Until May 1997, this document was maintained by Jeff Okamoto <[email protected]>. It is now maintained as part of Perl itself by the Perl 5 Porters <[email protected]>.. SEE ALSO perlapi, perlintern, perlxs, perlembed
http://docs.activestate.com/activeperl/5.22/perl/lib/Pod/perlguts.html
2018-12-10T05:11:50
CC-MAIN-2018-51
1544376823303.28
[]
docs.activestate.com
Example escalation scenario Acme Pharmaceuticals needs to support a simple escalation process. When a critical or high incident is raised, a member of the Network group should be assigned to the incident based on the Network group's on-call schedule. First, the trigger rule is defined with its conditions. Figure 1. Example trigger points and actions The trigger action runs a custom workflow, Escalations by email workflow for on-call scheduling, by which the Network group's current on-call resource automatically receives an email notification. Related tasks: Create a trigger rule. Related concepts: Escalation triggers, Escalations by email workflow for on-call scheduling, Escalation chain.
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/on_call_scheduling/concept/c_ExampleEscalationScenario.html
2018-12-10T04:41:02
CC-MAIN-2018-51
1544376823303.28
[]
docs.servicenow.com
The first 1001 community members to use our app are eligible to claim CARROT. Thanks for being a member of the CarrotSwap community. Click on the place shown in the picture. Simply click on the Claim CARROT button and sign the transaction to receive the CARROT token airdrop! If your wallet is eligible, you will see this popup in the CarrotSwap app. Click on “Claim CARROT” and finalize your claim. Don’t forget to set an appropriate gas limit due to high gas prices. If you are using Metamask, you can further increase your gas fee by clicking Edit > Advanced and manually entering the gas price and gas limit. Enjoy the biggest airdrop of the year!!! You can stake or farm with CARROT tokens.
https://docs.carrotswap.org/guides/airdrop
2021-01-16T05:06:00
CC-MAIN-2021-04
1610703500028.5
[]
docs.carrotswap.org
Viewing Performance Data The Performance tab shows performance data for the selected server or virtual machine in graph form. For servers you can view: - CPU, memory, and network I/O usage data. - You can add graphs showing extra resource usage data, if necessary. For example, you can include the Control Domain Load. This load is the average (Linux loadavg) of the number of processes queued inside the control domain over the last 5 minutes. - Lifecycle events for all the VMs hosted on the server are shown in the VM Lifecycle Events pane. For VMs, graphs showing CPU, memory, network I/O, and disk usage data are shown by default. At the bottom of the tab, the summary graph gives a quick overview of what is happening on the machine. This graph also allows you to adjust the time frame that is shown in the other graphs. The time frame can be changed either to show data from a longer or shorter period, or to show data from an earlier period. To include other types of performance data on the tab or to change the appearance of the graphs, see Configuring performance graphs. To view data from a longer or shorter time period By default, data from the last 10 minutes is displayed. To view data from a longer or shorter time period, do one of the following: - To view available performance data for the last hour, 24 hours, week, month, or year, click Zoom. Select 1 Hour, 1 Day, 1 Week, 1 Month, or 1 Year. To resize the time period that is displayed in the graphs, in the summary graph, point to the vertical split bar at the edge of the sample area. When the pointer changes to a double-headed arrow, drag the vertical split bar right or left. For example: To view data from a different time period To move the time frame for data displayed in the graphs, point to any graph. When the pointer changes to a move cursor, drag the graph or the sample area in the summary graph to the left or right. For example: To view VM lifecycle event data on a server To view lifecycle events for the VMs hosted on a server, use the VM Lifecycle Events list. - Each event has a tooltip with the full message for that lifecycle event (“Virtual Machine ‘Sierra’ has been started”). - You can use the cursor keys to navigate the items in the list. - Double clicking or pressing Enter zooms the graphs to the point when the selected lifecycle event occurred. - Selecting (single click or highlight with cursor keys) one of the events causes the lifecycle event on the graph itself to be highlighted.
https://docs.citrix.com/en-us/xencenter/current-release/performance-viewing.html
2021-01-16T06:43:56
CC-MAIN-2021-04
1610703500028.5
[array(['/en-us/xencenter/media/dragcursor.png', 'The Move Cursor icon - an equal armed cross with an arrowhead at each line end.'], dtype=object) array(['/en-us/xencenter/media/perfsampleperiod2.png', 'Two images. The top shows a 40 minute interval selected with the move cursor. An arrow on the diagram indicates the direction that the cursor is to be moved. The second image is after the move cursor has been used. The window has been shifted to highlight an earlier 40 minutes.'], dtype=object) ]
docs.citrix.com
Keep the Webhook URL (to which the tracker artifact events would be sent) handy before you proceed with setting up Webhooks in TeamForge. You can set up Webhooks for Tracker artifacts by providing this webhook URL. Update a Webhook On the webhooks list page, click the webhook that you want to edit. Make the desired changes on the Edit Webhook page. Click Save. Delete a Webhook On the webhook list page, click the Delete icon of the webhook that you want to delete. A confirmation message shows up: "Are you sure you want to remove the webhook from the project?" Click OK to delete.
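TeamForge only needs a URL it can POST tracker artifact events to, so any small HTTP endpoint can serve as the receiving side while you experiment. The sketch below is a hypothetical, minimal receiver using only the Python standard library; the assumption that the event body is JSON, the chosen port, and the simple print-based handling are illustrative and not taken from TeamForge documentation.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw request body that TeamForge sends for an artifact event.
        length = int(self.headers.get('Content-Length', 0))
        raw = self.rfile.read(length)
        try:
            event = json.loads(raw)  # assumes a JSON payload
        except ValueError:
            event = {'raw': raw.decode('utf-8', errors='replace')}
        print('Received tracker artifact event:', event)
        self.send_response(200)  # acknowledge receipt so TeamForge sees success
        self.end_headers()

if __name__ == '__main__':
    # Expose this endpoint (ideally over HTTPS behind a proxy) and paste its
    # public URL into the webhook configuration described above.
    HTTPServer(('0.0.0.0', 8080), WebhookHandler).serve_forever()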
https://docs.collab.net/teamforge192/webhooksfortrackerartifacts.html
2021-01-16T06:39:23
CC-MAIN-2021-04
1610703500028.5
[]
docs.collab.net
MAPPING element (List) Applies to: SharePoint 2016 | SharePoint Foundation 2013 | SharePoint Online | SharePoint Server 2013 Maps a value to a choice that is displayed in a Choice field. <MAPPING Value = "Text"> </MAPPING> Elements and attributes The following sections describe attributes, child elements, and parent elements. Attributes Child elements None Parent elements Occurrences - Minimum: 0 - Maximum: Unbounded
https://docs.microsoft.com/en-us/sharepoint/dev/schema/mapping-element-list
2021-01-16T07:11:57
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding FCOS, which is the operating system used by OKD, will help you see how the host systems protect containers and hosts from each other. OKD 4 runs on FCOS hosts, with the option of using Fedora as worker nodes, the following concepts apply by default to any deployed OKD cluster. These Fedora security features are at the core of what makes running containers in OpenShift page 94 of the OpenShift Security Guide for details about seccomp. Deploying containers using FCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. FCOS is a version of Fedora that is specially configured to work as control plane (master) and worker nodes on OKD clusters. So FCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift services. To further protect FCOS systems in OKD clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OKD clusters. How nodes enforce resource constraints Managing security context constraints Machine requirements for a cluster with user-provisioned infrastructure Choosing how to configure FCOS in order. When you deploy OKD, OKD deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OKD, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OKD OKD infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OKD cluster updates.
https://docs.okd.io/latest/security/container_security/security-hosts-vms.html
2021-01-16T06:40:53
CC-MAIN-2021-04
1610703500028.5
[]
docs.okd.io
Cut Development Time: Use.
https://docs.microsoft.com/en-us/archive/blogs/charlie/cut-development-time-use-linq
2021-01-16T06:55:26
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
Scalability and performance targets for VM disks on Windows. See Windows VM sizes for additional details. Managed virtual machine disks Sizes denoted with an asterisk are currently in preview. See our FAQ to learn what regions they are available in. See also Azure subscription and service limits, quotas, and constraints
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-scalability-targets
2021-01-16T07:22:41
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
How To Set Session Limits¶ What is the session limit for?¶ By default an orchestrator can handle a maximum of 10 incoming livestreams at a time. Whenever the number of concurrent transcoding sessions goes above this, the orchestrator returns an OrchestratorCapped error to the broadcaster. Of course, the transcoding capacity and the bandwidth varies for everyone - so you can set the -maxSessions CLI parameter with a value that reflects available resources to maximize work received from broadcasters. For example if you have enough transcoding capacity and bandwidth to handle 30 streams, you can set the max sessions using the following command (other flags omitted): $ livepeer -orchestrator -transcoder -maxSessions 30 How to calculate my session limit?¶ Calculate the session limit based on (1) transcoding hardware and (2) bandwidth as explained below, and take the minimum of the two. Finally, pass it through the -maxSessions parameter to your node as explained above. The bandwidth and computational power needed to transcode a video stream varies with the source video and requested outputs’ configuration. Thus any session limit estimate only serves as a ballpark, and you may want to tweak it after some real use on the network. The steps below assume that the incoming streams are configured with the most-commonly found Adaptive Bitrate (ABR) ladder on the network. You may calculate it similarly for a different ABR ladder. 1. Transcoding Hardware Capacity The livepeer_bench tool can help you get a rough idea of the number of concurrent transcoding sessions your hardware can handle. Refer to the Benchmarking Guide on how to setup the tool and to learn more about the CLI parameters. Once you’ve got the tool setup - you can benchmark with the most common transcoding ABR ladder ( transcodingOptions.json). For example to test for a range of concurrent sessions on a single Nvidia GPU with device id ‘0’ you can use the following bash script: #!/bin/bash for i in {1..20} do ./livepeer_bench -in bbb/source.m3u8 \ -transcodingOptions transcodingOptions.json \ -nvidia 0 \ -concurrentSessions $i |& grep "Took" >> bench.log done To test on a CPU omit the -nvidia 0 flag and change the loop’s maximum from 20 to something lower like 5. You will see the final output in a file called bench.log as following Took <X> seconds to transcode 30 segments of total duration 60s (1 concurrent sessions) Took <X> seconds to transcode 30 segments of total duration 60s (2 concurrent sessions) ... Took <X> seconds to transcode 30 segments of total duration 60s (20 concurrent sessions) The goal here is to have the transcode time X remain within real-time, leaving about ~20% buffer room for network transit. Thus X <= 60s/1.20 = 50 seconds. If your transcode time was quite fast even for the limit of 20 sessions in the above script, feel free to increase it to something higher. If you have multiple GPUs you can multiply whatever limit you calculate with a single GPU above, or pass all your devices in like -nvidia 0,1,2. 2. Bandwidth Capacity The most common transcoding ABR ladder found on the network is (assuming source is 1080p30fps) - For a single stream you require 6000 kbps for fetching source rendition and 5600 kbps for sending the transcoded renditions. Thus you will roughly need - Download Bandwidth = 6 Mbps Per Stream Upload Bandwidth = 5.6 Mbps Per Stream To get an idea of the number of streams you can handle, divide the above from your network provider’s limits. 
For example a typical broadband connection with upstream/downstream of 100 Mbps can serve ~16 streams reliably. You can probably stretch it by ~20% more as not all streams’ segments would be processed at the same time. You may want to refer to some suggestions in Bandwidth Requirements around testing your available upload/download bandwidth.
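The two capacity checks in this guide boil down to a few lines of arithmetic, sketched below purely as an illustration. The per-stream bandwidth figures (6 Mbps down, 5.6 Mbps up), the 50-second real-time budget (60s / 1.20) and the ~20% stretch are the numbers quoted above, while the bench_results dictionary is an invented example of values parsed from bench.log.

def bandwidth_limit(down_mbps, up_mbps, per_stream_down=6.0, per_stream_up=5.6, stretch=1.2):
    # Streams the link can feed (source in) and drain (renditions out),
    # optimistically stretched by ~20% since segments rarely all arrive at once.
    return int(min(down_mbps / per_stream_down, up_mbps / per_stream_up) * stretch)

def hardware_limit(bench_results, segment_duration=60.0, headroom=1.20):
    # bench_results maps concurrent sessions -> seconds taken (from bench.log);
    # keep only the runs that stayed within the real-time budget.
    budget = segment_duration / headroom  # 60s / 1.20 = 50s
    ok = [sessions for sessions, took in bench_results.items() if took <= budget]
    return max(ok) if ok else 0

def max_sessions(bench_results, down_mbps, up_mbps):
    # Take the minimum of the two estimates, as recommended above.
    return min(hardware_limit(bench_results), bandwidth_limit(down_mbps, up_mbps))

# Example: a 100/100 Mbps link and a GPU that stayed under 50s up to 22 sessions.
print(max_sessions({20: 41.0, 22: 49.0, 24: 56.0}, down_mbps=100, up_mbps=100))  # -> 20

Whatever number comes out is only a starting point for the -maxSessions flag; revisit it after observing real traffic on the orchestrator.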
https://livepeer.readthedocs.io/en/latest/guides/session_limits.html
2021-01-16T06:31:16
CC-MAIN-2021-04
1610703500028.5
[]
livepeer.readthedocs.io
KtoA 3.1.1.2 is a bugfix release using Arnold 6.1.0.1. System Requirements - Katana 3.2v1+, 3.5v1+, 3.6v1+ and 4.0v1+ - Windows 7 or later, with the Visual Studio 2019. We recommend using the 450.57 or higher drivers on Linux and 451.77 or higher on Windows. See Getting Started with Arnold GPU for more information. Installation - Download KtoA for your platform and Katana version. - Run the self-extracting installer. See the installation steps here. Bug fixes - ktoa#547 License Manager shelf script does not work - ktoa#553 Conditional vis ops don't work as intended -
https://docs.arnoldrenderer.com/display/A5KTN/KtoA+3.1.1.2
2021-01-16T06:04:01
CC-MAIN-2021-04
1610703500028.5
[]
docs.arnoldrenderer.com
Merchants can often create issues for themselves by leaving Dynamic Checkout options enabled on the Cart Page. This includes Google Pay, PayPal, KlarnaPay and other third-party payment providers. You can enable these payment providers by going to your theme settings and enabling them through the customiser. Dynamic Checkout Buttons should only be used on the Checkout page, and not the Cart page. The customer can use the Accelerated Checkout options on the Checkout Page with no issue, but allowing the customer to use them on the Cart Page side-loads the transaction into the checkout system of PayPal, Google Pay etc. This is because a redirect is in place: when the options are clicked, the customer is taken to another page, away from the Shopify platform. This interrupts the order and leaves it incomplete, produces missing or incorrect information, and causes massive confusion for all parties. In short, the buttons should only be used on the Checkout page, once the order has been finalised and has passed the Cart page. A solution would be to use CSS code to hide the buttons from the Cart page. For this specific task, we would advise speaking with your theme developers. If that is not possible, and support is not provided with your theme, get in touch with us via email at [email protected].
https://docs.bundlebuilder.app/further-help/troubleshooting/using-dynamic-checkout-buttons
2021-01-16T06:31:01
CC-MAIN-2021-04
1610703500028.5
[]
docs.bundlebuilder.app
How can you accurately predict future revenues? How can you get more insight into your deals and your sales pipeline? Before we delve into deal stages, let’s briefly explain what’s a deal and a pipeline in the CRM context. A deal is an opportunity to increase revenues, but not only. Deals can help you get a clear idea of where your team is in the sales process, and the steps they need to take in order to close them. Deals are an integral part of the CRM, because they generate revenue for your company. They are the very heart of the sales process. But what about the sales pipeline? In the CRM context, a pipeline is an accurate visual representation of your sales cycle. Each pipeline has stages that are unique, depending on how the sales team actually operates. With the concepts of deals and pipeline explained, let’s now explore deal stages and the WON probability. How can you understand them? What purpose do they serve? What is the WON probability and why is it important for your organization? Each company has its own sales cycles, its sales philosophy. To understand deal stages in Flexie CRM, let’s say you’re in the business of selling bikes. You’ve identified a sales opportunity, which in Flexie CRM is called a deal. You want to know where the deal stands and what is the probability of that deal being won. The first step is to build a sales pipeline. Go to Deals and on the drop-down menu click Pipelines. Give the pipeline a name and a description. Then click Save & Close at the upper-right corner of the screen: Let’s say you have created the deal and now you want to manage each stage. How do you do that? Once again, go and click the Pipelines button. Next, click the Manage Stages button. Here you can set the stages for the pipeline you’ve created. Notice that each stage has a success percentage attached to it. But why do success percentages matter? They are at the very heart of sales forecasting. Success percentages on each stage give you an idea of the expected revenue in a certain pipeline. For example, $5000 at 80 percent probability is more valuable than that same exact amount at 20 percent. Why? Because a deal with 80 percent probability is more likely to close. In Flexie CRM, you can create your own deal stages. Let’s say you have created a pipeline with the name Mountain bike sale. Next, you proceed to create your own stages: Scheduled an appointment to see the mountain bike(20 %), bike inspected (40 %), documents prepared (80 %) and then Transaction Completed. Remember, a deal is either WON or LOST. In our case, we want to explain the WON probability. The first deal stage, scheduling an appointment to see the bike (20 %), could be seen as the initial stage in the sales cycle. Your sales rep has scheduled an appointment with someone who wants to see bike. Let’s suppose that person inspects the bike. You move the deal from the first stage to the second, bike inspected (40%). The potential buyer likes the bike and documents are prepared (80 %). Next, transaction is complete (90 %). The deal is close to being won. If everything goes as predicted, the deal will be WON. As you can see, each stage has a success percentage. Bear in mind that different products have different sales cycles, hence deal stages will be customized to match the sales cycle of each and every product. In Flexie CRM, you can easily move between deal stages by a simple drag and drop. The WON probability helps you in sales forecasting. 
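To make that weighting concrete, here is a small illustrative sketch of the forecast arithmetic that stage probabilities imply. The deals and amounts are invented, and Flexie CRM performs this kind of calculation for you; the snippet only shows why $5000 at 80% contributes more to a forecast than the same $5000 at 20%.

# Each open deal carries an amount and the success percentage of its current stage.
pipeline = [
    {'deal': 'Mountain bike sale A', 'amount': 5000, 'stage_probability': 0.80},
    {'deal': 'Mountain bike sale B', 'amount': 5000, 'stage_probability': 0.20},
    {'deal': 'Mountain bike sale C', 'amount': 1200, 'stage_probability': 0.40},
]

# Weighted (expected) value of each deal, and of the pipeline as a whole.
for d in pipeline:
    print(f"{d['deal']}: {d['amount'] * d['stage_probability']:.2f} expected")

forecast = sum(d['amount'] * d['stage_probability'] for d in pipeline)
print(f"Forecast pipeline value: {forecast:.2f}")  # 4000 + 1000 + 480 = 5480.00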
The more your success percentages reflect the actual likelihood of a certain deal being closed, the more accurate your sales forecasting will be. To stay updated with the latest features, news and how-to articles and videos, please join our group on Facebook, Flexie CRM Academy and subscribe to our YouTube channel Flexie CRM.
https://docs.flexie.io/docs/administrator-guide/understanding-deal-stages-won-probability/
2021-01-16T05:27:02
CC-MAIN-2021-04
1610703500028.5
[array(['https://flexie.io/wp-content/uploads/2017/08/Capture-92.png', 'Pipelines'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-93.png', 'New deal'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-63.png', 'Manage stages'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-94.png', 'Pipeline stages'], dtype=object) ]
docs.flexie.io
GP Regression with LOVE for Fast Predictive Variances and Sampling¶ Overview¶ In this notebook, we demonstrate that LOVE (the method for fast variances and sampling introduced in this paper) can significantly reduce the cost of computing predictive distributions. This can be especially useful in settings like small-scale Bayesian optimization, where predictions need to be made at enormous numbers of candidate points. In this notebook, we will train a KISS-GP model on the skillcraftUCI dataset, and then compare the time required to make predictions with each model. NOTE: The timing results reported in the paper compare the time required to compute (co)variances only. Because excluding the mean computations from the timing results requires hacking the internals of GPyTorch, the timing results presented in this notebook include the time required to compute predictive means, which are not accelerated by LOVE. Nevertheless, as we will see, LOVE achieves impressive speed-ups. [1]: import math import torch import gpytorch import tqdm from matplotlib import pyplot as plt # Make plots inline %matplotlib inline For this example notebook, we’ll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we’ll simply be splitting the data using the first 40% of the data as training and the last 60% as testing. Note: Running the next cell will attempt to download a small dataset file to the current directory. [14]: import urllib.request import os from scipy.io import loadmat from math import floor # this is for running the notebook in our testing framework smoke_test = ('CI' in os.environ) if not smoke_test and not os.path.isfile('../elevators.mat'): print('Downloading \'elevators\' UCI dataset...') urllib.request.urlretrieve('', '../elevators.mat') if smoke_test: # this is for running the notebook in our testing framework X, y = torch.randn(100, 3), torch.randn(100) else: data = torch.Tensor(loadmat('../elevators.mat')['data']) X = data[:, :-1] X = X - X.min(0)[0] X = 2 * (X / X.max(0)[0]) - 1 y = data[:, -1] train_n = int(floor(0.8 * len(X))) train_x = X[:train_n, :].contiguous() train_y = y[:train_n].contiguous() test_x = X[train_n:, :].contiguous() test_y = y[train_n:].contiguous() if torch.cuda.is_available(): train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda() LOVE can be used with any type of GP model, including exact GPs, multitask models and scalable approximations. Here we demonstrate LOVE in conjunction with KISS-GP, which has the amazing property of producing constant time variances. The KISS-GP + LOVE GP Model¶ We now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with an Deep RBF base kernel. The forward method passes the input data x through the neural network feature extractor defined above, scales the resulting features to be between 0 and 1, and then calls the kernel. The Deep RBF kernel (DKL) uses a neural network as an initial feature extractor. In this case, we use a fully connected network with the architecture d -> 1000 -> 500 -> 50 -> 2, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers. 
[3]: class LargeFeatureExtractor(torch.nn.Sequential): def __init__(self, input_dim): super(LargeFeatureExtractor, self).__init__() self.add_module('linear1', torch.nn.Linear(input_dim, 1000)) self.add_module('relu1', torch.nn.ReLU()) self.add_module('linear2', torch.nn.Linear(1000, 500)) self.add_module('relu2', torch.nn.ReLU()) self.add_module('linear3', torch.nn.Linear(500, 50)) self.add_module('relu3', torch.nn.ReLU()) self.add_module('linear4', torch.nn.Linear(50, 2)) class GPRegressionModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood): super(GPRegressionModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.GridInterpolationKernel( gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()), grid_size=100, num_dims=2, ) # Also add the deep net self.feature_extractor = LargeFeatureExtractor(input_dim=train_x.size(-1)) def forward(self, x): # We're first putting our data through a deep net (feature extractor) # We're also scaling the features so that they're nice values projected_x = self.feature_extractor(x) projected_x = projected_x - projected_x.min(0)[0] projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1 # The rest of this looks like what we've seen mean_x = self.mean_module(projected_x) covar_x = self.covar_module(projected_x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) likelihood = gpytorch.likelihoods.GaussianLikelihood() model = GPRegressionModel(train_x, train_y, likelihood) if torch.cuda.is_available(): model = model.cuda() likelihood = likelihood.cuda() Training the model¶ The cell below trains the GP model, finding optimal hyperparameters using Type-II MLE. We run 20 iterations of training using the Adam optimizer built in to PyTorch. With a decent GPU, this should only take a few seconds. [5]: training_iterations = 1 if smoke_test else 20 #) def train(): iterator = tqdm.notebook.tqdm(range(training_iterations)) for i in iterator: optimizer.zero_grad() output = model(train_x) loss = -mll(output, train_y) loss.backward() iterator.set_postfix(loss=loss.item()) optimizer.step() %time train() CPU times: user 2.1 s, sys: 136 ms, total: 2.24 s Wall time: 2.23 s Computing predictive variances (KISS-GP or Exact GPs)¶ Using standard computaitons (without LOVE)¶ The next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in preds.mean) using the standard SKI testing code, with no acceleration or precomputation. Note: Full predictive covariance matrices (and the computations needed to get them) can be quite memory intensive. Depending on the memory available on your GPU, you may need to reduce the size of the test set for the code below to run. If you run out of memory, try replacing test_x below with something like test_x[:1000] to use the first 1000 test points only, and then restart the notebook. [6]: import time # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(): start_time = time.time() preds = likelihood(model(test_x)) exact_covar = preds.covariance_matrix exact_covar_time = time.time() - start_time print(f"Time to compute exact mean + covariances: {exact_covar_time:.2f}s") Time to compute exact mean + covariances: 1.81s Using LOVE¶ Next we compute predictive covariances (and the predictive means) for LOVE, but starting from scratch. That is, we don’t yet have access to the precomputed cache discussed in the paper. 
This should still be faster than the full covariance computation code above. To use LOVE, use the context manager with gpytorch.settings.fast_pred_var(): You can also set some of the LOVE settings with context managers as well. For example, gpytorch.settings.max_root_decomposition_size(100) affects the accuracy of the LOVE solves (larger is more accurate, but slower). In this simple example, we allow a rank 100 root decomposition, although increasing this to rank 20-40 should not affect the timing results substantially. [7]: # Clear the cache from the previous computations model.train() likelihood.train() # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(100): start_time = time.time() preds = model(test_x) fast_time_no_cache = time.time() - start_time The above cell additionally computed the caches required to get fast predictions. From this point onwards, unless we put the model back in training mode, predictions should be extremely fast. The cell below re-runs the above code, but takes full advantage of both the mean cache and the LOVE cache for variances. [8]: with torch.no_grad(), gpytorch.settings.fast_pred_var(): start_time = time.time() preds = likelihood(model(test_x)) fast_covar = preds.covariance_matrix fast_time_with_cache = time.time() - start_time [9]: print('Time to compute mean + covariances (no cache) {:.2f}s'.format(fast_time_no_cache)) print('Time to compute mean + variances (cache): {:.2f}s'.format(fast_time_with_cache)) Time to compute mean + covariances (no cache) 0.32s Time to compute mean + variances (cache): 0.14s Compute Error between Exact and Fast Variances¶ Finally, we compute the mean absolute error between the fast variances computed by LOVE (stored in fast_covar), and the exact variances computed previously. Note that these tests were run with a root decomposition of rank 10, which is about the minimum you would realistically ever run with. Despite this, the fast variance estimates are quite good. If more accuracy was needed, increasing max_root_decomposition_size would provide even better estimates. [10]: mae = ((exact_covar - fast_covar).abs() / exact_covar.abs()).mean() print(f"MAE between exact covar matrix and fast covar matrix: {mae:.6f}") MAE between exact covar matrix and fast covar matrix: 0.000657 Computing posterior samples (KISS-GP only)¶ With KISS-GP models, LOVE can also be used to draw fast posterior samples. (The same does not apply to exact GP models.) Drawing samples the standard way (without LOVE)¶ We now draw samples from the posterior distribution. Without LOVE, we accomlish this by performing Cholesky on the posterior covariance matrix. This can be slow for large covariance matrices. [11]: import time num_samples = 20 if smoke_test else 20000 # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(): start_time = time.time() exact_samples = model(test_x).rsample(torch.Size([num_samples])) exact_sample_time = time.time() - start_time print(f"Time to compute exact samples: {exact_sample_time:.2f}s") Time to compute exact samples: 1.92s Using LOVE¶ Next we compute posterior samples (and the predictive means) using LOVE. This requires the additional context manager with gpytorch.settings.fast_pred_samples():. Note that we also need the with gpytorch.settings.fast_pred_var(): flag turned on. Both context managers respond to the gpytorch.settings.max_root_decomposition_size(100) setting. 
[12]: # Clear the cache from the previous computations model.train() likelihood.train() # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(200): # NEW FLAG FOR SAMPLING with gpytorch.settings.fast_pred_samples(): start_time = time.time() _ = model(test_x).rsample(torch.Size([num_samples])) fast_sample_time_no_cache = time.time() - start_time # Repeat the timing now that the cache is computed with torch.no_grad(), gpytorch.settings.fast_pred_var(): with gpytorch.settings.fast_pred_samples(): start_time = time.time() love_samples = model(test_x).rsample(torch.Size([num_samples])) fast_sample_time_cache = time.time() - start_time print('Time to compute LOVE samples (no cache) {:.2f}s'.format(fast_sample_time_no_cache)) print('Time to compute LOVE samples (cache) {:.2f}s'.format(fast_sample_time_cache)) Time to compute LOVE samples (no cache) 0.74s Time to compute LOVE samples (cache) 0.02s Compute the empirical covariance matrices¶ Let’s see how well LOVE samples and exact samples recover the true covariance matrix. [13]: # Compute exact posterior covar with torch.no_grad(): start_time = time.time() posterior = model(test_x) mean, covar = posterior.mean, posterior.covariance_matrix exact_empirical_covar = ((exact_samples - mean).t() @ (exact_samples - mean)) / num_samples love_empirical_covar = ((love_samples - mean).t() @ (love_samples - mean)) / num_samples exact_empirical_error = ((exact_empirical_covar - covar).abs()).mean() love_empirical_error = ((love_empirical_covar - covar).abs()).mean() print(f"Empirical covariance MAE (Exact samples): {exact_empirical_error}") print(f"Empirical covariance MAE (LOVE samples): {love_empirical_error}") Empirical covariance MAE (Exact samples): 0.0043566287495195866 Empirical covariance MAE (LOVE samples): 0.0061592841520905495 [ ]:
https://docs.gpytorch.ai/en/latest/examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.html
2021-01-16T06:33:21
CC-MAIN-2021-04
1610703500028.5
[]
docs.gpytorch.ai
When you upgrade, the existing JMS message security mode setting used for the previous version is retained. Beginning with Horizon 6 version 6.1, you can use Horizon Administrator to change this setting to Enhanced. This procedure shows how to use Horizon Administrator to change the message security mode to Enhanced and monitor the progress of the change on all Horizon components. You can alternatively use the vdmutil command-line utility to change the mode and monitor progress. See the Horizon 7 Administration document. To use Access Point appliances instead of Horizon security servers, you must upgrade the Connection Server instances to version 6.2 or later before installing and configuring the Access Point appliances to point to the Connection Server instances or the load balancer that fronts the instances. For more information, see Deploying and Configuring Unified Access Gateway. Prerequisites Verify that you have upgraded all Horizon Connection Server instances, security servers, and Horizon desktops to Horizon 6 version 6.1 or a later release. View components that predate Horizon 6 version 6.1 cannot communicate with a Connection Server 6.1 instance that uses Enhanced mode. Procedure - Configure back-end firewall rules to allow security servers to send JMS traffic on port 4002 to Connection Server instances. - In Horizon Administrator, go to , and on the Security tab, set Message security mode to Enhanced. - Manually restart the VMware Horizon. Results When servers communicate with clients, servers will configure clients to use enhanced message security mode.
https://docs.vmware.com/en/VMware-Horizon-7/7.13/horizon-upgrades/GUID-38A092C7-F5ED-40BF-85FE-467FA8811847.html
2021-01-16T04:47:00
CC-MAIN-2021-04
1610703500028.5
[]
docs.vmware.com
The cart module provides snippets and manager functionality to allow users to add products to a shopping cart, pay for their selected products via various online payment gateways and finally monitor the order and previous orders. Note: The cart module requires clearFusionCMS 1.2.0 or later in order to install and function. After the shopping cart module has been uploaded and installed it must have an initial configuration applied before it can be used. Setting the store address is the first operation that must be done after installing the cart module, until this step is completed you will simply get errors. To set the store address within the site manager select Settings then Cart followed by Store Address, you must complete all the fields on this page before clicking Save Address. The cart uses information from this page when calculating shipping and other portions of the order. Note: If you've already tried to use the cart without setting the store address then you will need to clear your browser cache and the cookies to get everything working properly. Now the cart has a home address the next stage is to create at least one zone. Zones allow you to break the countries of the world into manageable groups so rather than having to create settings for every country you only have to configure a zone and all the countries in that zone will have those options applied to them. From the Dashboard click Zones followed by Add Zone. For the purposes of this example we're going to create a zone just for Australia, so in the Zone Name box enter Australia and then tick all the Australia regions in the regions list, finally click Create Zone. You can create as many zones as you require, you may have a zone for your country and a zone for the rest of the world, if you did this then you can offer different tax and shipping rates depending on the zone the shopper resides in. In most cases stores will only require a single product class, however there are cases where multiple product classes are required e.g. if a class of products attracts a different rate of tax to your standard products. All stores are required to have at least one product class to function. From the Dashboard select Product Classes followed by Add Product Class and in the Product Class Name field enter Standard as we're only creating a single standard product class in this example. Click Add Product Class to create your new class. The Taxes option allows you to create tax rates for different combinations of zones and product classes, it allows you to charge tax for on shore customers and no tax for exports. To create a tax rate, in this case GST at 10%, from the Dashboard select Taxes and click Add Tax. Enter GST in the Tax Name field and 10 in the Tax Rate % field, finally click Australia under Apply Tax to Zones and Standard under Apply Tax to Product Classes before clicking Create Tax. A tax can be applied based on the billing address or on the shipping address, which is appropriate will depend on the tax laws within your country, if you're unsure you should seek advice from your local tax office. In our sample store we only have one zone and one product class but in a live store selling internationally you will most likely have multiple zones and product classes. When creating or editing a tax you can select as many zone or classes as required. If you don't need to charge the customer tax then you must still create a tax information for your cart but in this case set the tax rate to 0 then return to Settings then Cart. 
Under Cart Settings tick Do not show amount of tax fields on cart and orders and make sure Show without tax prices as well as price including tax is not ticked. This will avoid showing tax information to customers. The cart is installed without currency information, so next select Currency from the Dashboard and click Add Currency. We're going to add Australian Dollars as the default currency; you can add multiple currencies with different exchange rates, and if you choose to do so shoppers are able to select the currency that they wish to view your prices in. In the Currency Code field put AUD, tick Use as Default Currency and in the Left Currency Symbol field enter $, now click Create Currency. With this setup prices will be formatted as $123,456.00. The final step of configuring the cart module is to create one or more shipping methods; without a shipping method your customers will not be able to complete a purchase from your site. From the Dashboard select Shipping and click Add Shipping Method. In this example we're going to create a flat fee shipping method charging $10 per shipment. Enter Standard in the Shipping Method Name field, from the Cost Calculation dropdown select Flat Rate and enter 10 in the Cost field. The Product Class dropdown allows you to treat shipping differently to other products by selecting a different class; for now leave this as Standard. Tick both Australia under Geographic Regions and Standard under Product Classes before clicking Create Shipping Method. By choosing the Geographic Regions and Product Classes a shipping method applies to, you can define complex rules for which shipping methods will be available for each product and shipping destination. Note: The cart can support multiple shipping methods per order; multiple options are only presented to the shopper when the cart has valid shipping options for all the products but the products don't share a common shipping method. The cart module includes a default style sheet to get you started, just add the following line above </head> in your template to include it: <link href="system/modules/cart/cart.css" rel="stylesheet" type="text/css"> While the default CSS provides a useful starting point you will in most cases want to refine the style to make your site unique.
https://docs.clearfusioncms.com/modules/cart/
2021-01-16T04:49:23
CC-MAIN-2021-04
1610703500028.5
[]
docs.clearfusioncms.com
SendGrid Webhook Hevo can bring email activity data from your SendGrid account into your Destination. Hevo connects to SendGrid through Webhooks. Add the Webhook URL in your SendGrid Account - Copy the generated webhook URL. - Go to your SendGrid account, open Settings > Mail Settings in the SendGrid UI. - Turn on Event Notification. - In the HTTP POST URL field, paste the unique URL that you copied in step 1. - Select the Event notifications you would like to test. - Click the checkmark in the top corner to save these updates into your settings. You can read more about how Webhooks work in SendGrid here. Sample Event Data: { "email": "[email protected]", "timestamp": 1580102529, "smtp-id": "<14c5d75ce93.dfd.64b469@ismtpd-555>", "event": "deferred", "category": "cat facts", "sg_event_id": "P0onudGCXGlIhfAoy831Nw==", "sg_message_id": "14c5d75ce93.dfd.64b469.filter0001.16648.5515E0B88.0", "response": "400 try again later", "attempt": "5" } Last updated on 31 Dec 2020
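The following is not part of the Hevo documentation — just a minimal sketch of how an event payload like the sample above could be inspected in a test script. SendGrid delivers events as a JSON array in the POST body; the `raw_body` string below is a hypothetical stand-in for that body.

```python
import json
from datetime import datetime, timezone

# Hypothetical webhook body: SendGrid posts a JSON array of event objects.
raw_body = '[{"email": "[email protected]", "timestamp": 1580102529, "event": "deferred", "attempt": "5"}]'

for event in json.loads(raw_body):
    # Convert the UNIX timestamp into a readable UTC datetime.
    occurred_at = datetime.fromtimestamp(event["timestamp"], tz=timezone.utc)
    print(f'{occurred_at.isoformat()}  {event["event"]:>10}  {event["email"]}')
```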
https://docs.hevodata.com/sources/marketing-&-email/sendgrid-webhook/
2021-01-16T05:40:15
CC-MAIN-2021-04
1610703500028.5
[]
docs.hevodata.com
[…] What does the Support Team offer when I open a ticket? The Support Team is the most important resource for users who need proper attention and guidance in resolving the needs and doubts that may occur along the way. Below are some of the guidelines that our users should be aware of when requesting help on our Support Channel. […]
https://docs.listingprowp.com/knowledgebase-tag/customization/
2021-01-16T05:35:20
CC-MAIN-2021-04
1610703500028.5
[]
docs.listingprowp.com
Community Convergence L Welcome to the 50th issue of Community Convergence. Now that Visual Studio 2010 Beta is out, I think it might be helpful to draw attention to the new dynamic programming and office development features from the C# team that appear in that release. The C# team defines the core syntax found in C# 4.0, and the IDE features that make it easy for developers to access those features. In this post, I’ll focus on language features; in an upcoming post, I’ll focus on IDE features. There are, of course, many new tools and APIs that will appear in Visual Studio 2010. In this post I’m focusing exclusively on core language features from the C# team itself, as opposed to features that come from the WPF, WCF, Silverlight, or other teams. The text presented here is meant to provide a high level overview of these features. At the end of the article I will point you to more technical resources that will allow you to explore these features in more depth, and to read more precise, rigorous definitions of the new features in C# 4.0. Language Features Dynamic programming and enhanced interaction with Office are the main themes in C# 4.0. Dynamic programming is a means of writing code that does not rely on static types linked at compile time. Traditionally C# has been a strongly typed language defined in large part by static types that must be declared explicitly at compile time. For instance, string, int, and object are all strongly typed static types that are resolved at compile time. A dynamic type is not linked at compile time, but is instead resolved at run time. Why should C# developers care about dynamic programming? Isn’t the strongly typed nature of the C# language one of the language’s best features? The answer to this question is a resounding yes. Strong typing is one of the great advantages of the C# language, and the team expects most developers to continue to write strongly typed code nearly all the time. There is nothing in the new C# 4.0 features that prevents you from continuing to exclusively write strongly typed code, if that is what you prefer. However, there are occasions when it is convenient to write dynamic code. For instance, languages such as Python and Ruby are dynamically typed. It is awkward, and sometimes nearly impossible, to call into these languages from a strongly typed language such as C# 3.5. The Office API is also difficult to call from C# 3.5; it has long been awkward for C# developers to call into Office using the current set of language features. C# 4.0 is adding dynamic features so that it will be easier for developers to call into dynamic languages such as Python, and into more flexible APIs such as Office. You can think of the entire dynamic programming effort as a move by the C# team to fill in existing holes in the language. C# has always provided a great way to write strongly typed code; now we’re adding support for dynamic code, and for calling into Office applications. The team has broken out the new features in C# 4.0 into four categories, all but the last of which are directly related to dynamic programming or Office development: - Dynamic Lookup - A syntax for making calls into dynamic languages and APIs. - Named and Optional Arguments - Allows developers to easily call into and create methods that support optional parameters. This is a big aid to Office developers since the Office APIs make heavy use of optional parameters. - COM Interop - Language features that make it easier to call COM objects.
When calling COM objects you can now omit the keyword ref, you need to make fewer casts, and you no longer need to depend on heavy-weight files called Primary Interop Assemblies, or PIAs. - Variance - Covariance and contravariance have nothing to do with dynamic programming or Office development. Instead, they represent a small, very technical enhancement to the language that makes it easier to use inheritance when writing generic code. In general, this feature ensures that the language behaves as expected, rather than forcing you to rewrite code that looks like it should compile. For instance, the following code has a natural syntax that looks like it should work, but it does not work in C# 3.5. The upcoming changes in C# 4.0 ensure that it will compile: IEnumerable<string> strings = new List<string>(); IEnumerable<object> objects = strings; That’s the end of our quick overview of the new features in the C# 4.0 language. Again, remember that there are many other new features in Visual Studio, and many new enhancements to existing technologies such as WPF, WCF, Silverlight and LINQ. In this post, I’ve focused exclusively on changes to the basic syntax of the language that have been implemented by the C# team itself. References Here are a few references for people who want to explore this subject in more depth. A guide for developers interested in all things related to C# 4.0 is the C# Futures site on Code Gallery. There you will find a downloads page that contains the following key resources: - An excellent and definitive white paper on C# 4.0 language features. Mads Torgersen was the primary author of this document. - The C# 4.0 samples Other important resources include the C# 4.0 sections of Eric Lippert’s and Sam Ng’s blogs:
https://docs.microsoft.com/en-us/archive/blogs/charlie/community-convergence-l
2021-01-16T07:16:51
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
Segment validation is saved on completion in the system_distributed.nodesync_status table, which is used internally for resuming on failure, prioritization, segment locking, and by tools; it is not meant to be read directly.
https://docs.datastax.com/en/dse/6.8/dse-arch/datastax_enterprise/dbArch/archNodesync.html
2021-01-16T06:05:27
CC-MAIN-2021-04
1610703500028.5
[]
docs.datastax.com
Bevel Class Definition. Bevel. When the object is serialized out as xml, its qualified name is a:bevel. public class Bevel : DocumentFormat.OpenXml.Drawing.BevelType type Bevel = class inherit BevelType Public Class Bevel Inherits BevelType - Inheritance - Remarks [ISO/IEC 29500-1 1st Edition] bevel (Bevel) This element defines the properties of the bevel associated with the 3D effect applied to a cell in a table. [Note: The W3C XML Schema definition of this element’s content model (CT_Bevel) is located in §A.4.1. end note] © ISO/IEC 29500: 2008.
https://docs.microsoft.com/en-us/dotnet/api/documentformat.openxml.drawing.bevel?view=openxml-2.8.1
2021-01-16T07:25:46
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
Power Query query folding This article targets data modelers developing models in Power Pivot or Power BI Desktop. It describes what Power Query query folding is, and why it is important in your data model designs. It also describes the data sources and transformations that can achieve query folding, and how to determine that your Power Query queries can be folded—whether fully or partially. Query folding is the ability for a Power Query query to generate a single query statement to retrieve and transform source data. The Power Query mashup engine strives to achieve query folding whenever possible for reasons of efficiency. Query folding is an important topic for data modeling for several reasons: - Import model tables: Data refresh will take place efficiently for Import model tables (Power Pivot or Power BI Desktop), in terms of resource utilization and refresh duration. - DirectQuery and Dual storage mode tables: Each DirectQuery and Dual storage mode table (Power BI only) must be based on a Power Query query that can be folded. - Incremental refresh: Incremental data refresh (Power BI only) will be efficient, in terms of resource utilization and refresh duration. In fact, the Power BI Incremental Refresh configuration window will notify you of a warning should it determine that query folding for the table cannot be achieved. If it cannot be achieved, the objective of incremental refresh is defeated. The mashup engine would then be required to retrieve all source rows, and then apply filters to determine incremental changes. Query folding may occur for an entire Power Query query, or for a subset of its steps. When query folding cannot be achieved—either partially or fully—the Power Query mashup engine must compensate by processing data transformations itself. This process can involve retrieving source query results, which for large datasets is very resource intensive and slow. We recommend that you strive to achieve efficiency in your model designs by ensuring query folding occurs whenever possible. Sources that support folding Most data sources that have the concept of a query language support query folding. These data sources can include relational databases, OData feeds (including SharePoint lists), Exchange, and Active Directory. However, data sources like flat files, blobs, and web typically do not. Transformations that can achieve folding Relational data source transformations that can be query folded are those that can be written as a single SELECT statement. A SELECT statement can be constructed with appropriate WHERE, GROUP BY, and JOIN clauses. It can also contain column expressions (calculations) that use common built-in functions supported by SQL databases. Generally, the following list describes transformations that can be query folded. Removing columns. Renaming columns (SELECT column aliases). Filtering rows, with static values or Power Query parameters (WHERE clause predicates). Grouping and summarizing (GROUP BY clause). Expanding record columns (source foreign key columns) to achieve a join of two source tables (JOIN clause). Non-fuzzy merging of fold-able queries based on the same source (JOIN clause). Appending fold-able queries based on the same source (UNION ALL operator). Adding custom columns with simple logic (SELECT column expressions). Simple logic implies uncomplicated operations, possibly including the use of M functions that have equivalent functions in the SQL data source, like mathematic or text manipulation functions. 
For example, the following expression returns the year component of the OrderDate column value (to return a numeric value). Date.Year([OrderDate]) Pivoting and unpivoting (PIVOT and UNPIVOT operators). Transformations that prevent folding Generally, the following list describes transformations that prevent query folding. This is not intended to be an exhaustive list. Merging queries based on different sources. Appending (union-ing) queries based on different sources. Adding custom columns with complex logic. Complex logic implies the use of M functions that have no equivalent functions in the data source. For example, the following expression formats the OrderDate column value (to return a text value). Date.ToText([OrderDate], "yyyy") Adding index columns. Changing a column data type. Note that when a Power Query query encompasses multiple data sources, incompatibility of data source privacy levels can prevent query folding from taking place. For more information, see the Power BI Desktop privacy levels article. Determine when a query can be folded In the Power Query Editor window, it is possible to determine when a Power Query query can be folded. In the Query Settings pane, when you right-click the last applied step, if the View Native Query option is enabled (not greyed out), then the entire query can be folded. Note The View Native Query option is only available for certain relational DB/SQL generating connectors. It doesn't work for OData based connectors, for example, even though there is folding occurring on the backend. The Query Diagnostics feature is the best way to see what folding has occurred for non-SQL connectors (although the steps that fold aren't explicitly called out—you just see the resulting URL that was generated). To view the folded query, you select the View Native Query option. You are then presented with the native query that Power Query will use to source data. If the View Native Query option is not enabled (greyed out), this is evidence that not all query steps can be folded. However, it could mean that a subset of steps can still be folded. Working backwards from the last step, you can check each step to see if the View Native Query option is enabled. If this is the case, then you have learned where, in the sequence of steps, query folding could no longer be achieved.
https://docs.microsoft.com/en-us/power-query/power-query-folding
2021-01-16T06:29:33
CC-MAIN-2021-04
1610703500028.5
[array(['media/power-query-folding/query-folding-example.png', 'Example of determining that Power Query can achieve query folding in Power BI Desktop'], dtype=object) array(['media/power-query-folding/native-query-example.png', 'Example of a native query in Power BI Desktop'], dtype=object) array(['media/power-query-folding/query-folding-not-example.png', 'Example of determining that Power Query cannot achieve query folding in Power BI Desktop'], dtype=object) ]
docs.microsoft.com
You can set up multiple virtual flash resources using a batch configuration. About this task Set up multiple virtual flash resources at the same time or increase virtual flash resource capacity on hosts already configured with a virtual flash resource. Prerequisites Verify that hosts are configured with ESXi 5.5 or higher and have eligible SSDs so the hosts can appear in the Add Virtual Flash Capacity list. Procedure - In the vSphere Web Client, navigate to the host. - Right-click the host, select . - Select one or more SSDs. The selected SSDs are formatted so that the virtual flash resource can use them. All data on the disks is erased. - Click OK. Results Multiple virtual flash resources are created.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-C9781528-3727-4D08-809C-0FAD16C5EE61.html
2018-06-18T05:49:04
CC-MAIN-2018-26
1529267860089.11
[]
docs.vmware.com
Unit Testing Slash¶ The following information is intended for anyone interested in developing Slash or adding new features, explaining how to effectively use the unit testing facilities used to test Slash itself. The Suite Writer¶ The unit tests use a dedicated mechanism that allows creating a virtual test suite, then easily writing it to a real directory, running it with Slash, and introspecting the result. The suite writer is available from tests.utils.suite_writer: >>> from tests.utils.suite_writer import Suite >>> suite = Suite() Basic Usage¶ Add tests by calling add_test(). By default, this will pick a different test type (function/method) every time. >>> for i in range(10): ... test = suite.add_test() The created test object is not an actual test that can be run by Slash – it is an object representing a future test to be created. The test can later be manipulated to perform certain actions when run or to expect things when run. The simplest thing we can do is run the suite: >>> summary = suite.run() >>> len(summary.session.results) 10 >>> summary.ok() True We can, for example, make our test raise an exception, thus being considered an error: >>> test.when_run.raise_exception() Now let’s run the suite again (it will commit itself to a new path so we can completely disregard the older session): >>> summary = suite.run() >>> summary.session.results.get_num_errors() 1 >>> summary.ok() False The suite writer already takes care of verifying that the errored test is actually reported as an error and fails the run. Adding Parameters¶ To test parametrization, the suite writer supports adding parameters and fixtures to tests. First we will look at parameters (translating into @slash.parametrize calls): >>> suite.clear() >>> test = suite.add_test() >>> p = test.add_parameter() >>> len(p.values) 3 >>> suite.run().ok() True Adding Fixtures¶ Fixtures are slightly more complex, since they have to be added to a file first. You can create a fixture at the file level: >>> suite.clear() >>> test = suite.add_test() >>> f = test.file.add_fixture() >>> _ = test.depend_on_fixture(f) >>> suite.run().ok() True Fixtures can also be added to the slashconf file: >>> f = suite.slashconf.add_fixture() Fixtures can depend on each other and be parametrized: >>> suite.clear() >>> f1 = suite.slashconf.add_fixture() >>> test = suite.add_test() >>> f2 = test.file.add_fixture() >>> _ = f2.depend_on_fixture(f1) >>> _ = test.depend_on_fixture(f2) >>> p = f1.add_parameter() >>> summary = suite.run() >>> summary.ok() True >>> len(summary.session.results) == len(p.values) True You can also control the fixture scope: >>> f = suite.slashconf.add_fixture(scope='module') >>> _ = suite.add_test().depend_on_fixture(f) >>> suite.run().ok() True And specify autouse (or implicit) fixtures: >>> suite.clear() >>> f = suite.slashconf.add_fixture(scope='module', autouse=True) >>> t = suite.add_test() >>> suite.run().ok() True
http://slash.readthedocs.io/en/master/unit_testing.html
2018-06-18T05:26:58
CC-MAIN-2018-26
1529267860089.11
[]
slash.readthedocs.io
Forced source-fitting stage¶ (See also the relevant configuration parameters.) When the source association stage is complete, the pipeline proceeds to handle forced-fits. These may be required either for measuring ‘null detections’ or for sources added to the monitoringlist. Null detection handling¶ A “null detection” is the term used to describe a source which was expected to be measured in a particular image (because it has been observed in previous images covering the same field of view) but was in fact not detected by the blind extraction step. After the blindly-extracted source measurements have been stored to the database and have been associated with the known sources in the running catalogue, the null detection stage starts. We retrieve from the database a list of sources that serve as the null detections. Sources on this list come from the runningcatalog and - Do not have a counterpart in the extractedsources of the current image after source association has run. - Have been seen (in any band) at a timestamp earlier than that of the current image. We determine null detections only as those sources which have been seen at earlier times but which don’t appear in the current image. Sources which have not been seen earlier, and which appear in different bands at the current timestep, are not null detections, but are considered as “new” sources. For all sources identified as null detections, we measure fluxes by performing a forced elliptical Gaussian fit to the expected source position on the image. The procedure followed is similar to that used for blind extraction, but rather than allowing the pixel position of the barycentre to vary freely, it is held to the known source position. No deblending is performed. The results of these “forced” source measurements are marked as such and appended to the database. After being added to the database, the forced fits are matched back to their running catalog counterparts in order to append them as datapoints in the light curve. This matching does not include the De Ruiter radius, since the source position came from the running catalog. It is sufficient to use the weighted positional error as a cone search, since the positions are identical. Therefore the forced fit position is not included as an extra datapoint in the position of the running catalog. The fluxes, however, are included in the statistical properties and the values are updated. It is worth emphasizing that the above procedure guarantees that every known source will have either a blind detection or a forced-fit measurement in every image from the moment it was detected for the first time. Null-detection handling depends upon the same job configuration file parameters as the “blind” source-extraction stage. Monitoringlist¶ If monitoringlist positions have been specified when running a job, then these are always force-fitted whenever an image’s source-extraction region contains the designated co-ordinates. These forced fits are used to build up a special runningcatalog entry which is excluded from association with regular extractions.
http://tkp.readthedocs.io/en/release2.0/userref/structure/stages/forcedfits.html
2018-06-18T05:16:41
CC-MAIN-2018-26
1529267860089.11
[]
tkp.readthedocs.io
Generate a scheduled assessment manually Administrators can generate scheduled assessments manually. Before you begin: Role required: assessment_admin or admin About this task Use this option, for example, if you have set a schedule but want to generate assessments before the next scheduled run date. Procedure Navigate to Assessments > Metric Definition > Types. Open the appropriate metric type record. Click Generate Assessments to trigger the scheduled job immediately. Note: Be careful to click Generate Assessments, not Generate Assessable Records.
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/assessments/task/t_GenSchedAssessmentManually.html
2018-06-18T05:40:32
CC-MAIN-2018-26
1529267860089.11
[]
docs.servicenow.com
As of FlowJo 10.0.7, there is a tool to assist in rapid visualization of plate-based flow data. Looking for outliers in a flow cytometry-based assay? There is a new plate tool built into the layout. In the example below, I would like to generate heatmaps for the MFI (median fluorescence intensity) of phosphorylated STAT5 on CD4+ cells. In order to do this: - Open the Layout Editor. - Select the plate tool and put it into the layout to set a single well. - By default, there is only one cell. - Select the group from which the statistics will be drawn using the plate menu at left. In this case, I have selected the group – “Plate 1.” - Drag and drop the statistic you would like heatmapped onto the plate. - The plate will rescale and display the heatmap of the statistic: Heatmap of pSTAT5 MFI in CD4+ T cells. - At right you’ll note that the statistic shown in the heatmap for this plate has been updated, as has the annotation at the bottom left of the plate. In addition, there is an option to display the sample name at right. Additional Options: If you drag a second statistic into the plate, you can generate a split heatmap of both statistics: The tray at the right will allow you to control which statistic is listed in which half: Finally, up to 10 statistics can be viewed using Chernoff faces: The list of attributes that can be assigned is presented here: We are also working on methods of plate annotation to facilitate automated analysis, and encourage you to contact us (flowjo [at] treestar.com) or visit our information page for high throughput analyses to get the latest update to this platform. See Also: - Metadata annotation overview - Importing a spreadsheet of keywords - Setting up a dilution factor keyword set - Plates Band - Plate Editor overview
http://docs.flowjo.com/vx/experiment-based-platforms/plate-editor/visualizing-plate-based-data/
2018-06-18T05:15:35
CC-MAIN-2018-26
1529267860089.11
[array(['http://docs.flowjo.com/wp-content/uploads/sites/5/2013/04/FlowJo-Layouts-20130618.png', 'FlowJo Layouts---20130618'], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/5/2013/04/Screen-Shot-2013-06-18-at-10.33.07-AM-1.png', 'Screen Shot 2013-06-18 at 10.33.07 AM-1'], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/5/2013/04/FlowJo_Layouts-20130814.jpg', 'FlowJo_Layouts---20130814'], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/5/2013/04/FlowJo_Layouts-20130814-2.jpg', 'FlowJo_Layouts---20130814 2'], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/5/2013/04/FlowJo_Layouts-20130814-3.jpg', 'FlowJo_Layouts---20130814 3'], dtype=object) array(['http://docs.flowjo.com/wp-content/uploads/sites/5/2013/04/FlowJo_Layouts-20130814-4.jpg', 'FlowJo_Layouts---20130814 4'], dtype=object) ]
docs.flowjo.com
convert dateAndTime [from format [and format]] to format [and format] convert "11/22/90" to long english date convert it from internet date to system date convert lastChangedTime to abbreviated time and long date convert the date && the time to seconds Use the convert command to change a date or time to a format that's more convenient for calculation or display. Parameters: The dateAndTime is a string or container with a date, a time, or a date and time separated by a space, tab, or return character. The format is one of the following (examples are February 17, 2000 at 10:13:21 PM, in the Eastern time zone, using US date and time formats): * short date: 2/17/00 * abbreviated date: Thu, Feb 17, 2000 * long date: Thursday, February 17, 2000 * short time: 10:13 PM * abbreviated time: 10:13 PM * long time: 10:13:21 PM * internet date: Thu, 17 Feb 2000 22:13:21 -0500 * seconds: the number of seconds since the start of the eon * dateItems: 2000,2,17,22,13,21,5 If you specify both a date and time format, they can be in either order and must be separated by a space. The resulting date and time are in the order you provided, separated by a space. If you specify seconds or dateItems, you can request only one format. If the dateAndTime is a container, the converted date and time is placed in the container, replacing the previous contents. If the dateAndTime is a string, the converted date and time is placed in the it variable. The convert command can handle dates in dateItems format where one or more of the items is out of the normal range. This means you can add arbitrary numbers to an item in the dateItems and let the convert command handle the calculations that span minute, hour, day, month, and year boundaries. For example, suppose you start with 9:30 AM, convert that time to dateItems format, then add 45 minutes to item 5 (the minute) of the resulting value. This gives you 75 as the minute. When you convert the value to any other time format, the convert command automatically converts "75 minutes" to "1 hour and 15 minutes": convert "9:30 AM" to dateItems add 45 to item 5 of it convert it to time -- yields "10:15 AM" You can optionally use the english or system keyword before the short, abbreviated, or long date or time. If the useSystemDate is true, or if you use the system keyword, the user's current system preferences are used to format the date or time. Otherwise, the standard US date and time formats are used. The internet date, seconds, and dateItems formats are invariant and do not change according to the user's preferences. Note: The convert command assumes all dates / times are in local time except for 'the seconds', which is taken to be universal time. Note: If you convert a date without a time to a form that includes the time, the time will be given as midnight local time. Note: The range of dates that the convert command can handle is limited by the operating system's date routines. In particular, Windows systems are limited to dates after 1/1/1970. Changes to Revolution: The ability to use the date and time format preferred by the user was introduced in version 1.1. In previous versions, the convert command, along with the time and date functions, consistently used the standard U.S. format, even if the operating system's settings specified another language or date and time format. The ability to specify a format to convert from was introduced in version 1.1.
In previous versions, Revolution automatically guessed the format to convert from.
http://docs.runrev.com/Command/convert
2018-06-18T05:42:31
CC-MAIN-2018-26
1529267860089.11
[]
docs.runrev.com
Web Plugin¶ Install¶ The Web interface depends on Flask. To get it, just run pip install flask. Then enable the web plugin in your configuration (see Using Plugins). If you need CORS (it’s disabled by default—see Cross-Origin Resource Sharing (CORS), below), then you also need flask-cors. Just type pip install flask-cors. Run the Server¶ Then just type beet web to start the server and go to. This is what it looks like: You can also specify the hostname and port number used by the Web server. These can be specified on the command line or in the [web] section of your configuration file. On the command line, use beet web [HOSTNAME] [PORT]. Or use the configuration options below. - readonly: If true, DELETE and PATCH operations are not allowed. Only GET is permitted. Default: true. Implementation¶ JSON API¶ GET /item/¶ Responds with a list of all tracks in the beets library. { "items": [ { "id": 6, "title": "A Song", ... }, { "id": 12, "title": "Another Song", ... } ... ] } GET /item/6¶ Looks for an item with id 6 in the beets library and responds with its JSON representation. { "id": 6, "title": "A Song", ... } If there is no item with that id, it responds with a 404 status code. DELETE /item/6¶ Removes the item with id 6 from the beets library. If the ?delete query string is included, the matching file will be deleted from disk. Only allowed if the readonly configuration option is set to no. PATCH /item/6¶ Updates the item with id 6 and writes the changes to the music file. The body should be a JSON object containing the changes to the object. Returns the updated JSON representation. { "id": 6, "title": "A Song", ... } Only allowed if the readonly configuration option is set to no. GET /item/6,12,13¶ Responds with a list of tracks with the ids 6, 12 and 13. The format of the response is the same as for GET /item/. It is not guaranteed that the response includes all the items requested. If a track is not found it is silently dropped from the response. This endpoint also supports DELETE and PATCH methods as above, to operate on all items of the list. GET /item/path/...¶ Look for an item at the given absolute path on the server. If it corresponds to a track, return the track in the same format as /item/*. If the server runs UNIX, you’ll need to include an extra leading slash: GET /item/query/querystring¶ Returns a list of tracks matching the query. The querystring must be a valid query as described in Queries. { "results": [ { "id" : 6, "title": "A Song" }, { "id" : 12, "title": "Another Song" } ] } Path elements are joined as parts of a query. For example, /item/query/foo/bar will be converted to the query foo,bar. To specify literal path separators in a query, use a backslash instead of a slash. This endpoint also supports DELETE and PATCH methods as above, to operate on all items returned by the query. GET /item/6/file¶ Sends the media file for the track. If the item or its corresponding file do not exist, a 404 status code is returned. Albums¶ For albums, the following endpoints are provided: GET /album/ GET /album/5 GET /album/5/art DELETE /album/5 GET /album/5,7 DELETE /album/5,7 GET /album/query/querystring DELETE /album/query/querystring The interface and response format are similar to the item API, except replacing the encapsulation key "items" with "albums" when requesting /album/ or /album/5,7. In addition we can request the cover art of an album with GET /album/5/art. You can also add the ‘?expand’ flag to get the individual items of an album.
DELETE is only allowed if the readonly configuration option is set to no.
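Not part of the plugin documentation itself — just a hypothetical client-side sketch of how the JSON API above could be consumed with the requests library. The host and port assume a locally started server (beet web) on its usual default address; adjust them to match your own configuration.

```python
import requests

BASE = "http://localhost:8337"  # assumed local beet web address; change if yours differs

# Mirrors GET /item/query/querystring: find tracks matching "beatles".
resp = requests.get(f"{BASE}/item/query/beatles")
resp.raise_for_status()
results = resp.json()["results"]

for item in results:
    print(item["id"], item.get("title"))

# Mirrors GET /item/<id>/file: download the media file of the first match.
if results:
    audio = requests.get(f"{BASE}/item/{results[0]['id']}/file")
    audio.raise_for_status()
    with open("first_match_media", "wb") as fh:
        fh.write(audio.content)
```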
http://docs.beets.io/en/stable/plugins/web.html
2022-08-07T19:10:45
CC-MAIN-2022-33
1659882570692.22
[array(['../_images/beetsweb.png', '../_images/beetsweb.png'], dtype=object)]
docs.beets.io
The Conveyor Panel The Conveyor panel defines spatial properties of the conveyor. The following properties are on the Straight Conveyor panel: Start Defines the X, Y, and Z position of the start of the conveyor. End Defines the X, Y, and Z position of the end of the conveyor. Reversing Direction Press the button to switch the start and end locations, reversing the conveyor's direction. Width Defines the width of the conveyor. Horizontal Length Defines the total length of the conveyor. Changing this value will adjust the End property. Virtual Length A virtual length lets you specify a length to simulate, rather than using the conveyor's physical length. To use, check the box and enter a length. Visualization Defines the conveyor's Visualization. Roller Skew Angle Defines which way the conveyor's rollers are offset from straight. If this value is positive, items will align to the left side of the conveyor. If it is negative, items will align to the right side of the conveyor. Align To Side This property is only visible for mass flow conveyors. It defines the side of the conveyor that mass flow units will be drawn on.
https://docs.flexsim.com/en/22.2/Reference/PropertiesPanels/ConveyorPanels/Conveyor/Conveyor.html
2022-08-07T19:17:25
CC-MAIN-2022-33
1659882570692.22
[array(['Images/Properties.png', None], dtype=object)]
docs.flexsim.com
Lockfiles¶ Warning This is an experimental feature subject to breaking changes in future releases. Lockfiles are files that store the information of a dependency graph, including the exact versions, revisions, options, and configuration of that dependency graph. These files allow for later achieving reproducible results, and installing or using the exact same dependencies even when the requirements are not fully reproducible, for example when using version ranges or using package revisions. - Introduction - Multiple configurations - Evolving lockfiles - Build order in lockfiles - Lockfiles in Continuous Integration
https://docs.conan.io/en/1.31/versioning/lockfiles.html
2022-08-07T18:18:40
CC-MAIN-2022-33
1659882570692.22
[]
docs.conan.io
This gives you the opportunity to show a help text underneath the question. This could be a specification of the question or showing a range wherein the answers should be. How to add a Help Text: - Go to the question of interest. - Click Advanced options - Enable Show help text - Specify details for the Help text. - Click Save Form. The Help Text is limited to 1024 characters. Related pages: Can't find the answer you're looking for? Don't hesitate to raise a support request via [email protected].
https://docs.medei.co/pages/viewpage.action?pageId=22511825
2022-08-07T18:18:49
CC-MAIN-2022-33
1659882570692.22
[array(['/download/attachments/22511825/Screenshot%202019-07-08%20at%2013.17.06.png?version=1&modificationDate=1562584833978&api=v2', None], dtype=object) ]
docs.medei.co
Regions¶ Regions are geographical areas arranged in a tree-like structure with “Earth” being the root region. Countries are regions with a non-empty country_code. Carriers can offer their services in any region. Shipment addresses should always contain a country_code from a region defined as a country. Retrieve regions¶ You can filter the regions on a parent, country_code, region_code, name and postal_code. If you add these filters the request would look something like this: GET /regions?filter[country_code]=GB&filter[region_code]=SCT HTTP/1.1 Note If a postal code exists in more than one country, multiple regions are returned. The postal_code filter can be used in combination with the country_code filter to get more specific results.
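As an illustration only (not taken from this documentation), the same filtered request could be issued from a script. The base URL and headers below are placeholders — consult the API's authentication section for the real values — but the filter parameters mirror the GET /regions example above.

```python
import requests

BASE_URL = "https://api.example.com"          # placeholder host, not the real endpoint
HEADERS = {"Accept": "application/json"}      # add the required authentication header here

# Equivalent of GET /regions?filter[country_code]=GB&filter[region_code]=SCT
params = {
    "filter[country_code]": "GB",
    "filter[region_code]": "SCT",
}
response = requests.get(f"{BASE_URL}/regions", params=params, headers=HEADERS)
response.raise_for_status()
print(response.json())
```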
https://docs.myparcel.com/api/resources/regions.html
2022-08-07T19:35:35
CC-MAIN-2022-33
1659882570692.22
[]
docs.myparcel.com
Scaling¶ Citus Cloud provides self-service scaling to deal with increased load. The web interface makes it easy to either add new worker nodes or increase existing nodes’ memory and CPU capacity. For most cases either approach to scaling is fine and will improve performance. However, there are times you may want to choose one over the other. If the cluster is reaching its disk limit then adding more nodes is the best choice. Alternately, if there is still a lot of headroom on disk but performance is suffering, then scaling node RAM and processing power is the way to go. Both adjustments are available in the formation configuration panel of the settings tab. A maintenance window specifies a preferred time for any maintenance tasks to be performed on your formation. When a window is set, changes to the formation (e.g. changing to a different worker size) will by default occur within this window, unless manually adjusted by Citus Cloud support. In addition, when a maintenance window is set, base backups on the node will start during the window. Citus Cloud will display a popup message in the console while scaling actions have begun or are scheduled. The message will disappear when the action completes. For instance, when adding nodes: Or when waiting for node resize to begin in the maintenance window: Scaling Up (increasing node size)¶ Resizing node size works by creating a PostgreSQL follower for each node, where the followers are provisioned with the desired amount of RAM and CPU cores. It takes an average of forty minutes per hundred gigabytes of data for the primary nodes’ data to be fully synchronized on the followers. After the synchronization is complete, Citus Cloud does a quick switchover from the existing primary nodes to their followers which takes about two minutes. The creation and switchover process uses the same well-tested replication mechanism that powers Cloud’s Backup, Availability, and Replication feature. During the switchover period clients may experience errors and have to retry queries, especially cross-tenant queries hitting multiple nodes. Scaling Out (adding new nodes)¶ Node addition completes in five to ten minutes, which is faster than node resizing because the new nodes are created without data. To take advantage of the new nodes you still must manually rebalance the shards, meaning move some shards from existing nodes to the new ones. Rebalancing¶ The procedure is described in Rebalance Shards without Downtime. Citus will output progress in both psql (saying which shards are moving) and graphically in the Cloud console: The rebalance progress is also queryable in SQL with the get_rebalance_progress() function.
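The rebalance itself is driven from SQL on the coordinator. The snippet below is only a rough sketch of scripting that step with psycopg2 — the connection string and table name are placeholders, and the exact signature of the rebalance function may differ between Citus versions, so check the Rebalance Shards without Downtime documentation before relying on it.

```python
import psycopg2

# Placeholder connection string -- point this at your formation's coordinator.
conn = psycopg2.connect(
    "host=coordinator.example.com dbname=citus user=citus password=secret"
)
conn.autocommit = True

with conn.cursor() as cur:
    # Move shards of a distributed table onto the newly added nodes.
    # This call blocks until the selected shard moves have finished.
    cur.execute("SELECT rebalance_table_shards('my_distributed_table');")

# While the rebalance runs, a *second* session can watch it with:
#   SELECT * FROM get_rebalance_progress();
```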
http://docs.citusdata.com/en/v8.2/cloud/performance.html
2019-05-19T17:11:29
CC-MAIN-2019-22
1558232255071.27
[array(['../_images/cloud-nodes-slider.png', '../_images/cloud-nodes-slider.png'], dtype=object) array(['../_images/cloud-nodes-slider-2.png', '../_images/cloud-nodes-slider-2.png'], dtype=object) array(['../_images/cloud-maintenance-window.png', '../_images/cloud-maintenance-window.png'], dtype=object) array(['../_images/cloud-scaling-out.png', '../_images/cloud-scaling-out.png'], dtype=object) array(['../_images/cloud-scaling-up.png', '../_images/cloud-scaling-up.png'], dtype=object) array(['../_images/cloud-rebalance-unnecessary.png', '../_images/cloud-rebalance-unnecessary.png'], dtype=object) array(['../_images/cloud-rebalance-recommended.png', '../_images/cloud-rebalance-recommended.png'], dtype=object) array(['../_images/cloud-rebalancer-gui.gif', '../_images/cloud-rebalancer-gui.gif'], dtype=object)]
docs.citusdata.com
Installation and configuration of the development environment¶ Liferay Bundle Installation - Download Liferay Bundle 6.1.1-ce-ga2 for Glassfish from here - Unzip the Liferay Bundle in the folder you prefer. - set the LIFERAY_HOME env variable to the folder containing your Liferay Bundle: export LIFERAY_HOME=/Users/Macbook/Downloads/liferay-portal-6.1.1-ce-ga2 - set the executable permission on all binary files in the glassfish bin folder: chmod +x glassfish-3.1.2/bin/* - start the domain using the following command: $LIFERAY_HOME/glassfish-3.1.2/bin/asadmin start-domain domain1 You should have an output like the following: ----------------------------------------------------------------------------------------------------- RicMac:bin Macbook$ $LIFERAY_HOME/glassfish-3.1.2/bin/asadmin start-domain domain1 Waiting for domain1 to start ...... Successfully started the domain : domain1 domain Location: /Users/Macbook/Downloads/liferay-portal-6.1.1-ce-ga2/glassfish-3.1.2/domains/domain1 Log File: /Users/Macbook/Downloads/liferay-portal-6.1.1-ce-ga2/glassfish-3.1.2/domains/domain1/logs/server.log Admin Port: 4848 Command start-domain executed successfully. ---------------------------------------------------------------------------------------------------- - Open a browser window to This procedure will take a while during the first connection. At the end you should get the following interface: Press the ‘Finish Configuration’ button; it generates the portal’s configuration file: /Users/Macbook/Downloads/liferay-portal-6.1.1-ce-ga2/portal-setup-wizard.properties Press the ‘Go My Portal’ button, agree to the conditions, set the new password and password retrieval questions. After that you’ll be redirected to the Liferay home page. To check the Liferay log file: tail -f $LIFERAY_HOME/glassfish-3.1.2/domains/domain1/logs/server.log MySQL - Installation and Configuration In case you already have a MySQL server on your system, you can skip this step, just verifying that your version is < 5.6 due to an incompatibility issue between newer MySQL versions and the jdbc-connector.jar library provided with the current version of the Liferay bundle. You could skip the subscription to the ORACLE Web Login. DB_MACOSX: Instructions are available inside the README.txt file. Select the DMG file and execute the two pkg icons. From the terminal.app execute: sudo /Library/StartupItems/MySQLCOM/MySQLCOM start (your password will be requested) Add the PATH to the .profile: export PATH=$PATH:/usr/local/mysql/bin Start the service RicMac:liferay-portal-6.1.1-ce-ga2 Macbook$ sudo /Library/StartupItems/MySQLCOM/MySQLCOM start Password: Starting MySQL database server DB_LINUX: On L5/6 it is possible to install MySQL with: yum install mysql-server Then the following commands will enable mysql to start at boot and start up the mysql daemon process # chkconfig mysqld on # /etc/init.d/mysqld start - generate the portal-ext.properties file: cat <<EOF > $LIFERAY_HOME/portal-ext.properties=liferayadmin jdbc.default.password=liferayadmin EOF - create Liferay database mysql -u root CREATE USER 'liferayadmin' IDENTIFIED BY 'liferayadmin'; CREATE DATABASE lportal; GRANT ALL PRIVILEGES ON lportal.* TO 'liferayadmin'@'localhost' IDENTIFIED BY 'liferayadmin'; - Download the mysql-connector from here and copy it in $LIFERAY_HOME/glassfish-3.1.2/domains/domain1/lib/ (!) Restart Liferay; this will cause Liferay to identify the DB and create new tables and data.
$LIFERAY_HOME/glassfish-3.1.2/bin/asadmin stop-domain domain1 && \ $LIFERAY_HOME/glassfish-3.1.2/bin/asadmin start-domain domain1 Liferay Plugins SDK Download the SDK from here (Liferay Plugins SDK 6.1 GA 2). Open the file LIFERAY_SDK_HOME/build.properties, uncomment the ‘glassfish’ settings and set up the proper file path values. Comment out the default enabled tomcat settings. Pay attention that in LIFERAY_SDK_HOME/build.properties there are also settings to specify which java compiler will be used by ant; in case of trouble try to set the ‘javac.compiler’ option properly, for instance switching to the ‘modern’ value. Be sure your system has ‘ant’ and ‘ecj’ installed, otherwise install them. A small test could be the use of: cd $LIFERAY_SDK_HOME/portlets/ ./create.sh hello-world "Hello-World" Pay attention that the create.sh file normally does not have the execution permission enabled: chmod +x ./create.sh - This should create the ‘hello-world’ portlet folder. - Enter the hello-world-portlet folder: cd hello-world-portlet - Execute the deploy command: ant deploy The Liferay log file should contain some lines like this: Successfully autodeployed : LIFERAY_HOME/glassfish-3.1.2/domains/domain1/autodeploy/hello-world-portlet.|#] Grid Engine Stop Liferay $LIFERAY_HOME/glassfish-3.1.2/bin/asadmin stop-domain domain1 - To create the database and the tables: mysql -u root < UsersTrackingDB/UsersTrackingDB.sql In case the users tracking database already exists, uncomment the line: -- drop database userstracking; Pay attention: the line above will destroy the existing database. - Download the Grid Engine and JSAGA libraries from sourceforge and copy them into a temporary folder: # # Use curl <namefile> > <namefile> in case you do not have wget # wget - Unzip the GridEngine_v1.5.9.zip inside the temporary folder: unzip GridEngine_v1.5.9.zip - Move the config file from the temporary folder to the Liferay config folder: mv <temp folder path>/GridEngine_v1.5.9/GridEngineLogConfig.xml $LIFERAY_HOME/glassfish-3.1.2/domains/domain1/config - Move all the other files to the Liferay lib folder mv <temp folder path>/GridEngine_v1.5.9/* $LIFERAY_HOME/glassfish-3.1.2/domains/domain1/lib - Start up Liferay $LIFERAY_HOME/glassfish-3.1.2/bin/asadmin start-domain domain1 - If you are using a virtual machine, be aware that Glassfish control panel access is normally forbidden from remote. The following commands are necessary to enable it: $LIFERAY_HOME/glassfish-3.1.2/bin/asadmin --host localhost --port 4848 change-admin-password $LIFERAY_HOME/glassfish-3.1.2/bin/asadmin enable-secure-admin Please refer to the Glassfish Administration Guide for more details. EUGRIDPMA and VOMSDIR Each access to any distributed infrastructure requires well defined authentication and authorization mechanisms. Most Grid infrastructures make use of the GSI. This security mechanism relies on X509 digital certificates provided by entities named Certification Authorities, which themselves use X509 certificates. The CAs are normally registered by the IGTF, a body that establishes common policies and guidelines between its Policy Management Authorities (PMAs). The CAs act as an independent trusted third party for both subscribers and relying parties within the infrastructure. In order to set up CA certificates, it is necessary to perform one of the following procedures. RPM based Linux distributions may try the first approach (Linux systems); the other platforms must use the second approach (Other systems).
- Linux systems On Linux systems it is possible to install the IGTF CA certificates by executing the following steps: - Other systems (MacOSx): Execute the following instructions to create the /etc/grid-security/certificates and /etc/grid-security/vomsdir folders: sudo mkdir -p /etc/grid-security curl > grid_settings.tar.gz sudo tar xvfz grid_settings.tar.gz -C /etc/grid-security/ (!) The archives below will expire over time, so they should be kept updated (!!) vomsdir must be updated with the VOs you are going to support VPN Setup to get access to the eTokenServer The eToken server is responsible for delivering grid proxy certificates to the GridEngine, starting from Robot Certificates stored on an eToken USB key. For security purposes it is not possible to access the eTokenServer directly. For portlet developers it is possible to open a VPN connection. In order to get the necessary certificates you have to send us a request. The VPN connection information will be released in OpenVPN format, together with the necessary certificate and a password. For Mac users we suggest Tunnelblick for MacOSX platforms. There is also this video showing how to set up the VPN from the configuration files sent by us. For other platforms like Linux we suggest installing the OpenVPN client and then executing from the same directory holding the certificate: openvpn --config <received_conf_file>.ovpn Please notice that on CentOS7 the VPN will not work by default, since the provided VPN certificates are encrypted using MD5 and SHA1, which are no longer supported on CentOS 7. To be able to use the VPN certificate anyway it is possible to enable MD5 support on CentOS7 by executing as root: cat >> /usr/lib/systemd/system/NetworkManager.service <<EOF [Service] Environment="OPENSSL_ENABLE_MD5_VERIFY=1 NSS_HASH_ALG_SUPPORT=+MD5" EOF systemctl daemon-reload systemctl restart NetworkManager.service Further details about this issue are available here (Thanks to Manuel Rodriguez Pascual). Development WARNING For architectural reasons the constructor of the GridEngine object must be declared differently than in portlet code written for the production environment. The constructor must be created with: MultiInfrastructureJobSubmission multiInfrastructureJobSubmission = new MultiInfrastructureJobSubmission ("jdbc:mysql://localhost/userstracking","tracking_user","usertracking"); In the portlet examples the constructor call lies inside the submitJob method. Integrated Development Environment (IDE) We recommend NetBeans as the IDE to develop portlets and other Liferay plugins. In order to create Liferay plugins you can use the Plugin Portal Pack extension of NetBeans or configure the plugin to use the Liferay SDK. References Liferay Plugin SDK - How to
https://csgf.readthedocs.io/en/latest/training-materials/docs/development-enviroment.html
2019-05-19T17:09:34
CC-MAIN-2019-22
1558232255071.27
[array(['../../_images/figure16.png', '../../_images/figure16.png'], dtype=object) ]
csgf.readthedocs.io
MicroApps MicroApps are pre-built collections of callflow templates, configuration screens, and scripts built to industry best practices. They allow for rapid deployment of commonly-required functions within a self-service system and can be used across all channels supported by Intelligent Automation. All available MicroApps are listed in the table below.
https://docs.genesys.com/Documentation/GAAP/latest/iaHelp/MicroApps
2019-05-19T16:50:21
CC-MAIN-2019-22
1558232255071.27
[]
docs.genesys.com
Our official language pack can be found on our download page. Please read also [Language Installation Guide](../../languages/Language Installation). The Kunena project uses Transifex to maintain all available translations. Transifex provides easy to use online tools, which can be used to update translations easily and without any technical knowledge. Alternatively Transifex allows you to download the current translation and update it with your favourite tool. It also allows anyone to download the most current version of any translated file. Downloaded files can be manually installed to your Joomla site. Please read Language files for more information. Having your language in Transifex is the only way to get it into the Kunena distribution. For information on how to use Transifex, read the wiki article about Transifex. There are some bugs in Transifex Joomla support, so please check these: make sure your files use Unix line endings ( \n), not Windows ( \r\n) or Mac ( \r) before uploading the files back to Transifex. To create an installable language file, you just have to follow these steps: The translated string has to be enclosed by double quotes: COM_KUNENA_A_TEMPLATE_MANAGER_COULD_NOT_WRITABLE=”Could not make the template parameter file writable” Do not split translations into multiple lines (only the first line will show up): COM_KUNENA_A_TEMPLATE_MANAGER_COULD_NOT_WRITABLE=”Could not make the template parameter file writable” If you want a new line, write: <br /> For example: COM_KUNENA_A_TEMPLATE_MANAGER_COULD_NOT_WRITABLE=”Could not make the <br /> template parameter file writable” Make your translations valid XHTML: not <br>, not <br/> - only <br /> is valid. If unsure, please use the same markup as we do. Files must be saved in UTF-8. Please do not use Double Quotes (“) in your translation, it will break the translation.
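As a convenience only — this is not part of the Kunena or Transifex tooling — here is a rough sketch of a script that checks a downloaded translation file against the rules above: UTF-8 encoding, Unix line endings, and each value enclosed in double quotes on a single line.

```python
import sys

def check_language_file(path):
    """Report violations of the translation-file rules described above."""
    raw = open(path, "rb").read()
    problems = []
    if b"\r" in raw:
        problems.append("file contains Windows/Mac line endings (must be Unix \\n)")
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        problems.append("file is not valid UTF-8")
        return problems
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line or line.startswith(";") or line.startswith("["):
            continue  # skip blank lines, comments and section headers
        key, sep, value = line.partition("=")
        if not sep:
            problems.append(f"line {lineno}: no '=' found")
        elif not (value.startswith('"') and value.endswith('"') and len(value) >= 2):
            problems.append(f"line {lineno}: value is not enclosed in double quotes")
    return problems

if __name__ == "__main__":
    for problem in check_language_file(sys.argv[1]):
        print(problem)
```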
https://docs.kunena.org/en/languages/translating-kunena
2019-05-19T16:37:09
CC-MAIN-2019-22
1558232255071.27
[]
docs.kunena.org
Want to see a live demo and learn what this Connector can do? Join our jam sessions! If you are looking for documentation for the new Salesforce & JIRA Cloud Connector, click here. This is a comprehensive user guide which documents all features intended for normal users of the Connector. This User Guide assumes that both JIRA and Salesforce instances have been set up according to the Administrator Guide. Want help getting the Connector setup and configured in your environment? Let our team do the work.
https://docs.servicerocket.com/display/CFSJ/User+Guide
2019-05-19T16:59:34
CC-MAIN-2019-22
1558232255071.27
[]
docs.servicerocket.com
Are you planning on completing your first installation? Have you followed the Appendix B, Prerequisites? Have you chosen your deployment type? See Chapter 2, Deployment Overview? Would you like to understand the different types of installation? There are two installation methods available in tpm, INI and Staging. A comparison of the two methods is at Section 9.1, “Comparing Staging and INI tpm Methods”. Do you want to upgrade to the latest version? See Section 9.5.16, “tpm update Command”. Are you trying to update or change the configuration of your system? See Section 9.5.16, “tpm update Command”. Would you like to perform database or operating system maintenance? See Section 7.14, “Performing Database or OS Maintenance”. Do you need to backup or restore your system? For backup instructions, see Section 7.7, “Creating a Backup”, and to restore a previously made backup, see Section 7.8, “Restoring a Backup”.
http://docs.continuent.com/tungsten-replicator-5.2/preface-quickstart.html
2019-05-19T17:07:13
CC-MAIN-2019-22
1558232255071.27
[]
docs.continuent.com
Boot from VMware VMware Fusion These instructions are for setting up netboot.xyz in a VM in VMware Fusion for macOS. Create the VM - Add a new virtual machine. - Select "Install from disc or image". - Click on "Use another disk or disc image...". - Download and select the netboot.xyz ISO. - On the Choose Operating System screen, select the OS type you are planning on installing. If you plan on testing multiple types of installs, you can just choose a CentOS 64-bit OS. - Click "Customize Settings" and give the VM a name, like "netboot.xyz". This will create your VM. Running the VM You'll need to adjust the memory settings of the VM to ensure you have enough memory to run the OS installers in memory. Typically it's good to bump the memory up to 2–4 GB. - Click the wrench icon, select Processors & Memory, and bump the memory up to the desired amount. - Start the VM up and you should see the netboot.xyz loader. - If you decide you no longer want to boot from netboot.xyz, you can either change the boot order to boot from the hard drive by default or delete the ISO from the VM.
http://netbootxyz.readthedocs.io/en/latest/boot-vmware/
2017-03-23T00:12:27
CC-MAIN-2017-13
1490218186530.52
[]
netbootxyz.readthedocs.io
OpenStreetMap¶ Nominatim (from the Latin, 'by name') is a tool to search OSM data by name and address and to generate synthetic addresses of OSM points (reverse geocoding). Using Geocoder you can retrieve OSM's geocoded data from Nominatim. Nominatim Server¶ Setting up your own offline Nominatim server is possible, using Ubuntu 14.04 as your OS and following the Nominatim Install instructions. This enables you to request as much geocoding as your little heart desires! >>> g = geocoder.osm("New York City", url=url) >>> g.json ... OSM Addresses¶ The addr tag is the prefix for several `addr:`* keys to describe addresses. This format is meant to be saved as a CSV and imported into JOSM. >>> g = geocoder.osm('11 Wall Street, New York') >>> g.osm { "x": -74.010865, "y": 40.7071407, "addr:country": "United States of America", "addr:state": "New York", "addr:housenumber": "11", "addr:postal": "10005", "addr:city": "NYC", "addr:street": "Wall Street" } Command Line Interface¶ $ geocode 'New York city' --provider osm --out geojson | jq . $ geocode 'New York city' -p osm -o osm $ geocode 'New York city' -p osm --url localhost Parameters¶ location: Your search location you want geocoded. url: Custom OSM Server (ex: localhost) method: (default=geocode) Use the following: - geocode
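A complete round trip with this provider looks like the sketch below. It hits the public Nominatim instance by default; the commented-out call shows how the url parameter would point at a self-hosted server, and that localhost path is an assumption, so use whatever your own Nominatim install exposes.
import geocoder

g = geocoder.osm("11 Wall Street, New York")
if g.ok:
    print(g.latlng)   # [latitude, longitude]
    print(g.osm)      # the addr:* dictionary shown above
else:
    print("lookup failed:", g.status)

# Against a self-hosted Nominatim server (URL is an assumption):
# g = geocoder.osm("11 Wall Street, New York", url="http://localhost/nominatim")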
http://geocoder.readthedocs.io/providers/OpenStreetMap.html
2017-03-23T00:21:16
CC-MAIN-2017-13
1490218186530.52
[]
geocoder.readthedocs.io
queue, and match queued records with existing catalog records. In this example, an acquisitions librarian has received a batch of MARC records from a vendor. She will add the records to a selection list and a Vandelay record queue. A cataloger will later view the queue, edit the records, and import them into the catalog. Browse your computer to find the MARC file, and click Upload. The processed items appear at the bottom of the screen. Look at the first line item. The line item has not yet been linked to the catalog, but it is linked to a record import queue. Click the link to the queue to examine the MARC record. Example 2: Using the Acquisitions MARC Batch Load interface, upload MARC records to a selection list, and use the Vandelay options to import the records directly into the catalog. The Vandelay options will enable you to match incoming records with existing catalog records. In this example, a librarian will add MARC records to a selection list, create criteria for matching incoming and existing records, and import the matching and non-matching records into the catalog. Browse your computer to find the MARC file, and click Upload. Click the link to View Selection List Line items that do not match existing catalog records on title and ISBN contain the link, link to catalog. This link indicates that you could link the line item to a catalog record, but currently, no match exists between the line item and catalog records. Line items that do have matching records in the catalog contain the link, catalog. Permissions to use this Feature IMPORT_MARC - Using.
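Because Evergreen matches incoming line items against the catalog on title and ISBN, it can be handy to eyeball those fields in the vendor file before uploading it. The sketch below uses pymarc, a third-party Python library that is not part of Evergreen; the file name is a hypothetical example.
from pymarc import MARCReader

with open("vendor_records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:          # skip records pymarc could not parse
            continue
        titles = record.get_fields("245")
        isbns = [field.value() for field in record.get_fields("020")]
        title = titles[0].value() if titles else "(no 245 field)"
        print(title, "|", ", ".join(isbns) if isbns else "(no ISBN)")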
http://docs.evergreen-ils.org/2.11/_use_cases_for_marc_order_upload_form.html
2017-03-23T00:21:32
CC-MAIN-2017-13
1490218186530.52
[]
docs.evergreen-ils.org
AWS::EC2::Volume The AWS::EC2::Volume type creates a new Amazon Elastic Block Store (Amazon EBS) volume. To control how AWS CloudFormation handles the volume when the stack is deleted, set a deletion policy for your volume. You can choose to retain the volume, to delete the volume, or to create a snapshot of the volume. For more information, see DeletionPolicy Attribute. Note If you set a deletion policy that creates a snapshot, all tags on the volume are included in the snapshot. Syntax To declare this entity in your AWS CloudFormation template, use the following syntax: JSON
{
  "Type" : "AWS::EC2::Volume",
  "Properties" : {
    "AutoEnableIO" : Boolean,
    "AvailabilityZone" : String,
    "Encrypted" : Boolean,
    "Iops" : Number,
    "KmsKeyId" : String,
    "Size" : Integer,
    "SnapshotId" : String,
    "Tags" : [ Resource Tag, ... ],
    "VolumeType" : String
  }
}
YAML
Type: "AWS::EC2::Volume"
Properties:
  AutoEnableIO: Boolean
  AvailabilityZone: String
  Encrypted: Boolean
  Iops: Number
  KmsKeyId: String
  Size: Integer
  SnapshotId: String
  Tags:
    - Resource Tag
  VolumeType: String
Properties AutoEnableIO Indicates whether the volume is auto-enabled for I/O operations. By default, Amazon EBS disables I/O to the volume from attached EC2 instances when it determines that a volume's data is potentially inconsistent. If the consistency of the volume is not a concern, and you prefer that the volume be made available immediately if it's impaired, you can configure the volume to automatically enable I/O. For more information, see Working with the AutoEnableIO Volume Attribute in the Amazon EC2 User Guide for Linux Instances. Required: No Type: Boolean Update requires: No interruption AvailabilityZone The Availability Zone in which to create the new volume. Required: Yes Type: String Update requires: Updates are not supported. Encrypted Indicates whether the volume is encrypted. You can attach encrypted Amazon EBS volumes only to instance types that support Amazon EBS encryption. Volumes that are created from encrypted snapshots are automatically encrypted. You can't create an encrypted volume from an unencrypted snapshot, or vice versa. If your AMI uses encrypted volumes, you can launch the AMI only on supported instance types. For more information, see Amazon EBS encryption in the Amazon EC2 User Guide for Linux Instances. Required: Conditional. If you specify the KmsKeyId property, you must enable encryption. Type: Boolean Update requires: Updates are not supported. Iops The number of I/O operations per second (IOPS) that the volume supports. For more information about the valid sizes for each volume type, see the Iops parameter for the CreateVolume action in the Amazon EC2 API Reference. Required: Conditional. Required when the volume type is io1; not used with other volume types. Type: Number Update requires: Updates are not supported. KmsKeyId The Amazon Resource Name (ARN) of the AWS Key Management Service master key that is used to create the encrypted volume, such as arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef. If you create an encrypted volume and don't specify this property, AWS CloudFormation uses the default master key. Required: No Type: String Update requires: Updates are not supported. Size The size of the volume, in gibibytes (GiBs). For more information about the valid sizes for each volume type, see the Size parameter for the CreateVolume action in the Amazon EC2 API Reference.
If you specify the SnapshotId property, specify a size that is equal to or greater than the size of the snapshot. If you don't specify a size, Amazon EC2 uses the size of the snapshot as the volume size. Required: Conditional. If you don't specify a value for the SnapshotId property, you must specify this property. Type: Integer Update requires: Updates are not supported. SnapshotId The snapshot from which to create the new volume. Required: No Type: String Update requires: Updates are not supported. Tags An arbitrary set of tags (key–value pairs) for this volume. Required: No Type: AWS CloudFormation Resource Tags Update requires: No interruption VolumeType The volume type. If you set the type to io1, you must also set the Iops property. For valid values, see the VolumeType parameter for the CreateVolume action in the Amazon EC2 API Reference. Required: No Type: String Update requires: Updates are not supported. Return Values Ref When you specify an AWS::EC2::Volume type as an argument to the Ref function, AWS CloudFormation returns the volume's physical ID. For example: vol-5cb85026. For more information about using the Ref function, see Ref. Examples Example Encrypted Amazon EBS Volume with DeletionPolicy to Make a Snapshot on Delete
"NewVolume" : { "Type" : "AWS::EC2::Volume", "Properties" : { "Size" : "100", "Encrypted" : "true", "AvailabilityZone" : { "Fn::GetAtt" : [ "Ec2Instance", "AvailabilityZone" ] }, "Tags" : [ { "Key" : "MyTag", "Value" : "TagValue" } ] }, "DeletionPolicy" : "Snapshot" }
Example Amazon EBS Volume with 100 Provisioned IOPS
"NewVolume" : { "Type" : "AWS::EC2::Volume", "Properties" : { "Size" : "100", "VolumeType" : "io1", "Iops" : "100", "AvailabilityZone" : { "Fn::GetAtt" : [ "EC2Instance", "AvailabilityZone" ] } } }
More Info CreateVolume in the Amazon Elastic Compute Cloud API Reference
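The CloudFormation examples above provision the volume declaratively; the underlying API is EC2 CreateVolume. For a quick sanity check of the same parameters outside a stack, a boto3 call along these lines can be used. This is an illustration only, not part of the CloudFormation resource; it assumes AWS credentials and a region are already configured, and the Availability Zone value is a placeholder.
import boto3

ec2 = boto3.client("ec2")                 # assumes credentials and a default region are configured
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",        # placeholder: pick a zone in your region
    Size=100,                             # GiB, mirroring the Size property above
    VolumeType="io1",
    Iops=100,                             # required when VolumeType is io1
    Encrypted=True,
)
print(volume["VolumeId"])                 # e.g. vol-0abc...
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])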
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volume.html
2017-03-23T00:16:26
CC-MAIN-2017-13
1490218186530.52
[]
docs.aws.amazon.com
Geocoder.ca¶ Geocoder.ca - A Canadian and US location geocoder. Using Geocoder you can retrieve Geolytica’s geocoded data from Geocoder.ca. Geocoding¶ >>> import geocoder >>> g = geocoder.geolytica('Ottawa, ON') >>> g.json ... Parameters¶ - location: Your search location you want geocoded. - auth: The authentication code for unthrottled service. - strictmode: Optionally you can prevent geocoder from making guesses on your input. - strict: Optional Parameter for enabling strict parsing of free form location input. - method: (default=geocode) Use the following: - geocode - auth: (optional) The authentication code for unthrottled service (premium API)
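Putting the parameters above together, a minimal lookup with the geolytica provider looks like the sketch below; the auth value is a placeholder for a paid key and is only needed for unthrottled use.
import geocoder

g = geocoder.geolytica("453 Booth Street, Ottawa ON", strictmode=1)
# Unthrottled, with a paid key (placeholder value):
# g = geocoder.geolytica("453 Booth Street, Ottawa ON", auth="YOUR-AUTH-CODE")
if g.ok:
    print(g.latlng, g.address)
else:
    print("lookup failed:", g.status)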
http://geocoder.readthedocs.io/providers/Geocoder-ca.html
2017-03-23T00:22:41
CC-MAIN-2017-13
1490218186530.52
[]
geocoder.readthedocs.io
Replication Data replication is a core feature of Riak's basic architecture. Riak was designed to operate as a clustered system containing multiple Riak nodes, which allows data to live on multiple machines at once in case a node in the cluster goes down. Replication is fundamental and automatic in Riak, providing assurance that your data will still be there if a node in your Riak cluster goes down. All data stored in Riak will be replicated to a number of nodes in the cluster according to the N value ( n_val) property set in a bucket's bucket type. Note: Replication across clusters If you're interested in replication not just within a cluster but across multiple clusters, we recommend checking out our documentation on Riak's Multi-Datacenter Replication capabilities. Selecting an N value ( n_val) By default, Riak chooses an n_val of 3. This means that data stored in any bucket will be replicated to 3 different nodes. For this to be effective, you need at least 3 nodes in your cluster. The ideal value for N depends largely on your application and the shape of your data. If your data is highly transient and can be reconstructed easily by the application, choosing a lower N value will provide greater performance. However, if you need high assurance that data is available even after node failure, increasing the N value will help protect against loss. How many nodes do you expect will fail at any one time? Choose an N value larger than that and your data will still be accessible when they go down. The N value also affects the behavior of read (GET) and write (PUT) requests. The tunable parameters you can submit with requests are bound by the N value. For example, if N=3, the maximum read quorum (known as "R") you can request is also 3. If some nodes containing the data you are requesting are down, an R value larger than the number of available nodes with the data will cause the read to fail. Setting the N value ( n_val) To change the N value for a bucket, you need to create a bucket type with n_val set to your desired value and then make sure that the bucket bears that type. In this example, we'll set N to 2. First, we'll create the bucket type and call it n_val_of_2 and then activate that type: riak-admin bucket-type create n_val_of_2 '{"props":{"n_val":2}}' riak-admin bucket-type activate n_val_of_2 Now, any bucket that bears the type n_val_of_2 will propagate objects to 2 nodes. Note on changing the value of N Changing the N value after a bucket has data in it is not recommended. If you do change the value, especially if you increase it, you might need to force read repair (more on that below). Overwritten objects and newly stored objects will automatically be replicated to the correct number of nodes. Changing the N value ( n_val) While raising the value of N for a bucket or object shouldn't cause problems, it's important that you never lower N. If you do so, you can wind up with dead, i.e. unreachable data. This can happen because objects' preflists, i.e. lists of vnodes responsible for the object, can end up changing. Unreachable data is a problem because it can negatively impact coverage queries, e.g. secondary index and MapReduce queries. Lowering an object or bucket's n_val will likely mean that objects that you would expect to be returned from those queries will no longer be returned. Active Anti-Entropy Riak's active anti-entropy (AAE) subsystem is a continuous background process that compares and repairs any divergent or missing object replicas.
For more information on AAE, see the following documents: Read Repair Read repair occurs when a successful read occurs—i.e. when the target number of nodes have responded, as determined by R—but not all replicas of the object agree on the value. There are two possibilities here for the errant nodes: - The node responded with a not found for the object, meaning that it doesn't have a copy. - The node responded with a vector clock that is an ancestor of the vector clock of the successful read. When this situation occurs, Riak will force the errant nodes to update the object's value based on the value of the successful read. Forcing Read Repair When you increase the n_val of a bucket, you may start to see failed read operations, especially if the R value you use is larger than the number of replicas that originally stored the object. Forcing read repair will solve this issue. Or if you have active anti-entropy enabled, your values will eventually replicate as a background task. For each object that fails read (or the whole bucket, if you like), read the object using an R value less than or equal to the original number of replicas. For example, if your original n_val was 3 and you increased it to 5, perform your read operations with R=3 or less. This will cause the nodes that do not have the object(s) yet to respond with not found, invoking read repair. So what does N=3 really mean? N=3 simply means that three copies of each piece of data will be stored in the cluster. That is, three different partitions/vnodes will receive copies of the data. There are no guarantees that the three replicas will go to three separate physical nodes; however, the built-in functions for determining where replicas go attempt to distribute the data evenly. As nodes are added and removed from the cluster, the ownership of partitions changes and may result in an uneven distribution of the data. On some rare occasions, Riak will also aggressively reshuffle ownership of the partitions to achieve a more even balance. For cases where the number of nodes is less than the N value, data will likely be duplicated on some nodes. For example, with N=3 and 2 nodes in the cluster, one node will likely have one replica, and the other node will have two replicas. Understanding replication by example To better understand how data is replicated in Riak, let's take a look at a put request for the bucket/key pair my_bucket/my_key. Specifically we'll focus on two parts of the request: routing an object to a set of partitions and storing an object on a partition. Routing an object to a set of partitions - Assume we have 3 nodes - Assume we store 3 replicas per object (N=3) - Assume we have 8 partitions in our ring (ring_creation_size=8) Note: It is not recommended that you use such a small ring size. This is for demonstration purposes only. With only 8 partitions our ring will look approximately as follows (response from riak_core_ring_manager:get_my_ring/0 truncated for clarity): ([email protected])3> {ok,Ring} = riak_core_ring_manager:get_my_ring(). [...] The node handling this request hashes the bucket/key combination: ([email protected])4> DocIdx = riak_core_util:chash_key({<<"my_bucket">>, <<"my_key">>}). <<183,28,67,173,80,128,26,94,190,198,65,15,27,243,135,127,121,101,255,96>> The DocIdx hash is a 160-bit integer: ([email protected])5> <<I:160/integer>> = DocIdx. <<183,28,67,173,80,128,26,94,190,198,65,15,27,243,135,127,121,101,255,96>> ([email protected])6> I.
1045375627425331784151332358177649483819648417632 The node looks up the hashed key in the ring, which returns a list of preferred partitions for the given key. ([email protected])> Preflist = riak_core_ring:preflist(DocIdx, Ring). [...] The node chooses the first N partitions from the list. The remaining partitions of the "preferred" list are retained as fallbacks to use if any of the target partitions are unavailable. ([email protected])9> {Targets, Fallbacks} = lists:split(N, Preflist). The partition information returned from the ring contains a partition identifier and the parent node of that partition: {1096126227998177188652763624537212264741949407232, '[email protected]'} The requesting node sends a message to each parent node with the object and partition identifier (pseudocode for clarity): '[email protected]' ! {put, Object, 1096126227998177188652763624537212264741949407232} '[email protected]' ! {put, Object, 1278813932664540053428224228626747642198940975104} '[email protected]' ! {put, Object, 0} If any of the target partitions fail, the node sends the object to one of the fallbacks. When the message is sent to the fallback node, the message references the object and original partition identifier. For example, if [email protected] were unavailable, the requesting node would then try each of the fallbacks. The fallbacks in this example are the remaining partitions of the preference list shown above. The next available fallback node would be [email protected]. The requesting node would send a message to the fallback node with the object and original partition identifier: '[email protected]' ! {put, Object, 1278813932664540053428224228626747642198940975104} Note that the partition identifier in the message is the same that was originally sent to [email protected] only this time it is being sent to [email protected]. Even though [email protected] is not the parent node of that partition, it is smart enough to hold on to the object until [email protected] returns to the cluster. Processing partition requests Processing requests per partition is fairly simple. Each node runs a single process ( riak_kv_vnode_master) that distributes requests to individual partition processes ( riak_kv_vnode). The riak_kv_vnode_master process maintains a list of partition identifiers and corresponding partition processes. If a process does not exist for a given partition identifier, a new process is spawned to manage that partition. The riak_kv_vnode_master process treats all requests the same and spawns partition processes as needed even when nodes receive requests for partitions they do not own. When a partition's parent node is unavailable, requests are sent to fallback nodes (handoff). The riak_kv_vnode_master process on the fallback node spawns a process to manage the partition even though the partition does not belong to the fallback node. The individual partition processes perform hometests throughout the life of the process. The hometest checks if the current node ( node/0) matches the parent node of the partition as defined in the ring. If the process determines that the partition it is managing belongs on another node (the parent node), it will attempt to contact that node. If that parent node responds, the process will hand off any objects it has processed for that partition and shut down. If that parent node does not respond, the process will continue to manage that partition and check the parent node again after a delay. The hometest is also run by partition processes to account for changes in the ring, such as the addition or removal of nodes to the cluster.
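The routing walk-through above boils down to: hash the bucket/key pair onto a 160-bit ring, find the first partition at or past that hash, and take the next N partitions as targets. The Python sketch below reproduces that idea for the same 8-partition, N=3 setup; it is an illustration of the concept, not Riak's actual implementation (Riak hashes the Erlang-encoded bucket/key pair with SHA-1 inside riak_core).
import hashlib

RING_SIZE = 8                                   # ring_creation_size=8, as above
N_VAL = 3
INTERVAL = 2 ** 160 // RING_SIZE
PARTITIONS = [i * INTERVAL for i in range(RING_SIZE)]   # partition identifiers

def preflist(bucket: bytes, key: bytes, n: int = N_VAL):
    doc_idx = int.from_bytes(hashlib.sha1(bucket + key).digest(), "big")
    # first partition at or past the hash, wrapping around the top of the ring
    start = next((i for i, p in enumerate(PARTITIONS) if p >= doc_idx), 0)
    return [PARTITIONS[(start + i) % RING_SIZE] for i in range(n)]

print(preflist(b"my_bucket", b"my_key"))        # three target partition identifiers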
http://docs.basho.com/riak/kv/2.2.1/learn/concepts/replication/
2017-03-23T00:25:12
CC-MAIN-2017-13
1490218186530.52
[]
docs.basho.com
Managing your code The Acquia Cloud Workflow page gives you powerful and easy-to-use tools for managing your website's code. Depending on the repository you use, Acquia Cloud creates a different directory structure. You can either choose your version control system on the Workflow page or migrate to a new repository. Subversion (SVN) Acquia Cloud creates two top-level directories in your repository when you first register: trunk and branches. The /trunk directory contains two subdirectories: /docroot and /library. The docroot directory is for...
https://docs.acquia.com/cloud/manage/code
2014-04-16T10:22:25
CC-MAIN-2014-15
1397609523265.25
[]
docs.acquia.com
Stop using BBM Video over a mobile network When you prevent BBM Video from connecting over the wireless network, you can still use BBM Video over a Wi-Fi network. - In BBM, swipe down from the top of the screen. - Tap . - Set the Use BBM Video and Voice over Mobile Networks switch to Off. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/47561/amc1343270029830.jsp
2014-04-16T10:53:36
CC-MAIN-2014-15
1397609523265.25
[]
docs.blackberry.com
As of CARGO 1.0.3, the way CARGO supports remote deployments based on the JBoss Application Server has drastically evolved. This document explains how to configure this support. Depending on the version of JBoss / WildFly you are using, please use the quick links below to go directly to the chapter regarding the JBoss version you're targeting: JBoss 4.0.x and 4.2.x ... uses , 7.1.x and WildFly 8.x use the cargo.jboss.management.port port. ...
http://docs.codehaus.org/pages/diffpages.action?originalId=231080274&pageId=231080275
2014-04-16T10:55:20
CC-MAIN-2014-15
1397609523265.25
[]
docs.codehaus.org
The query macro and where clause combine to give you full control over your query. Where is using a QueryBuilder that allows you to chain where clauses together to build up a complete query.
posts = Post.where(published: true, author_id: User.first!.id)
It supports different operators:
Post.where(:created_at, :gt, Time.local - 7.days)
Supported operators are :eq, :gteq, :lteq, :neq, :gt, :lt, :nlt, :ngt, :ltgt, :in, :nin, :like, :nlike Alternatively, #where, #and, and #or accept a raw SQL clause, with an optional placeholder ( ? for MySQL/SQLite, $ for Postgres) to avoid SQL Injection.
# Example using Postgres adapter
Post.where(:created_at, :gt, Time.local - 7.days)
  .where("LOWER(author_name) = $", name)
  .where("tags @> '{\"Journal\", \"Book\"}'") # PG's array contains operator
This is useful for building more sophisticated queries, including queries dependent on database specific features not supported by the operators above. However, clauses built with this method are not validated. Order is using the QueryBuilder and supports providing an ORDER BY clause:
Post.order(:created_at)
Direction
Post.order(updated_at: :desc)
Multiple fields
Post.order([:created_at, :title])
With direction
Post.order(created_at: :desc, title: :asc)
Group is using the QueryBuilder and supports providing a GROUP BY clause:
posts = Post.group_by(:published)
Multiple fields
Post.group_by([:published, :author_id])
Limit is using the QueryBuilder and provides the ability to limit the number of tuples returned:
Post.limit(50)
Offset is using the QueryBuilder and provides the ability to offset the results. This is used for pagination:
Post.offset(100).limit(50)
All is not using the QueryBuilder. It allows you to directly query the database using SQL. When using the all method, the selected fields will match the fields specified in the model unless the select macro was used to customize the SELECT. Always pass in parameters to avoid SQL Injection. Use a ? in your query as a placeholder. Check out the Crystal DB Driver for documentation of the drivers. Here are some examples:
posts = Post.all("WHERE name LIKE ?", ["Joe%"])
if posts
  posts.each do |post|
    puts post.name
  end
end

# ORDER BY Example
posts = Post.all("ORDER BY created_at DESC")

# JOIN Example
posts = Post.all("JOIN comments c ON c.post_id = post.id
                  WHERE c.name = ?
                  ORDER BY post.created_at DESC",
                  ["Joe"])
The select_statement macro allows you to customize the entire query, including the SELECT portion. This shouldn't be necessary in most cases, but allows you to craft more complex (i.e. cross-table) queries if needed:
class CustomView < Granite::Base
  connection pg
  column id : Int64, primary: true
  column articlebody : String
  column commentbody : String

  select_statement <<-SQL
    SELECT articles.articlebody, comments.commentbody
    FROM articles
    JOIN comments
    ON comments.articleid = articles.id
  SQL
end
You can combine this with an argument to all or first for maximum flexibility:
results = CustomView.all("WHERE articles.author = ?", ["Noah"])
The exists? class method returns true if a record exists in the table that matches the provided id or criteria, otherwise false. If passed a Number or String, it will attempt to find a record with that primary key. If passed a Hash or NamedTuple, it will find the record that matches that criteria, similar to find_by.
# Assume a model named Post with a title field
post = Post.new(title: "My Post")
post.save
post.id # => 1

Post.exists? 1 # => true
Post.exists? {"id" => 1, :title => "My Post"} # => true
Post.exists? {id: 1, title: "Some Post"} # => false
The exists? method can also be used with the query builder.
Post.where(published: true, author_id: User.first!.id).exists?
Post.where(:created_at, :gt, Time.local - 7.days).exists?
https://docs.amberframework.org/granite/docs/querying
2021-06-12T16:56:45
CC-MAIN-2021-25
1623487586239.2
[]
docs.amberframework.org
Build The C# SOAP Sample Client The Blackboard Learn sample code contains a fully-functional client that provides form-based interaction with all of the available SOAP-based Web Services. This allows a Developer to interact with any of the Web Services, see what data is required, try different combinations and permutations and inspect the code behind it. This is a great tool to use to assist in designing your Web Services integration, and even to troubleshoot an existing integration from release-to-release. This help article assumes that you have downloaded the Web Services Sample Code and that you have built the .NET Web Services Library. How to Build Building the .NET Sample Client is really quite simple, once you have generated and built the Web Services Library. The solution has already been created, and lives in the **_ Now simply click Debug->Build and build the project. This will place an executable called wsagent.exe in the _**
https://docs.blackboard.com/learn/soap/tutorials/build-sample-client-csharp
2021-06-12T17:21:40
CC-MAIN-2021-25
1623487586239.2
[]
docs.blackboard.com
Troubleshooting smart card log in If you have problems with smart card logon, Access Manager provides a command-line tool, sctool, which you can run to configure smart card logon, as well as to provide diagnostic information. See Understanding sctool or the sctool man page. Additional smart card diagnostic procedures are provided in Diagnosing smart card log in problems.
https://docs.centrify.com/Content/mac-admin/SmartCardLoginTroubleshoot.htm
2021-06-12T17:25:08
CC-MAIN-2021-25
1623487586239.2
[]
docs.centrify.com
Remote desktop connection is sometimes stuck on the Securing remote connection screen This article provides a solution to the issue in which the remote desktop connection stays in the connecting to status. Applies to: Windows 7 Service Pack 1, Windows Server 2012 R2 Original KB number: 2915774 Symptoms Assume a scenario in which you use a remote desktop connection on Windows 7 or a later version. In this scenario, the remote desktop connection is stuck for several seconds when it displays the following text: Remote Desktop Connection Connecting to: Securing remote connection... Cause Remote desktop connection uses the highest possible security level encryption method between the source and destination. In Windows 7 or later versions, the remote desktop connection uses the SSL (TLS 1.0) protocol and the encryption is certificate-based. To work around this behavior, use either of the following methods: Method 1 If you're using a self-signed certificate, import the certificate to the source. To do this, follow these steps on the destination: - Sign in as an administrator on the destination, select Start, enter mmc in the Search programs and files box and run Microsoft Management Console. - On the File menu, Console Root > Certificates (Local Computer) > Remote Desktop > Certificates. - Double-click the Certificate in the middle pane to open it. - On the Details tab, select the Copy to File... button. - The Certificate Export Wizard will open. Leave the default settings and then save the file in any folder. - Copy the exported file to the source computer. Then follow these steps on the source: Sign in as an administrator on the source, select Start, enter mmc in the Search programs and files box, and run mmc.exe. Select the File menu and then Certificates (Local Computer) > Trusted Root Certification Authorities > Certificates. Right-click to select All Tasks, and then select Import... from the menu. The Certificate Import Wizard will open. Follow the instructions in the wizard to start the import. In the Certificate file to import window, specify the file that was copied from the destination computer. In the Certificate store window, verify that: - Place all certificates in the following store is selected - Certificate Store lists Trusted Root Certification Authorities. Note By default, the self-signed certificate expires in six months. If it has expired, the certificate will be recreated. You must import the recreated certificate to the source again. Method 2 Deploy a Group Policy Object to the client to turn off Automatic Root Certificates Update. To do it, follow these steps on a Windows Server 2012 R2-based computer: - Open Group Policy Management Console. To do.
https://docs.microsoft.com/en-us/troubleshoot/windows-server/remote/rdc-stuck-on-src-screen
2021-06-12T18:40:26
CC-MAIN-2021-25
1623487586239.2
[array(['media/rdc-stuck-on-src-screen/remote-desktop-connection.jpg', 'Remote Desktop Connection'], dtype=object) ]
docs.microsoft.com
trimMulti Summary Removes whitespace from the beginning and end of each line of the string. Usage public string trimMulti() Returns The string with whitespace removed from the beginning and end of each line. Description Trims leading and trailing whitespace from each line of the string without removing newline characters. Examples Basic Usage
import System;
string text = "The quick brown fox\n jumped over the lazy dog.";
Console.log(text.trimMulti()); // "The quick brown fox\njumped over the lazy dog."
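For comparison only, the same per-line trimming behaviour is easy to express outside JS++; the snippet below is a Python sketch, not part of the JS++ standard library.
def trim_multi(text):
    # strip whitespace from each line while keeping the newlines between lines
    return "\n".join(line.strip() for line in text.split("\n"))

print(trim_multi("The quick brown fox\n jumped over the lazy dog."))
# -> "The quick brown fox\njumped over the lazy dog."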
https://docs.onux.com/en-US/Developers/JavaScript-PP/Standard-Library/System/String/trimMulti
2021-06-12T17:18:07
CC-MAIN-2021-25
1623487586239.2
[]
docs.onux.com
Configure cloud credentials Credentials allow PX-Backup to authenticate with clusters for the purpose of taking backups and restoring to them, as well as with backup locations where backup objects are stored. Follow the tasks in this section associated with the cloud provider you want to add.
https://backup.docs.portworx.com/use-px-backup/credentials/
2021-06-12T17:07:00
CC-MAIN-2021-25
1623487586239.2
[]
backup.docs.portworx.com
Gnosis Safe Multisig on Binance Smart Chain Introduction First deployed in early 2017, Gnosis multi-signature wallet became the foundational infrastructure for storing funds on Ethereum. The Gnosis Safe is the most secure way to manage your crypto funds. Today, you can set up the Gnosis Safe Multisig on Binance Smart chain in less than 60 seconds, and you can use wallets including Ledger, Trezor, Wallet Connect, Torus, and browser wallets like Metamask as signing keys so that you can manage your crypto collectively and inter-operably. Advantages of Gnosis Safe contracts The Gnosis Safe is a smart contract wallet with multi-signature functionality at its core. It enables the following features: High Security Advanced execution logic Advanced access management Mainnet Deployment Address Set up your own Gnosis Safe Multisig Now Projects building with the Gnosis Safe Please read the list here User Guides Create Gnosis Safe - Connect wallet - Create Safe - Choose owners and confirmations - Review safe settings - Sign transactions to create your safe Great! You have created a Gnosis safe. Receive Funds for Multi-sig Wallet - Go to the home page - Click on "Receive" to get QR code for receiving funds Send Funds from Multi-sig Wallet - Go to the home page - Click on "Send" to create a transaction - Submit transaction - Another key person needs to connect with Gnosis to confirm this transaction - Approve Transaction - Sign transaction - Transaction will be sent after the second signature is sent - The others may reject the transaction, they only need to send a different transaction Load existing Safe - Select "Load Existing Safe" - Input address - Verify owner - Click load - Load Transactions API API to keep track of transactions sent via Gnosis Safe smart contracts
https://docs.binance.org/smart-chain/developer/gnosis.html
2021-06-12T17:18:41
CC-MAIN-2021-25
1623487586239.2
[]
docs.binance.org
Importing sudoers configuration files If you are currently managing privileges on Linux and UNIX computers using multiple sudoers configuration files, you can import that information and convert it into rights and role definitions that can then be assigned to Active Directory users and groups, local users and groups, or both. This
https://docs.centrify.com/Content/auth-admin-unix/SudoersIntro.htm
2021-06-12T18:31:18
CC-MAIN-2021-25
1623487586239.2
[]
docs.centrify.com
Framework Element. Min Height Property Definition Important Some information relates to pre-released product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets or sets the minimum height constraint of a FrameworkElement. public: property double MinHeight { double get(); void set(double value); }; double MinHeight(); void MinHeight(double value); public double MinHeight { get; set; } Public Property MinHeight As Double <frameworkElement MinHeight="double"/> Property Value Height.
https://docs.microsoft.com/en-us/windows/winui/api/microsoft.ui.xaml.frameworkelement.minheight?view=winui-3.0-preview
2021-06-12T18:25:23
CC-MAIN-2021-25
1623487586239.2
[]
docs.microsoft.com