Dataset columns: content (string, 0 to 557k chars); url (string, 16 to 1.78k chars); timestamp (timestamp[ms]); dump (string, 9 to 15 chars); segment (string, 13 to 17 chars); image_urls (string, 2 to 55.5k chars); netloc (string, 7 to 77 chars).
What's New in Progressive Web Apps The following sections list the updates to PWAs and Web Apps from the Microsoft Edge web apps team. To try new features review these announcements. To stay up to date with the latest and greatest features, download the Microsoft Edge preview channels. What's New in Microsoft Edge 96 New hub design for managing your installed web apps Microsoft Edge Canary reached version 96 on October 5, 2021. With a subset of our users, we're testing a new design to better manage your installed web apps. When you go to edge://apps in your browser, it now displays a redesigned hub that lists installed PWAs and websites as apps. You can sort your apps by any of the following: - Recently used. - Alphabetically, based on title. - Date of installation. You can also arrange apps in a list or grid view. Additionally, you can easily pin apps to the taskbar or Start menu. You can create a shortcut, and enable apps to run on user login. Also, there's now a way to easily access the following: - Permissions and privacy details for the associated origin. - More details about the application. What's New in Microsoft Edge 95 Microsoft Edge version 95 moved to Beta channel on September 28, 2021. The origin trials remain active for the following features: We expect the protocol handlers origin trial to end on October 21, 2021. What's New in Microsoft Edge 94 Microsoft Edge version 94 moved to Stable on September 23, 2021. This release cycle was short—just 3 weeks from Microsoft Edge 93 Stable to Microsoft Edge 94 Stable, as we snapped to the new 4-week release cycle. This new release cadence matches the new cadence of Chromium milestones, described in Speeding up Chrome's release cycle. Due to the shortened release cycle of Microsoft Edge version 94, we focused on stabilizing the release cycle logistics, and we shifted feature development to Microsoft Edge version 95. The origin trials remain active for the following features: We expect the protocol handlers origin trial to end with Microsoft Edge version 94 as we take final feedback and get ready to move the protocol handlers feature to Stable. In case you are enrolled in the origin trial for protocol handlers, we plan to end the trial period after Microsoft Edge version 94. We'll then determine when this feature will become Stable. What's New in Microsoft Edge 93 Microsoft Edge version 93 became the Stable channel of Microsoft Edge on September 2, 2021. This article lists updates we made to Progressive Web Apps (PWAs) from both a developer and consumer point of view. Measure usage of your Store-installed PWA Microsoft Edge now includes a referrer header with the request for the first navigation of your Microsoft Store-installed PWA. This feature was first introduced in Microsoft Edge version 91, and we shipped a bug fix in Microsoft Edge version 93. Learn more in Publish your Progressive Web App to the Microsoft Store. Window Controls Overlay origin trials To have more control over the title bar area that's currently displayed in standalone display mode, you may want to experiment with Window Controls Overlay. Window Controls Overlay (WCO) is a set of features that work together to provide just the essential controls needed for the app window. This layout frees up more space for the web content layer. WCO is available for installed desktop PWAs. Learn more about experimenting with Window Controls Overlay at Experimental features in Progressive Web Apps (PWAs). 
Register your origin for the Web App Window Controls Overlay trial at our Origin Trials Developer Console. URL Handlers origin trial Developers can now use the experimental feature Web App URL Handlers in origin trial. This feature allows the registration of an installed PWA to open links from other apps that refer to its scope. Learn more about experimenting with URL handlers at Experimental features in Progressive Web Apps (PWAs). Register your domain for the Web App URL Handlers trial at our Origin Trials Developer Console. Support for the Share API on macOS We have implemented support for the navigator.share API for macOS. The feature is rolling out to stable Microsoft Edge browsers on macOS over the coming weeks. Learn more about the navigator.share() API. What's New in Microsoft Edge 92 Microsoft Edge version 92 became the stable channel of Microsoft Edge on July 22, 2021. This article lists updates we made to Progressive Web Apps (PWAs) from both a developer and consumer point of view. Protocol handlers origin trial You can now register your PWA to handle specific protocols with the host operating system. The Windows trial for protocol handlers is now available. You can register your origin for the Web App Protocol Handler trial at the origin trial signup page. Learn more about using protocol handlers with your PWA at Experimental features in Progressive Web Apps (PWAs). Streamlined App Info menu When a user selects the ellipses (...) button in the app's title bar, the App info menu is displayed. We've updated the App info menu and streamlined the user experience in the following ways, to provide a user experience that's more like a desktop app than a browser UI: Moved the app Publisher information to the top level and made it the first thing a user sees. Moved the privacy information and controls into a dedicated 2nd-level Privacy menu. Moved content-related tools into a dedicated 2nd-level More tools menu. Post-install flyout dialog box After a PWA is installed from the Microsoft Edge browser on Windows, users can now select from four options to easily launch their apps: - Pin to taskbar - Pin to Start - Create Desktop shortcut - Auto-start on device login For convenience, this flyout dialog box is shown the first time the app is launched. This feature is being rolled out gradually to all users. In the meantime, if you'd like to use this feature, go to edge://flags and enable the flag Web Apps Post Install Dialog. Restore Web Apps Installed sites and PWAs that were running before an unexpected shutdown will now restore (that is, they will be restarted) when the system recovers. An unexpected shutdown can occur due to process failure, system restart, or power outage. Before this change, installed sites and PWAs had indeterminate behavior upon system restore.
https://docs.microsoft.com/en-us/microsoft-edge/progressive-web-apps-chromium/whats-new/pwa
2021-10-16T06:09:04
CC-MAIN-2021-43
1634323583423.96
[]
docs.microsoft.com
Newly created users are automatically granted the privileges needed to create and maintain objects in their space, but must be explicitly granted the privileges needed to CREATE USER/MODIFY USER or CREATE DATABASE (see also GRANT (SQL Form) in Teradata Vantage™ - SQL Data Control Language, B035-1149). Newly created users do not receive WITH GRANT OPTION privileges for any of the automatically granted privileges. The following privileges are automatically granted to a user when it is created. - CHECKPOINT - CREATE AUTHORIZATION - CREATE MACRO - CREATE TABLE - CREATE TRIGGER - CREATE VIEW - DELETE - DROP AUTHORIZATION - DROP FUNCTION - DROP MACRO - DROP PROCEDURE - DROP TABLE - DROP TRIGGER - DROP VIEW - DUMP - EXECUTE - INSERT - RESTORE
https://docs.teradata.com/r/76g1CuvvQlYBjb2WPIuk3g/Rx7A30h9xwR1OFksrAr4BA
2021-10-16T07:01:09
CC-MAIN-2021-43
1634323583423.96
[]
docs.teradata.com
Delete a cluster peer relationship Before removing the relationship, the command verifies that no resources depend on the relationship. For example, if any SnapMirror relationships exist, the command denies the request to delete the peering relationship. You must remove all dependencies for the deletion to succeed. The cluster peer delete command removes only the local instance of the peer relationship. An administrator in the peer cluster must use the cluster peer delete command there as well to completely remove the relationship. cluster2::> cluster peer delete -cluster cluster1 Error: command failed: Unable to delete peer relationship. Reason: A SnapMirror source exists in this cluster
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-cmpr-991/cluster__peer__delete.html
2021-10-16T05:45:36
CC-MAIN-2021-43
1634323583423.96
[]
docs.netapp.com
Apache Slider lets you deploy distributed applications across a Hadoop cluster. Slider leverages the YARN ResourceManager to allocate and distribute components of an application across a cluster. Key Slider features: Run applications on YARN without changing the application code (as long as the application follows Slider developer guidelines). There is no need to develop a custom Application Master or other YARN code. Use the application registry for publishing and discovery of dynamic metadata. Run multiple instances of applications with different configurations or versions in one Hadoop cluster. Expand or shrink application component instances while an application is running. Transparently deploy applications in secure Kerberos clusters. Aggregate application logs from different containers. Run applications on a subset of cluster nodes using YARN node labels. Manage application, component, and container failures. Slider leverages YARN capabilities to manage: Application recovery in cases of container failure Resource allocation (adding and removing containers)
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.0/bk_yarn-resource-management/content/ch_slider.html
2021-10-16T05:28:53
CC-MAIN-2021-43
1634323583423.96
[]
docs.cloudera.com
Select the board (M5Stack-STAMP-PICO). Open Project -> Load Library -> Library Management..., search for STAMP-PICO and install it, as shown in the figure below. When downloading, please follow the pop-up prompts to install the related dependent libraries. Select the serial port (Tools -> Port -> COMx) and click the upload button (->) on the menu bar; the program will be automatically compiled and uploaded to the device. The program will light up the LED on STAMP-PICO. (The #include and the NUM_LEDS/DATA_PIN values below are reconstructed placeholders; use the values from the official example for your board.)

#include <FastLED.h>

// How many leds in your strip? (placeholder value)
#define NUM_LEDS 1
// Data pin driving the onboard SK6812 LED (placeholder value; check the STAMP-PICO pinout)
#define DATA_PIN 27

// Define the array of leds
CRGB leds[NUM_LEDS];

/* After STAMP-PICO is started or reset, the program in the setup() function
   will be executed, and this part will only be executed once. */
void setup() {
  FastLED.addLeds<SK6812, DATA_PIN, RGB>(leds, NUM_LEDS); // GRB ordering is typical
}

/* The program in the loop() function runs repeatedly. */
void loop() {
  // Turn the LED on, then pause
  leds[0] = 0xf00000;
  FastLED.show();
  delay(500);
  // Now change the LED color, then pause
  leds[0] = 0x00f000;
  FastLED.show();
  delay(500);
}
https://docs.m5stack.com/en/quick_start/stamp_pico/arduino
2021-10-16T05:43:23
CC-MAIN-2021-43
1634323583423.96
[]
docs.m5stack.com
Adding Devices to Mudmap This section assumes two things: - You have registered with Mudmap, if not see Register an Account. - The device you're about to add has been prepared, if not see Preparing your Device #Overview Before you can integrate your firewall into Mudmap, it needs to be added to the list of devices and then registered within the application. The next two sections cover this process in detail. The broad scheme is like so: - Add a new device to Mudmap's database (this page) - Register an added device, meaning make the initial connection, install the agent and test connectivity (see Registering Devices for more info) #Add the device Adding your device to Mudmap is as simple as providing some information about the firewall and submitting that in a form. What information does Mudmap require? - A label for the firewall; this is just for convenience - Host Address, which must be an IPv4 or IPv6 address - SSH Port that is publicly available - Graphical User Interface port - the port used to access the pfSense user interface, defaults to 443 Why do you need the pfSense user interface port? This is for the API only - it uses this port to interact with pfSense. This port does not need to be internet facing (and we highly recommend against that practice). - pfSense version; possible versions are 2.4, 2.5 and 2.6 The Device Registration page has two components: Register Device and Devices. We only care about Register Device, which should look the same as the image below. After filling out the required information, click the Add button. #Devices are unique If you've already added this device, the step will fail with an alert notification. Otherwise, you should get a success notification and see the newly added device appear in the Devices table to the right. #Next steps If you've made it this far, then congratulations: your device is now in Mudmap's database, ready to be registered into the system.
https://docs.mudmap.io/adding-the-device
2021-10-16T06:21:26
CC-MAIN-2021-43
1634323583423.96
[array(['/img/register-device-docs.png', 'Adding your pfSense firewall - Register the Device'], dtype=object)]
docs.mudmap.io
Luna Elements Configurable fields are editable items within each Elements template that allow you to fully customise the content and experience of your playable ad. Each template comes with a number of specific fields which are relevant for that design; these are listed on the individual template pages. All of the general fields which exist in all templates are detailed below. #Video Video fields enable you to control the content as well as the sizing of the video. #Video Anchor example - Top Middle - Bottom Middle #Video Fit/Fill example - Fit (0) - Fill (1) #Background It's likely that your choices for scaling and anchoring your video will leave unfilled edges in some resolutions. The background options allow you to control what is used to fill such areas. #Background Image Scaling example - Keep aspect, fill - Keep aspect, fit - Ignore aspect, fill #Hint/Tap Hint The Hint section allows you to control the image, size and position of the hint or hints to be used. You can also add optional hint text and relevant options (size, color). Click here to see where on the color bar to set the Alpha value (Opacity). #Overlay Color Opacity example - Alpha set to 30 - Alpha set to 80 #End Card This section controls all the major features of the end card, which shows at various stages of the playable depending on your choice of template. Some templates like End Card Overlay will not contain this section, and others like Static End Card will contain this section but not all the fields. Don't worry, this is intentional. #End Card Alignment example - Top - Middle - Bottom #Banner This section controls a marketing banner which can be placed in various positions in the playable. #App Store Controls The App Store Controls allow you to customise when the user is directed to the app store after engaging with your playable. #Soundtrack This section handles the inclusion of any audio you wish to play in your creative, as well as the option to mute it or not. Note that there are a few fields here which may not be used for every Elements template. #Advanced Settings These settings provide additional controls which allow you to fine-tune the playable experience. Note that there are a number of fields here which may not be used for every Elements template. #End Card Icon Corner Radius example - Icon with 0 Radius - Icon with 50 Radius
https://staging.docs.lunalabs.uk/docs/playable/elements/configurable-fields/
2021-10-16T06:17:39
CC-MAIN-2021-43
1634323583423.96
[array(['/assets/elements/top-middle-example.png', 'images-small'], dtype=object) array(['/assets/elements/bottom-middle-example.png', 'images-small'], dtype=object) array(['/assets/elements/fit-example.png', 'images-xsmall'], dtype=object) array(['/assets/elements/fill-example.png', 'images-xsmall'], dtype=object) array(['/assets/elements/background-keep-aspect-fill.png', 'images-xsmall'], dtype=object) array(['/assets/elements/background-keep-aspect-fit.png', 'images-xsmall'], dtype=object) array(['/assets/elements/background-ignore-aspect-fill.png', 'images-xsmall'], dtype=object) array(['/assets/elements/color_menu_alpha.png', 'images-small'], dtype=object) array(['/assets/elements/mask-opacity30.gif', 'images-xsmall'], dtype=object) array(['/assets/elements/mask-opacity80.gif', 'images-xsmall'], dtype=object) array(['/assets/elements/end-card-top.png', 'images-xsmall'], dtype=object) array(['/assets/elements/end-card-middle.png', 'images-xsmall'], dtype=object) array(['/assets/elements/end-card-bottom.png', 'images-xsmall'], dtype=object) array(['/assets/elements/icon-0.png', 'images-xsmall'], dtype=object) array(['/assets/elements/icon-50.png', 'images-xsmall'], dtype=object)]
staging.docs.lunalabs.uk
This guide explains how to implement LDAP authentication using an external server. User authentication will fall back to built-in Django users in the event of a failure. Requirements¶ Install openldap-devel¶ On Ubuntu: On CentOS: Install django-auth-ldap¶ Configuration¶ Create a file in the same directory as configuration.py (typically netbox/netbox/) named ldap_config.py. Define all of the parameters required below in ldap_config.py. General Server Configuration¶ Info When using Windows Server 2012, you may need to specify a port on AUTH_LDAP_SERVER_URI. Use 3269 for secure, or 3268 for non-secure. User Authentication¶ Info When using Windows Server 2012, AUTH_LDAP_USER_DN_TEMPLATE should be set to None. User Groups for Permissions¶ is_active - All users must be mapped to at least this group to enable authentication. Without this, users cannot log in. is_staff - Users mapped to this group are enabled for access to the administration tools; this is the equivalent of checking the "staff status" box on a manually created user. This doesn't grant any specific permissions. is_superuser - Users mapped to this group will be granted superuser status. Superusers are implicitly granted all permissions. It is also possible to map user attributes to Django attributes:
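As a minimal, illustrative sketch (assuming the django-auth-ldap backend; the server URI, bind credentials, group DNs, and attribute names below are placeholders, not values from this guide), an ldap_config.py might look like this:

```python
# ldap_config.py -- illustrative sketch; every host, DN, and credential here is a placeholder.
import ldap
from django_auth_ldap.config import LDAPSearch, GroupOfNamesType

# General server configuration (3269 is the secure port mentioned above for Windows Server 2012).
AUTH_LDAP_SERVER_URI = "ldaps://ad.example.com:3269"
AUTH_LDAP_BIND_DN = "CN=netbox,OU=Service Accounts,DC=example,DC=com"
AUTH_LDAP_BIND_PASSWORD = "changeme"

# User authentication: on Windows Server 2012 the DN template is set to None,
# so users are located with a search instead.
AUTH_LDAP_USER_DN_TEMPLATE = None
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    "OU=Users,DC=example,DC=com", ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)"
)

# User groups for permissions: map LDAP groups to the Django flags described above.
AUTH_LDAP_GROUP_SEARCH = LDAPSearch(
    "OU=Groups,DC=example,DC=com", ldap.SCOPE_SUBTREE, "(objectClass=group)"
)
AUTH_LDAP_GROUP_TYPE = GroupOfNamesType()
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
    "is_active": "CN=netbox-users,OU=Groups,DC=example,DC=com",
    "is_staff": "CN=netbox-staff,OU=Groups,DC=example,DC=com",
    "is_superuser": "CN=netbox-admins,OU=Groups,DC=example,DC=com",
}

# Map LDAP attributes to Django user attributes.
AUTH_LDAP_USER_ATTR_MAP = {
    "first_name": "givenName",
    "last_name": "sn",
    "email": "mail",
}
```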
https://atomdocs.pluginthefuture.eu/all_atoms/netbox/docs/installation/ldap/
2021-10-16T05:18:26
CC-MAIN-2021-43
1634323583423.96
[]
atomdocs.pluginthefuture.eu
Jobs Jobs can be scheduled to execute commands on the device and are configured from the AutoPi Cloud. The command results can then be uploaded to the AutoPi Cloud or other system by using returners. SCHEDULING Job execution is scheduled with standard cron expressions for the ease of use and flexibility. Simply put, cron is a basic utility available on Linux systems. It enables users to schedule tasks to run periodically at a specified date/time or interval. tip Like any AutoPi cloud functionality, jobs can be managed programmatically through the AutoPi REST API. For more information see:
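As a rough illustration of how a standard cron expression maps to concrete run times, the sketch below uses the third-party croniter Python package; croniter is an assumption made purely for this example and is not part of the AutoPi stack.

```python
from datetime import datetime
from croniter import croniter  # third-party package, assumed here purely for illustration

# A standard 5-field cron expression: minute hour day-of-month month day-of-week.
# "*/15 * * * *" means "every 15 minutes".
expression = "*/15 * * * *"

itr = croniter(expression, datetime(2021, 10, 16, 6, 0))
print(itr.get_next(datetime))  # 2021-10-16 06:15:00
print(itr.get_next(datetime))  # 2021-10-16 06:30:00
```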
https://docs.autopi.io/cloud/cloud-jobs/
2021-10-16T05:57:17
CC-MAIN-2021-43
1634323583423.96
[]
docs.autopi.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the ListCustomMetrics operation. Lists your Device Defender detect custom metrics. Requires permission to access the ListCustomMetrics action. Namespace: Amazon.IoT.Model Assembly: AWSSDK.IoT.dll Version: 3.x.y.z The ListCustomMetricsRequest type exposes the following members .NET Core App: Supported in: 3.1 .NET Standard: Supported in: 2.0 .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IoT/TListCustomMetricsRequest.html
2021-10-16T07:19:14
CC-MAIN-2021-43
1634323583423.96
[]
docs.aws.amazon.com
Select the board (M5Stick-C-Plus). Open Project -> Load Library -> Library Management..., search for M5StickCPlus and install it, as shown in the figure below. When downloading, please follow the pop-up prompts to install the related dependent libraries. Select the serial port (Tools -> Port -> COMx) and click the upload button (->) on the menu bar; the program will be automatically compiled and uploaded to the device. The program will print the "Hello World" string on the screen of M5StickC Plus. (The #include line and the empty loop() below are reconstructed so the fragment compiles; refer to the official example for the full sketch.)

#include <M5StickCPlus.h>

/* After M5StickC Plus is started or reset, the program in the setup() function
   will be executed, and this part will only be executed once. */
void setup() {
  // Initialize the M5StickCPlus object
  M5.begin();
  // Print to the LCD display
  M5.Lcd.print("Hello World");
}

/* The program in the loop() function runs repeatedly; nothing further is needed
   for this example. */
void loop() {
}
https://docs.m5stack.com/en/quick_start/m5stickc_plus/arduino
2021-10-16T04:45:01
CC-MAIN-2021-43
1634323583423.96
[]
docs.m5stack.com
Dynamic Delivery - Unified Origin¶ Using a server manifest that references the remixed MP4, Unified Origin VOD can dynamically generate and output DASH, fMP4 HLS and HLS TS with all the SCTE 35 signaling necessary to support a third-party ad insertion workflow. Note Output of Adobe HDS and Microsoft Smooth Streaming will keep working in a Remix AVOD workflow with Origin, but it will not contain Timed Metadata (and so will not be suitable for a third-party ad insertion workflow). Creating a VOD server manifest for a Remix AVOD workflow¶ When creating a server manifest with a remixed MP4 as input, make sure that --timed_metadata is enabled, and, if content splicing is necessary, --splice_media as well: #!/bin/bash mp4split -o manifest.ism \ --timed_metadata \ --splice_media \ --hls.client_manifest_version=4 \ --hls.no_audio_only \ --hls.minimum_fragment_length=4004/1000 \ remixed.mp4 When you have created the server manifest, streaming the media works the same as any regular Unified Origin VOD setup (except that you may want to add a third-party ad insertion service to your workflow). For more info on Origin VOD, please see the relevant documentation: Unified Origin - VOD. To check the Timed Metadata in the output, either request an MPD for DASH, or one of the HLS Media Playlists (the Timed Metadata will not be present in the Master Playlist).
https://docs.unified-streaming.com/documentation/remix/avod/origin.html
2021-10-16T05:40:10
CC-MAIN-2021-43
1634323583423.96
[]
docs.unified-streaming.com
The LifeKeeper Oracle Recovery Kit installation creates 3 registry entry variables stored in the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\SIOS\LifeKeeper\RK\ORAapp MAXWAIT is a decimal integer that specifies the number of seconds that the recovery kit will wait for a single Oracle service to start or stop. If the service has not started within the specified time frame, LifeKeeper will mark the resource as failed. The default value for MAXWAIT is 300; however, it is possible that for extremely large databases, 300 seconds might not be enough time for the database services to reach the STARTED or STOPPED state. If this is the case, change this registry entry to a reasonable value. RESTORE_DEEPCHK_MAX_RETRY is a decimal integer that allows multiple attempts to verify the Oracle service state during a restore or local recovery operation. On a server that is unexpectedly heavily loaded, the default service state check time may not always be sufficient to verify that protected Oracle services are in the RUNNING state. The default value for this variable is 0 and normally only 1 Oracle service state check attempt is performed for each service. This value can be changed if extra attempts may be needed to verify the Oracle service state. RESTORE_DEEPCHK_SLEEP is a decimal integer, measured in seconds, to insert sleep intervals between each extra attempt to verify the Oracle service state during a restore or local recovery operation. This option is enabled if the RESTORE_DEEPCHK_MAX_RETRY option described above is used. The default value for this variable is 0 and normally no sleep times are inserted between extra Oracle service state check attempts. If the RESTORE_DEEPCHK_MAX_RETRY variable is set, it is highly recommended that the RESTORE_DEEPCHK_SLEEP variable be set as well to improve the reliability and performance of Oracle service state checks. Post your comment on this topic.
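As an illustration only (this snippet is not from the SIOS documentation), the registry values described above could be inspected on Windows with Python's standard winreg module; the assumption that the values can be read this way as integers is mine, not the vendor's.

```python
import winreg

KEY_PATH = r"SOFTWARE\SIOS\LifeKeeper\RK\ORAapp"

# Read the recovery kit variables described above from the local machine hive.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for name in ("MAXWAIT", "RESTORE_DEEPCHK_MAX_RETRY", "RESTORE_DEEPCHK_SLEEP"):
        value, value_type = winreg.QueryValueEx(key, name)
        print(f"{name} = {value} (registry value type {value_type})")
```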
https://docs.us.sios.com/sps/8.7.1/en/topic/lifekeeper-oracle-recovery-kit-recovery-variables
2021-10-16T06:55:35
CC-MAIN-2021-43
1634323583423.96
[]
docs.us.sios.com
Where can I find an invoice for my purchase? To create an invoice for your purchase, sign in to your Vimeography account on the account page here: Once you're signed in, click the Purchase History tab: Next, find the purchase that you'd like to retrieve an invoice for in the list and click on Generate Invoice You will be taken to a page where you can fill out your billing details and add any additional notes that you'd like to be included with your invoice. Once finished, click Save and you will be presented with an invoice that you can download or print.
https://docs.vimeography.com/article/60-where-can-i-find-an-invoice-for-my-purchase
2021-10-16T04:41:51
CC-MAIN-2021-43
1634323583423.96
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58653469c697915403a07db6/images/60ed8a119e87cb3d0124cbfb/file-6V4isemvYc.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58653469c697915403a07db6/images/60ed8a548556b07a2884f155/file-LIOgx593UK.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58653469c697915403a07db6/images/60ed8b0c9e87cb3d0124cc00/file-4zYeEUuFkJ.png', None], dtype=object) ]
docs.vimeography.com
Character Controller The character controller is an actor type used for the player objects to provide collision-based physics but also to allow for more customizations dedicated to game characters (player or NPCs). It's a common choice for first-person and third-person games. Character volume The character uses a capsule, defined by a center position, a vertical height and a radius. The height is the distance between the two sphere centers at the end of the capsule. For example, the capsule has better behavior when climbing stairs. Auto stepping Without auto-stepping it is easy for a character to get stuck against slight elevation changes in a ground mesh. It feels unnatural because in the real world a person would just cross over these small obstacles. You can adjust auto-stepping behaviour by using CharacterController.SlopeLimit and CharacterController.StepOffset properties. Properties
https://docs.flaxengine.com/manual/physics/character-controller.html
2021-10-16T05:37:45
CC-MAIN-2021-43
1634323583423.96
[array(['media/physics4.gif', 'Character Controller'], dtype=object) array(['media/cc-capsule.png', 'Character Volume'], dtype=object) array(['media/cc-properties.jpg', 'Properties'], dtype=object)]
docs.flaxengine.com
Snapshot - GET /api/1/snapshot/(snapshot_id)/ Get information about a snapshot in the archive. A snapshot is a set of named branches, which are pointers to objects at any level of the Software Heritage DAG. It represents a full picture of an origin at a given time. As well as pointing to other objects in the Software Heritage DAG, branches can also be aliases, in which case their target is the name of another branch in the same snapshot, or dangling, in which case the target is unknown. A snapshot identifier is a salted sha1. See swh.model.identifiers.snapshot_identifier() in our data model module for details about how they are computed. - Parameters snapshot_id (sha1) – a snapshot identifier - Query Parameters branches_from (str) – optional parameter used to skip branches whose name is lesser than it before returning them branches_count (int) – optional parameter used to restrict the number of returned branches (defaults to 1000) target_types (str) – optional comma separated list parameter used to filter the target types of branch to return (possible values that can be contained in that list are content, directory, revision, release, snapshot or alias) - Response Headers Content-Type – this depends on the Accept header of the request Link – indicates that a subsequent result page is available and contains the url pointing to it - Response JSON Object branches (object) – object containing all branches associated with the snapshot; for each of them the associated target type and id are given, as well as a link to get information about that target id (string) – the unique identifier of the snapshot - Status Codes - 400 Bad Request – an invalid snapshot identifier has been provided 404 Not Found – requested snapshot cannot be found in the archive Example:
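For illustration only (this is not the original documentation example), a request against this endpoint could look like the following in Python; the archive.softwareheritage.org host, the snapshot identifier, and the response key names are assumptions based on the description above.

```python
import requests

# Placeholder values for illustration; the host and snapshot id are assumptions.
BASE_URL = "https://archive.softwareheritage.org"
snapshot_id = "6a3a2cf0b2b90ce7ae1cf0a221ed68035b686f5a"

resp = requests.get(
    f"{BASE_URL}/api/1/snapshot/{snapshot_id}/",
    params={
        "branches_count": 10,                # restrict the number of returned branches
        "target_types": "revision,release",  # only branches with these target types
    },
)
resp.raise_for_status()
data = resp.json()
print(data["id"])  # the unique identifier of the snapshot
for name, branch in data["branches"].items():
    # Key names below are assumptions based on "target type and id" in the description.
    print(name, branch.get("target_type"), branch.get("target"))
```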
https://docs.softwareheritage.org/devel/swh-web/uri-scheme-api-snapshot.html
2021-10-16T04:42:03
CC-MAIN-2021-43
1634323583423.96
[]
docs.softwareheritage.org
Bókun Pay Pricing Get information on Bókun Pay Pricing, Bókun's payment provider. #What is Bókun Pay? Bókun Pay is Bókun's payment provider, powered by Trust My Travel. Bókun Pay can be added to your online booking channels allowing you to accept online payments from your customers. #What are the fees for Bókun Pay? Bókun Pay charges a 2.5% fee + fixed fee on all transactions. #What are the fixed fees? The fixed fee per transaction is displayed in your default currency, if it's a supported currency for Bókun Pay. Otherwise, the fee is displayed in Euro. This fee is fixed, which means it does not fluctuate with exchange rates. Below you can view the fixed fees for each individual currency. #Related articles Bókun Pay FAQ How to set up Bókun Pay
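As a purely illustrative calculation of the 2.5% + fixed fee structure (the 0.30 EUR fixed fee below is a made-up placeholder, not a published Bókun Pay figure):

```python
# Illustration only: the fixed fee is a placeholder, not a published Bókun Pay fee.
transaction_amount = 100.00   # EUR
percentage_fee = 0.025        # 2.5% of the transaction
fixed_fee = 0.30              # EUR, hypothetical placeholder value

total_fee = transaction_amount * percentage_fee + fixed_fee
print(f"Fee on a {transaction_amount:.2f} EUR booking: {total_fee:.2f} EUR")  # 2.80 EUR
```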
https://docs.bokun.io/docs/widgets-and-online-sales/payment-providers/bokun-pay-pricing
2021-10-16T05:58:56
CC-MAIN-2021-43
1634323583423.96
[array(['/assets/images/612198ea753668405734aebfb68a46fb-801710d7856d642e7fcfd5c4376f2436.jpeg', None], dtype=object) ]
docs.bokun.io
SphereMesh¶ Inherits: PrimitiveMesh < Mesh < Resource < Reference < Object Class representing a spherical PrimitiveMesh. Description¶ Class representing a spherical PrimitiveMesh. Property Descriptions¶ Full height of the sphere. If true, a hemisphere is created rather than a full sphere. Note: To get a regular hemisphere, the height and radius of the sphere must be equal. Number of radial segments on the sphere. Radius of sphere. Number of segments along the height of the sphere.
https://docs.godotengine.org/zh_CN/latest/classes/class_spheremesh.html
2021-10-16T05:04:03
CC-MAIN-2021-43
1634323583423.96
[]
docs.godotengine.org
User walkthrough¶ The steps a user must follow to use and consume an atom, and also how the user could create and propose a ToolBox. Prerequisites¶ To be able to work in the Plug'In Playground, the user must have some knowledge of: basic Unix/Linux commands: file-related commands (ls, cd, mv, cp...), pipes, finding and filtering text... basic git commands: clone, checkout, branch, pull, merge, config... basic docker commands with their options: pull, build, run Dockerfile structure and commands Quest¶ The user quest can be summarized graphically as follows: A better-looking drawing can be found in Plazza TL;DR¶ Available atoms are listed in Atom Store. In the atom's page, you will find the links to its documentation (Atom Docs) and source code (GitLab). Build and/or deploy in Playground. Repeat with some more atoms and build a Toolbox. Share your experience in the Lucy Wall. Search and find an atom¶ The user visits Atom Store to find an atom that fits his/her needs or covers a research subject. Learning about the atom¶ The user learns by him/herself about the features and how to use the atom by reading: - The documentation in Atom Docs - The source code in the Atom homepage Also he/she can go to the Lucy Wall to look for examples of use cases or learn how to use the atom with more examples than in the developer documentation. Run, test and hack the atom¶ The user opens the playground, starts an instance and pulls the image if it exists in Artifactory. If it is not available in Artifactory, the user clones the project from its GitLab repository and builds the image, following the atom documentation for the instructions. Play with the atom, experiment, hack it, tune it, plug it with other atoms, code some calling functions between them, bundle a group of them to create a Toolbox. Share your experience in the Lucy Wall.
https://atomdocs.pluginthefuture.eu/all_atoms/plugin-guides/docs/user-quest/
2021-10-16T04:41:09
CC-MAIN-2021-43
1634323583423.96
[]
atomdocs.pluginthefuture.eu
Date: Sun, 14 Oct 2012 17:14:09 -0500 From: Joseph a Nagy Jr <[email protected]> To: Polytropon <[email protected]>, FreeBSD Questions <[email protected]> Subject: Re: Graphiz broke because of swig Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> > step, depending in what currently is installed on your system. Thanks, I hate seeming like a noob but its been a while since I've had to get under the hood, so to speak. > Note that using port management tools might be an easier approach > here, but utilizing the power of "bare bone ports" could lead to > better diagnostic messages. I was just following the handbook's suggestion of installing subversion (which on a bare system led to hours of compilations that broke because I didn't know swig was a program to handle what I was optioning in for graphviz or that it was broken and not used). > Anyway, always consult /usr/ports/UPDATING for news. You would > (for example) find something like this: > > 20080507: > AFFECTS: Perl interface users of audio/gramofile > AUTHOR: [email protected] > > Perl support is removed due to devel/swig11 removal in ports. If you use > the Perl interface, you are encouraged to use the new Audio::Gramofile > found on CPAN (contact me for the ports). > > Note that this is a quite old message, quoted as an example only > because it relates to swig. Thanks, I'll definitely do so next time. (: >> I'd rather not reinstall the entire >> system. Thanks. > > The system is managed independently from the installed software, > so actually don't fear: no need to do this. Haha, you have no idea what sort of troubles I sometimes cause for myself in this regard. ;) When I muck something up, I muck it up good! -- Yours in Christ, Joseph A Nagy Jr "Whoever loves instruction loves knowledge, But he who hates correction is stupid." -- Proverbs 12:1 Emails are not formal business letters, whatever businesses may want. Original content CopyFree (F) under the OWL
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=73567+0+/usr/local/www/mailindex/archive/2012/freebsd-questions/20121021.freebsd-questions
2021-10-16T05:13:17
CC-MAIN-2021-43
1634323583423.96
[]
docs.freebsd.org
SimpleSearch Last updated Dec 11th, 2019 | Page history | Improve this page | Report an issue SimpleSearch Snippet¶ This snippet displays search results based on the search criteria sent. Usage¶ Simply place the snippet in the Resource you would like to display search results in. [[!SimpleSearch]] Available Properties¶ SimpleSearch Chunks¶ There are 4 chunks that are processed in SimpleSearch. Their corresponding SimpleSearch parameters are: - tpl - The Chunk to use for each result displayed. - containerTpl - The Chunk that will be used to wrap all the search results, pagination and message. - pageTpl - The Chunk to use for a pagination link. - currentPageTpl - The Chunk to use for the current pagination link. Searching Custom Tables¶ Searching custom tables is available in SimpleSearch using the &customPackages property; however, you must have a custom package built for it. The format is: className:fieldName(s):packageName:packagePath:joinCriteria||class2Name:fieldName(s):package2Name:package2Path:join2Criteria In other words, each custom package is separated by ||. Then, each part of it is separated by colons (:). An example to search Quip comments: &customPackages=`quipComment:body:quip:{core_path}components/quip/model/:quipComment.resource = modResource.id` Let's break down each part: - className - The class name of the table you want to search. Here, it's QuipComment. - fieldName(s) - A comma-separated list of column names to search. We did 'body', you could also have done 'body,email' or whatever. - packageName - The name of the schema Package to add. This one is called quip. - packagePath - The path to the model/ directory where the package is located. - joinCriteria - The SQL to join the table you want to search and the modResource table. Your table must have some connection to the Resource it's on (otherwise SimpleSearch won't know how to load a URL for it!) Once you've added it, it will search those fields as well for data. If it finds it in that table, it will display the result as a link to the Resource you specified in your joinCriteria. In our example, that would be the resource the Quip comment is located on. Examples¶ These examples assume you've already sent the search query with the SimpleSearchForm snippet. Display results, but just show their titles: [[!SimpleSearch? &showExtract=`0`]] Display all results but only in Resources 1, 3, or 4 - or below those Resources - and highlight tags with a 'strong' tag: [[!SimpleSearch? &ids=`1,3,4` &highlightTag=`strong`]] Only find search results that use all the words in the query string, and set the results to the placeholder 'results': [[!SimpleSearch? &useAllWords=`1` &toPlaceholder=`results`]]
https://docs.modx.com/3.x/en/extras/simplesearch/simplesearch
2021-10-16T06:32:13
CC-MAIN-2021-43
1634323583423.96
[]
docs.modx.com
JDBC Driver Diagnostic Service¶ To aid Snowflake Support in diagnosing customer incidents, the Snowflake JDBC driver utilizes a diagnostic service that runs in the background. When the driver encounters an issue that prevents it from performing normally, the diagnostic service records information about the issue in a pair of compressed dump files located in the /tmp/snowflake_dumps folder: sf_incident_<incident_number>.dmp.gz sf_log_<incident_number>.dmp.gz Important The dump files may contain sensitive information (such as IP addresses) to further assist in solving the issue. Note that these files are only stored locally; they are not sent to Snowflake. You must choose to share the files, such as when diagnosing issues with Snowflake Support. If you wish to prevent the creation of these dump files by the drivers, set the snowflake.disable_debug_dumps=true system property. When the driver encounters an issue, the service may also send diagnostic information to Snowflake to help fix the problem. This information includes: Driver version information. A generic description of the issue. Stack traces for the driver that pertain to the issue. Other than the account identifier, these stack traces include no customer information.
https://docs.snowflake.com/en/user-guide/jdbc-diagnostic-service.html
2021-10-16T06:07:41
CC-MAIN-2021-43
1634323583423.96
[]
docs.snowflake.com
Concepts¶ A few concepts are used extensively in Mamba and in this documentation. You should start by getting familiar with them. Prefix/Environment¶ On Unix-like platforms, installing software consists of placing files in subdirectories of an “installation prefix”: no file is placed outside of the installation prefix, and dependencies must be installed in the same prefix (or standard system prefixes with lower precedence). Note Examples on Unix: the root of the filesystem, the /usr/ and /usr/local/ directories. An environment is just another way to call a target prefix. Mamba’s environments are similar to virtual environments known from Python’s virtualenv and similar software, but more powerful since Mamba also manages native dependencies and generalizes the virtual environment concept to many programming languages. Root prefix¶ When downloading the index of packages for environment resolution, or the packages themselves, for the first time, a cache is generated to speed up subsequent operations: the index has a configurable time-to-live (TTL) during which it is considered valid, and the packages are preferentially hard-linked to the cache location. This cache is shared by all environments or target prefixes based on the same root prefix. Basically, that cache directory is a subdirectory located at $root_prefix/pkgs/. The root prefix also provides a convenient structure to store environments, $root_prefix/envs/, even if you are free to create an environment elsewhere. Base environment¶ The base environment is the environment located at the root prefix. This follows the conda implementation, which is still heavily used: conda and mamba are installed alongside a Python installation (since mamba and conda require Python to run), and, being themselves Python packages, they are installed in the base environment, making the CLIs available in all activated environments based on this base environment. Note You can’t create the base environment because it’s already part of the root prefix structure; install directly into base instead. Activation/Deactivation¶ Activation¶ The activation of an environment makes all its contents available to your shell. It mainly adds target prefix subdirectories to your $PATH environment variable. Note The activation implementation is platform dependent. Deactivation¶ Deactivation is the opposite operation of activation, removing from your shell what makes the environment content accessible.
https://mamba.readthedocs.io/en/latest/user_guide/concepts.html
2021-10-16T04:43:06
CC-MAIN-2021-43
1634323583423.96
[array(['../_images/prefix.png', '../_images/prefix.png'], dtype=object)]
mamba.readthedocs.io
JSON Functions DECODE_JSON(expression) Unmarshals the JSON-encoded string into a N1QL value. The empty string is MISSING. ENCODE_JSON(expression) Marshals the N1QL value into a JSON-encoded string. MISSING becomes the empty string. ENCODED_SIZE(expression) Number of bytes in an uncompressed JSON encoding of the value. The exact size is implementation-dependent. Always returns an integer, and never MISSING or NULL. Returns 0 for MISSING. POLY_LENGTH(expression) Returns length of the value after evaluating the expression. The exact meaning of length depends on the type of the value: MISSING: MISSING NULL: NULL String: The length of the string. Array: The number of elements in the array. Object: The number of name/value pairs in the object Any other value: NULL
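As a rough analogy only (plain Python, not the N1QL engine; treating MISSING and NULL as Python None is an assumption of this sketch), the encode/decode and length semantics described above behave roughly like this:

```python
import json

def decode_json(s: str):
    # DECODE_JSON: unmarshal a JSON-encoded string into a value.
    return json.loads(s)

def encode_json(value) -> str:
    # ENCODE_JSON: marshal a value into a JSON-encoded string.
    return json.dumps(value)

def poly_length(value):
    # POLY_LENGTH: the meaning of "length" depends on the type of the value.
    if isinstance(value, (str, list, dict)):
        return len(value)   # string length, array elements, or name/value pairs
    return None             # other values map to NULL in N1QL

print(decode_json('{"a": 1, "b": 2}'))  # {'a': 1, 'b': 2}
print(encode_json([1, 2, 3]))           # [1, 2, 3]
print(poly_length({"a": 1, "b": 2}))    # 2
```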
https://docs.couchbase.com/server/6.0/n1ql/n1ql-language-reference/jsonfun.html
2019-06-16T04:29:12
CC-MAIN-2019-26
1560627997731.69
[]
docs.couchbase.com
Configured List of Pairs Data Source¶ Creating an XML file for the data source¶ The Configured List of Pairs Data Source uses xml files to get the list of pairs that are going to be used. You can create your own list and save it into the repository at “/cstudio/config/sites/{SITE_NAME}/form-control-config/configured-lists”
https://docs.craftercms.org/en/2.5/developers/form-sources/form-source-list-pairs.html
2019-06-16T05:50:05
CC-MAIN-2019-26
1560627997731.69
[array(['../../_images/form-source-list-pairs.png', 'Source Control Configured List of Pairs'], dtype=object)]
docs.craftercms.org
Spring Python is an extension of the Java-based Spring Framework and Spring Security Framework, targeted for Python. It is not a straight port, but instead an extension of the same concepts that need solutions applied in Python. This document provides a reference guide to Spring's features. Since this document is still to be considered very much work-in-progress, if you have any requests or comments, please post them on the user mailing list or on the Spring Python support forums. Before we go on, a few words of gratitude are due to the SpringSource team for putting together a framework for writing this reference documentation.
https://docs.spring.io/spring-python/1.1.x/reference/html/preface.html
2019-06-16T04:56:23
CC-MAIN-2019-26
1560627997731.69
[]
docs.spring.io
Feature: #85164 - Available languages respects site configuration settings¶ See Issue #85164 Description¶ When the backend shows the list of available languages - for instance in the page module language selector, when editing records and in the list module - the list of languages is now restricted to those defined by the site module. If there are for instance five language records in the system, but a site configures only three of them for a page tree, only those three are considered when rendering language drop downs. In case no site configuration has been created for a tree, all language records are shown. In this case the Page TSconfig options mod.SHARED.defaultLanguageFlag, mod.SHARED.defaultLanguageLabel and mod.SHARED.disableLanguages settings are also considered - those are obsolete if a site configuration exists.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.4/Feature-85164-AvailableLanguagesRespectsSiteConfigurationSettings.html
2019-06-16T05:57:09
CC-MAIN-2019-26
1560627997731.69
[]
docs.typo3.org
On this page: Works with: Related pages: Watch the video: What is a Service Endpoint? A particular service in a monitored environment may be used by multiple business transactions in a business application. Service endpoints in the AppDynamics application model are like business transactions. However, while business transactions give you the view of a transaction as processed by possibly many distributed services, service endpoints provide a view of performance that is focused on the service. Service endpoints give you key performance indicators, metrics, and snapshots for the service independent of business transactions, without the downstream performance indicators available in business transactions. Service endpoints are useful for users who want to monitor a given service or set of services regardless of the end-to-end business transactions that use them. Using Service Endpoint AppDynamics discovers and generates service endpoints automatically. You can view, configure, and remove existing service endpoints by clicking the Service Endpoints link in the application navigation menu. To configure how service endpoints are created, use the Configuration > Instrumentation page. You configure Service Endpoints in a similar way as business transactions. In either case, you can can modify the automatic detection rules or add custom ones. Any service that is detectable as business transaction entry points can be automatically detected as service endpoints. A service endpoint adds a small amount of overhead to the system. Agents capture approximately three metrics per service endpoint, so each service endpoint results in the additional metric traffic resulting from three metrics. Diagnostic sessions are not intended to run in the context of a service endpoint alone, so while you can't directly invoke a diagnostic session by service endpoint, you can achieve the same effect by running them on the business transactions that include calls to the service endpoint. These appear in the dashboard for the service endpoint. Note that custom metrics are not available for service endpoints. Service Endpoint Limits The Controller and agent configurations apply limits on the number of service endpoints that can be registered. This prevents the possibility of a boundless expansion of the number of service endpoints. The limits are as follows: For each App Agent, the limit is 25 service endpoints per entry point type and total of 50 per agent. These limits are controlled by the max-service-end-points-per-entry-point-type and max-service-end-points-per-agent node properties respectively. See their descriptions in the App Agent Node Properties Reference page for details. - Nodes are limited to 100 service endpoints. See the description of max-service-end-points-per-node in the node properties reference for details. - For a Controller account, the limit is 4000. For an on-premise Controller, the limit is configurable using the sep.ADD.registration.limitconfiguration property accessible in the Administration Console. - Each thread of execution can only have one service endpoint. See the description of max-service-end-points-per-thread in the node properties reference. Monitor Service Endpoint Performance Service endpoint metrics appear under the Service End Points node in the metric browser. Service endpoint metrics are subject to the same operations as other metrics, including those around metric registration, metric rollups for tiers, and limits on number of metrics. 
The Metric Browser tree includes a branch for service endpoints. Service endpoint metrics follow all the rules for normal metric operations including metric registration and rollup for tiers, limits on number of metrics, and other standard operations. Service endpoints only include entry point metrics. Custom metrics are not supported. You can view performance by service endpoint by clicking the Service Endpoints link from the applications tree. The page lists the service endpoints for the application and performance metrics for the selected time range, including number of calls and errors. Configure Service Endpoints To configure service endpoints your user account must have "Configure Service Endpoints" permissions for the business application. See Roles and Permissions. Configure existing service endpoint detection settings or add new ones in the Service Endpoints page, which you can access by clicking Configuration > Instrumentation > Service Endpoints in the application menu. For existing service endpoint detection rules, you can enable or disable the detection rule or modify the naming rule (for Servlet-based service endpoints). To add a new service endpoint detection rule - Click the Custom Service Endpoint subtab. - Choose the tier on which the service runs and click the plus icon to add a service endpoint configuration. The configuration settings are similar to business transaction entry point configuration settings, as described in Configure Custom Match Rules..
https://docs.appdynamics.com/display/PRO41/Service+Endpoints
2019-06-16T05:08:23
CC-MAIN-2019-26
1560627997731.69
[]
docs.appdynamics.com
Advanced settings You can use specify eazyBI advanced settings in the config/eazybi.toml configuration file. The configuration file uses TOML format. Please see comments and commented examples for each section in this file. On this page: change the default 60 to a different value: [mondrian] "mondrian.rolap.queryTimeout" = 120 Increase concurrent report queries The default number of max concurrent eazyBI report queries is 10. If more MDX queries are made simultaneously, then new queries will wait until previous will finish. If you have a powerful server with many CPU cores and you would like to allow more concurrent MDX queries then you can increase max queries value in eazyBI advanced settings with: [mondrian] "mondrian.rolap.maxQueryThreads" = 20 If you change several Mondrian settings then use just the one [mondrian] section. Enable Mondrian the system admins), owner (only account owners and system admins), report_admin (only account report admins and system admins), user (any user who can create reports). Then, go to the Analyze tab and Enable profiling in the other report actions drop down. After that, every next request execution will be profiled and you can view the last profiling result with Show profiling result. Please send the report definition and profiling result to eazyBI support if you need help with report performance optimization. Currently Mondrian request profiling for reports in accounts with a custom schema will not include SQL queries that are generated by Mondrian. In other accounts SQL queries are filtered by standard table schema prefixes but in custom schemas currently there is no standard way how to identify these Mondrian queries. SSRF protection Available from the eazyBI version 4.6.0. SSRF (Server Side Request Forgery) protection allows to prevent eazyBI REST API and SQL import from other hosts in the same local network where the eazyBI setting."]
https://docs.eazybi.com/eazybiprivate/set-up-and-administer/system-administration/advanced-settings
2019-06-16T05:30:33
CC-MAIN-2019-26
1560627997731.69
[]
docs.eazybi.com
This topic covers basic steps and operations to test a CORBA (Common Object Request Broker Architecture) server. There are several ways to ensure the correct functionality of a CORBA server; the following are a few examples and simple exercises to help you better understand how SOAtest can simplify the process of server testing. Different scenarios will show how SOAtest can be incorporated into the testing of non-SOAP servers. Sections include: Scenario 1: CORBA Client Has Not Yet Been Implemented Note: Continue to Scenario 2 if you already have a Java client created. To use the interfaces/IDL offered by the server, you need to generate the java stubs on the client side. In this section we are going to cover simple IDL to Java conversions. A sample Calculator.idl file for the following exercise is included in <SOAtest installation directory/<version>/eclipse/plugins/com.parasoft.xtest.libs.web_<version-date>/root/build/examples/CORBA. In order to use IDLJ, make sure you have J2SDK installed and set the PATH variable so you can access the J2SDK’s executables from any directory. To convert IDL to Java using IDLJ, complete the following: - In command prompt, change the current directory to the folder that contains Calculator.idl (In this example C:\Program Files\Parasoft\SOAtest\[SOAtest version number]\eclipse\plugins\com.parasoft.xtest.libs.web_9.6.0.20130917\root\build\examples\CORBA) - Type: “ idlj –pkgTranslate Persistent examples.CORBA –fall Calculator.idl” to automatically generate packages with correct paths. - Compile the java files by typing: javac/examples/CORBA/*.java. Now you have the necessary class files needed to communicate with the server. Please continue on to Scenario 2 to interface SOAtest with an existing java client. For more information on IDLJ see the Oracle Java documentation. Scenario 2: Interfacing SOAtest with an Existing Java Client In this section we will demonstrate how to invoke Java services from a CORBA server by using SOAtest’s Extension tool. - Create an Extension tool by right-clicking on the test suite and select Add Test> Standard Test> New Tool> Extension. Select the Extension tool node, in the right GUI panel select the appropriate language from the Language drop-down menu to access your CORBA Java Client. For example, for Jython you can enter something similar to the following in the Text field: # In our example, examples.CORBA.PersistentClient is our CORBA Java Client from examples.CORBA import * from java.lang import * def foo(input, context): # Here we are Initializing the client by providing location of the server, # port number, and the service name client = PersistentClient("goldfish.parasoft.com", 2222, "GoldfishCorbaServer") # Here we are making the actual Method Invocation onto the Service "add(x,y)" return client.add(3, 5) - Right-click within the Text field and select Evaluate from the shortcut menu to make sure the syntax is correct. If the syntax is correct, the name of the function should be auto-populated into the Method drop-down menu: foo(). - Right click on the Extension tool node and select Add Return Value Output> Existing Output> Edit to show the returned values after execution of the test. - Run the test, if the test succeeds the return values should appear in the right GUI panel. - If the test failed, returning a Null Pointer exception on the edit screen; check the CORBA server and make sure the server is listening on the designated port and that the service is up and running. 
Scenario 3: Interfacing SOAtest with an Existing non-Java Client In this section we will demonstrate how to invoke non-Java services from CORBA server by using SOAtest’s External tool. - Create an external tool by right clicking on the test suite and select Add Test> Standard Test> New Tool> External Tool. - Select the External tool node and change its name to CORBA Client. - Click on the Browse button and select the path to the CORBA client executable. - If CORBA client takes in parameters, add each argument buy clicking on the ADD button. A new line will get generated, allowing users to input a flag and argument associated with the executable. - Double-click on the line generated to enter flag and argument. A new dialog box will pop up; change the name and argument accordingly. - If you wish to use a parameterized value, select Parameterized in the Value drop-down menu and select variable name in the Variable drop-down menu then click OK. - In the right GUI panel select the Keep output check box to keep the returned values after each test run. - Right-click the External tool node and select Add Return Value Output> Existing Output> Edit to show the returned values after execution of the test. - Run the test, if the test succeeds return values should appear in the right GUI panel. - If the test failed, returning a Null Pointer exception on the edit screen; check the CORBA server and make sure the server is listening on the designated port and that the service is up and running.
https://docs.parasoft.com/display/SOA9105/CORBA
2019-06-16T04:46:07
CC-MAIN-2019-26
1560627997731.69
[]
docs.parasoft.com
Change the preference to submit a form with the enter key By default, pressing the Enter key in a simple, one-line, choice list, or Boolean field submits the form. Before you begin: Role required: admin. About this task: A system preference controls this behavior, and it can be deactivated. Procedure: From the left navigation pane, select User Administration > User Preferences. Select the enter_submits_form preference. Set the value to false. Click Update. The change does not take effect until user preferences are reloaded, either at login or when a session is created.
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/form-administration/task/t_ChangeTheEnterSubmitsFormPref.html
2019-06-16T05:17:29
CC-MAIN-2019-26
1560627997731.69
[]
docs.servicenow.com
IT Service Management Deliver IT Service Management on a single, cloud-based platform. Tips for successful implementation of IT service management (ITSM) on the ServiceNow platform. These tips also apply more broadly to the ServiceNow platform in general. Asset Management: The ServiceNow® Asset Management application integrates the physical, technological, contractual, and financial aspects of information technology assets. Contract Management: Manage and track contracts with the ServiceNow® Contract Management application. Benchmarks. Change Management: The ServiceNow® Change Management application provides a systematic approach to control the life cycle of all changes, facilitating beneficial changes to be made with minimum disruption to IT services.
https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/it-service-management/reference/r_ITServiceManagement.html
2019-06-16T05:23:46
CC-MAIN-2019-26
1560627997731.69
[]
docs.servicenow.com
Crafter Profile Admin Console UI The Crafter Profile Admin Console consists of a single WAR file, with a dependency on access to Crafter Profile. This web application provides a simple way to manage all data related to tenants and profiles without the need to call the Crafter Profile API directly. Installation New Installation You can follow the instructions for building a complete bundle as described here. If you add the parameter -Pcrafter.profile=true, the bundle will contain crafter-profile.war and crafter-profile-admin-console.war. Existing Installation If you want to add the Crafter Profile Admin Console to an existing installation, you only need to build or download the WAR file, making sure it matches the version of all other components. To deploy the application, move the WAR file into $CRAFTER_HOME/bin/apache-tomcat/webapps. Configuration Guide Similar to other Crafter CMS components, you can configure the Profile Admin Console using a simple properties file placed in the following location: $CRAFTER_HOME/bin/apache-tomcat/shared/classes/crafter/profile/management/extension/server-config.properties You can change any of the default configuration values; some of the more relevant properties are:
https://docs.craftercms.org/en/3.0/system-administrators/profile/admin/index.html
2019-06-16T05:49:40
CC-MAIN-2019-26
1560627997731.69
[]
docs.craftercms.org
Building Support for extrusions was added with 5.1.0 of the Maps SDK, unlocking the possibility to display 3D buildings on your favorite map style. The building plugin extends this functionality and makes it even easier to add buildings to a map style. Install the Building Plugin To start developing an application using the Building Plugin, head over to the Mapbox Plugin Overview page, which will walk you through adding the dependency. - Start Android Studio. - Open up your application's build.gradle file. - Make sure that your project's minSdkVersion is API 14 or higher. - Under dependencies, add a new build rule for the latest mapbox-android-plugin-building-v8. repositories { mavenCentral() } dependencies { implementation 'com.mapbox.mapboxsdk:mapbox-android-plugin-building-v8:0.6.0' } - Click Sync Project with Gradle Files near the toolbar in Android Studio. Add the Building Plugin The Building Plugin requires no additional permissions and is initialized by passing in both the map view and mapboxMap objects that you'd like the building layer to show on. Besides the required parameters, you also have the option to provide a layer ID below which you'd like the buildings to appear. Once initialized, setting setVisibility() to true will result in the building layer getting added on top of your map style. BuildingPlugin buildingPlugin = new BuildingPlugin(mapView, mapboxMap); buildingPlugin.setVisibility(true); Customization While the building plugin provides default values which look good for most use cases, you might find yourself wanting to customize the look of the buildings to match a map style. Several APIs are available for changing building color, opacity, what zoom level buildings should start appearing at, etc. The table below provides information on the current APIs useful for customization.
https://docs.mapbox.com/android/plugins/overview/building/
2019-06-16T05:54:53
CC-MAIN-2019-26
1560627997731.69
[]
docs.mapbox.com
If you are migrating a high availability (HA) pairing of SoftNAS instances, it is not as simple as using the one-click update on each side of the pairing. In order to preserve replication and achieve a seamless migration, the following additional steps are required. In this knowledge base article, we provide clear instructions on how to perform a successful software update of an HA pairing to the latest version. Sequence to upgrade an HA pair If migrating a version of SoftNAS lower than 3.3.3, refer to the following knowledge base articles in order to upgrade.
https://docs.softnas.com/plugins/viewsource/viewpagesrc.action?pageId=6783009
2019-06-16T04:32:07
CC-MAIN-2019-26
1560627997731.69
[]
docs.softnas.com
Wrangle is the domain-specific language used to build transformation recipes in Trifacta® Wrangler. A Wrangle recipe is a sequence of transforms applied to your dataset in order to produce your results. - A transform is a single action applied to a dataset. For most transforms, you can pass one or more parameters to define the context (columns, rows, or conditions). - Some parameters accept one or more functions. A function is a computational action performed on one or more columns of data in your dataset. Example: derive type:single value: myCol as:'myNewCol' Column names with spaces or special characters in a transformation must be wrapped by curly braces. Example: Below, srcColumn is renamed to src Column, which requires no braces because the new name is captured as a string literal: rename type: manual mapping: [srcColumn, 'src Column'] After the column has been renamed with a space, it must be referenced in curly braces to be renamed back to its original name: rename type: manual mapping: [{src Column},'srcColumn'] Functions Some parameters accept functions as inputs. Where values or formulas are calculated, you can reference one of the dozens of functions available in Wrangle. Example: derive type:single value:MULTIPLY(3,2) as:'six' Metadata variables Wrangle supports the use of variable references to aspects of the source data or dataset. In the following example, the ABS function is applied to each column in a set of them using the $col reference. set col: val1,val2 value: ABS($col) Functions can also be nested within one another. Example: derive type:single value:ROUND(DIVIDE(10,3),0) as:'three' Integer A valid integer value within the accepted range of values for the Integer datatype. For more information, see Supported Data Types. Example: Generates a column called my13, which is the sum of the Integer values 5 and 8: derive type:single value: (5 + 8) as:'my13' Decimal A valid floating point value within the accepted range of values for the Decimal datatype. For more information, see Supported Data Types. Example: Generates a column of values that computes the approximate circumference of the values in the diameter column: derive type:single value: (3.14159 * diameter) as: 'circumference' Boolean A true or false value. Example: If the value in the order column is more than 1,000,000, then the value in the bigOrder column is true. derive type:single value:IF(order > 1000000, true, false) as:'bigOrder' String A string literal value is the baseline datatype. String literals must be enclosed in single quotes. Example: Creates a column called StringCol, containing the value myString. derive type:single value:'myString' as:'StringCol' Example: extract col: MyData on:`%{3}-%{2}-%{4}` limit:10 Regular expression Regular expressions are a common standard for defining matching patterns. Regex is a very powerful tool but can be easily misconfigured. Regular expressions must be enclosed in slashes ( /MyPattern/ ). Example: Deletes all one- or two-digit numbers from the qty column: replace col: qty on: /^\d$|^\d\d$/ with: '' global: true Datetime Example: derive type:single value:DATEFORMAT(myDate, 'yyyymmdd') Array A valid array of values matching the Array data type. Example: [0,1,2,3,4,5,6,7,8] See Supported Data Types.
Example: Generates a column with the number of elements in the listed array ( 7): derive type:single value: ARRAYLEN('["red", "orange", "yellow", "green", "blue", "indigo", "violet"]') Object A valid set of values matching the Object data type. Example: {"brand":"Subaru","model":"Impreza","color","green"} See Supported Data Types. Example: Generates separate columns for each of the specified keys in the object ( brand, model, color), containing the corresponding value for each row: unnest col:myCol keys:'brand','model','color'. NOTE: The generated output applies only to the values displayed in the data grid. The function is applied across the entire dataset only during job execution. - Wrangle is also available through Trifacta Wrangler. Select Help menu > Product Docs. Tip: When searching for examples of transforms and functions, try using the following forms for your search terms within the Product Docs site: - Transforms: wrangle_transform_NameOfTransform - Functions: wrangle_function_NameOfFunction All Topics Topics: - Transforms - - Sort Transform - Split Transform - Splitrows Transform - Unnest Transform - Unpivot Transform - Valuestocols Transform - Window Transform - Aggregate Functions - - Logical Functions - Comparison Functions - Math Functions - Numeric Operators - ADD Function - SUBTRACT Function - MULTIPLY Function - DIVIDE Function - MOD Function - NEGATE Function - LCM Function - NUMFORMAT Function - ABS Function - EXP Function - LOG Function - POW Function - CEILING Function - LN Function - SQRT Function - FLOOR Function - ROUND Function - TRUNC Function - RADIANS Function - DEGREES Function - SIGN Function - Date Functions - String Functions - CHAR Function - UNICODE Function - UPPER Function - LOWER Function - PROPER Function - TRIM Function - REMOVEWHITESPACE Function - REMOVESYMBOLS Function - LEN Function - FIND Function - RIGHTFIND Function - SUBSTRING Function - SUBSTITUTE Function - LEFT Function - RIGHT Function - MERGE Function - STARTSWITH Function - ENDSWITH Function - REPEAT Function - EXACT Function - STRINGGREATERTHAN Function - STRINGGREATERTHANEQUAL Function - STRINGLESSTHAN Function - STRINGLESSTHANEQUAL Function - PAD Function - DOUBLEMETAPHONE Function - DOUBLEMETAPHONEEQUALS Function - TRANSLITERATE Function - Nested Functions - ARRAYCONCAT Function - ARRAYCROSS Function - ARRAYINTERSECT Function - ARRAYLEN Function - ARRAYSTOMAP Function - ARRAYUNIQUE Function - ARRAYZIP Function - FILTEROBJECT Function - KEYS Function - ARRAYELEMENTAT Function - LISTAVERAGE Function - LISTMAX Function - LISTMIN Function - LISTMODE Function - LISTSTDEV Function - LISTSUM Function - LISTVAR Function - ARRAYSORT Function - ARRAYINDEXOF Function - ARRAYMERGEELEMENTS Function - ARRAYRIGHTINDEXOF Function - ARRAYSLICE Function - Type Functions - Window Functions - PREV Function - NEXT Function - FILL Function - ROLLINGAVERAGE Function - ROLLINGMAX Function - ROLLINGMIN Function - ROLLINGSUM Function - ROLLINGSTDEV Function - ROLLINGVAR Function - ROWNUMBER Function - SESSION Function - ROLLINGMODE Function - ROLLINGCOUNTA Function - ROLLINGLIST Function - ROLLINGKTHLARGEST Function - ROLLINGKTHLARGESTUNIQUE Function - RANK Function - DENSERANK Function - Other Functions - Other Language Topics - Language Index This page has no comments.
https://docs.trifacta.com/display/SS/Wrangle+Language
2019-06-16T05:34:52
CC-MAIN-2019-26
1560627997731.69
[array(['/download/resources/com.adaptavist.confluence.rate:rate/resources/themes/v2/gfx/loading_mini.gif', None], dtype=object) array(['/download/resources/com.adaptavist.confluence.rate:rate/resources/themes/v2/gfx/rater.gif', None], dtype=object) ]
docs.trifacta.com
Changelog for package kvh 1.0.3 (2017-03-11) fixed pkg name for launch file added check for testing to remove builder warning added auto generated wiki page for documentation Contributors: Geoff Viola, geoffviola 1.0.2 (2016-09-10) relaxed errors as warnings Contributors: Geoff Viola 1.0.1 (2016-08-13) added installation path for library quiting starting warning unit tested driver roslinted all files clang format Merge pull request #2 from geoffviola/master Removed bad cereal API call bug fixed author tag change the maintainer and author CMake cleanup and reverted back to kvh package Update README.md fixes for kinetic added invert option and changed topic name tested on real dsp3000 added configuration modes via parameters formatted file added a larger buffer for a slower processor consistently reading debugging some changes Initial untested Indigo release the port defined in the launch file now passes properly. Expanded cereal_port README file Removed a bunch of old .svn directories Added header info to dsp3000.cpp Output changed from degrees to radians. Add the needed add_boost_directories macro. Fixed _pub name and TIMEOUT definition Merge branch 'master' of Make package build against local copy of cereal_port. Remove unneeded stuff from cereal_port. removed the scripts directory Initial "write" to device made more efficient, exception has a better description. Added ROS_DEBUG output Changed TIMEOUT from define to const Deleted some temp files Added .gitignore Renamed "flavour" directory to "scripts" deleted build files Initial push containing entire DSP-3000 ROS node. Initial commit Contributors: Geoff Viola, Jeff Schmidt, Mike Purvis, geoffviola, jeff-o, sbir
http://docs.ros.org/en/kinetic/changelogs/kvh/changelog.html
2022-09-25T08:26:19
CC-MAIN-2022-40
1664030334515.14
[]
docs.ros.org
. If you're using Oracle JDK 8, it must be on update 291 or later. By default, Transport Layer Security (TLS) 1.2 and higher is required for all external sources connecting to Appian. This is because TLS 1.0 and 1.1 have outdated security, so we removed them from the bundled JDK. This is applicable for web browsers, databases, authentication, and integrations during their TLS handshake. Appian strongly urges customers to upgrade to TLS 1.2 or above for all connected systems. However, for customers that still need to connect to systems using TLS 1.0 or 1.1, follow the steps in the Post-Install Configuration page to enable TLS 1.0 and above. the following: Based on these factors, your actual requirements may vary. Sizing is best run with sample data while your application is under development, and with real data after your application is complete.: For customers who wish to run their non-production environments on Kubernetes using the Appian Operator must have their clusters set to run with Kubernetes version 1.16-1.22. For more details of the Appian Operator see Appian on Kubernetes. As Appian on Kubernetes is only supported for non-production environments, there is no supported path for upgrading from 21.4 to 22.1 or higher. For environments where future upgrades are desired, use Appian 22.1., other JDBC-compliant databases can be queried using a connected system plug-in. Appian recommends a round-trip time for TCP communications with the database of less than 10 milliseconds with an upper bound of 25 milliseconds for acceptable performance. Network latency outside of these bounds will result in degraded system performance. If you are connecting to an Amazon Aurora data source through the Admin Console, for Type, select the type of database that matches your Aurora version. If you are using Aurora MySQL, choose MariaDB instead of MySQL, since Amazon recommends using the MariaDB driver. If you are using Aurora PostgreSQL, simply choose PostgreSQL. The supported Web browsers are listed in the table below. Use the Appian Mobile application for iOS and Android instead of mobile browsers. See Mobile Devices for more information. Web browsers must allow cookies. If a user's browser is not configured to allow cookies, then Appian displays an alert stating that cookies must be enabled in order to log in. Appian uses browser cookies to maintain user sessions, to enable protections against threats such as cross-site request forgeries (CSRF), and, if configured, remember certain user choices between sessions. The cookies contain anonymized tokens and unique identifiers. No personally identifiable information (PII) is ever stored by Appian in a browser cookie. With Microsoft ending its support for Internet Explorer 11 (IE11) this summer, Appian will also end support for IE11 shortly afterwards, in November 2022.. The Appian Mobile iOS application is generally supported on the latest version of iOS and one prior major version. As of today, we support iOS 15 and iOS 14. The Appian Mobile Android application is generally supported on the latest version of Android OS and three prior major versions. As of today, we support Android 12, Android 11, Android 10, and Android 9. We do our best to maintain support for older Android OS versions. So you should still be able to use the Appian Mobile application on older OS versions. However, we do not commit to addressing issues specific to an unsupported version. 
It is important that you use an OS version supported by Google and Apple in order to ensure that you have the latest security updates to protect your enterprise data. For more information, please refer to the security bulletins published and maintained by Google and Apple. Network File System (NFS) protocol is supported. Server Message Block (SMB) protocol is unsupported. On This Page
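Since the database latency guidance above is specific (under 10 milliseconds, with 25 milliseconds as the upper bound), it can be worth sanity-checking the link before go-live. The hedged Python sketch below times repeated TCP connections to the database host; a TCP connect is only a rough proxy for query round-trip time, and the host name and port are placeholders.

# Rough sketch: measure TCP round-trip time to a database host a few times
# and compare against the 10-25 ms guidance. Host and port are placeholders.
import socket
import time

def connect_times(host, port, samples=10):
    times_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return times_ms

if __name__ == "__main__":
    results = connect_times("db.example.com", 3306)
    print(f"min={min(results):.1f} ms  avg={sum(results)/len(results):.1f} ms  max={max(results):.1f} ms")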
https://docs.appian.com/suite/help/21.4/System_Requirements.html
2022-09-25T08:03:20
CC-MAIN-2022-40
1664030334515.14
[]
docs.appian.com
You can select the sidebar layout for different pages, such as the post page, archive page, and default sidebar. Please follow the steps below to select the sidebar for different pages of your website. - Log in to the WordPress Admin Panel - Go to Appearance> Customize> Layout Settings> General Sidebar Layout - Select the Sidebar Layout for Page, Post, and Default Sidebar layout. - Click Publish
https://docs.blossomthemes.com/blossom-pinthis/appearance-settings/how-to-change-the-general-sidebar-layout/
2022-09-25T08:51:36
CC-MAIN-2022-40
1664030334515.14
[]
docs.blossomthemes.com
Release Notes Following Release Notes describe status of Open Source Firmware development for Protectli VP2410 For details about our release process please read Dasharo Standard Release Process. Test results for this platform can be found here. v1.0.15 - 2022-05-31 Changed - Customized Network boot menu and strings Fixed - SMBIOS memory information showing 0 MB DRAM in setup Known issues Binaries The binaries will be published by Protectli on their webpage. As soon as they show up, Dasharo will link to them as well.
https://docs.dasharo.com/variants/protectli_vp2410/releases/
2022-09-25T08:56:35
CC-MAIN-2022-40
1664030334515.14
[]
docs.dasharo.com
July 2022 3 months ago by Frankie Freedom Added - API - adding support for templating smart alerts - APP - stream dashboard improvements with Graph View - APP - Send Custom Commands in the Device Settings - APP - Added Map Markers that will provide an intuitive image on the map - APP - Added ROS Launch and Local Scripts to the Send Commands interface. - APP - Added Labels to the Stream Dashboard State Settings. - APP - adding smart notification support for SNPP Emitters - APP - setting up the user experience for new settings to configure smart alerts Fixed - API - Updated Agent Installation Script to account for other legacy dependencies - APP - Updated Zone Map Device Paths to be accessible to all users. Deprecations - APP - Device centric smart alerts are removed and the page will redirect to configurations.
https://docs.freedomrobotics.ai/changelog/unreleased-1
2022-09-25T08:50:44
CC-MAIN-2022-40
1664030334515.14
[]
docs.freedomrobotics.ai
lbuild module: modm:driver:lawicel Converts modm::can::Message to and from the Lawicel string format (char *). Lawicel AB offers medium-sized CAN to USB and CAN to RS232 converters. Their data format is widely used. This converter only understands messages of type 'r', 't', 'R' and 'T', which transmit CAN frames. It does not understand commands to change the baud rate.
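To make the framing concrete, here is a hedged Python sketch of the common Lawicel/SLCAN text encoding for data frames: 't' plus a 3-hex-digit identifier for standard frames, 'T' plus an 8-hex-digit identifier for extended frames, followed by the DLC and the payload bytes in hex. It illustrates the wire format only and is not part of the modm driver's API.

# Sketch of Lawicel/SLCAN-style encoding for classic CAN data frames.
# Assumes the usual convention: 't<III><L><DD..>' for standard frames and
# 'T<IIIIIIII><L><DD..>' for extended frames.
def encode_lawicel(can_id, data, extended=False):
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    ident = f"{can_id:08X}" if extended else f"{can_id:03X}"
    frame_type = "T" if extended else "t"
    payload = "".join(f"{b:02X}" for b in data)
    return f"{frame_type}{ident}{len(data)}{payload}"

if __name__ == "__main__":
    # Standard 11-bit frame, identifier 0x123, two data bytes.
    print(encode_lawicel(0x123, bytes([0xAB, 0xCD])))  # -> t1232ABCD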
https://docs.modm.io/develop/api/attiny85v-20su/group__modm__driver__lawicel.html
2022-09-25T07:53:54
CC-MAIN-2022-40
1664030334515.14
[]
docs.modm.io
Portworx on Nomad This section covers information on operating and utilizing Portworx on Nomad. Operate and utilize Portworx on Nomad Volume Lifecycle Basics with CSI: Instructions on creating and using CSI volumes. Secure your volumes with PX Security: Instructions on securing your CSI volumes with PX Security. Data Protection and Snapshots: Instructions on protecting volume data with snapshots. Open Source Nomad is an open source project developed by HashiCorp and a community of developers. CSI support for Nomad is still in active development with many open issues. Portworx participates in and encourages open source contributions to Nomad as well as the CSI spec.
https://docs.portworx.com/install-portworx/install-with-other/nomad/
2022-09-25T08:36:26
CC-MAIN-2022-40
1664030334515.14
[]
docs.portworx.com
Portworx with CSI CSI, or Container Storage Interface, is a model for integrating storage system service with Kubernetes and other orchestration systems. Kubernetes has supported CSI since 1.10 as beta. With CSI, Kubernetes gives storage drivers the opportunity to release on their schedule. This allows storage vendors to upgrade, update, and enhance their drivers without the need to update Kubernetes, maintaining a consistent, dependable, orchestration system. Using Portworx with CSI, you can perform the following operations: - Create and use CSI-enabled persistent volumes - Secure your CSI-enabled volumes with token authorization and encryption defined at the StorageClass or the PVC level - Take snapshots of CSI-enabled volumes - Create sharedv4 CSI-enabled volumes Supported features The following table shows the core features supported by CSI and which minimum versions of Portworx and Kubernetes they require. Portworx, Inc. does not recommend that you use alpha Kubernetes features in production as the API and core functionality are not finalized. Users that adopt alpha features in production may need to perform costly manual upgrades. Contribute Portworx, Inc. welcomes contributions to its CSI implementation, which is open-source and repository is at OpenStorage. In addition, we also encourage contributions to the Kubernetes-CSI open source implementation.
https://docs.portworx.com/operations/operate-kubernetes/storage-operations/csi/
2022-09-25T09:18:49
CC-MAIN-2022-40
1664030334515.14
[]
docs.portworx.com
Difference between revisions of "XQuery Update" Revision as of 14:06, 8 December 2010. New Expressions The XQUF offers five new expressions to modify data. While insert, delete, rename and replace basically explain themselves, the transform expression is different. Modified nodes are copied in advance and the original databases remain untouched. fn:put() Function fn:put() & Fragments
https://docs.basex.org/index.php?title=XQuery_Update&diff=next&oldid=277
2022-09-25T08:18:36
CC-MAIN-2022-40
1664030334515.14
[]
docs.basex.org
AngularJS: Overview¶ AngularJS is a client-side framework for development of rich web applications. The core CiviCRM application uses AngularJS for several administrative screens, and extensions increasingly use AngularJS for "leaps" that add or replace major parts of the application. This documentation aims to explain how AngularJS works within a CiviCRM context. AngularJS versions¶ - CiviCRM use AngularJS 1.x which has documentation at docs.angularjs.org - In version 2.x (and onwards) the framework is just called "Angular" and is a significantly different framework from 1.x. The Angular website is angular.io, which you should steer clear of while learning AngularJS. Tip To determine the specific version of AngularJS used within your site: - Go to the default Angular base page for your site at - Open a browser console - Evaluate angular.versionwithin the console Two cultures¶ CiviCRM is an extensible PHP application (similar to Drupal, Joomla, or WordPress). In this culture, the common expectation is that an administrator installs the main application. To customize it, they download, evaluate, and configure a set of business-oriented modules. The administrator's workflow is dominated by web-based config screens and CLI commands. AngularJS is a frontend, Javascript development framework. In this culture, the expectation is that a developer creates a new application. To customize it, they download, evaluate, and configure a set of function-oriented libraries. The developer's workflow is dominated by CLI's and code. The CiviCRM-AngularJS integration must balance the expectations of these two cultures. The balance works as follows: - Build/Activation: The process of building or activating modules should meet administrators' expectations. It should be managed by the PHP application. (This means that you won't see gulpor gruntorchestrating the final build -- because PHP logic fills that role.) - Frontend Code uses Angular (JS+HTML): The general structure of the Javascript and HTML files should meet the frontend developers' expectations. These files should be grounded in the same notations and concepts as the upstream AngularJS framework. (This means that AngularJS is not abstracted, wrapped, or mapped by an intermediary like HTML_QuickForm, Symfony Forms or Drupal Form API.) - Backend Code uses Civi API (PHP): The general structure of web-services should meet the backend developers' expectations. These are implemented in PHP (typically with CiviCRM APIv3). Basics¶ AngularJS is a client-side Javascript framework, and it interacts with CiviCRM in two major ways. To see this, let's consider an example AngularJS page -- it's an HTML document that looks a lot like this: <!-- URL: --> 1: <html> 2: <head> 3: <link rel="stylesheet" href="**all the CSS files**" /> 4: <script type="text/javascript" src="**all the Javascript files**"></script> 5: <script type="text/javascript">var CRM = {**prefetched settings/data**};</script> 6: </head> 7: <body> 8: <div>...site wide header...</div> 9: <div ng-</div> 10: <div>...site wide footer...</div> 11: </body> 12: </html> The first interaction comes when CiviCRM generates the initial HTML page: - CiviCRM listens for requests to the path civicrm/a. (It does this in a way which is compatible with multiple CMSs -- Drupal, Joomla, WordPress, etc.) - CiviCRM builds the list of CSS/JS/JSON resources in lines 3-5. (It does this in a way which allows extensions to add new CSS/JS/JSON. See also: Resource Reference.) 
- CiviCRM ensures that the page includes the site-wide elements, such as lines 8 and 10. (It does this in a way which is compatible with multiple CMSs.) Once the page is loaded, it works just like any AngularJS 1.x application. It uses concepts like ng-app, "module", "directive", "service", "component", and "partial". Read more about AngularJS 1.x A good resource for understanding AngularJS concepts is the official AngularJS tutorial. The second interaction comes when the AngularJS application loads or stores data. This uses the CiviCRM API. Key concepts in CiviCRM API include "entity", "action", "params", the "API Explorer", and the bindings for PHP/Javascript/CLI. Read more about CiviCRM API A good resource for understanding CiviCRM API concepts is the APIv3: Intro. In the remainder of this document, we'll try to avoid in-depth discussion about the internals of AngularJS 1.x or APIv3. You should be able to follow the discussion if you have a beginner-level understanding of both.
https://docs.civicrm.org/dev/en/latest/framework/angular/
2022-09-25T08:14:44
CC-MAIN-2022-40
1664030334515.14
[]
docs.civicrm.org
Cleaning up after failed jobs The S3A committers upload data in the tasks, completing the uploads when the job is committed. - Go to the AWS S3 console. - Find the bucket you are using as a destination of work. - Select the Management tab. - Select Add a new lifecycle rule. - Create a rule “cleanup uploads” with no filter, and without any “transitions”. Configure an “Expiration” action of Clean up incomplete multipart uploads. - Select a time limit for outstanding uploads, such as 1 Day. - Review and confirm the lifecycle rule. You need to select a limit for how long uploads can be outstanding. For Hadoop applications, this is the maximum time that either an application can write to the same file or a job may take. If the timeout is shorter than either of these, then programs are likely to fail. Once the rule is set, the cleanup is automatic.
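If you would rather script the rule than click through the console, the hedged boto3 sketch below configures the same cleanup. The bucket name is a placeholder, and note that put_bucket_lifecycle_configuration replaces any existing lifecycle rules on the bucket, so merge with your current rules first if you have any.

# Sketch: add a lifecycle rule that aborts incomplete multipart uploads after 1 day.
# Warning: this call overwrites the bucket's existing lifecycle configuration.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-destination-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cleanup uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # no filter: applies to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)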
https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/cloud-data-access/topics/cr-cda-cleaning-up-after-failed-jobs.html
2022-09-25T08:34:07
CC-MAIN-2022-40
1664030334515.14
[]
docs.cloudera.com
- If they exist, move the old InnoDB log files /var/lib/mysql/ib_logfile0 and /var/lib/mysql/ib_logfile1 out of /var/lib/mysql/ to a backup location. - Determine the location of the option file, my.cnf (/etc/my.cnf by default). - Update the my.cnf property to O_DIRECT. - Set the max_connections property according to the size of your cluster: - Fewer than 50 hosts - You can store more than one database (for example, both the Activity Monitor and Service Monitor) on the same host. If you do this, you should: - Put each database on its own physical disk for best performance. You can do this by manually setting up symbolic links or running multiple database instances (each instance uses a different data directory path). - Allow 100 maximum connections for each database and then add 50 extra connections. For example, for two databases, set the maximum connections to 250. If you store five databases on one host (the databases for Cloudera Manager Server, Reports Manager, and so on), set the maximum connections to 550. - Run /usr/bin/mysql_secure_installation to set the MariaDB root password and other security-related settings. In a new installation, the root password is blank. Installing the MySQL client CDP uses Python version 3.8. To use MariaDB as a backend database for Hue, you must install the MySQL client and other required dependencies on all the Hue hosts based on your operating system. - SSH into the Hue host as a root user. - Install the required dependencies as follows. RHEL/CentOS: yum install mysql-devel SLES: zypper install libmysqlclient-devel zypper install xmlsec1 zypper install xmlsec1-devel zypper install xmlsec1-openssl-devel Ubuntu/Debian: apt-get install libmysqlclient-dev apt-get install -y xmlsec1 apt-get install libxmlsec1-openssl - Add the path where you installed the packages to the PATH environment variable as follows: export PATH=/usr/local/bin:$PATH - Install the MySQL client as follows: pip3.8 install mysqlclient Creating Databases for Cloudera Software Services that require databases Create databases and service accounts for components that require databases: - Cloudera Manager Server - Cloudera Management Service roles: - Reports Manager - Data Analytics Studio (DAS) Supported with PostgreSQL only. - Hue - Each Hive metastore - Oozie - Data Analytics Studio - Schema Registry - Streams Messaging Manager Steps - Log in as the root user. Create each database with the utf8 character set. Include the character set for each database when you run the CREATE DATABASE statements described below. CREATE DATABASE <database> DEFAULT CHARACTER SET utf8; GRANT ALL ON <database>.* TO '<user>'@'%' IDENTIFIED BY '<password>'; - Record the values you enter for database names, usernames, and passwords. The Cloudera Manager installation wizard requires this information to correctly connect to these databases. Next Steps - If you plan to use Apache Ranger, see the following topic for instructions on creating and configuring the Ranger database. See Configuring a Ranger or Ranger KMS Database: MySQL/MariaDB.
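Because the mysqlclient package is installed in the steps above anyway, the same CREATE DATABASE / GRANT pattern can also be driven from a short Python script, which some teams prefer for repeatability. A hedged sketch follows; the host, credentials, and the hue database and user names are placeholders.

# Sketch: create a service database with the utf8 character set and grant access,
# using the mysqlclient (MySQLdb) package installed with pip3.8 above.
# Host, credentials, database name, and user name are placeholders.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="root_password")
try:
    cur = conn.cursor()
    cur.execute("CREATE DATABASE hue DEFAULT CHARACTER SET utf8")
    cur.execute("GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY 'hue_password'")
finally:
    conn.close()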
https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/installation/topics/install_cm_mariadb.html
2022-09-25T08:20:55
CC-MAIN-2022-40
1664030334515.14
[]
docs.cloudera.com
LDAP search fails with invalid credentials error LDAP authentication fails with an "Invalid credentials" error, even if you input valid login credentials on the Hue login page, and you are unable to log into Hue. To resolve this issue, verify and update the LDAP Bind User credentials using Cloudera Manager. This issue may happen if the credentials for the LDAP Bind User for Hue configured in Cloudera Manager are invalid. The invalid credentials could either be the “LDAP Bind Password" or "LDAP Bind User Distinguished Name". If the credentials are valid and the issue persists, verify that LDAP Search Base option in is valid. The LDAP search base should be similar to 'dc=hadoop,dc=mycompany,dc=com'. This task assumes that the Use Search Bind Authentication option is enabled in . Search Bind Authentication connects to the LDAP server using the credentials provided in the 'bind_dn' and 'bind_password' configurations. If these configurations are not set, then an anonymous search is performed. If the Use Search Bind Authentication option is not enabled in , then do not set the LDAP Bind User credentials as described in this task. You must use the LDAP Username Pattern field for configuring the LDAP credentials, and verify whether the authentication works as expected. - Log in to Cloudera Manager as an Administrator. - Go to . - Set the LDAP Bind User credentials in the following fields: You can specify the LDAP Bind User Distinguished Nameeither in the generic LDAPv3 Distinguished Name ("CN=binduser,OU=users,DC=Example,dc=com") format or the Active Directory style ([email protected]) format. - LDAP Bind User Distinguished Name - LDAP Bind Password - Click Save Changes. - Restart the Hue service.
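One way to confirm the Bind User credentials and the search base independently of Hue is to attempt a simple bind from a script. A hedged sketch using the third-party ldap3 package follows; the server URL, bind DN, password, and search base are placeholders.

# Sketch: verify LDAP bind credentials and the search base outside of Hue.
# All connection values below are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://ldap.example.com", get_info=ALL)
conn = Connection(
    server,
    user="CN=binduser,OU=users,DC=example,DC=com",
    password="bind_password",
)
if not conn.bind():
    # Mirrors the "Invalid credentials" failure seen on the Hue login page.
    print("Bind failed:", conn.result)
else:
    conn.search("dc=hadoop,dc=mycompany,dc=com", "(objectClass=*)", size_limit=5)
    print("Bind OK; sample entries found:", len(conn.entries))
    conn.unbind()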
https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/troubleshooting-hue/topics/hue-ldap-search-fails-invalid-credentials.html
2022-09-25T08:17:40
CC-MAIN-2022-40
1664030334515.14
[]
docs.cloudera.com
The Vala Programming Language Vala is a programming language mainly targeted at GNOME developers. Its syntax is inspired by C# (and thus, indirectly, by Java). But unlike C# and Java, Vala does not attempt to provide memory safety: Vala is compiled to C, and the C code is compiled with GCC using typical compiler flags. Basic operations like integer arithmetic are directly mapped to C constructs. As a result, the recommendations in Defensive Coding in C apply. In particular, the following Vala language constructs can result in undefined behavior at run time: Integer arithmetic, as described in Recommendations for Integer Arithmetic. Pointer arithmetic, string subscripting and the substring method on strings (the string class in the glib-2.0 package) are not range-checked. It is the responsibility of the calling code to ensure that the arguments being passed are valid. This applies even to cases (like substring) where the implementation would have range information to check the validity of indexes. See Recommendations for Pointers and Array Handling. Similarly, Vala only performs garbage collection (through reference counting) for GObject values. For plain C pointers (such as strings), the programmer has to ensure that storage is deallocated once it is no longer needed (to avoid memory leaks), and that storage is not being deallocated while it is still being used (see Use-after-free errors).
https://docs.fedoraproject.org/sq/defensive-coding/programming-languages/Vala/
2022-09-25T07:44:13
CC-MAIN-2022-40
1664030334515.14
[]
docs.fedoraproject.org
The Agent can host an HTTP(S) status endpoint which can be queried. This endpoint is enabled on the Agents when the HttpStatusinfoPort is set or when HTTP/HTTPS are configured for the Agent. The endpoint is hosted at /frendsstatusinfo over HTTP for the HTTP ports and the HttpStatusinfoPort, and over HTTPS for the HTTPS ports. This endpoint just returns HTTP result code 200 (OK) if the Agent is running and not paused. It is used e.g. by API gateways for monitoring whether the upstream execution Agents are running. It can of course also be used by external load-balancers configured for the systems. If the Agent is paused, the endpoint will return 503 (Service unavailable). If you have API gateways set up, this is used to turn off traffic to Agents behind that gateway in a controlled way. The gateway will stop routing traffic to the paused Agent while it is paused. The cross-platform Agents additionally provide the number of Processes currently executing on the Agent and have optional API-key authorization configurable with the HealthCheckApiKey setting. The API key needs to be provided in an HTTP header named health-check-api-key.
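An external monitoring probe for this endpoint can be very small. The hedged Python sketch below polls /frendsstatusinfo and treats 200 as running and 503 as paused; the base URL and API key value are placeholders, and the health-check-api-key header is only needed when the HealthCheckApiKey setting is configured.

# Sketch: poll the Agent status endpoint and interpret the result.
# The base URL and API key are placeholders.
import requests

def agent_status(base_url, api_key=None):
    headers = {"health-check-api-key": api_key} if api_key else {}
    resp = requests.get(f"{base_url}/frendsstatusinfo", headers=headers, timeout=5)
    if resp.status_code == 200:
        return "running"
    if resp.status_code == 503:
        return "paused"
    return f"unexpected status {resp.status_code}"

if __name__ == "__main__":
    print(agent_status("https://agent.example.com:9998", api_key="example-key"))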
https://docs.frends.com/en/articles/2206728-agent-status-endpoint
2022-09-25T07:47:56
CC-MAIN-2022-40
1664030334515.14
[]
docs.frends.com
Warning This document is for an old release of Galaxy. You can alternatively view this page in the latest release if it exists or view the top of the latest release's documentation.
Source code for galaxy.di

"""Dependency injection framework for Galaxy-type apps."""
from typing import Optional, Type, TypeVar

from lagom import Container as LagomContainer
from lagom.exceptions import UnresolvableType

T = TypeVar("T")


class Container(LagomContainer):
    """Abstraction around lagom to provide a dependency injection context.

    Abstractions used by Galaxy should come through this interface so we can
    swap out the backend as needed. For instance containers look very nice and
    would allow us to also inject by name (e.g. for config variables for
    instance).
    """

    def _register_singleton(self, dep_type: Type[T], instance: Optional[T] = None) -> T:
        if instance is None:
            # create an instance from the context and register it as a singleton
            instance = self[dep_type]
        self[dep_type] = instance
        return self[dep_type]

    def resolve_or_none(self, dep_type: Type[T]) -> Optional[T]:
        """Resolve the dependent type or just return None.

        If resolution is impossible assume caller has a backup plan for
        constructing the desired object. Used to construct controllers that
        may or may not be resolvable (some have upgraded but legacy framework
        still works).
        """
        try:
            return self[dep_type]
        except UnresolvableType:
            return None
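A hedged usage sketch of this container, based only on the methods shown above; ExampleConfig and ExampleService are made-up classes for illustration, and the snippet assumes the galaxy package (and its lagom dependency) is importable.

# Illustration only: register a concrete instance, let lagom build a dependent
# type, and use resolve_or_none for types that may not be resolvable.
from galaxy.di import Container

class ExampleConfig:
    def __init__(self, db_url: str = "sqlite:///example.db"):
        self.db_url = db_url

class ExampleService:
    def __init__(self, config: ExampleConfig):
        self.config = config

container = Container()
container[ExampleConfig] = ExampleConfig()   # register an existing instance
service = container[ExampleService]          # constructed with ExampleConfig injected
print(service.config.db_url)

# resolve_or_none returns the dependency, or None when lagom raises
# UnresolvableType, instead of propagating the exception.
maybe = container.resolve_or_none(ExampleService)
print(maybe is not None)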
https://docs.galaxyproject.org/en/release_22.01/_modules/galaxy/di.html
2022-09-25T08:21:26
CC-MAIN-2022-40
1664030334515.14
[]
docs.galaxyproject.org
SRS Release 8.0 to 8.6 The following covers the important, registrar-affecting changes that occurred in releases 8.0 to 8.6 of the SRS. EPP Session Lifetime The maximum lifetime of an EPP session is now limited to 1 day, after which the server will automatically close the connection. DNS Delegation Loop Prevention It is no longer possible to specify name-servers that cause an in-zone DNS delegation loop. A DNS delegation loop occurs when a domain name used by a name-server in the DNS delegation path contains the domain name being created or updated. Disallow Duplicate DNS Glue It is no longer possible to specify name-servers that have duplicate IPv4 or IPv6 DNS glue records. Unused Handle Deletion Contact handles that have not been linked to a domain for over 90 days are now automatically deleted.
https://docs.internetnz.nz/legacy/changes/release8/
2022-09-25T09:22:05
CC-MAIN-2022-40
1664030334515.14
[]
docs.internetnz.nz
Polymorphic GATT Introduction Silicon Labs' Bluetooth stack implements a static GATT database structure, which means that services and characteristics are created at compile time, not at run time. As a result, software cannot change the database structure dynamically. To overcome this issue, Bluetooth SDK v2.4 introduces a new feature called Polymorphic GATT, which can be used to dynamically show or hide GATT services and characteristics. This new feature allows users to create a ‘superset’ GATT database with pre-defined hidden/visible services and characteristics and alter their visibility on the fly. Note: Changing the visibility of services/characteristics should not be done during a connection because that can cause incorrect behavior if Service Change Indication is not enabled. The safest method is to change the visibilities when no devices are connected, or to change the visibilities after making sure that Service Change Indications were enabled on the connection. How it Works Read through these sections to get a good grasp of the visibility and inheritance rules. To summarize, a service or characteristic is visible while at least one of its declared capabilities is enabled, and it is hidden when all of its declared capabilities are disabled. Note: If certain services/characteristics are meant to be always visible, one good approach is to have one capability that is declared by those services/characteristics which is enabled by default and untouched by the application code. Setting up Capabilities with GATT Configurator The Bluetooth SDK's GATT Configurator supports the polymorphic GATT database and allows declaring capabilities for the whole GATT database as well as subsets for each of the services and characteristics. Always start by declaring the GATT-level capabilities and define their default value by selecting "Custom BLE GATT" and adding the capabilities in "Capability declaration". To add a capability, press the '+' on the right-hand side and then change the capability name and default value. Declaring capabilities After those capabilities are added, they become available on each of the services and characteristics. They can be added through the drop-down list, but this time you'll be shown the list of capabilities declared at the GATT level to pick from. Applying Capabilities on Services/Characteristics Enabling/Disabling Capabilities Capabilities can be enabled/disabled with the API command sl_bt_gatt_server_set_capabilities(), using the capability names you defined in the GATT Configurator. To enable ota and temp_type and disable all other capabilities, use a command call that looks like this: sl_bt_gatt_server_set_capabilities(ota | temp_type, 0); Service Change Indications The stack monitors the local database change status and manages the service change indications for a GATT client that has enabled the indication configuration of the Service Changed characteristic. The Service Changed characteristic is part of the Generic Attribute service, which can be added to the GATT by ticking the Generic Attribute Service check box in the GATT Configurator (it can be found after selecting Custom BLE GATT in the GATT database). For more information, see Service Change Indication.
https://docs.silabs.com/bluetooth/3.3/general/gatt-protocol/polymorphic-gatt
2022-09-25T08:10:33
CC-MAIN-2022-40
1664030334515.14
[]
docs.silabs.com
https://docs.us.sios.com/spslinux/9.6.2/en/topic/technical-notes
2022-09-25T08:24:45
CC-MAIN-2022-40
1664030334515.14
[]
docs.us.sios.com
This page describes how to define a record view and style your record header. Once you’ve configured the source of your record type, each row of your source data will be displayed as a record. To extend your data, you should consider what users will want to see and do from the context of each record. Specifically, you’ll want to think about: Let's break it down with an example. If you are working with a Customer Support record type, first consider who will want to view the information on each record. In this example, support engineers and case managers need to view and monitor each submitted customer case. Once you know who will view the record, what information they will want to see? The support engineer may only need to view who submitted the case, the details about the issue, and the date the case was submitted. The case manager, on the other hand, may want to view the total number of supported cases from the customer, their sentiment score, and their payment plan. When you know who will view the records and what information each type of viewer will want to see, you can define your record views. Record views are design elements that you can use to tailor record data to a user’s interests and needs. You can have multiple record views to create a more comprehensive view of your data that benefits many users. In the Customer Support record type, you could create two different record views: one for support engineers that displays the details of the case, and another for case managers that contains information about the customer’s sentiment score, case history, and payment plan. Once you define your record views, learn how to create record actions so users can take action from the context of a record. If you are working with an existing record type created in 20.2 or earlier, update the record type to use new record type object components, features, and functions. A record view is defined on the record type object and is comprised of an interface that displays information from a single record to end users. You can have multiple record views to surface different insights about each record depending on a user’s interests and needs. Although each record in the record type will contain the same record views, the layout and data that display for each record is determined by the expressions used to define the views. By default, each record type will have at least three views: The Summary view is displayed by default as the first view on a record. You can define the Summary view and up to 20 additional record views on your record type. The News and Related Actions views are configured out-of-the-box on the record type to display any news related to a record and any related actions associated with the record type. These two views are pre-configured to save development time, so they cannot be modified. In order to define a record view, first create an interface object to display the record data. To easily pass data into your interface object, use the record type as a rule input in your record view interface. To learn more about creating a record view and passing the record data, see Create a Record View. By default, each record will have a Summary view. This is typically the first view a user sees when clicking on a record in the record list. Users can navigate to this view from a column in a grid-style record list, or from the main text in a feed-style record list. To define the Summary view: In the views grid, click Summary. 
In the example below, the expression rule!P_PurchaseOrderDashboard(rv!identifer) is used to call the interface and pass in the record's ID. For more information on rv!, see Domain Prefixes. In addition to the Summary view, you can have up to 20 record views. To add another view: Click New View. A record type has two record views that are configured out-of-the-box and displayed by default on each record: Since these record views are auto-populated with related news events and related actions, they cannot be modified. There may be cases when you don't want to display the News view or the Related Actions view on your records. For example, you may want to hide these views if your application doesn't utilize the News feed, or you've used the record action component to display related actions on your interfaces. When you don't want to display the News or Related Actions views, you can hide them to prevent users from navigating or seeing these views on the records. You can determine whether or not the News or Related Actions view is displayed by selecting the Show News view or Show Related Actions view checkbox on the Views page. When you choose to show or hide either of these views, you are determining the view's visibility. This means that if you configure the record type to hide a view, users will not be able to see or interact with the view anywhere in the application. For example, if you configure a site to display the News view, but you've hidden the view on the record type, the News view will not display on the site. Hiding the Related Actions view does not determine the security of the related actions. Users can still perform related actions from related action shortcuts, the record action component, or by navigating to the URL for that related action if they have the proper security permissions to do so. To restrict permissions on related actions, configure the underlying process model's security. Once you've created your record views, think about adding some final touches on the record's presentation. To start, each record will need a title that displays in a record header. The record title appears at the top of each record view, in record tags, and in the hover card for that record. The way you define the record title will vary depending on whether you plan to display your list of records as a grid-style or feed-style list. For grid-style record lists, go to the Views page of the record type. You can configure a specific expression for each record title in the Record Title field. For example, the image below uses the expression rv!record[recordType!purchaseOrder.fields.purchaseOrder] to display each record's purchase order number as the title. For feed-style record lists, the record title comes from the title parameter in a!listViewItem when you define the record list. Learn more about the listViewItem function The record header appears at the top of each record view as the background and contains the title, breadcrumbs, and related actions. Record headers can be styled using colors or a billboard image. By default, the record header style is NONE. Headers can display one background color for all records in a record type, or different colors based on an expression or variables within the record. The record header will display the selected color style with the record title, breadcrumbs, and related action buttons in the card. You can use one of the following options to set the background color: TEXT. You can configure headers to display one image or multiple images. 
One image from a document or a URL can be used for all records in a record type. Similar to color backgrounds, you can also configure image backgrounds to display different images based on variables within the record or using an expression. The record header will display the billboard image of your choice, where you can style the overlay, height, and background color. The overlay will contain the record title, breadcrumbs, and related action buttons. The following table lists the options you can use to style the image background: If you use variable or expression to configure the image background, the live preview will not display the selected image. To configure the Document option for an image: To configure the URL option for an image: To configure the Variable option for an image: From the Color dropdown, select the record variable of your image. This picker returns record variables of type TEXT, INTEGER, and DOCUMENT. To configure the Expression option for an image: On This Page
https://docs.appian.com/suite/help/21.1/record-view.html
2022-09-25T08:57:01
CC-MAIN-2022-40
1664030334515.14
[]
docs.appian.com
Inventory Item Templates A template for an inventory item that will be automatically created when instantiating a new device. All attributes of this object will be copied to the new inventory item, including the associations with a parent item and assigned component, if any. See the inventory item documentation for more detail.
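For scripted device-type setups, one of these templates can also be created through the REST API. The hedged sketch below uses the pynetbox client; the NetBox URL, token, and numeric IDs are placeholders, and the endpoint name assumes pynetbox's usual mapping of /api/dcim/inventory-item-templates/.

# Sketch: create an inventory item template via the NetBox API using pynetbox.
# URL, token, and the device type / manufacturer IDs are placeholders.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="0123456789abcdef")

template = nb.dcim.inventory_item_templates.create(
    device_type=12,          # ID of the device type the template belongs to
    name="PSU 1",
    manufacturer=3,          # optional manufacturer ID
    description="Hot-swappable power supply bay",
)
print(template.id)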
https://docs.netbox.dev/en/stable/models/dcim/inventoryitemtemplate/
2022-09-25T09:10:56
CC-MAIN-2022-40
1664030334515.14
[]
docs.netbox.dev
Install SQL Patches The Install SQL Patches page allows you to run SQL commands directly. The text box at the top allows you to type in commands. The file selection button at the bottom allows you to import a file containing commands. If you have used a prefix for your database, the prefix doesn’t need to be added to the script (or the command) when you use Install SQL Patches. If you use phpMyAdmin instead, you must adjust each of your SQL statements to reflect the prefix.
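If you do need to run a patch outside of Install SQL Patches (for example through phpMyAdmin), a tiny helper like the hedged Python sketch below can apply the prefix for you. The zen_ prefix and the table list are placeholders; review the rewritten SQL before running it.

# Sketch: prepend a database prefix to known table names in a SQL patch,
# for cases where the patch is run outside the Install SQL Patches page.
import re

def apply_prefix(sql, prefix, tables):
    for table in tables:
        # Replace whole-word occurrences of the bare table name only
        # (column names such as configuration_key are left untouched).
        sql = re.sub(rf"\b{re.escape(table)}\b", prefix + table, sql)
    return sql

patch = "INSERT INTO configuration (configuration_key) VALUES ('EXAMPLE_KEY');"
print(apply_prefix(patch, "zen_", ["configuration"]))
# -> INSERT INTO zen_configuration (configuration_key) VALUES ('EXAMPLE_KEY');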
https://docs.zen-cart.com/user/admin_pages/tools/install_sql_patches/
2022-09-25T07:20:55
CC-MAIN-2022-40
1664030334515.14
[array(['/images/install_sql_patches.png', 'Install SQL Patches'], dtype=object) ]
docs.zen-cart.com
Pool Enable Auto Scale Options. Ocp Date Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets or sets the time the request was issued. Client libraries typically set this to the current system clock time; set it explicitly if you are calling the REST API directly. [Newtonsoft.Json.JsonConverter(typeof(Microsoft.Rest.Serialization.DateTimeRfc1123JsonConverter))] [Newtonsoft.Json.JsonProperty(PropertyName="")] public Nullable<DateTime> OcpDate { get; set; } member this.OcpDate : Nullable<DateTime> with get, set Public Property OcpDate As Nullable(Of DateTime) Property Value - System.Nullable<System.DateTime> - Attributes - Newtonsoft.Json.JsonConverterAttribute Newtonsoft.Json.JsonPropertyAttribute
https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.batch.protocol.models.poolenableautoscaleoptions.ocpdate?view=azure-dotnet
2022-09-25T09:05:22
CC-MAIN-2022-40
1664030334515.14
[]
docs.azure.cn
Front End Module (FEM) Utility Radio FEM Utility This optional software component can be enabled to include default functionality related to FEM configuration. When using a Silicon Labs-developed board with a FEM, the configuration options for this software component are set up automatically based on the selected board. When using a custom board, manually configure the configuration options. Note that this is a Radio Utility, instead of a RAIL Utility, because this code is independent from the RAIL library. Configuration Options The following configuration options can be changed: - Enable/disable receive mode. - Enable/disable transmit mode. - Enable/disable bypass mode. - Enable/disable transmit high-power mode. - Enable/disable FEM optimized radio configuration. - Enable/disable runtime configuration of FEM optimized radio configuration. The following hardware options can be changed: - Configure the bypass mode GPIO pin. - Configure the receive mode GPIO pin. - Configure the sleep mode GPIO pin. - Configure the transmit mode GPIO pin. - Configure the transmit high-power mode GPIO pin.
https://docs.silabs.com/rail/2.12/fem-util
2022-09-25T07:23:00
CC-MAIN-2022-40
1664030334515.14
[]
docs.silabs.com
Confluent REST Proxy Changelog Version 3.3.0 Version 3.2.2 Version 3.2.1
https://docs.confluent.io/ja-jp/platform/7.1.1/kafka-rest/changelog.html
2022-09-25T07:47:48
CC-MAIN-2022-40
1664030334515.14
[]
docs.confluent.io
OpenSearch FAQ - Updated on 14 Sep 2022. What are some of the most common issues encountered when performing an OpenSearch upgrade? Graylog index sets without replicas can cause "red" statuses in an Elasticsearch cluster during rolling upgrades. For example, when you take an Elasticsearch node offline to upgrade to OpenSearch, primary shards that the node was hosting are unavailable. When the primary shards are available again and reallocated to the node, the cluster returns to a "green" state. After OpenSearch finishes starting up on the node, it allocates the shards and reports them to the cluster. What is OpenSearch's security feature? Like Elasticsearch, OpenSearch also includes similar security features implemented via a plugin. These include, but are not limited to: roles, role-mappings, and TLS-encrypted cluster communication. The instructions for enabling and configuring these features are similar to configuring Elasticsearch. OpenSearch has defined them here. Again, if you do not already have security configured within Elasticsearch, disable it in OpenSearch as per their instructions and revisit your interest in enabling it after the upgrade. I’m using AWS Elasticsearch service. Can I upgrade to OpenSearch? Yes. AWS has specific instructions to accomplish this task. Ensure you have sufficient disk space on your Graylog server(s) to buffer traffic with your journal(s), as the new OpenSearch cluster may not be ready to resume indexing by Graylog until the blue/green deployment is complete. Also confirm that the auto index-create feature is disabled by using the cluster API for the AWS Elasticsearch domain: curl -X PUT "https://<blah>.es.amazonaws.com/_cluster/settings" -H 'Content-Type: application/json' -d' { "persistent": { "action.auto_create_index": "false" } } '
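The same setting can be applied from Python instead of curl; a hedged sketch follows, with the domain endpoint and credentials as placeholders (depending on how the domain is secured, SigV4 request signing may be required instead of basic auth).

# Sketch: disable automatic index creation on the AWS Elasticsearch/OpenSearch
# domain before the blue/green upgrade. Endpoint and credentials are placeholders.
import requests

resp = requests.put(
    "https://search-example.es.amazonaws.com/_cluster/settings",
    json={"persistent": {"action.auto_create_index": "false"}},
    auth=("master_user", "master_password"),
    timeout=10,
)
resp.raise_for_status()
print(resp.json())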
https://docs.graylog.org/docs/opensearch-faq
2022-09-25T08:23:38
CC-MAIN-2022-40
1664030334515.14
[]
docs.graylog.org
After several months of active development, we want to present our first release of PredictKube - a solution for proactive management of a Kubernetes cluster (scaling, monitoring, security). We are a geek team that has been developing and supporting Kubernetes clusters in a variety of environments for 6 years. Over the years of practice, we have often come across the fact that the dynamic world challenges us faster than modern technologies can respond. Since we have been actively working with MLOps tools for the past few years and have in-house AI/ML expertise, we decided to create a tool that would work with multiple data sources and use this data to predict events. This way we can start preparing for incidents before they actually happen. The first direction of PredictKube was auto-scaling, since this was the area most relevant for us. Most of our clients work with Blockchain, and blockchain nodes of such massive networks as Ethereum or Binance Smart Chain (each node requires a state greater than 1 TB) cannot be scaled instantly. Even though we have developed products such as pv-provisioner, which allows you to deploy PersistentVolumes from prepared Cloud Snapshots, it can still take from 2 to 4 hours to launch and synchronize one node. Therefore, by the time traffic has grown and the current number of replicas is no longer enough, we cannot simply scale out using HPA rules. The solution was to use AI and business metrics, which lets you find out about the need for scaling in advance. The PredictKube KEDA Scaler was born. We chose KEDA as the foundation for integration, since we see it as the most promising product in the Kubernetes autoscaling niche, and implemented our own scaler, which can work with Prometheus as a data source. All you need to do is define the standard parameters for the Prometheus scaler and a couple more settings - the planning horizon and the amount of historical data. This is enough to make your autoscaling predictive. You can read more about how to configure this in our QuickStart.
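As a rough sketch of what such a configuration can look like (this is not taken from the QuickStart; the trigger metadata names such as predictHorizon and historyTimeWindow are assumptions and should be checked against the PredictKube/KEDA documentation):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: node-scaler                      # hypothetical name
spec:
  scaleTargetRef:
    name: blockchain-node                # hypothetical Deployment
  triggers:
    - type: predictkube
      metadata:
        prometheusAddress: http://prometheus.monitoring:9090
        query: "sum(rate(http_requests_total[2m]))"   # your business metric
        threshold: "2000"
        predictHorizon: "2h"             # planning horizon
        historyTimeWindow: "7d"          # amount of historical data
      authenticationRef:
        name: keda-trigger-auth-predictkube   # holds the PredictKube API key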
https://docs.predictkube.com/blog/
2022-09-25T07:39:39
CC-MAIN-2022-40
1664030334515.14
[]
docs.predictkube.com
1 Overview
1.Trading Pairs on StaFi Chain
rDEX will support the following trading pairs on StaFi chain: rFIS/FIS, rETH/FIS, rDOT/FIS, rATOM/FIS, rBNB/FIS, rSOL/FIS, rMATIC/FIS. Among the above 7 rTokens, only rETH is issued on the Ethereum chain directly; the other 6 rTokens are issued on StaFi chain. Because rDEX is deployed on StaFi chain, rETH needs to be swapped from the Ethereum chain to StaFi chain before being traded on rDEX, which can be done through rBridge. Please check the rBridge Guide.
2.Key Features
Continuous Liquidity - rDEX is an automated market maker DEX that provides continuous liquidity for rTokens by utilizing Thorchain’s CLP market maker model.
Lower Slippage - rDEX ensures low slippage for small and medium-sized transactions by using a fee model based on slippage.
Asymmetrical Deposit - Unlike the majority of cryptocurrency liquidity pools, rDEX users can provide liquidity by depositing one token or two tokens asymmetrically.
3.Security Audit
The rDEX testnet has been tested in three different ways in the past 6 weeks to make sure it is extremely safe before its release on the mainnet: the rDEX testnet has been audited by PeckShield (check the audit report); the rDEX testnet has been tested by our community through the Bug Bounty program (check the Bug Report Recap); and the rDEX testnet Bug Bounty Program has been live on Immunefi since 11 February 2022 (check this program on Immunefi).
4.Insurance Fund
At StaFi, we always say that the ‘safety of our users’ funds is our first priority’. Therefore, StaFi wants to initiate an Insurance Fund for rDEX which will act as an effective safeguard as well as provide additional protection for rDEX users against any potential and unforeseen hacks which were not found or avoided in the security audit endeavors listed above. StaFi Core wants to propose that the StaFi Foundation allocate 5 million FIS into the Insurance Fund to protect rDEX users in the event of hacks on rDEX contracts. This proposal will be announced with details for the community in the following weeks.
5.Liquidity Mining Program
In order to incentivize the adoption of rDEX, StaFi Core has proposed a grand liquidity mining program for its users. If you are interested in this program, we welcome you to read the rDEX Liquidity Mining Program Proposal to get yourself prepared.
Resources
Liquidity Bootstrap Plan
https://docs.rdex.finance/welcome-to-rdex/rDEX-The%20AMM%20DEX%20for%20rTokens/resources/rdex-v1-overview
2022-09-25T07:29:04
CC-MAIN-2022-40
1664030334515.14
[]
docs.rdex.finance
# Changelog for Teamscale 8.0 8.0.x, drop-in. - When updating from 7.8.x or earlier, a full re-analysis via backup is required. # Version 8.0.19 September 19th, 2022 # Fixes RaexFindingSynchronizerfailed due to missing DLL file NumberFormatExceptionwhen parsing JUnit reports which contained numbers using commas as decimal separators - Legacy Teamscale links did not redirect to their corresponding pages # Improvements - Tables on Tests by Spec Item and Spec Items by Test views are now sortable - Added support to disable Switch statement without default casecheck in case for switches over enums # Version 8.0.18 September 13th, 2022 # Fixes - False positives for "Each variable should be declared in a separate statement" check in C++ when using genericas the name of a function or class - False positives for "Multiple statements in same line" check in C++ when calling constructors with curly brackets - Introduction date in findings table was cut off on narrow screens # Version 8.0.17 September 6th, 2022 # Fixes - False positives for "Avoid usage of implicit int" check for C/C++ - Link to specification item was included in the Merge Request view even if the item did not exist - Gerrit connector voted +1 in case of yellow findings and the option "Ignore Yellow Findings For Votes" enabled # Version 8.0.16 August 30th, 2022 # Fixes - C/C++: StackOverflowErrorin Dataflow analysis for switch without braces in lambda function - Objective-C blocks were incorrectly parsed - False positives for "Comment Completeness" analysis for C/C++ due to Doxygen comment identification - Large values formatted with SI prefixes were displayed with three decimal places in metric trend chart slides - SCM Manager credentials: Passwords with non-ASCII characters did not work correctly - S3 and Artifactory connectors ignored archive deletion events - Failures in CodeChangeIndexSynchronizerdue to changes in shallow parser implementations PolarionSynchronizerperformed too many logins, leading to session problems # Version 8.0.15 August 23rd, 2022 # Fixes NullPointerExceptionin BitBucketServerMergeRequestAnnotationTriggerwhen merge request was deleted - Teamscale failed to update Bitbucket Server pull requests in case of out-of-date information and when someone replied to a comment - In reports, introduction diffs for some findings were not rendered - Clone compare slide did not correctly show clones for same file - False positives for "Commented-out code" check in Java - False positives for "Do not put multiple statements in a Lambda expression" check in Java - Gosu parser error in case of nested lambdas - Rendering of findings descriptions in the Check Explorer included too many line breaks - IntelliJ plugin: NullPointerExceptionwhen fetching findings # Version 8.0.14 August 16th, 2022 # Fixes - S3 connector did not collect all items in large buckets - TGA coverage sources chooser was inconsistent with latest upload in some rare cases - Number of loadable custom checks per location was wrong on System Information page - URLs did not contain branch name for default branch which made them inadequate for sharing - Tests that did not exist in the selected partitions were still shown in the Test Metrics view with count zero - Counting the number of committers did not take commit type and aliases into account - Selection of integrated GitHub app was undeterministic when two apps had the same id but different URLs # Improvements - Backup minimizer now supports branch include and exclude patterns # Version 8.0.13 August 9th, 2022 # 
Fixes JiraIssueUpdatePostAnalysisTriggerwas unnecessarily executed even for pre-commit changes - Empty Jira update notifications were sent by Teamscale - Jira connector blocked analysis if many requests were sent in a short time frame - Analysis of Bitbucket project stopped due to many merge request annotation triggers and lots of activities on Bitbucket server - Voting failed in some rare branch scenarios - Slashes in the default branch name caused errors when using Git connectors - The .NET version check required an installed SDK although .NET Runtime should have been sufficient - False positives in JavaScript naming convention check for React components - Simulink: Root-level model findings popup was obscured by the browser's top bar and findings were not clickable - Preprocessor-generated tokens of C/C++ were handled wrong in several analyses - Some SVN forks were not correctly detected - False positives for check "Non-void function should return a value" in C++ in case of unresolved macros - False positives for "Comments should not contain nested comments" check in Python when comments contained URLs with # # Improvements - System view now displays committers of the last 90 days - Support for setting a credentials process for S3 connector through a configuration property # Version 8.0.12 August 2nd, 2022 # Fixes - Swift long method findings had wrong locations - "Save anyway" button did not save project after project validation error - Kotlin try/ catch/ finallyconstructs were not parsed correctly abap-findingsservice missed findings from (transitive) includes - Activity > Issues view did not correctly handle clicking the "Show trend and treemap" button on invalid issue queries # Improvements - Provide debug service to list/download/remove temporary files # Version 8.0.11 July 26th, 2022 # Fixes - C# analysis profiles could not be edited BackupMinimizerdid not delete temporary storage directory after its execution - Changing the project used by a widget redirected the user without saving - Widget path chooser did not work correctly when value was removed - When code review findings were enabled, opening a non-existing file led to a red error page NumberFormatExceptionin division by zero analysis - Uploaded external findings got lost in cases where analyses had not reached the target commit - Font color UI glitch in left sidebar # Improvements - Updated dependencies in docker image to latest version # Version 8.0.10 Security Improvements This version contains security improvements. If possible, please update to at least this version. 
July 19th, 2022 # Fixes - Annotating merge requests with line comments and findings badges could not be configured independently NullPointerExceptionin GerritAnalysisResultUploadTrigger - Global keyboard shortcuts did no longer work hasParent()operator in issue queries always returned an empty result - Slow Teamscale backup import due to incorrectly configured Xodus database-memory setting - 8.0.9 July 12th, 2022 # Fixes - File regexes could not be deleted in Metric File Distribution widget - Saving a GitHub connector always caused re-analysis - # Improvements - Backup Minimizer only supported in-memory storage systems # Version 8.0.8 July 5th, 2022 # Fixes - Check Explorer: Deselecting all languages also removed all selected tools - Compare view scroll synchronization was inconsistent when scrolling horizontally - Sorting the rows in the Test Gaps perspective tables by 'Test Gap', 'Execution', or 'Churn' did not work correctly - - Improved wording for TGA badges in merge requests # Version 8.0.7 June 28th, 2022 # Fixes - Opening non-code metrics resulted in an "index out of bounds" error message - IEC61131-3 structured text .tufiles were not parsed correctly - Java module definitions were not parsed correctly - Eclipse/Intellij/Netbeans plugin: Using "Open in Editor" in the Findings view didn't work after findings have been retrieved for an entire folder rather than a single file - Missing error message when using SonarLint for C# when .NET SDK 6 runtime was not installed - AbapLint findings were missing for function group and ABAP include extension class files - Polarion fields for linked work items were mixed up - S3 connector failed due to malformed authorization header - High memory consumption when multiple external reports were uploaded or changed at the same time # Improvements - Reduced memory consumption for C/C++, as well as Objective-C projects # Version 8.0.6 8.0.5 June 14th, 2022 # Fixes - Azure DevOps TFVC connector did not respect the http.nonProxyHostsflag - Autocomplete feature of password fields caused odd configurations in some input fields - Some findings were wrongly flagged as false positive or tolerated AssertionErrorin SimulinkBuilderif two Simulink blocks had the same name - # Improvements - Optimized Delta perspective when used with an architecture path for components containing a lot of files # Version 8.0.4 June 7th, 2022 # Fixes - Description of Pylint report generation was not compatible with newer Pylint versions - Errors about overly long finding message from SAP Code Inspector imports - Code was not visible when a user did not have view permissions for another user who last reviewed a file IllegalArgumentExceptioncould occur when building the custom check sample - KB - Added anti-aliasing to Code City widget - Speed up for scheduling of external analysis triggers - Better initial loading time for Test Filter view - Better documentation for the creation of GitLab webhooks # Version 8.0.3 May 31st, 2022 # Fixes - Simulink: Stateflow transitions crossing states were not displayed correctly - Delta perspective did not handle cases consistently where paths had a common prefix - Swift keyword letwas not parsed correctly AssertionErrorin TypeIndexSynchronizerdue to missing support for ABAP enhancement objects from abapGit repositories - Branch coverage detection failed for branches with only a single line of code # Version 8.0.2 May 24th, 2022 # Fixes - Backups with enabled SonarLint C# checks could not be imported - Requirements Tracing and Issues pages 
contained a single dangling semicolon - The ternary operator ( ?:) was not parsed correctly in C# when using expression-bodied methods - ABAP system ID was not logged for asynchronous ABAP imports - Check descriptions in the Analysis profile editor could appear hidden in the background - Using awaitbefore foreachstatement in C# was not parsed correctly - Static local functions in C# were not parsed correctly - - Added new default directives to the C/C++ analysis profiles (does not affect existing profiles) - Improved security against path traversal and DoS attacks # Version 8.0.1 May 18th, 2022 # Fixes - Backup import failed with "Unknown configuration item" when importing Java projects - Line numbers of SAP code inspector findings were wrong in class private or protected sections # Version 8.0.0 Bug Fixes - 8.0.0 contains all fixes from previous versions released on and before April 26th, 2022 - For brevity, only new features are included in the changelog April 26th, 2022 # Web UI - Issue perspective: stored issue queries on the right sidebar can now be filtered - Issue perspective now contains a link to the Test Gap perspective to quickly review test gaps for issues of the currently selected issue query - Issue perspective now provides a dialog to easily navigate to a specific issue - Commit Details view of an aggregated commit now includes the commit messages and revisions of the aggregated commits - Commit Details view now displays the upload timestamp of coverage upload commits - Architecture editor can now be set to only show incoming or outgoing dependencies - Test code in Metrics and Test Gap tables is now marked green - Delta perspective for test gaps now provides a partition selector - Method History view now has an additional tab showing the issues referenced in the commit history - Time travel now supports the input of revisions obtained via GitHub - Trends in dashboard widgets can now easily be removed via a button # New Checks - "Avoid Jumbled Loop Variable Modifications" check (C++/C, C#, Java, JS/TS, PHP, Swift, Groovy, Objective C, Go) - " HttpClientinstantiated in usingstatement" check (C#) - Updated Clangtidy support from version 10 to 13.01 # Issues and Requirements Tracing - Jira connections with identical settings are now transparently cached across project boundaries - Issue ids are now matched to the corresponding Teamscale issue regardless of casing - List of issues can now be exported with corresponding test gap data - RTC requirements tracing: links between work items can now be imported # Testing - Support for testwise coverage execution units - Pareto ranking is now performed asynchronously on the server - Pareto ranking: the number of selected tests is now shown - Public API for getting a list of methods executed by a test case # Code Collaboration Platforms - Voting on merge requests and annotating them with badges can now be configured independently from each other - Configuration of multiple GitHub organizations is now supported # Reporting - Task Detail slide now supports macro expansion for code snippets - Test Gap Treemap slide now allows setting annotations for components # IDE Integrations - Visual Studio: improved pre-commit selection dialog # Administration - Instance comparison can now check for differences in TGA data - SAML login errors now contain additional information - Option "Use Teamscale's default crypto key" is now hidden if no other key is configured
https://docs.teamscale.com/changelog/v8.0.x/
2022-09-25T08:59:53
CC-MAIN-2022-40
1664030334515.14
[]
docs.teamscale.com
Collated Funding Rates is a Crypto Specific Indicator that pulls Exchange Funding Rate Data from several exchanges for both Bitcoin and Ethereum. By combining the Funding Data for Bitcoin and Ethereum across several exchanges, a Trader can see the Collated Funding Rates for the majority of the Crypto market. Funding rates are periodic payments either to traders that are long or to traders that are short, based on the difference between perpetual contract prices and spot prices. Therefore, depending on open positions, traders will either pay or receive funding. Crypto funding rates prevent lasting divergence in the price of both markets. These Funding Rates also incentivize Liquidity Providers to take certain positions, whether Long or Short, based on the Funding Rate. A Liquidity Provider, also known as a Market Maker, is someone who provides their crypto assets to a platform to help with the decentralization of trading. In return, they are rewarded with fees generated by trades on that platform, which can be thought of as a form of passive income. When the Funding Rate is positive, the price of the perpetual contract is usually higher than the market price. Thus, traders who are long pay for short positions. Conversely, a negative Funding Rate means that short positions pay for longs. As the funding rate decreases from positive to negative, Traders and Liquidity Providers are incentivized to take long positions. This can be seen on the indicator as it changes from green to red. Funding Rates are used by Exchanges to manipulate the price. Because of this fact, Collated Funding Rates can be a Leading Edge Indicator, as Funding Rates tend to change when a specific up or down move in Price is coming to an end.
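For intuition only, a toy sketch of how readings from several exchanges and both assets might be collated into one value (the exchange names, numbers, and equal weighting are made up; the real indicator's data sources and weighting may differ):

# Toy example: average the latest funding rates for BTC and ETH
# across several exchanges into a single "collated" reading.
funding_rates = {
    ("BTC", "ExchangeA"): 0.010,   # percent per funding interval (made up)
    ("BTC", "ExchangeB"): 0.008,
    ("ETH", "ExchangeA"): -0.004,
    ("ETH", "ExchangeB"): -0.002,
}

collated = sum(funding_rates.values()) / len(funding_rates)

# Positive: longs pay shorts (perp trading above spot); negative: shorts pay longs.
print(f"Collated funding rate: {collated:+.4f}%")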
https://docs.trendmaster.com/collated-funding-rates/overview
2022-09-25T08:40:25
CC-MAIN-2022-40
1664030334515.14
[]
docs.trendmaster.com
SPF, DKIM and DMARC are three independent features in Trend Micro Email Security. You can enable or disable those features based on your requirements. The following are typical scenarios for your reference: DMARC enabled only Trend Micro Email Security performs its own SPF check and DKIM signature check before alignment check. SPF check, DKIM verification and DMARC authentication enabled at the same time Trend Micro Email Security checks the sender domain for each inbound email message. If a message does not pass the SPF check, the message will be deleted, quarantined or delivered depending on the action configured. If the message passes the SPF check, Trend Micro Email Security verifies DKIM signatures in the message. If the message does not pass DKIM verification, the message will be deleted, quarantined or delivered depending on the action configured. If the message continues to the next step in the delivery process, Trend Micro Email Security implements DMARC authentication on the message.
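For reference, the DMARC policy evaluated in that last step is published by the sender's domain as a DNS TXT record; an illustrative record for a hypothetical domain looks like this:

_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; adkim=r; aspf=r"

The p tag is the domain owner's requested handling (none, quarantine, or reject) for messages that fail DMARC; the action actually applied to the message is the one configured in your policy.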
https://docs.trendmicro.com/en-us/enterprise/trend-micro-email-security-online-help/inbound-and-outbound/domain-based-authent/how-dmarc-works-with.aspx
2022-09-25T09:10:28
CC-MAIN-2022-40
1664030334515.14
[]
docs.trendmicro.com
Kubernetes Costs via a Container Insights Cross Account Integration# Vantage follows the official AWS documentation on securely sending CloudWatch logs to another AWS account to ingest Kubernetes costs through Container Insights. The steps below are for users who choose to use Container Insights, instead of the recommended OpenCost integration. Deploy Cloudwatch Agent with Cross Account ARN# The Cloudwatch agent must be setup to collect metrics from your clusters. You will have to make one change on step 3 of the AWS Container Insights setup instructions and modify the cwagent-configmap.yml to include the role_arn. Vantage will have provisioned this role for you already, see below. # create configmap for cwagent config apiVersion: v1 data: # Configuration is in Json format. No matter what configure change you make, # please keep the Json blob valid. cwagentconfig.json: | { "agent": { "credentials": { "role_arn": "arn:aws:iam::<VANTAGE_ACCOUNT>:role/containerinsights-<CUSTOMER_NAME>" } }, "logs": { "metrics_collected": { "kubernetes": { "metrics_collection_interval": 60 } }, "force_flush_interval": 5 } } kind: ConfigMap metadata: name: cwagentconfig namespace: amazon-cloudwatch Adding Permissions for Node Roles# After this is done you will have to modify the IAM permissions of the Node Role that is used for your EKS Cluster roles. They will require two changes: - An inline policy that allows the role to assumeRole the IAM Role on the Vantage side. - Attachment of an AWS managed policy called CloudWatchAgentServerPolicywhich allows the Node to send cloudwatch metrics. Each Node will have to assume the role above to write logs to your Vantage account. That means that each node IAM role in your AWS account will need to attach the inline policy below. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<VANTAGE_ACCOUNT>:role/containerinsights-<CUSTOMER_NAME>" } ] } Note: If using self-managed nodes on EKS you will have to find out the node roles you have assigned within the cluster yourself. Now, attach the CloudWatchAgentServerPolicy policy to each node role. Provision a Cross Account Role# Vantage will provision an IAM role internally with the following trust policy and attach the CloudWatchAgentServerPolicy managed policy. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<CUSTOMER_AWS_ACCOUNT_ID>:root" }, "Action": "sts:AssumeRole" } ] }
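If you prefer the AWS CLI to the console, the two node-role changes described above can be applied roughly as follows (the role name, policy name, and file name are placeholders):

# Inline policy allowing the node role to assume the Vantage-provisioned role
# (the policy JSON from above saved locally as assume-vantage-role.json)
aws iam put-role-policy \
  --role-name <NODE_ROLE_NAME> \
  --policy-name AssumeVantageContainerInsightsRole \
  --policy-document file://assume-vantage-role.json

# Attach the AWS managed CloudWatchAgentServerPolicy
aws iam attach-role-policy \
  --role-name <NODE_ROLE_NAME> \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy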
https://docs.vantage.sh/kubernetes_container_insights/
2022-09-25T08:49:01
CC-MAIN-2022-40
1664030334515.14
[]
docs.vantage.sh
Geographic Centroid Matches (G category)
Matches in the G category indicate that a match was made at the state, county, or city level.
Code - Description
G1 - State match, point located at the state centroid.
G2 - County match, point located at the county centroid.
G3 - City match, point located at the city centroid.
A number of prominent U.S. cities can be matched even if no other information is provided. For example, if you provide Chicago (city) but no state, the record is matched to Chicago, IL.
https://docs.precisely.com/docs/sftw/mapmarker/main/en-us/webhelp/mmo/InputOptions/MMResultCodes_G_category.html
2021-04-10T11:26:38
CC-MAIN-2021-17
1618038056869.3
[]
docs.precisely.com
Log events
Construct custom log events to index and search metadata. Log events are sent to your Splunk deployment for indexing. As with other alert actions, log events can be used alone or in addition to other alert actions for a given alert.
Authorization requirement
Using the log event alert action requires the edit_tcp capability for users without the admin role.
Tokens for log events
When you set up a log event alert action, populate event fields with plain text or tokens representing search, job, or server metadata. You can also use tokens to access the first search results set. Tokens available for email notifications are also available for log events. For more information on using tokens with alert actions, see Use tokens in email notifications in this manual.
Set up a log event alert action
Here are the steps for setting up a custom log event alert action after building a search.
Prerequisites
To review token usage, see Use tokens in email notifications in this manual.
Steps
- You can configure the log event action when creating a new alert or editing an existing alert's actions. Follow one of the options below.
- From the Add Actions menu, select Log event.
- Add the following event information to configure the alert action. Use plain text or tokens for search, job, or server metadata.
  - Event text
  - Source and sourcetype
  - Host
  - Destination index for the log event. The main index is the default destination. You can specify a different existing index.
- Click Save. The following steps are the same for saving new alerts or editing existing alerts.
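If you manage alerts in configuration files rather than through Splunk Web, the same action can also be expressed in savedsearches.conf. The stanza below is only a sketch; the action.logevent.param.* names and token usage should be verified against your Splunk version:

[errors_alert]
search = index=web status=500
action.logevent = 1
action.logevent.param.event = 500 errors detected: $job.resultCount$ results
action.logevent.param.source = alert_framework
action.logevent.param.sourcetype = alert_event
action.logevent.param.host = $server.splunk_server$
action.logevent.param.index = main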
https://docs.splunk.com/Documentation/Splunk/7.0.5/Alert/LogEvents
2021-04-10T12:28:20
CC-MAIN-2021-17
1618038056869.3
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Impala details page The Impala APM provides a detailed view into the behavior of Impala queries. Key performance indicators Events: The number, if any, of Unravel insights for this query. Duration: Total time taken by the query. Data I/O: Total data read and written by the query. Number of Fragments: Total number of query fragments. Number of Operators: Total number of operators in this query. Left tabs Fragments: Displays a table with information about each fragment associated with this query. Click More to expose the Fragments operators and Less to hide them. The coordinator fragment ( ) is always the nth fragment. This window shows the Fragment and its KPIs. It defaults to the table of the Fragment's Operators with the associated KPIs for the operations. Clicking the operator brings up the operator window. (See Operators for more information.) You can view the Query Plan or the Instance View. Instance View: Lists each instance with its KPIs. Operators: Displays a list of all operators for all fragments. You can search the operator's name. Click the operator to display its details. Scan HDFS details Aggregate details Exchange details Hash Join Gannt Chart: Charts the fragments and the time spent on each operation. Hover over a section to see the operation and its KPIs. Query plan: Shows the query plan in fragment or operator view. Both the fragment and operator view are shown here. Hover over the operator to get detailed information. Click the button to switch views. Right tabs Query: Shows the query plan code. Click Query Copy to copy the query. See Impala APM image above for the Query Tab. Mem Usage: Graphs the Memory Usage by peak usage. Notes the maximum memory used on what host and the estimated memory per host.
https://docs.unraveldata.com/en/apms-impala-462.html
2021-04-10T11:34:37
CC-MAIN-2021-17
1618038056869.3
[array(['image/uuid-4dc3f87b-1244-decf-c2fa-c3a5762bd22e-en.png', 'impala-apms-main.png'], dtype=object) ]
docs.unraveldata.com
If you have this issue, check the following: Do you have approved programs in your affiliate system? Most affiliate aggregators require approved programs before you can use them in the API. Also, not all merchants add their products to the product API feed. Have you enabled advanced search filters? It's a very common issue when users enable a filter like Search only in Amazon products, which will only return products that belong to the Amazon company.
https://ce-docs.keywordrush.com/faq/nothing-found-while-search
2021-04-10T11:48:13
CC-MAIN-2021-17
1618038056869.3
[]
ce-docs.keywordrush.com
Important You are viewing documentation for an older version of Confluent Platform. For the latest, click here. Confluent Cloud Quick Start¶ This quick start shows you how to get up and running with Confluent Cloud. This quick start will show the basics of using Confluent Cloud, including creating topics and producing and consuming to a Apache Kafka® cluster in Confluent Cloud. Confluent Cloud is a resilient, scalable streaming data service based on Apache Kafka®, delivered as a fully managed service. Confluent Cloud has a web interface and local command line interface. You can manage cluster resources, settings, and billing with the web interface. You can use Confluent Cloud CLI to create and manage Kafka topics. For more information about Confluent Cloud, see the Confluent Cloud documentation. - Prerequisites - Access to Confluent Cloud - Confluent Cloud Limits and Supported Features - Maven to compile the client Java code Step 1: Create Kafka Cluster in Confluent Cloud¶ Important This step is for Confluent Cloud users only. Confluent Cloud Enterprise users can skip to Step 2: Install and Configure the Confluent Cloud CLI. Log into Confluent Cloud at. Click Create cluster. Specify a cluster name, choose a cloud provider, and click Continue. Optionally, you can specify read and write throughput, storage, region, and durability. Confirm your cluster subscription details, payment information, and click Save and launch cluster. Step 2: Install and Configure the Confluent Cloud CLI¶ After you have a working Kafka cluster in Confluent Cloud, you can use the Confluent Cloud command line tool to interact with your cluster from your laptop. This quick start assumes your are configuring Confluent Cloud for Java clients. You can also use Confluent Cloud with librdkafka-based clients. For more information about installing the Confluent Cloud CLI, see Install the Confluent Cloud CLI. From the Environment overview page, click your cluster name. Click Data In/Out in the sidebar and click CLI. Follow the on-screen Confluent Cloud installation instructions. Step 3: Configure Confluent Cloud Schema Registry¶ Important - Confluent Cloud Schema Registry is currently available as a preview. For more information, see Confluent Cloud Schema Registry Preview. - Your VPC must be able to communicate with the Confluent Cloud Schema Registry public internet endpoint. For more information, see Using Confluent Cloud Schema Registry in a VPC Peered Environment. Enable Schema Registry for your environment¶ Configure the Confluent Cloud CLI for Schema Registry¶ From the Environment Overview page, click CLUSTERS and select your cluster. Tip You can view Confluent Cloud Schema Registry usage and API access information from the Environment Overview -> SCHEMA REGISTRY page. Select Data In/Out -> Clients and the JAVA tab. Follow the onscreen instructions to create the Schema Registry-specific Java configuration, including API key pairs for Schema Registry and your Kafka cluster. Copy this information and paste in your Confluent Cloud CLI configuration file ( ~/.ccloud/config). Your configuration file should resemble this. 
cat ~/.ccloud/config ssl.endpoint.identification.algorithm=https sasl.mechanism=PLAIN request.timeout.ms=20000 bootstrap.servers=<bootstrap-server-url> retry.backoff.ms=500 sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<kafka-api-key>" password="<kafka-api-secret>"; security.protocol=SASL_SSL // Schema Registry specific settings basic.auth.credentials.source=USER_INFO schema.registry.basic.auth.user.info=<schema-registry-api-key>:<schema-registry-api-secret> schema.registry.url=<schema-registry-url> // Enable Avro serializer with Schema Registry (optional) key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer schemaregistry5000:alsdkjaslkdjqwemnoilbkjerlkqj123123opwrqpru \ Step 4: Create Topics and Produce and Consume to Kafka¶ Create a topic named my_topicwith default options. ccloud topic create my_topic Tip By default the Confluent Cloud CLI creates topics with a replication factor of 3. Optional: Describe the my_topictopic. ccloud topic describe my_topic Your output should resemble: Topic:my_topic PartitionCount:12 ReplicationFactor:3 Configs:message.format.version=1.0-IV0,max.message.bytes=2097164,min.insync.replicas=2 Topic: my_topic Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2 Topic: my_topic Partition: 1 Leader: 0 Replicas: 0,2,3 Isr: 0,2,3 Topic: my_topic Partition: 2 Leader: 1 Replicas: 1,3,0 Isr: 1,3,0 Topic: my_topic Partition: 3 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1 Topic: my_topic Partition: 4 Leader: 3 Replicas: 3,2,0 Isr: 3,2,0 Topic: my_topic Partition: 5 Leader: 0 Replicas: 0,3,1 Isr: 0,3,1 Topic: my_topic Partition: 6 Leader: 1 Replicas: 1,0,2 Isr: 1,0,2 Topic: my_topic Partition: 7 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3 Topic: my_topic Partition: 8 Leader: 3 Replicas: 3,0,1 Isr: 3,0,1 Topic: my_topic Partition: 9 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 Topic: my_topic Partition: 10 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3 Topic: my_topic Partition: 11 Leader: 2 Replicas: 2,3,0 Isr: 2,3,0 Modify the my_topictopic to have a retention period of days ( 259200000milliseconds). ccloud topic alter my_topic --config="retention.ms=259200000" Your output should resemble: Topic configuration for "my_topic" altered. Produce records to the my_topictopic. ccloud produce --topic my_topic You can type messages in as standard input. By default they are newline separated. Press Ctrl + Cto exit. foo bar baz ^C Consume items from the my_topictopic and press Ctrl + Cto exit. ccloud consume -b -t my_topic Your output should show the items that you entered in ccloud produce: baz foo bar ^C Processed a total of 3 messages. The order of the consumed messages does not match the order that they were produced. This is because the producer spread them over the 10 partitions in the my_topictopic and the consumer reads from all 10 partitions in parallel. Step 5: Run Java Examples¶ In this step you clone the Examples repository from GitHub and run Confluent Cloud Java examples with Avro. The examples repository contains demo applications and code examples for Confluent Platform and Kafka. Clone the Confluent Cloud examples repository from GitHub and navigate to the Confluent Cloud Java directory. git clone cd examples/clients/cloud/java Build the client example. mvn clean package Run the producer (with arguments that specify the path to connect to your local Confluent Cloud instance and topic name). 
mvn exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ProducerAvroExample" \ -Dexec.args="$HOME/.ccloud/config my_topic_avro". Run the Kafka consumer application to read the records that were just published to the Kafka cluster, and display the records in the console. Rebuild the example. mvn clean package Run the consumer (with arguments that specify the path to the Confluent Cloud instance and topic name). mvn exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ConsumerAvroExample" \ -Dexec.args="$HOME/.ccloud/config my_topic_avro" Hit Ctrl+Cto stop. View the schema information registered in Confluent Cloud Schema Registry, where Schema Registry API key ( <schema-registry-api-key>), API secret ( <schema-registry-api-secret>), and endpoint ( <schema-registry-url>) are specified. View the list of registered subjects. curl -u <schema-registry-api-key>:<schema-registry-api-secret> \ <schema-registry-url>/subjects/my_topic_avro-value/versions/1 Your output should resemble: {"subject":"my_topic_avro","version":1,"id":100001,"schema":"{\"name\":\"io.confluent.examples.clients.cloud.DataRecordAvro\",\"type\":\"record\",\"fields\":[{\"name\":\"count\",\"type\":\"long\"}]}"} View the list of topics. ccloud topic list Your output should show: my_topic my_topic_avro Delete the topics my_topicand my_topic_avro. Caution Use this command carefully as data loss can occur. ccloud topic delete my_topic ccloud topic delete my_topic_avro Your output should resemble: Topic "my_topic" marked for deletion. Topic "my_topic_avro" marked for deletion. Next Steps¶ - Connect your components and data to Confluent Cloud - Configure Multi-Node Environment - Learn more about Confluent Cloud in the documentation
https://docs.confluent.io/5.1.2/quickstart/cloud-quickstart.html
2021-04-10T12:37:34
CC-MAIN-2021-17
1618038056869.3
[]
docs.confluent.io
NXP i.MX RT¶
The i.MX RT series of crossover processors features the Arm Cortex-M core, real-time functionality and MCU usability at a cost-effective price. For more detailed information please visit the vendor site.
Examples¶
Examples are listed from the NXP i.MX RT development platform repository. You can pin the platform to the latest stable version, a custom stable version, or the latest upstream version using the platform option in “platformio.ini” (Project Configuration File), as described below.
Stable¶
; Latest stable version
[env:latest_stable]
platform = nxpimxrt
board = ...

; Custom stable version
[env:custom_stable]
platform = nxpimxrt
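For completeness, the upstream (development) version mentioned above is normally selected by pointing the platform option at the platform's Git repository; the repository URL below is an assumption and should be checked against the PlatformIO registry:

; Latest upstream/development version (repository URL assumed)
[env:upstream_develop]
platform = https://github.com/platformio/platform-nxpimxrt.git
board = ...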
https://docs.platformio.org/en/latest/platforms/nxpimxrt.html
2021-04-10T11:56:51
CC-MAIN-2021-17
1618038056869.3
[]
docs.platformio.org
The Google OAuth step is handy when using Buildable's Login with Google Recipe template. This template gives developers the ability to integrate Google SSO into their frontend user applications in a few minutes, instead of a few days. You can find your Google Client ID and Client Secret by following the instructions on the Google OAuth 2.0 page. To add Google OAuth as a step in your Recipe, click the "+" button in the Recipe editor canvas and choose Google OAuth from the step selector drawer. Click the dropdown in the "Select your account" section and click the "Add new" option. In the popup that appears, add a name for the account to easily distinguish it in future steps, the Google Client ID and Google Client Secret. Click the Connect button to complete the connection.
https://docs.buildable.dev/steps/secure-variables/google-account
2021-04-10T11:13:55
CC-MAIN-2021-17
1618038056869.3
[]
docs.buildable.dev
This update was released on April 10th, 2017. Hacks Increased Video Resolution Support - Changed the video segment size from 1024x512 to 2048x1024 to allow higher resolution videos (up to about 6144x3072 instead of 3072x1536). - Fixed an issue causing videos to display incorrectly in the non-English versions of the game. Custom Limits Added support for changing the world's Y bounds.
https://docs.donutteam.com/docs/lucasmodlauncher/versions/version_1.15.3
2021-04-10T12:29:01
CC-MAIN-2021-17
1618038056869.3
[]
docs.donutteam.com
Path API
Paths are text strings that contain nodes separated by character separators. Paths are used in many common applications like file system addressing, URLs, etc., so being able to parse them is quite important. The Path API is intended for general purpose use and supports UTF-8 null-terminated strings and multi-character separators.
Directory and Basename
The function le_path_GetDir() is a convenient way to get the path's directory without having to create an iterator. The directory is the portion of the path up to and including the last separator. le_path_GetDir() does not modify the path in any way (i.e., consecutive separators are left as is), except to drop the node after the last separator.
The function le_path_GetBasenamePtr() is an efficient and convenient function for accessing the last node in the path without having to create an iterator. The returned pointer points to the character following the last separator in the path. Because the basename is actually a portion of the path string, not a copy, any changes to the returned basename will also affect the original path string.
Thread Safety
All the functions in this API are thread safe and reentrant unless of course the path iterators or the buffers passed into the functions are shared between threads. If the path iterators or buffers are shared by multiple threads then some other mechanism must be used to ensure these functions are thread safe.
https://docs.legato.io/latest/c_path.html
2021-04-10T12:13:03
CC-MAIN-2021-17
1618038056869.3
[]
docs.legato.io
The High Definition Render Pipeline (HDRP) is a prebuilt Scriptable Render Pipeline, built by Unity. HDRP lets you create cutting-edge, high-fidelity graphics for high-end platforms. Use HDRP for AAA quality games, automotive demos, architectural applications and anything that requires high-fidelity graphics. HDRP uses physically-based lighting and materials, and supports both forward and deferred rendering. HDRP uses compute shader technology and therefore requires compatible GPU hardware. For more information on the latest version of HDRP, see the HDRP package documentation microsite.
https://docs.unity3d.com/es/2019.3/Manual/high-definition-render-pipeline.html
2021-04-10T12:08:53
CC-MAIN-2021-17
1618038056869.3
[]
docs.unity3d.com
azure.azcollection.azure_rm_iothubconsumergroup module – Manage Azure IoT hub consumer group
New in version 0.1.2 of azure.azcollection
Synopsis
Create, delete an Azure IoT hub consumer group.
Examples
- name: Create an IoT hub consumer group
  azure_rm_iothubconsumergroup:
    name: test
    resource_group: myResourceGroup
    hub: Testing
Return Values
Common return values are documented here; the following are the fields unique to this module:
Collection links
Issue Tracker Homepage Repository (Sources)
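Since the module both creates and deletes consumer groups, removal presumably follows the usual Ansible pattern with state: absent (an assumption — confirm against the module's parameter table):

- name: Remove an IoT hub consumer group (state value assumed)
  azure.azcollection.azure_rm_iothubconsumergroup:
    name: test
    resource_group: myResourceGroup
    hub: Testing
    state: absent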
https://docs.ansible.com/ansible/latest/collections/azure/azcollection/azure_rm_iothubconsumergroup_module.html
2022-06-25T14:32:13
CC-MAIN-2022-27
1656103035636.10
[]
docs.ansible.com
Deployment Considerations Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 credentials GPO will be ignored by Windows 2000 clients and the GPO will always be applied on Windows 2000. If ADPrep /DomainPrep has not been run in a given domain, the WMI Filters node will not be present, and the GPO scope tab will not have a WMI filters section.. In order to perform the simulation in cross-domain scenarios, the service must have read access to all GPOs in the forest. In a Windows Server 2003 domain (whether it is upgraded from Windows 2000 or installed as new), the Enterprise Domain Controllers group is automatically given read access to all newly created GPOs. This ensures that the service can read all GPOs in the forest. However, if the domain was upgraded from Windows 2000, any existing GPOs that were created before the upgrade do not have read access for the Enterprise Domain Controllers group.: Ensure that the person running this script is either a Domain Admin or has permissions to modify security on all GPOs in the domain. Open a command prompt and navigate to the %programfiles%\gpmc\scripts folder by typing: CD /D %programfiles%\gpmc\scripts Type the following: Cscript GrantPermissionOnAllGPOs.wsf "Enterprise Domain Controllers" /Permission:Read /Domain:value GPOs. This setting is available on Windows Server 2003 located at: Computer Configuration\Administrative Templates\System\Group Policy\Allow Cross-Forest User Policy and Roaming Profiles. It is possible to deploy Group Policy settings to users and computers in the same forest, but have those settings reference servers in other trusted forests. For example, the file and Active Directory Sites GPOs that are linked to site containers affect all computers in a forest of domains. Site information is replicated and available between all the domain controllers within a domain and all the domains in a forest. Therefore, any GPO that is linked to a site container is applied to all computers in that site, regardless of the domain (in the forest) to which they belong. This has the following implications: It allows multiple domains (within a forest) to get the same GPO (and included policy settings), although the GPO only lives on a single domain and must be read from that domain when the affected clients read their site policy. If child domains are set up across wide area network (WAN) boundaries, the site setup should reflect this. If it does not, the computers in a child domain could be accessing a site GPO across a WAN link. To manage site GPOs, you need to be either an Enterprise Admin or Domain Admin of the forest root domain. You may want to consider using site-wide GPOs for specifying policy for proxy settings and network-related settings. In general, it is recommended that you link GPOs to domains and organizational units rather than sites. Using Group Policy and Internet Explorer Enhanced Security Configuration Windows Server 2003 includes a new default security configuration for Internet Explorer, called Internet Explorer Enhanced Security Configuration, also known as Internet Explorer hardening. You can manage Internet Explorer Enhanced Security Configuration by: Enabling or disabling Internet Explorer Enhanced Security Configuration. This is commonly used in situations where you want to ensure that Internet Explorer Enhanced Security Configuration is always enabled. 
For example, Internet Explorer Enhanced Security Configuration might need to be reapplied on a specific computer if the local administrator on that computer disables it using the Optional Component Manager in the Windows Components Wizard (available from Add or Remove Programs.) Restricting who can manage trusted sites and other Internet Explorer security settings on a server. This is commonly used when you want to ensure that all servers have the same Internet Explorer Enhanced Security Configuration settings. For example, you might want to configure Internet Explorer Enhanced Security Configuration so that machined-based security settings are applied to each server rather than user-based security settings. Adding trusted Web sites and UNC paths to one of the trusted security zones. This is commonly used when you want to allow users access to specific Web sites and corporate resources, but still reduce the risk of users downloading or running malicious content. Enhanced Security Configuration impacts the Security Zones and Privacy settings within the Internet Explorer Maintenance settings of a GPO. The Security Zones and Privacy settings can either be enabled with Enhanced Security Configuration or not. When you edit settings for Security Zones and Privacy settings in a GPO from a computer where Enhanced Security Configuration is enabled, that GPO will contain Enhanced Security Configuration-enabled settings. When you look at the HTML report for that GPO, the Security Zones and Privacy heading will be appended with the text (Enhanced Security Configuration enabled). When you edit settings for Security Zones and Privacy settings in a GPO from a computer where Enhanced Security Configuration is not enabled , that GPO will contain Enhanced Security Configuration-disabled settings. ESC is not enabled on any computer running Windows 2000 or Windows XP, nor on computers running Windows Server 2003 where ESC has been explicitly disabled. Enhanced Security Configuration settings deployed through Group Policy will only be processed on and applied by computers where Enhanced Security Configuration is enabled. Enhanced Security Configuration settings will be ignored on computers where Enhanced Security Configuration is not enabled (all computers running Windows 2000 and Windows XP, and Windows Server 2003 computers where Enhanced Security Configuration has been explicitly disabled). The converse is also true: A GPO that contains non- Enhanced Security Configuration settings will only be processed on and applied by computers where Enhanced Security Configuration is not enabled. For more information, see Managing Internet Explorer Enhanced Security Configuration, available from the Microsoft Group Policy Web site at.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc757395(v=ws.10)?redirectedfrom=MSDN
2022-06-25T13:23:24
CC-MAIN-2022-27
1656103035636.10
[]
docs.microsoft.com
The BRK6110 is a simple two-layer “breakout board” which can be used to evaluate or transition to the XEM6310. It provides standard 2-mm thru-hole connections to the 0.8-mm high-density connectors on the XEM6310 and a DC power connector (2.1mm/5.5mm, center positive) for providing +VDC to the XEM6310. Please visit the Pins reference for the XEM6310 for pin mapping details. Full Altium schematics and PCB layout are available through Pins Downloads. Reference Designators A brief note about reference designators… Reference designators are the marks on the silkscreen used to associate the parts on a PCB design with the corresponding parts on the bill of materials (BOM). Assembly personnel use this association to put the correct parts on the PCB where they belong. It’s important to note that this is not a universal association. J1 on one PCB assembly is not necessarily associated with J1 on a different assembly. After all, one assembly might have J1-J13 and another only J1-J4. So it is the case with our products such as the XEM6310 and BRK6110. Notably, J1 on the XEM6310 does not mate with J1 on the BRK6110. This should be obvious by the way the board mounts on the BRK6110. Please see the BRK6110 schematics and the XEM6310 Pins reference to see how things map. In the Pins reference, you will need to enable the BRK6110 column under Display Options. Mechanical Drawing
https://docs.opalkelly.com/xem6310/brk6110-breakout-board/
2022-06-25T14:59:24
CC-MAIN-2022-27
1656103035636.10
[array(['https://docs.opalkelly.com/wp-content/uploads/2021/08/1BRK6110-MechanicalDrawing.png', None], dtype=object) ]
docs.opalkelly.com
Manage Customers
For information on inviting customers and managing invites, see Invite Customers. Once they have accepted, you can view and manage customers from the Customers page.
Note:
- If you cannot see any customers, or a customer that you know exists is missing, this likely means you are not an Admin user. You need an Admin user to grant you permission to view that customer.
- If you can see a customer, but the columns next to their name are empty, this means you have been granted permission to view that customer in a Support user capacity. See Manage user access to customer accounts below.
The Customers page displays customers who have accepted one of your invites and set up an account. You can view the following information:
- MRR: The monthly recurring revenue for services that are currently billing. This number might not include newly provisioned ports, as those can take up to 15 days to enter billing.
- Ports: Number of ports the customer has provisioned.
- VCs: Number of virtual circuits the customer has provisioned. This includes virtual circuits for cloud connections.
- Cloud: Number of cloud connections (Dedicated ports and Hosted connections) the customer has.
- Last Activity: The date and time a customer account user was last active. This includes logging in.
- Last Order: The date and time a service was last ordered. This includes orders done directly by the customer and orders done by reseller users on behalf of the customer.
Hide inactive customers
Select this option to hide customer accounts that match ALL of the following criteria:
- More than 30 days since the last order
- More than 30 days since the last activity
- $0.00 MRR
Manage user access to customer accounts
By default, only Admin users can view the full list of customers in the Reseller Admin portal. To allow non-Admin users access to view a customer and access that customer’s account, complete the following steps:
Click Manage Users next to the customer:
Select the user you want to add (only non-Admin users are listed) and click Link user.
By default, the user is added with Support permissions. These permissions determine what they can view and what actions they are able to perform with respect to the customer. To change these permissions, click the arrow to open a menu:
You can grant users the following permissions:
- Admin: Can perform any action, including inviting users and updating the company profile.
- Regular: Can add, remove, or change any product service.
- Read-Only: Can view all services and download documents and invoices, but cannot make any changes.
- Support: Can perform troubleshooting actions, but cannot perform any action that has a financial impact (creating, upgrading, or deleting services). Support users also cannot see financial information for the customer, such as MRR.
The user can have different permissions between accounts. For example, in the Reseller Admin Portal, they might have Read Only permissions. But for Customer A, you can grant them Regular permissions. For Customer B, you grant them Support permissions, and so on.
For a detailed list of permissions by group, see User Permissions.
Unlink a user To revoke a user’s access to a customer, simply click the X next their user name: Order and manage services on behalf of a customer Go to the Customers page and click View next to the customer: You are redirected to the dashboard in the customer’s portal view. From here, you can manage services on behalf of the customer (depending on your group permissions detailed above). Notifications You can receive an email notification any time a customer provisions or deletes a service. This notification is sent to all Reseller Admin Portal “Admin” contacts (note that an contact is different than a user). For more information, see Company Contacts. Logging If you access a customer account and perform any actions such as ordering or removing a service, that action appears in the customer activity logs. You are identified by your user name (email address). These actions are not recorded in the activity log for the reseller account.
https://docs.packetfabric.com/reseller/customer/
2022-09-24T19:26:18
CC-MAIN-2022-40
1664030333455.97
[array(['../images/cust_page.png', 'screenshot of the customers page'], dtype=object) array(['../images/unlink_user.png', 'screenshot of the manage users panel'], dtype=object) array(['../images/cust_view.png', 'screenshot of the customers page with the view action highlighted'], dtype=object) array(['../images/cust_view_red.png', 'screenshot of the customers page'], dtype=object) ]
docs.packetfabric.com
Open the .bashrc file in your favorite text editor, such as vi, emacs, pico or mcedit. Add the following two lines at the bottom of the file, replacing /usr/java/jdk1.7.0_80 with the actual directory where the JDK is installed.
export JAVA_HOME=/usr/java/jdk1.7.0_80
export PATH=${JAVA_HOME}/bin:${PATH}
Save the file. If you do not know how to work with text editors in an SSH session, run the following command:
cat >> .bashrc
Paste the string from the clipboard and press "Ctrl+D."
- To verify that the JAVA_HOME variable is set correctly, execute the following command:
echo $JAVA_HOME
https://docs.wso2.com/display/PP410/Installing+on+Solaris
2022-09-24T19:20:38
CC-MAIN-2022-40
1664030333455.97
[]
docs.wso2.com
Logs ought to tell a story: what, where, when, even why and how if you’re lucky. But most logging systems omit the all-important why. You know that some things happened, but not how they relate to each other. The problem: What caused this to happen?¶
The solution: Eliot¶
Eliot is designed to solve these problems: the basic logging abstraction is the action. An “action” is something with a start and an end; the end can be successful or it can fail due to an exception. Log messages, as well as log actions, know the log action whose context they are running in. The result is a tree of actions. In the following example we have one top-level action (the honeymoon), which leads to another action (travel):
from sys import stdout
from eliot import start_action, to_file
to_file(stdout)

def honeymoon(family, destination):
    with start_action(action_type="honeymoon", people=family):
        destination.visited(family)

honeymoon(["Mrs. Casaubon", "Mr. Casaubon"],
          Place("Rome, Italy", [Place("Vatican Museum",
                                      [Place("Statue #1"), Place("Statue #2")])]))
Actions provide a Python context manager. When the action starts, a start message is logged. If the block finishes successfully a success message is logged for the action; if an exception is thrown a failure message is logged for the action with the exception type and contents. By default the messages are machine-parseable JSON, but for human consumption a visualization is better. Here’s how the log messages generated by the new code look, as summarized by the eliot-tree tool:
f9dcc74f-ecda-4543-9e9a-1bb062d199f0
+-- honeymoon@1/started
    |-- people: ['Mrs. Casaub.
https://eliot.readthedocs.io/en/1.14.0_a/introduction.html
2022-09-24T20:08:00
CC-MAIN-2022-40
1664030333455.97
[]
eliot.readthedocs.io
Using the HuBMAP CLT The HuBMAP Command Line Transfer utility provides the functionality to download individual HuBMAP files and directories across multiple datasets at one time, by specifying all the data files and directories to download in a single manifest file. This document covers usage of the HuBMAP CLT. Detailed instructions for installing hubmap-clt as well as other first-time setup can be found here. A tutorial on how to view the current GCP download directory is also available. usage: hubmap-clt [-h | --help | -v | --version] [transfer manifest-file | login | logout | whoami] Commands: One of the following commands is required: transfer manifest-file Transfer files specified in manifest-file (see below for an example) using Globus Transfer. The transferred files will be stored in the directory “hubmap-download” under the user’s home directory. login Log in to Globus logout Log out of Globus whoami Displays the information of the user who is currently logged in. If no user is logged in, a message will be displayed prompting the user to log in. -h or --help Show this help message. -d or --destination Manually select a download location within the user’s home directory. For example: ‘hubmap-clt transfer manifest-file -d Desktop’ will download to the user’s Desktop directory. The directory will be created under the user home directory if it doesn’t already exist. -v or --version Displays the version of the currently installed hubmap-sdk package Manifest Files A manifest file is required for usage of hubmap-clt. This simple text file contains the dataset id and the path to the dataset separated by a space, one line for each file or directory to download. For example: HBM123.ABCD.456 /metadata.tsv #download the metadata.tsv file for dataset HBM123.ABCD.456 HBM345.ABCD.456 / #download all files in the dataset HBM345.ABCD.456 HBM378.HDGT.837 /extras #download the extras directory from dataset HBM378.HDGT.837 See below for more examples of manifest files. A one-time login is required for any download session. For any non-public data, you must log in with your HuBMAP-authorized account; for publicly available data, you can log in with any account accepted on the login form (Google and ORCID accepted) as well. To log in, issue the following command on the command line: hubmap-clt login Similarly, log out with the command: hubmap-clt logout To check the identity of the currently logged in user, enter the command: hubmap-clt whoami Example of usage Having prepared or downloaded a manifest.txt file, logged in and verified that the local GCP endpoint is running (see below), hubmap-clt can be used with the following command: hubmap-clt transfer manifest.txt where manifest.txt is the file containing the resources to be downloaded and their locations. Depending on where the manifest file is located, the path to the file may be necessary along with the filename in the argument. For example: hubmap-clt transfer ~/Documents/manifest.txt The files/directories will be transferred to the directory hubmap-downloads by default. This directory will be created under the local user directory if it does not yet exist. You can specify an alternative directory or subdirectory to download the data to with --destination or -d, for example: hubmap-clt transfer manifest.txt --destination data/hubmap/rna-seq Similarly, if you give the path/name of a directory that doesn’t exist, it will be created. Be mindful of typos.
Note about Globus Connect Personal In order to transfer data to the local machine, the Globus Connect Personal (GCP) endpoint must be up and running. Refer to the installation guide if this has not yet been set up. The hubmap-clt transfer command will alert you if an instance of GCP is not running. Please see the documentation at Globus for how to install and run it. Manifest File Examples Download the cell by gene matrix for multiple single nuclei RNA sequencing datasets: HBM744.FNLN.846 /expr.h5ad HBM658.VPJK.669 /expr.h5ad HBM592.RPKF.946 /expr.h5ad HBM363.TBHH.346 /expr.h5ad HBM322.XJQZ.894 /expr.h5ad HBM749.MTJC.865 /expr.h5ad HBM722.TVXP.469 /expr.h5ad HBM223.JQLM.452 /expr.h5ad HBM524.KHPH.599 /expr.h5ad The second item in each line (the specific path to a given resource) may contain spaces. If the path you give is to a directory rather than a file, be sure to include a trailing slash. For example: HBM744.FNLN.846 "fastqc_output/" If the path provided is for a directory but there is no trailing slash, Globus will be unable to download the contents of the directory. A directory will still be created, but it will be blank. Checking the Status of a Transfer Once the transfer has been started successfully, the user will receive a success message that looks like this: Message: The transfer has been accepted and a task has been created and queued for execution Task ID: 1234abcd-56ef-78gh-90ij-123456klmnop At that point the transfer is handled completely through Globus. To see the status of a transfer, whether it succeeded or failed, a progress bar, and other details about the transfer, a user must visit the Globus web app. The user will be prompted to sign in to Globus just as they were when logging in to Globus through hubmap-clt. Once logged in, the user will be brought to the file manager page. Click the activity tab on the left to view all past and active transfers. Here, users will see a list of past and present transfers. Clicking on one will provide more information about when the transfer started, the location of the data transferred, and much more. For a complete breakdown of how to use the Globus web app, please consult the Globus documentation
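To script this workflow, a small helper along the following lines (hypothetical; not part of the hubmap-clt package) can write a manifest in the format shown above and start the transfer with the CLI commands described in this document:

# Hypothetical helper, not part of hubmap-clt: writes a manifest file in the
# "<dataset-id> <path>" format described above and starts a Globus transfer.
# Assumes `hubmap-clt login` has already been run and Globus Connect Personal is running.
import subprocess
from pathlib import Path

entries = [
    ("HBM744.FNLN.846", "/expr.h5ad"),  # a single file
    ("HBM345.ABCD.456", "/"),           # every file in the dataset
    ("HBM378.HDGT.837", "/extras/"),    # a directory: note the trailing slash
]

manifest = Path("manifest.txt")
manifest.write_text("".join(f"{dataset} {path}\n" for dataset, path in entries))

# Equivalent to: hubmap-clt transfer manifest.txt -d Desktop
subprocess.run(["hubmap-clt", "transfer", str(manifest), "-d", "Desktop"], check=True)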
https://software.docs.hubmapconsortium.org/clt/index.html
2022-09-24T19:44:04
CC-MAIN-2022-40
1664030333455.97
[array(['../images/globus_file_manager_transfer_tab.png', 'Globus App File Manager Transfer Tab'], dtype=object)]
software.docs.hubmapconsortium.org
Improved case auditing in App Studio Valid from Pega Version 8.5 In Cosmos UI, App Studio now supports expandable case steps. This enhancement helps users quickly navigate a case and provides deeper insight into the case flow. Cosmos UI also introduces an improved history view that helps you better meet your business auditing requirements. For more information, see Managing Cosmos UI settings in case designer. Upgrade impact After an upgrade to Pega Platform 8.5 or later, the history and chevron designs change automatically. However, applications with custom history settings might still display the styling that you defined in the override. What steps are required to update the application to be compatible with this change? If your application uses custom settings and you want to use the updated history, remove the overrides from the pyWorkCommonActions rule. Tamper-proof Pega Web Mashup loading Valid from Pega Version 8.5 To protect your application from hackers, Pega Web Mashup is now loaded in a more secure way. The system generates a channel ID in the mashup code for validation on the server, before passing the mashup request. For more information, see Creating a mashup. Upgrade impact After an upgrade to Pega Platform 8.5, existing mashups, which do not have the channel ID parameter in their code, cannot load, and users see the access control warning. What steps are required to update the application to be compatible with this change? If you need to maintain full availability of the mashup during the upgrade of the production environment, perform the steps in Migrating existing mashups.
https://docs.pega.com/platform/release-notes-archive?f%5B0%5D=releases_note_type%3A985&f%5B1%5D=releases_note_type%3A986&f%5B2%5D=releases_version%3A7071&f%5B3%5D=releases_version%3A7121&f%5B4%5D=releases_version%3A7136&f%5B5%5D=releases_version%3A7786&f%5B6%5D=releases_version%3A26871&f%5B7%5D=releases_version%3A32691&f%5B8%5D=releases_version%3A33606
2022-09-24T19:12:15
CC-MAIN-2022-40
1664030333455.97
[]
docs.pega.com
Inviting users To invite a user to pair with you, click the following icon: You can choose to add the user to your team (your team will be billed for them): Alternatively, you can invite someone to pair without adding them to your team. If they have an existing account, they will be added to your Friends List; otherwise, they will get an invite to create a free account:
https://docs.tuple.app/article/41-inviting-users
2022-09-24T19:08:05
CC-MAIN-2022-40
1664030333455.97
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5c7e923c04286350d088a5cf/images/5ecff53404286306f8044e9d/file-BFUQHfmV46.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5c7e923c04286350d088a5cf/images/5ecff4e804286306f8044e96/file-NdnKXNNYRN.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5c7e923c04286350d088a5cf/images/5ecff4ef04286306f8044e97/file-yGL2iIfEJs.png', None], dtype=object) ]
docs.tuple.app
After working on your artifacts using the Tooling environment, you bundle them into Composite Applications (C-Apps) that can later be deployed in the server. The steps below describe how to bundle your artifacts into a C-App. When deploying via the management console, you will need to create a Composite Application Archive (CAR) file. See Creating a Composite Application Archive (CAR) file. You can also package individual artifacts into separate C-Apps. See Packaging individual artifacts into separate Composite Applications. - Open the Tooling interface with all the artifacts/projects that you created. For example, shown below is an ESB mediation sequence created with ESB artifacts. - Right-click the Project Explorer and click New -> Project. - From the window that opens, click Composite Application Project. - Give a name to the Composite Application project and select the projects that you need to group into your C-App from the list of available projects below. For example, - In the Composite Application Project POM Editor that opens, under Dependencies, note the information for each of the projects you selected earlier. You can also change the project details here. Creating a Composite Application Archive (CAR) file To deploy a C-App via the product's management console, you will need to first create a Composite Application Archive (CAR) file of that C-App. - To create the CAR file, do one of the following: Tip: When you create a CAR file with artifacts, ensure that each artifact name is the same as the relevant artifact file name. You have now exported all your project's artifacts into a single CAR file. Next, deploy the Composite Application in the server. Packaging individual artifacts into separate Composite Applications You can also create separate deployable artifacts for each individual artifact in your project. For example, suppose you created an Apache Axis2 Service. When you right-click the Axis2 Service Project, there is an option called Export Project as Deployable Archive. It creates the relevant deployable archive in a location you specify. Following are the deployable archives that will be generated for each artifact type.
https://docs.wso2.com/display/ADMIN44x/Packaging+Artifacts+into+Composite+Applications
2022-09-24T19:08:08
CC-MAIN-2022-40
1664030333455.97
[]
docs.wso2.com
Contributing to Eliot To run the full test suite, the Daemontools package should be installed. All modules should have the from __future__ import unicode_literals statement, to ensure Unicode is used by default. The coding standard is PEP8, with the only exception being camel case methods for the Twisted-related modules. Some camel case methods remain for backwards compatibility reasons with the old coding standard. You should use yapf to format code.
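As a small illustration of that convention, a new module would start like this (the names below are made up, not taken from the Eliot code base):

# Illustrative module header only; the constant and function are not from Eliot.
from __future__ import unicode_literals

# With unicode_literals, string literals like this one are unicode even on Python 2.
DEFAULT_ACTION_TYPE = "example:action"

def describe_action(name):
    """Return a human-readable description of an action type."""
    return "action type: " + name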
https://eliot.readthedocs.io/en/1.3.0/development.html
2022-09-24T20:37:40
CC-MAIN-2022-40
1664030333455.97
[]
eliot.readthedocs.io
Memory Usage Send Path The core MQTT Reactor wraps an outgoing message queue. Maximum memory usage should be bounded by about 2x the byte size of the outgoing message queue. Receive Path The peak receive path memory usage is on the order of 2x the maximum MQTT message size. In MQTT 3.1.1 the maximum message length is mqtt_codec.packet.MqttFixedHeader.MAX_REMAINING_LEN (268435455 bytes), so the maximum memory usage will be about 512 MB. Typical MQTT messages are much smaller than this, so peak memory usage will likewise be much smaller. A possible future enhancement to the reactor could be to set a maximum receive message size lower than the protocol maximum.
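As a rough illustration of that bound, the sketch below (not part of haka-mqtt) computes the worst-case receive-path figure from the protocol constant named above; the factor of 2 is the estimate quoted in this section, not a measured value:

# Rough estimate of peak receive-path memory, using the 2x rule of thumb above.
from mqtt_codec.packet import MqttFixedHeader

max_message_bytes = MqttFixedHeader.MAX_REMAINING_LEN  # 268435455 bytes in MQTT 3.1.1
peak_receive_bytes = 2 * max_message_bytes

print("Protocol maximum message size: %.0f MiB" % (max_message_bytes / 2.0 ** 20))
print("Estimated peak receive memory: %.0f MiB" % (peak_receive_bytes / 2.0 ** 20))  # ~512 MiB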
https://haka-mqtt.readthedocs.io/en/latest/memory.html
2022-09-24T19:08:11
CC-MAIN-2022-40
1664030333455.97
[]
haka-mqtt.readthedocs.io
What is Glassfy? Glassfy is the infrastructure that enables you to easily build, manage, and grow in-app subscriptions, so you can focus on your app. Congrats on making your life easier! You can use our open-source SDK and our backend to: - Integrate in-app subscriptions in your app in minutes, saving weeks of development and time-consuming maintenance - Use our backend to validate receipts, verify permissions, and access subscription status without building your own server - Remotely manage SKUs and permissions, without creating new builds - Monitor your subscription performance with basic and advanced real-time analytics that you won't find in Apple App Store Connect or Google Play Console - Get notified of any subscription events via webhooks - Send any purchase and subscription events to third-party analytics tools - Configure paywalls remotely - Create new paywalls with our no-code editor Made In Europe 🇪🇺 With ❤️ Glassfy is a 100% European company, following the most advanced policies on privacy and security.
https://docs.glassfy.io/docs
2022-09-24T19:33:47
CC-MAIN-2022-40
1664030333455.97
[]
docs.glassfy.io
Features QGIS offers many common GIS functions provided by core features and plugins. A short summary of six general categories of features and plugins is presented below, followed by first insights into the integrated Python console. Working with Vector Data. Working with OGC Data. Explore data and compose maps You can compose maps and interactively explore spatial data with a friendly GUI. The many helpful tools available in the GUI include: QGIS browser On-the-fly reprojection Database management Print layout You can create, edit, manage and export vector and raster layers in several formats. QGIS offers the following: Digitizing tools for OGR-supported formats and GRASS vector layers Ability to create and edit multiple file formats with the DB Manager plugin Improved handling of spatial database tables Tools for managing vector attribute tables Option to save screenshots as georeferenced images DXF-Export tool with enhanced capabilities to export styles and plugins to perform CAD-like functions Analyze data Introduction. Publish maps on the Internet QGIS can be used as a WMS, WMTS, WMS-C or WFS and WFS-T client, and as a WMS, WCS or WFS server (see section Working with OGC Data). Additionally, you can publish your data on the Internet using a webserver with UMN MapServer or GeoServer installed. Extend QGIS functionality through plugins QGIS can be adapted to your special needs with the extensible plugin architecture and libraries that can be used to create plugins. You can even create new applications with C++ or Python! Core Plugins Core plugins include: Coordinate Capture (capture mouse coordinates in different CRSs) External Python Plugins QGIS offers a growing number of external Python plugins that are provided by the community. These plugins reside in the official Plugins Repository and can be easily installed using the Python Plugin Installer. See Section The Plugins Dialog. Python console Developer's manual. Known Issues Number of open files limitation:
https://docs.qgis.org/3.4/fi/docs/user_manual/preamble/features.html
2022-09-24T18:57:27
CC-MAIN-2022-40
1664030333455.97
[]
docs.qgis.org
Disclaimer# The information herein is believed to be correct as of the date issued. Acconeer AB (“Acconeer”) will not be responsible for damages of any nature resulting from the use or reliance upon the information contained herein. Acconeer makes no warranties, expressed or implied, of merchantability or fitness for a particular purpose or course of performance or usage of trade. Therefore, it is the user’s responsibility to thoroughly test the product in their particular application to determine its performance, efficacy and safety. Users should obtain the latest relevant information before placing orders. Unless Acconeer has explicitly designated an individual Acconeer product as meeting the requirement of a particular industry standard, Acconeer is not responsible for any failure to meet such industry standard requirements. Unless explicitly stated in this document, Acconeer has not performed any regulatory conformity test. It is the user’s responsibility to assure that necessary regulatory conditions are met and approvals have been obtained when using the product. Regardless of whether the product has passed any conformity test, this document does not constitute any regulatory approval of the user’s product or application using Acconeer’s product. Nothing contained herein is to be considered as permission or a recommendation to infringe any patent or any other intellectual property right. No license, express or implied, to any intellectual property right is granted by Acconeer herein. Acconeer reserves the right to correct, change, amend, enhance, modify, and improve this document and/or Acconeer products at any time without notice. This document supersedes and replaces all information supplied prior to the publication hereof. In our code you might encounter features tagged “experimental”. This means that the feature in question is an early version that has a limited test scope, and the API and/or functionality might change in upcoming releases. The intention is to let users try these features out, and we appreciate feedback. Best radar performance is achieved with all our modules XM112/XM122/XM132 and with our sensor A111, with the exception of A111 sensors belonging to batch numbers: 10467, 10457, 10178.
https://docs.acconeer.com/en/docs-revamp/disclaimer.html
2022-09-24T18:33:03
CC-MAIN-2022-40
1664030333455.97
[]
docs.acconeer.com
Testing with Codeception Codeception is an extensible testing framework for PHP applications, with a modular architecture, built on top of PHPUnit, with modules that enable multiple different types of testing. wp-browser is used to provide WordPress-specific modules and helpers to facilitate setting up and running tests for WordPress themes, plugins, and whole sites. Note that while Codeception is typically geared towards acceptance and functional tests, it can also run unit and integration tests, replacing the need for separate Codeception and PHPUnit test suites. Table of contents - Getting started - Running tests - Writing tests - Scaffolding - Advanced usage Getting started Altis provides a zero-configuration approach for setting up and running Codeception tests, so you can start writing and running tests right away! If you're already familiar with Codeception and wp-browser, you can start bootstrapping and scaffolding tests, and run them via the zero-config command; they'll just work! # Bootstrap the tests directory and create all default suites composer dev-tools codecept bootstrap # Generate an acceptance test class composer dev-tools codecept generate:cest acceptance awesome-feature/admin/AwesomeFeatureAdminTest # Run tests! composer dev-tools codecept run For some extra control, check out the available advanced usage for some fine-tuning. Running Tests Note: Codeception setup is currently only available while using the Local Server module, with no support for Local Chassis. In order to run Codeception tests, you can run the following shorthand command: composer dev-tools codecept run This assumes you have tests in the root tests directory. Check the Advanced usage section below for command options. Advanced usage There are multiple available options to customize the run command, eg: composer dev-tools codecept [-p PATH/TO/TESTS] [-o PATH/TO/OUTPUT] [-b BROWSER] [-a] run [TEST-SUITE] [[TestClass]:testMethod] -p/--path defines the directory where tests exist. Omit to use the tests root directory. -o/--output defines the path to store artifacts from the test running process. Omit to use/create the _output directory within the chosen tests path. -b/--browser defines which browser to use for acceptance tests. Omit to use the default browser. Possible parameters are chrome (default) and firefox. -a/--all runs all testing suites despite any failure, otherwise fails/stops on the first failing suite (default). TEST-SUITE references the name of the test suite to run, typically one of the *.suite.yml files in the tests directory. Omit to run all found test suites. TestClass references one of the test classes in the specified suite. Omit to run all tests within the suite(s). testMethod references a single test method within the specified test class. Omit to run all test methods within the specified test class(es)/suite(s). The Altis codecept command proxies commands to the Codeception CLI, except for the -b and -p parameters, so you'll be able to execute advanced commands and utilize more of what Codeception has to offer as needed.
To generate a suite or a test, for example, you could run the following command: composer dev-tools codecept generate:cest TEST-SUITE TestClassName To pass arbitrary options to the codeception command, eg: -vv to enable verbose mode, use the options delimiter -- to split those as follows: composer dev-tools codecept run -- -vv When you invoke the codecept run command, this happens in the background: - Altis looks within the tests directory/directories for test suite files, and runs through each suite one by one in separate threads. - For suites using the WebDriver module, Altis boots up a Docker container with a headless browser to execute those tests, based on Selenium standalone web driver containers. - For suites using the WPDb module, Altis sets up test databases, and seeds them with a bundled sample dump file. - Test output, eg: failed test screenshots/HTML snapshots, and debugging artifacts, is saved to PROJECT/ROOT/tests/_output for convenience. - After tests have run, Altis removes the test databases and the browser container, and clears test caches. Continuous Integration In order to run Codeception tests in Continuous Integration environments, follow the documentation on setting up Continuous Integration on Travis, and specify your test running command(s) as per the documentation above, typically using composer dev-tools codecept run instead of / in addition to composer dev-tools phpunit as explained in the docs. Writing Tests Terminology - Test suite Collection of test classes, sharing running configuration and testing environment like modules, helpers, and constants, that typically runs in the same thread. Defined by a suite definition file, eg: acceptance.suite.yml, and a neighbouring folder that hosts related tests, with the same name as the suite, eg: acceptance. - Test class Collection of tests for a certain functionality, or one aspect of it, typically combined in the same class. Defined by a class file, eg: class-test-authorship-admin.php. - Test Individual test methods within a test class, each typically testing a single specific scenario, eg: test_user_can_signup or test_submission_invalid_email. - Actor A Codeception actor is the main driver of acceptance or functional tests, whose methods typically come from the defined modules and helpers in the test suite configuration, typically referred to as $I. Read more on the Codeception docs. - Module Codeception modules extend its functionality or environment, and provide related methods that can be used within tests. For instance: WPDb provides methods to allow accessing the database and inserting or updating objects, and also enables importing a base database snapshot using a .sql file. WPLoader provides a bootstrapped WordPress environment. Asserts provides the commonly used Symfony\Asserts methods, eg: assertEquals. Read more about modules in the Codeception docs. - Helper Codeception helpers are classes that provide commonly used actions and assertions to Actors, eg: $I->havePostInDatabase(), which creates a new post in the database, using the WPDb module. Those are typically located in the _helpers directory. Read more about helpers on the Codeception docs. - Environment Codeception environments are sets of configurations that allow specifying different environment setups, modules, helpers, etc., to be able to run tests in different, well, environments! eg: running tests in Firefox vs Chrome, in Linux vs Windows, etc.
Codeception allows defining environments in test suites or in dedicated shared files, eg: _envs/chrome.yml. Read more about environments on the Codeception docs. - PageObject Codeception's PageObject is a special type of helper that represents a specific web page and/or template, where you define constants and actions for interacting with that page/template to be able to use it in different tests. This makes it easier to write tests, and to refactor actions based on changing templates. For instance, a LoginPage helper would define the CSS and XPath selectors for forms and buttons, as well as the actions needed to log in to a site. Read more about the page object on the Codeception docs. - StepObject Codeception's StepObject is a special type of helper that represents a set of actions common to a role or area of functionality, eg: Admin can represent actions that an Admin user can do, eg: loginAsAdmin or activatePlugin. Read more about step objects on the Codeception docs. Test directory structure Codeception tests are split into suites; each suite is defined by a file, eg: acceptance.suite.yml, and a tests directory with the same suite name, eg: acceptance, that hosts test files. eg: tests/ - acceptance/ - Signup/ - SignupSubmissionTest.php - integration/ - Signup/ - SignupSubmissionHandlingTest.php - acceptance.suite.yml - integration.suite.yml A typical suite configuration includes the main actor, modules, helpers, and extensions used by the suite, eg: # acceptance.suite.yml actor: AcceptanceTester modules: enabled: - WPDb - WPWebDriver - Asserts - \Helper\Acceptance Test types To start writing tests, you need to decide which type of tests you need from the typical types available below. You can mix and match different types of tests to satisfy the project needs. Acceptance tests In short: Testing a scenario from a user perspective, in the browser, ie: opening the login page, typing credentials, clicking sign in, and checking browser output. This type uses a browser, where a web driver drives the browser clicking and typing to simulate user actions. These can be written in CEPT format, eg: // SignupSubmissionTest' => 'John Doe', 'email' => '[email protected]', ] ); // Make sure I see a confirmation message. $I->waitForElement( '#signup-confirmation' ); or the more nuanced CEST format, largely recommended due to its DRY capabilities, eg: // SignupSubmissionCest.php class SignupSubmissionCest { public function _before( AcceptanceTester $I ) { // Add a page that contains the shortcode that will render the signup form. $I->havePageInDatabase( [ 'post_name' => 'signup', 'post_content'=> 'Sign-up for our awesome thing! [signup]', ] ); $I->amOnPage( '/signup' ); } public function test_good_signup( AcceptanceTester $I ) { // Submit the form as a user would submit it. $I->submitForm( '#signup-form', [ 'name' => 'John Doe', 'email' => '[email protected]', ] ); // Make sure I see a confirmation message. $I->waitForElement( '#signup-confirmation' ); } public function test_bad_email_signup( AcceptanceTester $I ) { // Submit the form as a user would submit it. $I->submitForm( '#signup-form', [ 'name' => 'John Doe', 'email' => 'not-really-an-email', ] ); // Make sure I see an error message. $I->waitForElement( '#signup-error' ); } } Functional tests In short: Testing a scenario from a developer perspective, eg: sending AJAX/API requests and checking responses and/or database changes.
This type of test doesn't necessarily use a browser, as it can use a PHP library that acts like a browser, but without Javascript support. These are a lot like acceptance tests, but serve a slightly different purpose. Functional and acceptance tests can co-exist, eg: executing browser actions and checking expected database changes rather than just browser output. Functional tests are typically written in CEST format, eg: // SignupSubmissionCest.php class' => 'John Doe', 'email' => '[email protected]', ] ); $I->seeResponseCodeIsSuccessful(); $I->seeUserInDatabase( [ 'user_login' => 'john.doe', 'user_email' => '[email protected]' ] ); } public function test_bad_email_signup( FunctionalTester $I ) { $I->sendAjaxPostRequest( '/wp-json/acme/v1/signup', [ '_wpnonce' => $I->grabAttributeFrom( '#signup-nonce', 'value' ), 'name' => 'John Doe', 'email' => 'not-really-an-email', ] ); $I->seeResponseCodeIs( 400 ); $I->dontSeeUserInDatabase( [ 'user_login' => 'john.doe', 'user_email' => 'not-really-an-email' ] ); } } Integration tests In short: Testing code within the context of a WordPress site, eg: testing that filters and actions behave as expected. This type is written in the PHPUnit format, but extends the \Codeception\TestCase\WPTestCase class provided by wp-browser, eg: // SubmissionHandlingTest.php class SubmissionHandlingTest extends \Codeception\TestCase\WPTestCase { public function test_good_request() { $request = new WP_Rest_Request(); $request->set_body_params( [ 'name' => 'john.doe', 'email' => 'john.doe@altis( 'john.doe', $handler->last_submission()->name() ); $this->assertEquals( '[email protected]', $handler->last_submission()->email() ); } public function test_bad_email_request() { $request = new WP_Rest_Request(); $request->set_body_params( [ 'name' => 'john.do( 'john.doe', $handler->last_submission()->name() ); $this->assertEquals( 'not-a-valid-email', $handler->last_submission()->email() ); } } WordPress unit tests In short: Testing single classes or functions in as much isolation as possible, eg: testing one class or one function that requires WordPress-defined functions or classes, with a unit testing approach. This type is also written in PHPUnit format, extending the \Codeception\Test\Test class. eg: // SubmissionHandlerTest.php class( 'john.doe' ); $this->request->get_param( 'email' )->willReturn( '[email protected]' ); $handler = new Acme\Signup\SubmissionHandler( $this->validator->reveal() ); $handler->set_validator( $this->validator ); $response = $handler->handle( $this->request->reveal() ); $this->assertInstanceOf( WP_REST_Response::class, $response ); // Verify on the validator spy. $this->validator->validate( '[email protected]' )->shouldHaveBeenCalled(); } public function test_will_not_validate_email_if_missing() { $this->request->get_param( 'name' )->willReturn( 'john.do(); } } Dependency Injection Codeception has two different ways to inject Helper dependencies, or virtually any defined class: a. Automated dependency injection You can specify dependencies to inject into a test method by defining them as arguments like the following, and Codeception will take care of bootstrapping the helper and passing it as an argument: //... function test_some_action( AcceptanceTester $I, \Helper\AdminBar $adminBar ) {} //... b. _inject() Codeception test / actor / helper classes have a special method that lets you bootstrap helpers and virtually any PHP class that can be autoloaded.
You can then attach these helpers to the test class object, which has the added benefit of being able to construct objects with arbitrary arguments: class SampleTest { /** * @var \Helper\AdminBar */ protected $adminBar; protected function _inject( \Helper\AdminBar $adminBar ) { $this->adminBar = $adminBar->init( 'single-page' ); } public function test_clicking_new_post( AcceptanceTester $I ) { $this->adminBar->clickNew( 'post' ); } } Read more about dependency injection on the Codeception docs. Annotations Codeception has different special annotations that help you to write tests in a more efficient way. Examples Codeception provides functionality similar to PHPUnit's @dataProvider annotations, to specify different scenarios or data sets for the same test to run once per each set of values, eg: /** * @example ["/api/", 200] * @example ["/api/protected", 401] * @example ["/api/not-found-url", 404] * @example ["/api/faulty", 500] */ public function test_api_responses( ApiTester $I, \Codeception\Example $example ) { $I->sendGet( $example[0] ); $I->seeResponseCodeIs( $example[1] ); } You can define examples in Doctrine or JSON style, eg: @example ["/api/", 200] or @example { "url": "/api/", "code": 200 } or @example(url="/api/", code=200). DataProviders You can also use PHPUnit's @dataProvider pattern to create dynamic data sets for test methods, where the test will run once per each data set returned from the protected data provider method. The syntax differs a bit given the way test methods are written, eg: /** * @dataProvider pageProvider */ public function testStaticPages( AcceptanceTester $I, \Codeception\Example $example ) { $I->amOnPage( $example['url'] ); $I->see( $example['title'], 'h1' ); $I->seeInTitle( $example['title'] ); } /** * @return array */ protected function pageProvider() { return [ [ 'url' => "/", 'title' => "Welcome" ], [ 'url' => "/info", 'title' => "Info" ], [ 'url' => "/about", 'title' => "About Us" ], [ 'url' => "/contact", 'title' => "Contact Us" ] ]; } Read more about examples and data providers on the Codeception docs. Before and After Codeception tests have special annotation types to execute methods before or after a certain test method, where you can define one or more prerequisite/cleanup functions, eg: protected function activate( AcceptanceTester $I ) { $this->loginAsAdmin(); $this->activatePlugin( 'some-plugin' ); } protected function cleanup( AcceptanceTester $I ) { $I->deactivatePlugin( 'some-plugin' ); $I->logout(); } /** * @before activate * @before anotherPrerequisite * @after cleanup */ public function checkPluginPageExists( AcceptanceTester $I ) { // ... } Environment Codeception tests can be instructed to run in multiple / different environments, via the @env special annotation, eg: /** * @env chrome * @env firefox */ public function someTest() {} Available modules Altis' Codeception integration comes bundled with the wp-browser library, which provides additional modules to simplify testing WordPress applications.
Altis pre-configures these modules via the zero-config installation, so you don't need to manually configure them unless you need to override some of the default values, which you can do via test suite configuration, eg: # acceptance.suite.yml actor: AcceptanceTester modules: enabled: - WPDb - WPBrowser - \Helper\Acceptance config: WPBrowser: headers: X_WPBROWSER_REQUEST: 1 For a list of available modules, please check the wp-browser documentation on modules, and the respective configuration options and methods of each. These are the available modules from wp-browser: WPBrowser This module extends the PHPBrowser module, adding WordPress-specific configuration parameters and methods. It simulates a user interaction with the site without Javascript support; if you need to test your project with Javascript support, use the WPWebDriver module instead. Read more on WPBrowser module configuration. WPWebDriver This module extends the WebDriver module, adding WordPress-specific configuration parameters and methods. It simulates a user interaction with the site with Javascript support; if you don't need to test your project with Javascript support, use the WPBrowser module to skip the overhead of loading a headless browser. Altis comes with built-in browser support for Chrome and Firefox, based on Selenium standalone Docker images, which is pre-configured to run with and be available for acceptance tests with zero configuration required. Important notes: - During acceptance tests, two processes (or more) are working in parallel: - The test runner request, ie: the Codeception process. - The browser session driven by WPWebDriver, ie: the application process. Both of those use different configurations and a different running context / environment, and it'll save you time to distinguish between the two running processes/threads. Read more on WPWebDriver module configuration. WPDb This module extends the Db module, adding WordPress-specific configuration parameters and methods. It provides methods to read, write and update the WordPress database directly, without relying on WordPress methods, using WordPress functions or triggering WordPress filters. Altis comes with a pre-prepared database dump that is imported on the fly to simulate a basic working site. Important notes: - WPDb imports the sample database content to a database called test, which is created (and later removed) on the fly. - Altis detects acceptance test requests (to the actual running application) and switches the database to test at runtime, so it doesn't mess with existing site content. Read more on WPDb module configuration. WPLoader This module is typically used in integration tests, to bootstrap WordPress code in the context of the tests. It can also be used in acceptance and functional tests, by setting the loadOnly parameter to true, in order to access WordPress code in the tests context (using the tests database imported by WPDb). This module is a wrapper around the functionalities provided by the WordPress PHPUnit Core test suite; as such it provides the same methods and facilities. The parameters provided to the module duplicate the ones used in the WordPress configuration file. WPLoader will not bootstrap WordPress using the wp-config.php file; it will define and use its own WordPress configuration values passed from the defined module parameters.
Important notes: - If the loadOnly parameter is set to false, Codeception will execute all database modification requests, eg: created and/or deleted content, as an SQL transaction, which gets rolled back whenever the test scenario completes. - WordPress-defined functions and classes (and those of the plugins and themes loaded with it) will be available in the setUpBeforeClass method. - WordPress would not have loaded yet when PHPUnit calls the data provider methods, so don't expect to be able to use any WordPress functions within data provider methods. Read more on WPLoader module configuration. WPQueries This module is typically used in integration tests, to make assertions on the database queries made by the global $wpdb object, and it requires the WPLoader module in order to work. It will set, if not set already, the SAVEQUERIES constant to true and will throw an exception if the constant is already set to a falsy value. Read more on WPQueries module configuration. WPFilesystem This module is typically used in acceptance and functional tests; it extends the Filesystem module, adding WordPress-specific configuration parameters and methods. It provides methods to read, write and update the WordPress filesystem directly, without relying on WordPress methods, using WordPress functions or triggering WordPress filters. One of the handy use cases of this module is scaffolding plugins and themes on the fly in the context of tests and automatically removing them after each test. Read more on WPFilesystem module configuration. WPCLI This module is typically used in acceptance and functional tests to invoke WP-CLI commands, and test their output. It will use its own version of WP-CLI, not the one installed on the machine running the tests! Important notes: - By default, wp-browser will only include the wp-cli/wp-cli package; this package contains the basic files to run WP-CLI and does not contain all the commands that come with a typical wp-cli installation. If you require all the commands that usually come installed with WP-CLI, then you should require the wp-cli/wp-cli-bundle package as a development dependency of your project. - This module defines the environment variable WPBROWSER_HOST_REQUEST to distinguish testing sessions. Altis will detect this and switch to the test database, similar to what happens in acceptance test sessions. Read more on WPCLI module configuration. Altis helpers Altis extends Codeception/wp-browser with its own helpers. Check out the tests/_helpers directory within the dev-tools package to see existing helpers and newly available functionality. Scaffolding Altis has a command to generate / scaffold tests and related artifacts, through the Codeception bootstrap and generate subcommands. Bootstrapping tests To bootstrap the tests folder, which will create the five default test suites: composer dev-tools codecept bootstrap # OR, bootstrap tests in a custom directory: composer dev-tools codecept bootstrap -p path/to/tests # OR, bootstrap specific test suites composer dev-tools codecept bootstrap -p path/to/tests acceptance,unit Note: the bootstrap command here is a custom implementation different from Codeception's bootstrap command, so that it works with Altis' implementation. Generating tests and objects Codeception includes a subcommand to generate different types of entities, eg: tests, helpers, environments, page objects.
composer dev-tools codecept generate:[generator] [suite] [subdir/][test-class] # To generate a new CEST-style test in the existing `acceptance` test suite composer dev-tools codecept generate:cest acceptance awesome-feature/admin/AwesomeFeatureAdmin # Other generators include: # Generates a sample Cest test composer dev-tools codecept generate:cest suite filename # Generates a sample PHPUnit Test with Codeception hooks composer dev-tools codecept generate:test suite filename # Generates a Gherkin feature file composer dev-tools codecept generate:feature suite filename # Generates a new suite with the given Actor composer dev-tools codecept generate:suite suite actor # Generates text files containing scenarios from tests composer dev-tools codecept generate:scenarios suite # Generates a sample Helper File composer dev-tools codecept generate:helper filename # Generates a sample Page object composer dev-tools codecept generate:pageobject suite filename # Generates a sample Step object composer dev-tools codecept generate:stepobject suite filename # Generates a sample Environment configuration composer dev-tools codecept generate:environment env # Generates a sample Group Extension composer dev-tools codecept generate:groupobject group Note: you'll need to specify the path to the tests folder if it's not the default root tests directory. Note: you'll need to manually update suite configuration(s) to include the new helper/page object as needed. Advanced usage Debugging Codeception has two ways to get more detailed output: using the --debug flag, and the -v/-vv/-vvv flags inherited from composer. Debug statements and screenshots Codeception allows printing debugging information, saving HTML snapshots, or saving screenshots for debugging purposes, eg: /** * @example ["", "Welcome"] * @example ["about", "About us"] * @example ["login", "Sign in"] */ public function testAwesomePages( AcceptanceTester $I, \Codeception\Example $example ) { # Print a debug statement. codecept_debug( sprintf( 'Checking page: "%s"', $example[0] ) ); # Go to the page, and check its title. $I->amOnPage( $example[0] ); $I->seeInTitle( $example[1] ); # Save a page snapshot. $I->makeHtmlSnapshot( 'awesome-snapshot-' . $example[0] ); # Save a screenshot of the page. $I->makeScreenshot( 'awesome-screenshot-' . $example[0] ); # Save a screenshot of a specific element on the page. $I->makeElementScreenshot( '#header', 'awesome-screenshot-' . $example[0] ); } Interactive console Codeception allows real-time execution of arbitrary acceptance test code via a live browser session, so you can try out commands before writing the actual test, eg: composer dev-tools codecept console acceptance Even better, you can pause test executions programmatically and get a nice console where you can execute arbitrary commands, provided you are in debug mode by supplying the --debug flag to the run command ( note the need for the options delimiter -- ), eg: composer dev-tools codecept run acceptance -- --debug then, within the test method: $I->pause(); Note: using the interactive console requires the hoa/console composer package, which is not installed by default. Install it via: composer require --dev hoa/console Extensions Codeception provides a set of useful extensions that can be used with tests; find more information about the built-in extensions here. To give a quick glance: - DotReporter Provides less verbose output for test execution.
Like the PHPUnit printer, it prints dots "." for successful tests and "F" for failures. - Logger Logs suites/tests/steps using the Monolog library. - Recorder Saves a screenshot of each step in acceptance tests and shows them as a slideshow on one HTML page. Usable only for suites with the WebDriver module enabled. - RunBefore Executes some processes before running tests. - RunFailed Saves failed tests into tests/_output/failed in order to rerun failed tests. Enabled by default. - RunProcess Starts and stops processes per suite. Can be used to start/stop a Selenium server, chromedriver, mailcatcher, etc. Custom config Projects can use a custom Codeception configuration file and override Altis' zero-config setup (or select only bits and pieces as needed), by providing a custom codeception.yml file within the tests directory, and using the -c option to specify the path to it, eg: composer dev-tools codecept run -- -c path/to/codeception.yml FAQ - Why do my tests fail because they cannot find the content I created using WPBrowser DB helper functions? WPBrowser DB helper functions, like $I->havePostInDatabase(), use direct database queries to manage content, which means WordPress filters are not run for those operations. This means that integrations like ElasticPress are not notified of the changes and do not update the Elasticsearch index as a result. So while the content is created in the database, it is not synced to Elasticsearch, and subsequently will not show up in queries that are handled by ElasticPress. The fix for this is to explicitly reindex content after such direct database operations to ensure the Elasticsearch index is synced properly, and for that you can use the $I->reindexContent() helper function. Example: $I->havePostInDatabase( $params ); $I->haveUserInDatabase( $params ); // Use $extra_params to pass params to the `elasticpress` CLI command like `--indexables=post,user`, etc. $I->reindexContent( $extra_params );
https://docs.altis-dxp.com/v12/dev-tools/testing-with-codeception/
2022-09-24T18:56:57
CC-MAIN-2022-40
1664030333455.97
[]
docs.altis-dxp.com
KPI Model Hands On The KPI model contains objects that can be linked to I/O model items, but the hierarchy and organisation of the KPI objects can be arranged independently of location or server type. If using enterprise:inmation with Visual KPI, the KPI model is available on the Visual KPI web browser interface.
https://docs.inmation.com/datastudio/1.76/infrastructure-hands-on/kpi-model/index.html
2022-09-24T19:41:50
CC-MAIN-2022-40
1664030333455.97
[]
docs.inmation.com
Notifications In the “Notifications” section you can manage emails sent from your server to inform you of its status, as well as emails sent to users with configuration instructions. System email sender If you wish, you can select from the drop-down list a sender email for when the system needs to send an email to a user, for example with VPN account instructions. Note: this will not be the sender email for logs. Log Recipient The system periodically sends by email a set of logs informing you about your server status. You can choose who you want to receive these emails: the MaadiX technical team, you or both. You can indicate the email address to which you want the logs to be sent; if none is indicated, the email address associated with the Control Panel administration account will be used. By default the logs will be sent to the MaadiX technical team. The main logs received are: Puppet notifications: every time Puppet runs or if it encounters an error. Monitor notifications: every time there is a high consumption of RAM, CPU or disk space, and every time monit stops or starts for some reason. Logwatch notifications: a summary of the system logs is sent every 24h. Security Events notifications: logs related to system security are sent every 24 hours. Unattended Updates notifications: you will be notified each time it is detected that some system package can be updated. It will not be necessary to do anything; the Unattended Updates tool will perform the updates automatically (every 7 days). The sender email will be [email protected] (for monit notifications) or [email protected] (for the rest).
https://en.docs.maadix.net/notifications/
2022-09-24T18:55:26
CC-MAIN-2022-40
1664030333455.97
[]
en.docs.maadix.net
Define security controls as a post-build action step After you have set security controls at the system level in Jenkins, you can also add security controls at a job level for freestyle jobs that are not part of a Jenkins Pipeline. To do this: When defining a job in Jenkins, find the Post-Build Actions section. Select a Connection you have previously created from the dropdown. Choose your application (this field is required). If your application has been instrumented, select your application from the Choose your application dropdown. If your application has not yet been instrumented, indicate your application using the Application Name and Application Language fields. You must provide the same application name in Jenkins that you will use when you do instrument your application. Contrast will use that same name and language during the post-build action step after the application has been instrumented. If the connection is configured to allow the system-level vulnerability security controls to be overridden, you can override that setting by checking the box next to Override Vulnerability Security Controls at the Jenkins system level. If you do this, you will also need to indicate the Number of Allowed Vulnerabilities, Vulnerability Severity, Vulnerability Type, and Vulnerability Statuses for this job. Select how you want to query vulnerabilities by selecting an option under Query vulnerabilities by. That way, only those vulnerabilities found by that job will be considered. By default, the plugin uses the first option: appVersionTag, format: applicationId-buildNumber.
https://docs.contrastsecurity.com/en/jenkins-freestyle-security-controls.html
2022-09-24T20:07:49
CC-MAIN-2022-40
1664030333455.97
[]
docs.contrastsecurity.com
Internationalizing games Introduction. This automatic translation behavior may be undesirable in certain cases. For instance, when using a Label to display a player's name, you most likely don't want the player's name to be translated if it matches a translation key. To disable automatic translation on a specific node, use Object.set_message_translation and send an Object.notification to update the translation:

func _ready():
    # This assumes you have a node called "Label" as a child of the node
    # that has the script attached.
    var label = get_node("Label")
    label.set_message_translation(false)
    label.notification(NOTIFICATION_TRANSLATION_CHANGED)

For more complex UI nodes such as OptionButtons, you may have to use this instead:

func _ready():
    var option_button = get_node("OptionButton")
    option_button.set_message_translation(false)
    option_button.notification(NOTIFICATION_TRANSLATION_CHANGED)
    option_button.get_popup().set_message_translation(false)
    option_button.get_popup().notification(NOTIFICATION_TRANSLATION_CHANGED)

Testing translations You may want to test a project's translation before releasing it. Godot provides two ways to do this. First, in the Project Settings, under Input Devices > Locale, there is a Test property. Set this property to the locale code of the language you want to test. Godot will run the project with that locale when the project is run (either from the editor or when exported). Keep in mind that since this is a project setting, it will show up in version control when it is set to a non-empty value. Therefore, it should be set back to an empty value before committing changes to version control. Translations can also.
https://docs.godotengine.org/en/stable/tutorials/i18n/internationalizing_games.html
2022-09-24T18:32:12
CC-MAIN-2022-40
1664030333455.97
[array(['../../_images/localization_dialog.png', '../../_images/localization_dialog.png'], dtype=object) array(['../../_images/localization_remaps.png', '../../_images/localization_remaps.png'], dtype=object) array(['../../_images/locale_test.png', '../../_images/locale_test.png'], dtype=object) array(['../../_images/localized_name.png', '../../_images/localized_name.png'], dtype=object)]
docs.godotengine.org
Shopify Integrate the InviteReferrals App with Shopify Shopify is an e-commerce platform that allows you to build your brand, foster community, and drive traffic to your website. As your customers are omnipresent, you need to engage them on every channel to provide a favorable experience; this is how you leave a long-lasting impact on users. You can also host your entire business website on Shopify, which is easy and hassle-free. Follow the steps below to integrate: Install App - Go to the Shopify App Store and add the InviteReferrals app to your Shopify account. - Once the app is added to your Shopify account, it can be seen in your Shopify apps. - Click on InviteReferrals and log in to your InviteReferrals account. - Now click on the “Go to dashboard” button and you will be navigated to the IR dashboard. Create Campaigns - Once the integration process is done, you can create marketing campaigns through Campaigns to engage users and nudge them to refer their friends on your website. Track Conversions: - In order to track the conversions for your campaigns, go to Settings > Conversion Tracking Code <script> var ir = ir || function(){(window.ir.q = window.ir.q || []).push(arguments)}; var invite_referrals = window.invite_referrals || {}; (function() { invite_referrals.auth = { bid_e :'XXXX48BF2XXXX482D5F8C28425AAC6F', bid : '29XXX',); })(); ir('track',{ orderID : '{{ order_number }}', event: 'sale', email:' {{ customer.email }}', fname: '{{ customer.name}}', mobile:'{{ customer.mobile}}',purchaseValue:'{{ total_price | money_without_currency }}'}); </script> Note: The code above shows a dummy brand ID and encryption key; log in to your IR account to see your own credentials. - Place the conversion tracking JavaScript code in Additional Scripts under Order Processing, as shown below.
https://docs.invitereferrals.com/docs/shopify
2022-09-24T19:30:25
CC-MAIN-2022-40
1664030333455.97
[array(['https://files.readme.io/8baa79e-shopify-app-store.png', 'shopify-app-store.png 1289'], dtype=object) array(['https://files.readme.io/8baa79e-shopify-app-store.png', 'Click to close... 1289'], dtype=object) array(['https://files.readme.io/96d00aa-read_me_image_1.png', 'read me image 1.png 1348'], dtype=object) array(['https://files.readme.io/96d00aa-read_me_image_1.png', 'Click to close... 1348'], dtype=object) array(['https://files.readme.io/d4872c7-ir-shopify-login.png', 'ir-shopify-login.png 1303'], dtype=object) array(['https://files.readme.io/d4872c7-ir-shopify-login.png', 'Click to close... 1303'], dtype=object) array(['https://files.readme.io/d612f0f-read_me_doc.png', 'read me doc.png 459'], dtype=object) array(['https://files.readme.io/d612f0f-read_me_doc.png', 'Click to close... 459'], dtype=object) array(['https://files.readme.io/9ab15ef-ir-campaigns.png', 'ir-campaigns.png 1303'], dtype=object) array(['https://files.readme.io/9ab15ef-ir-campaigns.png', 'Click to close... 1303'], dtype=object) array(['https://files.readme.io/adb4e0e-read_me_image_7.png', 'read me image 7.png 1350'], dtype=object) array(['https://files.readme.io/adb4e0e-read_me_image_7.png', 'Click to close... 1350'], dtype=object) array(['https://files.readme.io/1ba0855-conversion-additional-script.png', 'conversion-additional-script.png 1288'], dtype=object) array(['https://files.readme.io/1ba0855-conversion-additional-script.png', 'Click to close... 1288'], dtype=object) ]
docs.invitereferrals.com
Applying Changes Applying Changes to Your App Part of the magic of Zeet is automating your deployment pipeline. You no longer need to log in to your server, pull code, and build while your application is offline. Zeet connects to your GitHub to pull the latest version of your specified branch. To make changes, all you have to do is push new code to the specified branch. Zeet will automatically rebuild your app with the new code. If the build is successful, the existing build is swapped with the new build upon completion, resulting in no downtime. Otherwise, the existing build remains in place. Swap Branches By default, Zeet will deploy the "main" or "master" branch. However, you can change the deployed branch by navigating to Settings => Source Control => In the "Production Branch" field, replace "main" with your desired branch name => Save Resources - Discord: Join Now - GitHub: - Express:
https://docs.zeet.co/serverless/applying-changes/
2022-09-24T19:01:54
CC-MAIN-2022-40
1664030333455.97
[array(['/assets/images/source-control-8262b33e27d7a80d1eb2ff9b82071f6d.png', None], dtype=object) ]
docs.zeet.co