content | url | timestamp | dump | segment | image_urls | netloc
Describe the bug
We are having a serious problem when trying to log in via Azure AD single-tenant. We've read many forums and many other issues, and no solution or answer seems to explain how to correctly configure a native Android application, using the MSAL library, to work as it should. The error says that we are using an incompatible endpoint, but there is no way to force an endpoint in the current configuration of the msal_config file. We are currently with our hands tied.

We did all the setup correctly. We chose "Accounts in this organizational directory only - single tenant" in Supported Account Types. We configured the return URLs using package name + SHA when adding the Android platform. We configured msal_config.json as shown:

{
  "client_id": "my-client-id",
  "redirect_uri": "msauth://my-package-name/sha",
  "broker_redirect_uri_registered": true,
  "authorities": [
    {
      "type": "AAD",
      "audience": {
        "type": "AzureADMyOrg",
        "tenant_id": "my-tenant-id"
      }
    }
  ]
}

We configured the intent-filter in the Android manifest as shown:

<activity android:name="com.microsoft.identity.client.BrowserTabActivity">
  <intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="msauth" android:host="my-package-name" android:path="/sha" />
  </intent-filter>
</activity>

Smartphone (please complete the following information):
Device: OnePlus 3T
Android Version: 9
Browser: Chrome and Edge
MSAL Version: 2.+

Stacktrace
Authentication failed: com.microsoft.identity.client.exception.MsalServiceException: AADSTS50194: Application 'my-tenant-id-here'(PORTAL_APPGAMIFICATION) is not configured as a multi-tenant application. Usage of the /common endpoint is not supported for such applications created after '10/15/2018'. Use a tenant-specific endpoint or configure the application to be multi-tenant. Trace ID: db590646-c5c9-4c06-8af5-d826f4127301 Correlation ID: cafd4040-eb7c-4ebb-ac08-3950048fd58e Timestamp: 2021-07-12 10:17:58Z

To Reproduce
Steps to reproduce the behavior:
1 - Set up an Azure app with single-tenant
2 - Configure the Android platform with your package name and the SHA generated with your subscription key
3 - Configure msal_config.json as indicated above
4 - Try running the app on your mobile

Expected behavior
It was expected that the token would be returned and the login process would complete.

Actual Behavior
The login process runs well halfway through. The app correctly opens the Microsoft login screen and you can fill in the user's email and password, but when you go to log in, this error appears in the Android log: Application 'my-tenant-id-here'(PORTAL_APPGAMIFICATION) is not configured as a multi-tenant application. Usage of the /common endpoint is not supported for such applications created after '10/15/2018'. Use a tenant-specific endpoint or configure the application to be multi-tenant.

Additional context
We don't want multi-tenant. We don't want any logins that aren't from within our organization's directory. It makes no sense to reconfigure Azure AD to "multi-tenant"; the library should work fine with the single-tenant option.
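The error boils down to the token request being sent to the /common authority even though the app registration is single-tenant, so the request has to name the tenant-specific authority instead. The idea is easiest to show with the MSAL Python library, which exposes the same authority concept as MSAL for Android. This is only a sketch: the client ID, tenant ID, and scope below are placeholders, and on Android the equivalent lever is the authority configured for the client or for the individual request.

```python
import msal

# Placeholder values standing in for the redacted ones in the report.
CLIENT_ID = "my-client-id"
TENANT_ID = "my-tenant-id"

# Pin the authority to the tenant-specific endpoint so the request never falls
# back to /common, which AADSTS50194 rejects for single-tenant registrations.
app = msal.PublicClientApplication(
    client_id=CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# Interactive sign-in; only accounts from this tenant can complete it.
result = app.acquire_token_interactive(scopes=["User.Read"])
if "access_token" in result:
    print("Signed in, token acquired.")
else:
    print(result.get("error"), result.get("error_description"))
```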
https://docs.microsoft.com/en-us/answers/questions/472282/sign-in-with-34accounts-in-this-organizational-dir.html
2021-11-27T04:16:20
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
This section explains in detail how the Management Console of a WSO2 product can be used for configuring the permissions granted to a user role. You will also find detailed descriptions of all the types of permissions that can be granted.

Introduction to role-based permissions
By default, every WSO2 product comes with the following User, Role and Permissions configured: the Admin user and Admin role are defined and linked to each other in the user-mgt.xml file; see the documentation on the system administrator for more information.

Configuring permissions for a role
To configure the permissions for a role:
- Click Users and Roles in the Configure tab of the navigator. All the roles created in the system will be listed in the Roles page as shown below. Click Permissions to open the permissions navigator for the role as shown below. Note that there may be other categories of permissions enabled for a WSO2 product, depending on the type of features that are installed in the product.
- You can select the relevant check boxes to enable the required permissions for your role. The descriptions of all the available permissions are explained below.

Descriptions of permissions
Let us now go through each of the options available in the permissions navigator to understand how they apply to functions in WSO2 MB.

Log-in permissions
The Login permission defined under Admin permissions allows users to log in to the Management Console of the product. Therefore, this is the primary permission required for using the Management Console.

Super Tenant permissions
The following table describes the permissions at Super Tenant level. These are also referred to as Super Admin permissions.

Tenant-level permissions
The following table describes the permissions at Tenant level. These are also referred to as Admin permissions. Note that when you select a node in the Permissions navigator, all the subordinate permissions that are listed under the selected node are also automatically enabled.

Configuration permissions
The following table explains the permissions required for performing various configuration tasks in the WSO2 MB.

Permissions for managing Queues and Topics
WSO2 Message Broker is primarily used for brokering messages between external applications using Queues and Topics. Explained below are the role-based permissions applicable for working with queues and topics in WSO2 MB. Permissions required for working with Queues: Permissions required for working with Topics: Listed below are the permissions that will allow users to manage subscriptions to a Topic or Queue.

Subscribing to Topics/Queues
Listed above are the role-based permissions that are required by users in order to create and manage queues/topics from the Management Console. Note that the permission to create topics/queues also includes the permissions for publishing messages to that topic/queue and consuming the messages published to that topic/queue. Once queues and topics are created in the Management Console, other users should be able to publish to these topics/queues and consume the messages that are published. Therefore, the creator of the topic/queue should grant permissions to other user roles at the time of creating the topic/queue as shown below.
- When adding a topic from the Management Console, all the available user roles will be listed as shown below. The topic creator can then select the relevant check box to grant the relevant permissions. See the detailed instruction on creating topics in WSO2 MB.
- When adding a queue from the Management Console, all the available user roles will be listed as shown below. The queue creator can then select the relevant check box to grant the relevant permissions. See the detailed instruction on creating queues in WSO2 MB. General management permissions Listed below are the permissions for some of the general functions applicable to WSO2 MB.
https://docs.wso2.com/display/EI600/User+Permissions+for+the+EI+Message+Broker
2021-11-27T03:08:39
CC-MAIN-2021-49
1637964358078.2
[]
docs.wso2.com
Common Workflows¶
Krita's main goal is to help artists create a digital painting from scratch. Krita is used by comic artists, matte painters, texture artists, and illustrators around the world. This section explains some common workflows that artists use in Krita. When you open a new document in Krita for the first time, you can start painting instantly. The brush tool is selected by default and you just have to paint on the canvas. However, let us look at what artists do in Krita. Below are some of the common workflows used in Krita:

Speed Painting and Conceptualizing¶
Some artists work only in the digital medium, sketching and visualizing concepts in Krita from scratch. As the name suggests, a speed painting is a technique of painting done within a matter of hours to quickly visualize the basic scene, character, and look and feel of the environment, or to denote the general mood and overall concept. Finishing and finer details are not the main goals of this type of painting; the representation of form, value and layout is the main goal. Some artists use the brushes under the Block tag, like Block fuzzy, Block basic, layout_block, etc. After the composition and a basic layout have been laid out, the artist adds as many details as possible in the given limited time; this requires a decent knowledge of forms, value, perspective and proportions of the objects. Below is an example of a speed paint done by David Revoy in an hour's time. Artwork by David Revoy, license: CC-BY. You can view the recorded speed painting demo for the above image on YouTube.

Colorizing Line Art¶

Preparing the line art¶
If your images have a white or other single-tone background, you can use either of the following methods to prepare the art for coloring:

Place the line art at the top of the layer stack and set its layer blending mode to Multiply. If you want to clean the line art a bit, you can press the Ctrl + L shortcut to open the Levels filter. You can clean the unwanted grays there, or press the Ctrl + M shortcut instead. Now select Red from the drop-down, click on the top right node on the graph and slide it all the way down, or click on the top right node and enter 0 in the input field. Repeat this step for Green too. Now the whole drawing will have a blue overlay; zoom in and check if the blue pencil lines are still slightly visible. Then desaturate the image with the Ctrl + Shift + U shortcut and select Max from the list. Hint: it is good to use non-photo-blue pencils to create the blue lines, as those are easy to remove. If you are drawing digitally in blue lines, use the #A4DDED color as this is closer to the non-photo-blue color.

Alternatively, use the Color to Alpha menu item and use the dialog box to turn all the white areas of the image transparent. Your line art can be in a grayscale color space; this is a unique feature in Krita which allows you to keep a layer in a color space independent from the image.

Laying in Flat Colors¶
There are many ways to color a line art in Krita, but generally these three are common among artists:
- Paint blocks of color directly with block brushes.
- Fill with the Flood Fill Tool.
- Use a Colorize Mask.

Blocking with brush¶
The first is the more traditional method of taking a shape brush or using the geometric tools to lay in color. This would be similar to using an analog marker or brush on paper. There are various block brushes in Krita; you can select the Block tag from the drop-down in the brush presets docker and use the brushes listed there. Add a layer underneath your line art layer and start painting with the brush.
If you want to correct any area you can press the E key and convert the same brush into an eraser. You can also use a layer each for different colors for more flexibility. Filling with Flood Fill tool¶ The second method is to use the Flood fill tool to fill large parts of your line art quickly. This method generally requires closed gaps in the line art. To begin with this method place your line art on a separate layer. Then activate the flood fill tool and set the Grow selection to 2px, uncheck Limit to current layer if previously checked. Choose a color from color selector and just click on the area you want to fill the color. As we have expanded the fill with grow selection the color will be filled slightly underneath the line art thus giving us a clean fill. Colorize Mask¶ The third method is to take advantage of the built-in Colorize Mask. This is a powerful tool that can dramatically improve your workflow and cut you down on your production time. To begin coloring with the Colorize Mask, select your line art layer and click the Colorize Mask Editing Tool icon in the toolbar. With the Colorize Mask Editing Tool enabled, click on the canvas—this will add a Colorize Mask layer to your document and make your lineart look a little blurry. You can now lay down solid brush strokes to indicate which areas should be colored in what colors: Whenever you press the Update button in the Tool Options, you will see which colors will fill which areas. You can continue to edit your brush strokes until you are happy with the result. To get a clean look of your painting, disable the “Edit key strokes” checkbox: Once you are done, you can convert the Colorize Mask layer into a paint layer in the Layers docker. Have a look at the Colorize Mask manual to learn more about this tool. Changing Line Art Color¶ To change the color of your line art, you can use the Alpha Lock feature. In the layer docker, click on the rightmost icon of your line art layer. It’s the icon that looks like a little checker board: When Alpha Lock is enabled, you can only change the color of the pixels, not their opacity—meaning that everything you paint will only change the colors of your existing lines, not add new lines. If you want to change the color of your line art to one solid color, you can now use the bucket fill tool and it will only apply to your existing lines. Or if you want to apply several different colors to specific areas of your line art, you can quickly paint over your line art with a broad brush: Painting¶ Starting from chaos¶ Here, you start by making a mess through random shapes and texture, then taking inspirations from the resulting chaos you can form various concepts. It is kind of like making things from clouds or finding recognizable shapes of things in abstract and random textures. Many concept artists work with this technique. You can use brushes like the shape brush, or the spray brush to paint a lot of different shapes, and from the resulting noise, you let your brain pick out shapes and compositions. You then refine these shapes to look more like shapes you think they look, and paint them over with a normal paintbrush. This method is best done in a painting environment. Starting from a value based underground¶ This method finds its origins in old oil-painting practice: You first make an under-painting and then paint over it with color, having the dark underground shine through. With Krita you can use blending modes for this purpose. 
Choosing the color blending mode on a layer on top allows you to change the colors of the image without changing the relative luminosity. This is useful because humans are much more sensitive to tonal differences than to differences in saturation and hue. This will allow you to work in grayscale before going into color for the polishing phase. You can find more about this technique here.

Preparing Tiles and Textures¶
Many artists use Krita to create textures for 3D assets used for games, animation, etc. Krita has many texture templates for you to choose from and get started with creating textures. Open one of these templates to get started; now when you paint, the canvas is tiled in real time, allowing you to create seamless patterns and textures. It is also easy to prepare interlocking patterns and motifs in this mode.

Creating Pixel Art¶
Krita can also be used to create a high definition pixel painting. The pixel art look can be achieved by using the Index Color filter layer and overlaying dithering patterns. The general layer stack arrangement is as shown below. The index color filter maps specific user-selected colors to the grayscale values of the artwork. In the example below, the strip below the black and white gradient has the index color filter applied to it, so that the grayscale values of the gradient are mapped to the selected colors. You can choose the required colors and ramps in the index color filter dialog as shown below.
https://docs.krita.org/en/tutorials/common_workflows.html
2021-11-27T02:01:14
CC-MAIN-2021-49
1637964358078.2
[array(['../_images/Pepper-speedpaint-deevad.jpg', 'Speedpaint of Pepper & Carrot by deevad (David Revoy).'], dtype=object) array(['../_images/Levels-filter.png', 'Level filter dialog.'], dtype=object) array(['../_images/Color-adjustment-cw.png', 'Remove blue lines from image step 1.'], dtype=object) array(['../_images/Color-adjustment-02.png', 'Removing blue lines from scan step 2.'], dtype=object) array(['../_images/Color-adjustment-03.png', 'Remove blue lines from scans step 3.'], dtype=object) array(['../_images/Color-adjustment-04.png', 'Remove blue lines from scans step 4.'], dtype=object) array(['../_images/Color-to-alpha.png', 'Color to alpha dialog box.'], dtype=object) array(['../_images/Floodfill-krita.png', 'Flood fill in krita.'], dtype=object) array(['../_images/krita-colorize-mask-01.png', 'Colorize Mask Editing Tool in the toolbar.'], dtype=object) array(['../_images/krita-colorize-mask-02.png', 'Colorize Mask with brush strokes'], dtype=object) array(['../_images/krita-colorize-mask-03.png', 'Colorize Mask result'], dtype=object) array(['../_images/layer-alpha-lock.png', 'Alpha lock button'], dtype=object) array(['../_images/color-lineart.png', 'Changing Line Art Color'], dtype=object) array(['../_images/Chaos2.jpg', 'Starting a painting from chaotic sketch.'], dtype=object) array(['../_images/Layer-docker-pixelart.png', 'Layer stack setup for pixel art.'], dtype=object) array(['../_images/Gradient-pixelart.png', 'Color mapping in index color to grayscale.'], dtype=object) array(['../_images/Index-color-filter1.png', 'Index color filter dialog.'], dtype=object) array(['../_images/Kiki-pixel-art.png', 'Pixel art done in Krita.'], dtype=object) ]
docs.krita.org
To relieve users of the necessity of supplying a mechanism name for every connect request, a default mechanism can be used instead. This mechanism name may be supplied at either the client or the server. The specified mechanism must appear in both the client and server lists of supported mechanisms. If there is no match, the default mechanism is not eligible for use. A client-supplied default mechanism will take precedence over a server-supplied default mechanism. The default attribute is specified in the XML-based mechanism list that resides on both client and server platforms. A server default mechanism must always be supplied. The presence of a default mechanism does not prevent users from using an alternative mechanism (provided the mechanism occurs in both the client and server lists of supported mechanisms). An application-supplied mechanism name (if supplied) is preferred to the client default mechanism (if one exists), which is preferred to the server default mechanism. In all cases, the chosen mechanism must exist on the client and the server for a successful connection; otherwise an error (CLI507) will be returned. CLIv2 will place the name of the actual mechanism used into the logmech_name field of the DCBAREA. It will be available to the application after control has been returned from DBCHCL (DBFCON). If a mechanism has been tagged as disabled in either the client- or server-based XML configuration file, it will be ignored (default or otherwise). The server-side configuration file must contain one and only one default mechanism. The client-side configuration file may or may not contain a default mechanism; if it does, there can be only one default mechanism.
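The precedence and eligibility rules above can be summarized in a short sketch. This is only an illustration of the described behavior, not CLIv2 code; the function name and configuration structures are invented for the example, and it assumes the reading that the highest-precedence name is chosen first and then validated against both mechanism lists.

```python
class MechanismError(RuntimeError):
    """Stands in for the CLI507 error described in the text."""

def choose_mechanism(app_name, client_cfg, server_cfg):
    """Pick a logon mechanism following the documented precedence."""
    # Precedence: application-supplied name > client default > server default.
    chosen = app_name or client_cfg.get("default") or server_cfg["default"]

    # The chosen mechanism must appear in both supported lists and must not be
    # tagged as disabled in either XML configuration file.
    supported = chosen in client_cfg["mechanisms"] and chosen in server_cfg["mechanisms"]
    disabled = (chosen in client_cfg.get("disabled", set())
                or chosen in server_cfg.get("disabled", set()))
    if not supported or disabled:
        raise MechanismError("CLI507")

    return chosen  # CLIv2 reports this name back in the logmech_name field.

# A client default of TD2 beats the server default of LDAP when no name is supplied.
client = {"mechanisms": {"TD2", "LDAP", "KRB5"}, "default": "TD2", "disabled": set()}
server = {"mechanisms": {"TD2", "LDAP"}, "default": "LDAP", "disabled": set()}
print(choose_mechanism(None, client, server))  # -> "TD2"
```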
https://docs.teradata.com/r/bh1cB~yqR86mWktTVCvbEw/9lq3FCKQyfWQymLaUqqE~Q
2021-11-27T02:03:05
CC-MAIN-2021-49
1637964358078.2
[]
docs.teradata.com
Valve Index Controllers
VRChat offers support for the Valve Index Controllers!

Finger Posing
The Valve Index Controllers contain capacitive sensors for the pinky, ring, and middle fingers. They also contain capacitive sensors on the trigger for the index finger, and sensors on the thumbpad/touchpad/buttons to sense when your thumb is "down". Finally, they contain a "squeeze" sensor to detect squeezing of the controller. At all times, your fingers on your avatar in VRChat will track to your fingers' states on the controller. Although the tracking isn't exact, the closed or open state of the finger will allow for finger movement on your avatar, improving immersion.

Gesture Toggle
When you have Gesture Toggle enabled, VRChat will attempt to match your current finger pose against the standard VRChat hand poses. Any applied Gesture Overrides will play, however.

Object Interaction
Grabbing objects (like a Disc in Battle Discs) is done by squeezing the grip. Releasing the grip will drop the object. Playing games like Battle Discs can feel more immersive and natural with the Valve Index controllers. The grip strength required can be adjusted along with many other settings.

Set Default Bindings
Ensure that you are using the "VRChat bindings for Index Controller" for Index Controllers in the SteamVR Settings > Controller Binding > VRChat menu. This is very important -- if you are using old custom bindings for Index controllers, they will not work! The SteamVR Controller Binding page can be a bit buggy, so ensure that your bindings have been properly set once you've chosen them. These bindings have been set as the "default bindings" for VRChat. You can find this in the SteamVR Controller settings. You may have to use the "Use Old Binding UI" button in SteamVR. You can also adjust many settings in the SteamVR Controller Bindings menu, including the required strength for grip. Tune these settings if you feel like things are a bit off. If you find a setup that you like, you can use SteamVR to share these bindings out to the Community! If it's a bit hard to close your pinky/ring/middle fingers, try lowering Force Hold and Force Release on the Grip Grab bindings.

Binding Customization Notes
- Be careful when re-assigning "thumb-touchable" button Touch events. VRChat checks for touch events on every button the thumb can touch to know if the thumb is bent. If a "thumb-touchable" button is not assigned for the same touch events, VRChat cannot tell that the thumb has been bent, and will not track properly.
- Jump, Mic Toggle, Gesture Toggle, and Action Menu Left / Right are hard input bindings to the application.

VRChat Standard Hand Poses
Button Assignments
Rigging Notes
VRChat uses Unity's Mecanim and the Mixamo "YBot" character as the standard for rigging setups. You may find with your new-found dexterity that your standard for hand rigging increases, and you may need to tune your rigging and weight painting. A good way to test this is to use Unity's Avatar Muscles & Settings tab to test the hand open/closed state. If the hand pose appears odd, you may have to tune your weight-painting and rigging slightly.
https://docs.vrchat.com/docs/valve-index
2021-11-27T03:28:34
CC-MAIN-2021-49
1637964358078.2
[array(['https://files.readme.io/8d84f6f-chrome_2019-05-29_17-39-32.png', "chrome_2019-05-29_17-39-32.png If it's a bit hard to close your pinky/ring/middle fingers, try lowering Force Hold and Force Release on the Grip Grab bindings."], dtype=object) array(['https://files.readme.io/8d84f6f-chrome_2019-05-29_17-39-32.png', "Click to close... If it's a bit hard to close your pinky/ring/middle fingers, try lowering Force Hold and Force Release on the Grip Grab bindings."], dtype=object) ]
docs.vrchat.com
PayPal Express orders will appear as ‘Third Party Pending’ or ‘Third Party Approved’ once the customer goes through and completes the purchase at PayPal. Live processors such as Stripe, Square, authorize.net and others will all have a ‘Card Approved’ or ‘Order Confirmed’ status when completed. Bill Later orders will start as ‘Direct Deposit Pending’ and you may change them to any other status once you receive payment, since collection of funds with this payment method is entirely up to you as a business.

Printing Receipts & Packaging Slips: These allow you to quickly get a printout from the order. Packaging slips contain check boxes to use while fulfilling or boxing orders, while the receipts contain order totals and values.

Send Order Shipped Emails: You can bulk send order shipped emails by selecting all orders and choosing to send notices. If you have updated and added shipping and tracking information in an order, that will also be included in the email.

Change Order Status: Bulk changing of the order status is also available. Be sure to understand the importance of order status, as approved statuses such as ‘Card Approved’ and ‘Order Confirmed’ allow users to download products, see subscription or membership content, etc.

User Account: You may change the account that is tied to an order by starting to type the user's name; it will then search all EasyCart users for a match, and this alters who the order belongs to. Changing this value will allow users to see the order in their account page on the front end.

Shipping Address: You may edit the shipping address information if you have a Professional or Premium edition license installed.

Shipment Information: This section lets you edit the shipping selection the customer has, including adding a carrier and tracking number. After editing this information, you may want to send the customer a shipping emailer from the middle of the screen.

Billing Address: You may edit the billing address information if you have a Professional or Premium edition license installed.

Customer Notes: If a user leaves customer notes during checkout, they will appear here.

Customer Collection Info: We collect the customer's IP address and whether they agreed to your checkout terms & conditions. This is often helpful information with disputes and fraudulent charges.
https://docs.wpeasycart.com/wp-easycart-administrative-console-guide/?section=order-management
2021-11-27T02:11:07
CC-MAIN-2021-49
1637964358078.2
[]
docs.wpeasycart.com
Template
A template is a Google Document. You can create a brand new document or turn an existing document into a template by adding merge fields. For each row in a sheet, the add-on creates a copy of the template you selected and replaces the merge fields in the template with values from the columns in the sheet.

MERGE fields
A merge field will be recognized only if you enclose it in double curly brackets like {{this is a merge field}}; otherwise it will be ignored and treated as normal text. You can use white spaces as well as accented characters.

Sample Template
Take a look at the sample template which is used in the tutorial. Notice there are three merge fields defined in the header: {{day}}, {{month}} and {{year}}. The document itself contains the tags {{name}} and {{surname}} multiple times. Notice the rich formatting like color and font size. For merge fields which are mapped as text, rich formatting will be preserved in the output document. Alongside the text merge fields there are {{qr}} and {{image}} placeholders, which are mapped as a QR code and an image. Finally, the footer contains the same merge fields as the header, but the date pattern is different.

Prohibited Characters
If a merge field contains a prohibited character, or its curly brackets are malformed, the tag will not be recognized. Examples of invalid tags: {Name}}, Name}, Name}} (malformed brackets) and {{First^Name}}, {{Name(1)}} (use of a prohibited character in the tag).
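The double-curly-bracket rule can be illustrated with a small regular expression. This is only a sketch of the matching behaviour as described above (fields wrapped in {{ }}, spaces and accented characters allowed, characters like ^ or parentheses rejected); it is not the add-on's actual implementation.

```python
import re

# Sketch of the recognition rule: anything wrapped in double curly brackets,
# allowing spaces and accented letters, but rejecting stray braces and the
# characters shown as prohibited in the examples above (^, parentheses).
MERGE_FIELD = re.compile(r"\{\{([^{}()^]+)\}\}")

doc = "Issued on {{day}} {{month}} {{year}} for {{name}} {{surname}}, but {{First^Name}} is ignored."
print(MERGE_FIELD.findall(doc))
# ['day', 'month', 'year', 'name', 'surname']
```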
https://docs.anymerge.com/template
2021-11-27T02:37:11
CC-MAIN-2021-49
1637964358078.2
[]
docs.anymerge.com
This scenario can also be optionally configured for IPv6—you can use the VPC wizard to create a VPC and subnet with associated IPv6 CIDR blocks. Instances launched into the subnet can receive IPv6 addresses. We do not support IPv6 communication over an AWS Site-to-Site VPN connection on a virtual private gateway; however, instances in the VPC can communicate with each other via IPv6. For more information about IPv4 and IPv6 addressing, see IP Addressing in your VPC. For information about managing your EC2 instance software, see Managing software on your Linux instance in the Amazon EC2 User Guide for Linux Instances.

Overview
The following diagram shows the key components of the configuration for this scenario. To configure the customer gateway device for this scenario, see Your customer gateway device. For more information about security in your VPC, see Internetwork traffic privacy in Amazon VPC. For scenario 4, you'll use the default security group for your VPC but not a network ACL. If you'd like to use a network ACL, see Recommended network ACL rules for a VPC with a private subnet only and AWS Site-to-Site VPN access.

Security group rules

Recommended network ACL rules for a VPC with a private subnet only and AWS Site-to-Site VPN access
The following table shows the rules that we recommend. They block all traffic except that which is explicitly required.

Recommended network ACL rules for IPv6
If you implemented scenario 4 with IPv6 support and created a VPC and subnet with associated IPv6 CIDR blocks, you must add separate rules to your network ACL to control inbound and outbound IPv6 traffic. In this scenario, the database servers cannot be reached over the VPN communication via IPv6; therefore, no additional network ACL rules are required. The following are the default rules that deny IPv6 traffic to and from the subnet.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario4.html
2021-11-27T02:27:48
CC-MAIN-2021-49
1637964358078.2
[array(['images/case-4.png', 'Diagram for scenario 4: VPC with only a virtual private gateway'], dtype=object) ]
docs.aws.amazon.com
Getting Started Setting up EnvKey with a new or existing project doesn't take long. 1.) Download the EnvKey App. If you haven't yet, you can download the latest version of the EnvKey App for your platform by going to EnvKey's homepage and clicking the big Download button at the top of the page. Install it and open it once it's finished downloading. 2.) Create an organization or accept an invitation. 2a - Create an organization. Click Sign In / Sign Up on the home screen of the EnvKey App and enter your email. In a few seconds, you'll get an email token you can use to sign up. Next, fill in the Create Organization form with the organization's name, your name, and a master encryption passphrase. From there, you'll be guided through creating or importing your first application, adding and editing config, generating development and server EnvKeys, and inviting collaborators. 2b - Accept an invitation. If you need to get access to an existing EnvKey organization, ask an organization owner or admin to send you an invitation. You should receive an Invite Code in an email from EnvKey and an Encryption Code directly from the user who invited you. Click the Accept Invite button on the home screen of the EnvKey App and enter both codes to get access. You'll also be asked to set a master encryption passphrase. You should now have access to your new organization and at least one of its applications. From there, you'll be guided through adding and editing config, generating development EnvKeys, and if your access level is high enough, generating server EnvKeys and inviting additional collaborators.
https://docs.envkey.com/
2021-11-27T03:10:36
CC-MAIN-2021-49
1637964358078.2
[]
docs.envkey.com
Securing the system by keeping it up-to-date
This section explains:
- Why it is important to update your system regularly
- How to apply updates manually by using the GUI or CLI
- How to enable automatic updates

Why it is important to keep your system up-to-date
This section briefly explains the importance of updating your system on a regular basis. All software contains bugs. Often, these bugs can result in a vulnerability that can expose your system to malicious users. Packages that have not been updated are a common cause of computer intrusions. Implement a plan for installing security patches in a timely manner to quickly eliminate discovered vulnerabilities, so they cannot be exploited.

Manual updating using the GUI
This section describes how to manually download and install new updates by using the GUI.
Procedure
Hover the cursor over the upper-left corner of the screen, type "Software", and select the Software application to open it. Click the Updates button and confirm to download and install the available packages. To update from the command line instead, run sudo dnf upgrade and confirm to download the available packages.
Additional Resources
The dnf(8) manual page

Setting automatic updates
This section describes how to use the DNF Automatic application to automatically:
- Download and install any new updates
- Only download the updates
- Get notified about the updates
Procedure
Install the dnf-automatic package: sudo dnf install dnf-automatic
Edit the /etc/dnf/automatic.conf configuration file as needed. See the DNF Automatic documentation for details.
Enable and start the systemd timer: sudo systemctl enable --now timer
Replace timer with one of the following, depending on what action you want:
- dnf-automatic-install.timer to download and install packages
- dnf-automatic-download.timer to only download packages
- dnf-automatic-notifyonly.timer to only get a notification using the emitters configured in the /etc/dnf/automatic.conf file
https://docs.fedoraproject.org/nb_NO/quick-docs/securing-the-system-by-keeping-it-up-to-date/
2021-11-27T03:07:59
CC-MAIN-2021-49
1637964358078.2
[array(['../_images/software-updates.png', 'Updating by using the Software application'], dtype=object)]
docs.fedoraproject.org
A user account with Admin permissions is created when you install ISPmanager. For more information, please refer to the article User accounts. To create a new admin:
- Log in to ISPmanager with the superuser permissions.
- Go to User accounts → Administrators → Add.
- Enter a Username that the user will enter to log in to ISPmanager.
- Enter the Full name for identification. This information will be displayed only to the administrators of the control panel in User accounts → Administrators → the Full name column.
- Enter the Password to log in to the system and Confirm it. Use upper- and lower-case letters, digits, and special symbols (such as @, ?, %) for better protection of your password.
- Enable the Superuser option to allow the user to log in to ISPmanager and connect through the SSH protocol with the superuser permissions (root by default).
- Click on Ok.
https://docs.ispsystem.com/ispmanager6-business/user-accounts/add-an-administrator
2021-11-27T01:44:09
CC-MAIN-2021-49
1637964358078.2
[]
docs.ispsystem.com
Manage safe attachments
Microsoft Defender for Office 365, formerly called Microsoft 365 ATP, or Advanced Threat Protection, helps protect your business against files that contain malicious content in Outlook, OneDrive, SharePoint, and Teams.
https://docs.microsoft.com/et-EE/microsoft-365/business-video/safe-attachments?view=o365-worldwide
2021-11-27T04:27:10
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
cursor.readPref()

Definition
cursor.readPref(mode, tagSet, hedgeOptions)
Append readPref() to a cursor to control how the client routes the query to members of the replica set. Note: You must apply readPref() to the cursor before retrieving any documents from the database.

Parameters
readPref() does not support the maxStalenessSeconds read preference option.

Examples

Specify Read Preference Mode
The following operation uses the read preference mode to target the read to a secondary member.

Specify Read Preference Tag Set
To target secondaries with specific tags, include both the mode and the tagSet array. During the secondary selection process, MongoDB tries to find secondary members with the datacenter: "B" tag first.
- If found, MongoDB limits the eligible secondaries to those with the datacenter: "B" tag and ignores the remaining tags.
- If none are found, MongoDB then tries to find secondary members with the "region": "West" tag.
- If found, MongoDB limits the eligible secondaries to those with the "region": "West" tag.
- If none are found, MongoDB uses any eligible secondaries.
See Order of Tag Matching for details.

Specify Hedged Read
Starting in MongoDB 4.4 for sharded clusters, you can enable hedged reads for non-primary read preferences. To use hedged reads, the mongos must have support for hedged reads enabled (the default) and the non-primary read preferences must enable the use of hedged reads. To target secondaries on a 4.4+ sharded cluster using hedged reads, include both the mode and the hedgeOptions, as in the following examples: without a tag set, or with a tag set.
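The shell code for these examples did not survive extraction, so here is a rough equivalent using PyMongo instead of the mongo shell. PyMongo has no cursor.readPref(); the closest equivalent is passing a read preference when getting the collection. The tag values mirror the ones discussed above, the hedge option mirrors the hedged-read example (PyMongo 3.11+ against MongoDB 4.4+), and the host, collection, and field names are illustrative only.

```python
from pymongo import MongoClient
from pymongo.read_preferences import Secondary

# Tag sets are tried in order: datacenter B first, then region West, then any
# eligible secondary; hedge={"enabled": True} asks a 4.4+ mongos to hedge the read.
read_pref = Secondary(
    tag_sets=[{"datacenter": "B"}, {"region": "West"}, {}],
    hedge={"enabled": True},
)

client = MongoClient("mongodb://mongos.example.net:27017")
coll = client.test.get_collection("restaurants", read_preference=read_pref)

# The find() is routed according to the read preference configured above.
for doc in coll.find({"cuisine": "Italian"}).limit(5):
    print(doc["name"])
```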
https://docs.mongodb.com/v5.0/reference/method/cursor.readPref/
2021-11-27T03:19:21
CC-MAIN-2021-49
1637964358078.2
[]
docs.mongodb.com
Why you add disks to storage pools You can add SSDs to an existing storage pool and increase its cache size. When you add SSDs to a storage pool that has allocation units already allocated to Flash Pool aggregates, you increase the cache size of each of those aggregates and the total cache of the storage pool. If the allocation units of the storage pool are not yet allocated, adding SSDs to that storage pool does not affect the SSD cache size. When you add SSDs to an existing storage pool, the SSDs must be owned by one node or the other of the same HA pair that already owned the existing SSDs in the storage pool. You can add SSDs that are owned by either node of the HA pair.
https://docs.netapp.com/us-en/ontap-sm-classic/online-help-96-97/concept_why_you_add_disks_to_storage_pools.html
2021-11-27T02:06:14
CC-MAIN-2021-49
1637964358078.2
[]
docs.netapp.com
Analyze all of your data with the fastest and most widely used cloud data warehouse. Amazon Redshift is based on PostgreSQL. Amazon Redshift and PostgreSQL have a number of very important differences that you must be aware of as you design and develop your data warehouse applications (read more). Because Amazon Redshift is compatible with PostgreSQL, just use the PostgreSQL connection string in the environment variable DATABASE_URL, with the parameter OpenSourceSubProtocolOverride, to make pREST connect to the database. DATABASE_URL=postgresql://localhost:5432/postgres?OpenSourceSubProtocolOverride=true
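As a small illustration, the same connection string can be exported from a launcher script before starting the pREST server. This is only a sketch: the Redshift endpoint, credentials, and database name are made-up placeholders, and it assumes a prestd binary on the PATH.

```python
import os
import subprocess

# Build the PostgreSQL-style URL with the override parameter described above.
# Host, port, user, and database name are placeholders for a real Redshift cluster.
database_url = (
    "postgresql://user:secret@my-cluster.example.redshift.amazonaws.com:5439/"
    "analytics?OpenSourceSubProtocolOverride=true"
)

# Start pREST with DATABASE_URL set in its environment.
env = dict(os.environ, DATABASE_URL=database_url)
subprocess.run(["prestd"], env=env, check=True)
```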
https://docs.prestd.com/integrations/redshift/
2021-11-27T02:26:59
CC-MAIN-2021-49
1637964358078.2
[]
docs.prestd.com
Workspace ONE UEM partners with Adaptiva to offer an alternative peer distribution system. In the Adaptiva peer distribution system, installation begins with a specific device in the office or subnet called the rendezvous point (RVP). This initial download takes time. However, installation times improve because devices are not taxing the storage system or the line of communication for the application package. Instead, devices receive the package from other devices in the network. The system also monitors the network for traffic. If the network is busy, installations pause until the network availability increases.

Adaptiva Peer Distribution Component Roles
Peer distribution uses two main components: a peer-to-peer server and peer-to-peer clients.
- Peer-to-peer server - This component must communicate with these components: the Enterprise Systems Connector, the SQL Database or SQL Server Express, and the peer-to-peer clients on devices. Download and install the server from the Workspace ONE UEM console before you configure the peer distribution.
- Peer-to-peer clients on devices - This component distributes application packages between peers, or devices, and it receives application metadata from the server. These clients use licenses you buy with the peer distribution feature. This component resides on devices and it must communicate with these components: the software distribution clients on devices and the peer-to-peer server. The peer distribution system automatically deploys clients to devices when you complete the peer distribution software setup. An installed peer-to-peer client uses one license.
- Network Topology - This component represents your network as offices in a hierarchy. It enables the peer distribution system to deploy applications more efficiently. It uses the hierarchy to control what clients get downloads and in what order. It uses devices called rendezvous points, or RVPs, as master clients in an office. The RVP receives downloads and disseminates the applications to peer clients. This component is a spreadsheet that you upload to the Workspace ONE UEM console. If you do not have a network topology, you can download the spreadsheet from the console and edit the topology initially identified by the peer distribution system. Though this component is optional, it greatly improves efficiencies and download speeds.

Considerations for Peer-To-Peer Distribution with Adaptiva
To help set up your peer distribution system and to avoid configuration issues, review the network behaviors, the types of communication, the communication channels between components, and license management.
- Common Network - The peer-to-peer server, the VMware Enterprise Systems Connector, and the peer-to-peer clients must all communicate on the same network. If these system components are on subnets of your network and the subnets can communicate, then the feature can transfer applications. Clients that are not on the network cannot receive applications with the peer-to-peer distribution.
- Encryption - Communication between the peer-to-peer server and Workspace ONE UEM is encrypted. The communication is not encrypted between peer-to-peer clients in the network. This communication uses UDP, but the package itself is not encrypted between clients. Although the system checks for tampered packages, a best practice is not to send confidential packages with the peer-to-peer distribution.
- UDP - The peer-to-peer server and client use UDP to communicate with Workspace ONE UEM.
- Central Office - The peer-to-peer server must reside in one of the subnets in the top-tiered Central Office.
- License Overages - The peer-to-peer system does not stop you from assigning more licenses than you have bought. If you assign extra licenses, the system charges you for them. To help gauge license usage, the ratio of client installation to the used license is one to one. - Open Ports - The peer-to-peer client needs specific ports open to transfer metadata. Find out if your network management team has closed the required ports or has blocked broadcasting on these ports. If these ports are closed or do not allow broadcasting, contact your Workspace ONE UEM representative about alternative ports. - Console, Client, and Server Versions - You must deploy and use the supported version of the peer-to-peer client and the peer-to-peer server. Update the peer-to-peer server when the Workspace ONE UEM console includes an update to the peer-to-peer client. If the versions are not supported, the feature does not work. - SQL Server Express - Download and install SQL Server Express on the same server that has the VMware Enterprise Systems Connector. Install this component before configuring peer-to-peer setup because it might take some time to complete its installation. -. - Activation Processes - After you save your configurations, the system activates the peer-to-peer server and clients with a license key. You can input your topology or use the one the network generates at activation. Also at the time of activation, the system publishes all the existing Win32 application content to the peer-to-peer server. From this point on, devices that belong to the peer distribution network begin to receive the application download. Requirements for Adaptiva Peer-To-Peer Distribution Peer distribution requires components for communication, data management, application deployment, and optional storage.Supported Platforms and Application Types - Windows Desktop (Windows 10) - Win32 applications. Ports Used for Peer-To-Peer Distribution with Adaptiva Open specific ports in your network so that the peer-to-peer clients can transfer metadata to the peer-to-peer server. If you have no group policies that block the creation of firewall policies, the peer distribution component installers create the necessary firewall rules. Data Transport Behaviors for Peer-To-Peer Networks To control the sources of application packages, also called distribution optimization, in your peer-to-peer deployment, consider how data transfers within networks and subnetworks. - Office Types - Peer distribution has three types of offices, and these office types share data in specific ways. Note: If you have a physical office with a wired (default) subnet and a WiFi subnet, create an office for each network. Make the WiFi office a child of the wired office so that the WiFi network receives packages from the wired parent office. - Default - Defines a standard wired LAN. Clients attempt to the share content and they send broadcast discovery requests. - VPN - Defines an office and subnet range allocated for clients connecting through VPN. Clients within a VPN office do not attempt to the share content, but they do send broadcast discovery requests. WiFi - Defines an office and subnet range allocated to clients connected over WiFi. Clients within a WiFi office share content, but they do not send broadcast discovery requests. - Central Office and the Peer-to-Peer Server - The peer-to-peer server must reside in one of the subnets in the top-tiered Central Office. This placement makes it available to all clients in the hierarchy. 
Data Transport in Offices - Adaptive Protocol - The adaptive protocol is a proprietary protocol that monitors the length of edge router queues and sends data when queues are nearly empty. This protocol, implemented by an advanced kernel driver, removes the need to throttle the bandwidth when deploying applications with the peer distribution. - Within Offices - Data transport within offices uses the LAN, or Foreground protocol. The peer distribution system does not manage this protocol. - Between Offices - Data transport between offices uses the WAN, or Background protocol. This protocol is also called the Adaptive Protocol that protects the bandwidth availability on WAN links. - Between Subnets - Define subnets connected over a WAN link as separate offices. If offices are misconfigured, the LAN protocol might be used over a WAN link, causing saturation of the WAN. Clients Receive Applications According to Ordered Criteria The peer-to-peer system sends and receives applications according to many factors, including the available device space, device form factor, and operating system type. The download order follows these elections from top to bottom. - Devices with the largest actual free space - Devices that are identified as preferred, also called RVPs (rendezvous points) - Device chassis type (desktops are selected over laptops) - Device operating system type (servers are selected over work stations) - Devices with the longer system up-times - Devices with the largest usable free space Back up Systems Peer-to-peer clients receive application packages from a CDN or a file storage system when they cannot find packages within the hierarchy. A CDN, which is optional for on-premises deployments, offers increased download speed over the file storage system.
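The download-order criteria listed above can be read as one composite sort key. The sketch below only illustrates that ordering; it is not Adaptiva's implementation, and the device records and field names are invented for the example.

```python
# Illustration of the ordered selection criteria described above: candidates are
# ranked by actual free space, preferred/RVP status, chassis type, OS type,
# up-time, and finally usable free space.
devices = [
    {"name": "laptop-7", "free_gb": 120, "is_rvp": False, "chassis": "laptop",
     "os": "workstation", "uptime_h": 30, "usable_free_gb": 90},
    {"name": "desk-2", "free_gb": 120, "is_rvp": True, "chassis": "desktop",
     "os": "workstation", "uptime_h": 200, "usable_free_gb": 100},
    {"name": "srv-1", "free_gb": 80, "is_rvp": False, "chassis": "desktop",
     "os": "server", "uptime_h": 4000, "usable_free_gb": 60},
]

def preference(d):
    # Higher tuples sort first when reverse=True; each element mirrors one criterion.
    return (
        d["free_gb"],               # largest actual free space
        d["is_rvp"],                # devices identified as preferred (RVPs)
        d["chassis"] == "desktop",  # desktops over laptops
        d["os"] == "server",        # servers over workstations
        d["uptime_h"],              # longer system up-time
        d["usable_free_gb"],        # largest usable free space
    )

for d in sorted(devices, key=preference, reverse=True):
    print(d["name"])
# desk-2, laptop-7, srv-1
```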
https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/services/Peer-to-Peer_Distribution_Management/GUID-4FF398A5-0925-48E9-BF87-D398980D24B1.html
2021-11-27T03:17:05
CC-MAIN-2021-49
1637964358078.2
[]
docs.vmware.com
Gesture Toggle
Gesture Toggle is an option available in the Action Menu. You can also set it as a custom action in SteamVR's controller binding interface. It is on by default. When you have Gesture Toggle enabled, VRChat will attempt to match your current finger pose against the standard VRChat hand poses.

Avatars 2.0
When enabled, any applied Gesture Overrides will play.

Avatars 3.0
When enabled, your Gesture Animator Parameters will update as normal. When disabled, your Gesture Animator Parameters will stay "stuck" at the value you had when you disabled Gesture Toggle, no matter what inputs you provide.
https://docs.vrchat.com/docs/gesture-toggle
2021-11-27T02:05:19
CC-MAIN-2021-49
1637964358078.2
[]
docs.vrchat.com
Romanian language support in Salience The Romanian data directory extends the text analysis capabilities of Salience to extract meaning and information from Romanian content. For best performance, ensure that the comma-below versions of the Romanian special characters "Ș ș Ț ț" are used when processing Romanian text with Salience. Release history Support for Romanian is distributed as a separate data directory download from the Customer Support portal. Release notes and versions can be found on the Customer Support portal. Updated almost 2 years ago
https://salience-docs.lexalytics.com/docs/romanian-language-support-in-salience
2021-11-27T02:23:44
CC-MAIN-2021-49
1637964358078.2
[]
salience-docs.lexalytics.com
ASPxClientUtils.GetShortcutCode(keyCode, isCtrlKey, isShiftKey, isAltKey) Method
Returns a specifically generated code that uniquely identifies the combination of keys specified via the parameters.

Declaration
static GetShortcutCode( keyCode: number, isCtrlKey: boolean, isShiftKey: boolean, isAltKey: boolean ): number

Parameters
Returns
Remarks
If your application logic requires using shortcuts, you need to create a set of shortcuts and then somehow identify whether the currently pressed key combination corresponds to any shortcut defined. The GetShortcutCode method assists in creating a unique code identifying a custom combination of keys. You can store this identifier in any appropriate storage on the client side. To obtain the code of the currently pressed key combination from a keyboard event, the ASPxClientUtils.GetShortcutCodeByEvent method can be used. For more details on how to work with shortcuts on the client side, refer to the following Code Central example: E1137.
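The encoding GetShortcutCode produces is not documented here, so the sketch below is only a generic illustration of the underlying idea: folding a key code and the three modifier flags into one comparable number. It is not the DevExpress algorithm, the helper name is invented, and it assumes key codes below 256.

```python
# Generic illustration (not DevExpress's encoding): pack a key code plus the
# Ctrl/Shift/Alt flags into a single integer so shortcuts can be stored and compared.
def shortcut_code(key_code: int, ctrl: bool, shift: bool, alt: bool) -> int:
    return key_code | (ctrl << 8) | (shift << 9) | (alt << 10)

# Register Ctrl+S (key code 83) and compare an incoming key press against it.
SAVE_SHORTCUT = shortcut_code(83, ctrl=True, shift=False, alt=False)
pressed = shortcut_code(83, ctrl=True, shift=False, alt=False)
print(pressed == SAVE_SHORTCUT)  # True
```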
https://docs.devexpress.com/AspNet/js-ASPxClientUtils.GetShortcutCode.static(keyCode-isCtrlKey-isShiftKey-isAltKey)
2021-11-27T03:32:24
CC-MAIN-2021-49
1637964358078.2
[]
docs.devexpress.com
SolSea Search… SolSea Welcome to SOLSEA Getting Started How to spot a fake NFT? Licenses License Scope Summary Private use/Non-Commercial exploitation Personal public display/Non-Commercial exploitation Public display/Non-Commercial exploitation Reproduction/Commercial exploitation GitBook last updated on September 3, 2021 1. INTRODUCTION The following Terms of Use (“Terms”) and conditions govern all use of the SolSea website (“the Website”), including, without limitation, the creation, minting, listing, purchase, sale, exchange, or modification of certain digital assets (collectively, the “Service”). The Website is owned and operated by ENEFTIA LIMITED (“we”, “us”, the “Company”). The Website is offered subject to your acceptance without modification of all the terms and conditions contained herein and all other operating rules, policies and procedures that may be published from time to time on the Website (taken together, the “Agreement”). SolSea is a platform. SolSea is not a broker, financial institution, or creditor. The Services offered on the platform are of administrative nature, use any Services, or buy crypto assets. The Website is available only to individuals who are at least 18 years old. 2. PRIVACY POLICY SolSea Privacy Policy explains the way we handle and protect your personal data in relation to your use and browsing of the Website. By agreeing to the present terms and conditions and to be able to use the Service, you also agree to our Privacy Policy. 3. ACCOUNT REGISTRATION In order to list and purchase crypto assets, you need to register for an account on the Service (“Account”).. SolSea will block multiple accounts of the same user. Also, you agree that you will not: a) create another account if SolSea disabled one you had, unless you have SolSea’s written permission first; b) buy, sell, rent or lease access to your Account or username, unless you have SolSea’s written permission first; c) share your Account password with anyone; or d) log in or try to login to access the Service through unauthorized third party applications or clients. SolSea may require you to provide additional information and documents at the request of any competent authority or in case of application of any applicable law or regulation, including laws related to anti-laundering (legalization) of incomes obtained by criminal means, or for counteracting financing of terrorism. SolSea may also require you to provide additional information and documents in cases where it has reasons to believe that: a) your Account is being used for money laundering or for any other illegal activity; b) you have concealed or reported false identification information and other details; or c) transactions made via your Account were made in breach of these Terms. In such cases, SolSea, in its sole discretion, may pause or cancel your purchase transactions until such additional information and documents are reviewed by SolSea and accepted as satisfying the requirements of applicable law. If you do not provide complete and accurate information and documents in response to such a request, SolSea may refuse to provide the Service. By registering an Account on SolSea, you give us permission to use your name and picture for marketing and promotional purposes. 
Users registered as creators also understand and agree that SolSea may display, reproduce, and distribute their works represented in digital assets minted, listed and tradable on SolSea, for the purpose of operating, promoting, sharing, developing, marketing, and advertising the Website, or any other purpose related to SolSea. 4. MODIFICATION TO TERMS OF SERVICE Within the limits of applicable law, SolSea reserves the right to review and change this Agreement at any time. You are responsible for regularly reviewing these terms and conditions. Continued use and browsing of the Website after such changes shall constitute your consent to such changes. 5. COMMUNICATION PREFERENCES By creating an Account, you consent to receive electronic communications from SolSea (e.g. via email or by posting notices to the Service). These communications may include notices about your Account (e.g. password changes and other transactional information) and are part of your relationship with SolSea. You agree that any notices, agreements, disclosures or other communications that SolSea sends to you electronically shall satisfy any legal communication requirements, including, but not limited to, that such communications be in writing. SolSea. You acknowledge that the ownership of digital assets (NFTs) made available or purchased on the Website may give you the right to view, store, exchange, sell and display the NFT publicly but does not allow or imply commercial use or ownership of intellectual property on the brand, design, music, video, art or other media displayed in your digital asset NFTs, unless specifically stated otherwise. 6. DISCLAIMERS Except as expressly provided to the contrary in writing by SolSea, the Service and content contained therein, and crypto assets listed therein are provided on an “as is” and “as available” basis without warranties or conditions of any kind, either express or implied. SolSea makes no warranty that the Service: a) will meet your requirements; b) will be available on an uninterrupted and timely basis. SolSea disclaims all other warranties or conditions, express or implied, including, without limitation, implied warranties or conditions of merchantability, fitness for particular purpose, title and non-infringement to the Service, as well to the content published therein. While SolSea attempts to make your access to and use of the service and content safe, you accept the inherent security risks of providing information and dealing online over the internet and will not hold SolSea responsible for any breach of security unless it is due to our gross negligence. SolSea shall not be responsible or liable to you for any loss and take no responsibility for, and shall not be liable to you for any use of crypto assets, including but not limited to any losses, damages or claims arising from: a) user error, such as forgotten passwords, incorrectly contruced a blockchain. Any transfer of title that might occur in any unique digital asset occurs on the decentralized ledger within the blockchain. SolSea does not guarantee that SolSea can effect the transfer of title or right in any crypto assets. However, SolSea requests its users who register as creators to warrant that the crypto assets they mint as NFTs and list through SolSea are their own individual creations, which have not previously been published and/or exploited in any manner, in order for them to comply with these Terms, under liability to SolSea and other users. 
SolSea is not responsible for sustained casualties due to vulnerability or any kind of failure, abnormal behavior of software (e.g., wallet, smart contract), blockchains or any other features of crypto assets. SolSea is not responsible for casualties due to late report by developers or representatives (or no report at all) of any issues with the blockchain supporting crypto assets including forks, technical node issues or any other issues having fund losses as a result. Nothing in these Terms. 7. ASSUMPTION OF RISKS You accept and acknowledge: a) The prices of blockchain assets are extremely volatile. Fluctuations in the price of other digital assets could materially and adversely affect the crypto assets, which may also be subject to significant price volatility. We cannot guarantee that any purchasers of crypto assets will not lose money. b) You are solely responsible for determining what, if any, taxes apply to your crypto assets transactions. SolSea is not responsible for determining the taxes that apply to crypto assets transactions. c) SolSea shall not be responsible for any communication failures, disruptions, errors, distortions or delays you may experience when using the crypto assets, however caused. d) A lack of use or public interest in the creation and development of distributed ecosystems could negatively impact the development of those ecosystems and related applications, and could therefore also negatively impact the potential utility or value of crypto assets. e) The regulatory regime governing blockchain technologies, cryptocurrencies, and tokens is uncertain, and new regulations or policies may materially adversely affect the development of the Service and the utility of crypto assets. f) There are risks associated with purchasing user generated content, including but not limited to, the risk of purchasing counterfeit assets, mislabeled assets, assets that are vulnerable to metadata decay, assets on smart contracts with bugs, and assets that may become untransferable. SolSea reserves the right to hide collections, contracts, and assets affected by any of these issues or by other issues. Assets you purchase may become inaccessible on SolSea. Under no circumstances shall the inability to view your assets on SolSea serve as grounds for a claim against SolSea. 9. LIMITATION OF LIABILITY To the fullest extent permitted by law, in no event shall SolSea be liable to you or any third party for any lost profit or any indirect, consequential, exemplary, incidental, special or punitive damages arising from these terms, the Service, or for any damages related to loss of revenue, loss of profit, loss of business or anticipated savings, loss of use, loss of goodwill, or loss of data, and whether caused by tort (including negligence), breach of contract, or otherwise. The access to and use of the Services are at your own discretion and risk, and you shall be solely responsible for any damage to your computer system or mobile device or loss of data resulting therefrom. Notwithstanding anything to the contrary contained herein, in no event shall the maximum aggregate liability of SolSea arising out of or in any way related to these terms, the access to and use of the service, content, crypto assets, or any products or services purchased on the service exceed the greater of: a) the amount received by SolSea from the sale of crypto assets that are the subject of the claim, and b) the operational costs from the sale of crypto assets that are the subject of the claim. 
The foregoing limitations of liability shall not apply to liability of SolSea for: a) death or personal injury caused by a member of SolSea’s negligence; or for b) any injury caused by a member of SolSea. 10. TERMINATION Notwithstanding anything contained in these Terms, we reserve the right, without notice and in our sole discretion, to terminate your right to access or use the Service. 11. SEVERABILITY If any term, clause or provision of these Terms is held invalid or unenforceable, then that term, clause or provision shall be severable from these Terms and will not affect the validity or enforceability of any remaining part of that term, clause or provision, or any other term, clause or provision of these Terms. 12. APPLICABLE LAW This Agreement shall be governed in all respects by the substantive laws of Switzerland. Any controversy, claim, or dispute arising out of or relating to the Agreement shall be subject to the jurisdiction of the competent courts of Switzerland, the jurisdiction of the Swiss Court being expressly reserved. Getting Started - Previous Next - Getting Started Last modified 1mo ago Copy link Contents 1. INTRODUCTION 2. PRIVACY POLICY 3. ACCOUNT REGISTRATION 4. MODIFICATION TO TERMS OF SERVICE 5. COMMUNICATION PREFERENCES 6. DISCLAIMERS 7. ASSUMPTION OF RISKS 9. LIMITATION OF LIABILITY 10. TERMINATION 11. SEVERABILITY 12. APPLICABLE LAW
https://docs.solsea.io/getting-started/terms-and-conditions
2021-11-27T02:47:05
CC-MAIN-2021-49
1637964358078.2
[]
docs.solsea.io
Examples: See Routine verification measures for radar-based precipitation estimates.
https://docs.wradlib.org/en/1.2.0/generated/wradlib.dp.process_raw_phidp_vulpiani.html
2021-11-27T02:46:53
CC-MAIN-2021-49
1637964358078.2
[]
docs.wradlib.org
Smart Segmenting by Product purchased allows you to: - Manually send relevant new arrivals or restock alerts - Automate cross-sell and up-sell recommendations using auto followups - Segment customers by specific interests Follow the steps in this article to create a segment of customers who have or haven't purchased specific products. 1. Enter Smart Segments within the Subscribers tab 2. Click the Create new segment button 3. Name your segment 4. Choose All Subscribers to select all lists or choose the list you'd like to segment 5. In the Show section choose one of the following rules - Has purchased product - Has not purchased product 6. Search and select the products you'd like to include in your segment 7. Click the Save & Exit button. Your segment will populate and be available under the Smart Segments tab. 8. Click your created segment to view your segmented subscribers Note: Most segments will update within a minute, but larger lists can take up to 5 mins to populate.
http://docs.smartrmail.com/en/articles/662735-create-a-segment-of-customers-who-ve-purchased-specific-products
2020-08-03T14:54:27
CC-MAIN-2020-34
1596439735812.88
[array(['https://downloads.intercomcdn.com/i/o/79220718/8eba309f7d46e0777c1a88e5/smart-segments-customers-tab.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/79220719/e2c7c6a554e37287069d5aa1/include-lists.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/79220720/952103fe3f4fc8b19a72d4ce/has-purchased-product-rule.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/79220721/c9c5f136a5d13dd40e71451a/vegemite-product-select.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/79220722/5c7f6363f94f264f857cba97/smart-segments-customers-tab.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/79220724/fa8abf3a1aee0797619edafd/product-purchased-segment-vg.png', None], dtype=object) ]
docs.smartrmail.com
debops.postgresql¶ PostgreSQL is a popular relational open source database. The debops.postgresql role can be used to create and manage PostgreSQL roles and databases on local or remote PostgreSQL servers. To manage the PostgreSQL server itself, you will need to use debops.postgresql_server role. - Getting started - debops.postgresql default variables - Default variable details debops.postgresql - Manage PostgreSQL client.
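As a rough sketch of how the role might be applied from a playbook (the play layout follows standard Ansible conventions; the postgresql__roles and postgresql__databases variable names are assumptions based on the usual DebOps "<role>__" naming scheme and should be checked against the role's default variables):

- name: Manage PostgreSQL roles and databases with debops.postgresql
  hosts: app_servers
  become: true
  roles:
    - role: debops.postgresql
  vars:
    # Variable names below are assumed from the DebOps "<role>__" convention.
    postgresql__roles:
      - name: 'myapp'
    postgresql__databases:
      - name: 'myapp_production'
        owner: 'myapp'

Keeping the role and database definitions in inventory variables rather than inline in the play is the more common DebOps pattern, but the inline form above keeps the sketch self-contained.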
https://docs.debops.org/en/master/ansible/roles/postgresql/index.html
2020-08-03T14:28:10
CC-MAIN-2020-34
1596439735812.88
[]
docs.debops.org
Scheduling Automatic Power On and Off for VMs You can automatically power VMs on and off at specified times using power schedules. Power schedules are especially useful for the public cloud, where you pay more when VMs are running. It's still possible to manually power on and use a VM during the scheduled power-off period. To set the power schedule for a VM, you add the VM to a power schedule group configured by your administrator. To assign a VM to a power schedule group: - Go to the My Resources page. - Select a service. - From the Actions menu, choose Attributes > Set Power Schedule Group. - In the Set Power Schedule Group dialog, select the appropriate group and click Save. You can see what power schedule group a VM belongs to on the VM's summary page. If you don't have permission to change a VM's power schedule group, you may have permission to see and apply Power Schedule recommendations. Viewing power schedule recommendations Power schedule recommendations are issued for VMs that don't belong to a power schedule group. Note: You can't ignore a Power Schedule recommendation or exclude a VM from Power Schedule recommendations. To view all power schedule recommendations: - Go to Recommendations. All active recommendations for VMs that you own are displayed. Note: Make sure View Active Recommendations is selected. - You may see rightsizing recommendations as well as power schedule recommendations. You can filter the list by typing in the search field, for example, power. To view power schedule recommendations for a single VM: - Go to a VM's summary page. - Look at the Recommendations drop-down panel. If there are current recommendations (either rightsizing or power schedule), the message "This VM has recommendations" is displayed. - Expand the Recommendations drop-down panel to view details. Note: Make sure View Active Recommendations is selected. Applying power schedule recommendations You can apply recommendations from the Recommendations page and from the VM's summary page. - When viewing a power schedule recommendation, click Apply Immediately. - In the Set Power Schedule Group dialog, select a power schedule group and click Save. The recommendation is closed and disappears from the VM's summary page and the Recommendations page.
https://docs.embotics.com/Service-Portal/power_sched_portal.htm
2020-08-03T15:04:03
CC-MAIN-2020-34
1596439735812.88
[]
docs.embotics.com
Quick Start Guide Overview Welcome to InsightOps! To help you maximize your time, we've created a checklist of important capabilities to explore. Click the links below for step-by-step directions to help you get up and running. Getting Started Checklist: Have questions? Visit our Rapid7 Support page. How does this all fit together? InsightOps as a product is designed to let you monitor your information on your terms. The features available for you to use (log search, dashboards, etc.) are there for you to analyze data in an efficient, convenient way, and to notice and remediate problems as quickly as possible. The Insight Agent gathers the data, and log search helps you find logs that need watching via LEQL queries. Dashboards and reports let you see and share information you've gathered, and visual search puts the overwhelming amount of log data into a form that is easier to understand at a glance. Tags help you identify unusual information right away, and alerts notify you of anything not quite right.
https://docs.rapid7.com/insightops/quick-start-guide/
2020-08-03T15:50:50
CC-MAIN-2020-34
1596439735812.88
[]
docs.rapid7.com
AppMetrica In this section you will learn how to connect AppMetrica data import. Attention! The Google BigQuery integration has to be enabled to use this feature. Importing data from AppMetrica After enabling this data source, data from the AppMetrica reports listed below is imported. Application ID Unique ID of your application in AppMetrica. You can find it in "Application Name" > Settings > Application ID Import reports Enables import of raw report data from AppMetrica to BigQuery Where to find imported data Within 24 hours after you've enabled the AppMetrica data source, you should find the following tables in your BigQuery dataset if the corresponding report was enabled: - appMetricaClicks_{APPLICATION_ID}_{YYYYMMDD} - data for "Clicks" report - appMetricaCrashes_{APPLICATION_ID}_{YYYYMMDD} - data for "Crashes" report - appMetricaDeeplinks_{APPLICATION_ID}_{YYYYMMDD} - data for "Deeplinks" report - appMetricaErrors_{APPLICATION_ID}_{YYYYMMDD} - data for "Errors" report - appMetricaEvents_{APPLICATION_ID}_{YYYYMMDD} - data for "Events" report - appMetricaInstallations_{APPLICATION_ID}_{YYYYMMDD} - data for "Installations" report - appMetricaPostbacks_{APPLICATION_ID}_{YYYYMMDD} - data for "Postbacks" report - appMetricaProfiles_{APPLICATION_ID}_{YYYYMMDD} - data for "Profiles" report - appMetricaPushTokens_{APPLICATION_ID}_{YYYYMMDD} - data for "PushTokens" report - appMetricaRevenueEvents_{APPLICATION_ID}_{YYYYMMDD} - data for "RevenueEvents" report - appMetricaSessionsStarts_{APPLICATION_ID}_{YYYYMMDD} - data for "SessionStarts" report
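Once these tables start appearing, they can be queried like any other date-sharded BigQuery tables. A minimal sketch that only counts imported rows per day (the project, dataset, and application ID are placeholders, and no report-specific column names are assumed):

-- Count imported AppMetrica events per day across the date-sharded tables.
SELECT
  _TABLE_SUFFIX AS event_date,
  COUNT(*) AS imported_rows
FROM `my-project.my_dataset.appMetricaEvents_12345_*`
GROUP BY event_date
ORDER BY event_date;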
https://docs.segmentstream.com/datasources/appmetrica
2020-08-03T15:00:57
CC-MAIN-2020-34
1596439735812.88
[array(['/img/datasources.appmetrica.1.png', None], dtype=object)]
docs.segmentstream.com
List view and grid view Most applications manipulate and display sets of data, such as a gallery of images or a set of email messages. The XAML UI framework provides ListView and GridView controls that make it easy to display and manipulate data in your app. Important APIs: ListView class, GridView class, ItemsSource property, Items property Note ListView and GridView both derive from the ListViewBase class, so they have the same functionality, but display data differently. In this article, when we talk about list view, the info applies to both the ListView and GridView controls unless otherwise specified. We may refer to classes like ListView or ListViewItem, but the List prefix can be replaced with Grid for the corresponding grid equivalent (GridView or GridViewItem). ListView and GridView provide many benefits for working with collections. They are both easy to implement and provide basic UI; interaction; and scrolling while still being easily customizable. ListView and GridView can be bound to existing dynamic data sources, or hard-coded data provided in the XAML itself/the code-behind. These two controls are flexible to many use cases, but overall work best with collections in which all items should have the same basic structure and appearance, as well as the same interaction behavior - i.e. they should all perform the same action when clicked (open a link, navigate, etc). Differences between ListView and GridView ListView The ListView displays data stacked vertically in a single column. ListView works better for items that have text as a focal point, and for collections that are meant to be read top to bottom (i.e. alphabetically ordered). A few common use cases for ListView include lists of messages and search results. Collections that need to be displayed in multiple columns or in a table-like format should not use ListView, but should look into using a DataGrid instead. GridView The GridView presents a collection of items in rows and columns that can scroll vertically. Data is stacked horizontally until it fills the columns, then continues with the next row. GridView works better for items that have images as their focal point, and for collections that can be read from side-to-side or are not sorted in a specific order. A common use case for GridView is a photo or product gallery. Which collection control should you use? A Comparison with ItemsRepeater ListView and GridView are controls that work out-of-the-box to display any collection, with their own built-in UI and UX. The ItemsRepeater control also is used to display collections, but was created as a building block for creating a custom control that fits your exact UI needs. The most important differences that should impact which control you end up using are below: - ListView and GridView are feature-rich controls that require little customization but offer plenty. ItemsRepeater is a building block to create your own layout control and does not have the same built in features and functionality - it requires you to implement any necessary features or interactions. - ItemsRepeater should be used if you have a highly custom UI that you can’t create using ListView or GridView, or if you have a data source that requires highly different behavior for each item. Learn more about ItemsRepeater by reading its Guidelines and API Documentation pages. Examples Create a ListView or GridView ListView and GridView are both ItemsControl types, so they can contain a collection of items of any type. 
A ListView or GridView a ListView or GridView You can add items to the ListView or GridView's Items collection using XAML or code to yield the same result. You typically add items through XAML if you have a small number of items that don't change and are easily defined, or if you generate the items in code at run time. Method 1: Add items to the Items Collection Option 1: Add Items through XAML <!-- No corresponding C# code is needed for this example. --> <ListView x: <x:String>Apricot</x:String> <x:String>Banana</x:String> <x:String>Cherry</x:String> <x:String>Orange</x:String> <x:String>Strawberry</x:String> </ListView> Option 2: Add Items through C# C# Code: // Create a new ListView and add content. ListView Fruits = new ListView(); Fruits.Items.Add("Apricot"); Fruits.Items.Add("Banana"); Fruits.Items.Add("Cherry"); Fruits.Items.Add("Orange"); Fruits.Items.Add("Strawberry"); // Add the ListView to a parent container in the visual tree (that you created in the corresponding XAML file). FruitsPanel.Children.Add(Fruits); Corresponding XAML Code: <StackPanel Name="FruitsPanel"></StackPanel> items. This method works better if your ListView or GridView is going to hold custom class objects, as shown in the examples below. Option 1: Set ItemsSource in C# Here, the list view's ItemsSource is set in code directly to an instance of a collection. C# Code: // Class defintion should be provided within the namespace being used, outside of any other classes. this.InitializeComponent(); // Instead of adding hard coded items to an ObservableCollection as shown below, //the data could be pulled asynchronously from a database or the internet. ObservableCollection<Contact> Contacts = new ObservableCollection<Contact>(); // Contact objects are created by providing a first name, last name, and company for the Contact constructor. // They are then added to the ObservableCollection Contacts. Contacts.Add(new Contact("John", "Doe", "ABC Printers")); Contacts.Add(new Contact("Jane", "Doe", "XYZ Refridgerators")); Contacts.Add(new Contact("Santa", "Claus", "North Pole Toy Factory Inc.")); // Create a new ListView (or GridView) for the UI, add content by setting ItemsSource ListView ContactsLV = new ListView(); ContactsLV.ItemsSource = Contacts; // Add the ListView to a parent container in the visual tree (that you created in the corresponding XAML file) ContactPanel.Children.Add(ContactsLV); XAML Code: <StackPanel x:</StackPanel> Option 2: Set ItemsSource in XAML You can also bind the ItemsSource property to a collection in the XAML. Here, the ItemsSource is bound to a public property named Contacts that exposes the Page's private data collection, _contacts. XAML <ListView x: C# // Class defintion should be provided within the namespace being used, outside of any other classes. // These two declarations belong outside of the main page class. private ObservableCollection<Contact> _contacts = new ObservableCollection<Contact>(); public ObservableCollection<Contact> Contacts { get { return this._contacts; } } // This method should be defined within your main page class. protected override void OnNavigatedTo(NavigationEventArgs e) { base.OnNavigatedTo(e); // Instead of hard coded items, the data could be pulled // asynchronously from a database or the internet. 
Contacts.Add(new Contact("John", "Doe", "ABC Printers")); Contacts.Add(new Contact("Jane", "Doe", "XYZ Refridgerators")); Contacts.Add(new Contact("Santa", "Claus", "North Pole Toy Factory Inc.")); } Both of the above options will result in the same ListView, shown below. The ListView only shows the string representation of each item because we did not provide a data template. Important With no data template defined, custom class objects will only appear in the ListView with their string value if they have a defined ToString() method. The next section will go into detail on how to visually represent simple and custom class items properly in a ListView or GridView. For more info about data binding, see Data binding overview. Note If you need to show grouped data in your ListView, you must bind to a CollectionViewSource. The CollectionViewSource acts as a proxy for the collection class in XAML and enables grouping support. For more info, see CollectionViewSource. Customizing the look of items with a DataTemplate A data template in a ListView or GridView defines how the items/data are visualized. By default, a data item is displayed in the ListViewView/GridView are displayed, you create a DataTemplate. The XAML in the DataTemplate defines the layout and appearance of controls used to display an individual item. The controls in the layout can be bound to properties of a data object, or have static content defined inline. Note When you use the x:Bind markup extension in a DataTemplate, you have to specify the DataType ( x:DataType) on the DataTemplate. Simple ListView Data Template In this example, the data item is a simple string. A DataTemplate is defined inline within the ListView definition to add an image to the left of the string, and show the string in teal. This is the same ListView created from using Method 1 and Option 1 shown above. XAML <!--No corresponding C# code is needed for this example.--> <ListView x: <ListView.ItemTemplate> <DataTemplate x: <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="47"/> <ColumnDefinition/> </Grid.ColumnDefinitions> <Image Source="Assets/placeholder.png" Width="32" Height="32" HorizontalAlignment="Left" VerticalAlignment="Center"/> <TextBlock Text="{x:Bind}" Foreground="Teal" FontSize="14" Grid. </Grid> </DataTemplate> </ListView.ItemTemplate> <x:String>Apricot</x:String> <x:String>Banana</x:String> <x:String>Cherry</x:String> <x:String>Orange</x:String> <x:String>Strawberry</x:String> </ListView> Here's what the data items look like when displayed with this data template in a ListView: ListView Data Template for Custom Class Objects In this example, the data item is a Contact object. A DataTemplate is defined inline within the ListView definition to add the contact image to the left of the Contact name and company. This ListView was created by using Method 2 and Option 2 mentioned above. <ListView x: <ListView.ItemTemplate> <DataTemplate x: <Grid> <Grid.RowDefinitions> <RowDefinition Height="*"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Image Grid.</Image> <TextBlock Grid. <TextBlock Grid. </Grid> </DataTemplate> </ListView.ItemTemplate> </ListView> Here's what the data items look like when displayed using this data template in a ListView: Data templates are the primary way you define the look of your ListView. They can also have a significant impact on performance if your list holds a large number of items. 
Your data template can be defined inline within the ListView/GridView definition (shown above), or separately in a Resources section. If defined outside of the ListView/GridView itself, the DataTemplate must be given an x:Key attribute and be assigned to the ItemTemplate property of the ListView or GridView using that key. For more info and examples of how to use data templates and item containers to define the look of items in your list or grid, see Item containers and templates. Change the layout of items When you add items to a ListView or GridView, Important>Apricot</x:String> <x:String>Banana</x:String> <x:String>Cherry</x:String> <x:String>Orange</x:String> <x:String>Strawberry</x:String> </ListView> The resulting list looks like this. In the next example, the ListView lays out items in a vertical wrapping list by using an ItemsWrapGrid instead of an ItemsStackPanel. Important>Apricot</x:String> <x:String>Banana</x:String> <x:String>Cherry</x:String> <x:String>Orange</x:String> <x:String>Strawberry<>Apricot</x:String> <x:String>Banana</x:String> <x:String>Cherry</x:String> <x:String>Orange</x:String> <x:String>Strawberry<. This is shown in the example below, along with an option to select all items. XAML <StackPanel Width="160"> <Button Content="Select all" Click="SelectAllButton_Click"/> <Button Content="Deselect all" Click="DeselectAllButton_Click"/> <ListView x: <x:String>Apricot</x:String> <x:String>Banana</x:String> <x:String>Cherry</x:String> <x:String>Orange</x:String> <x:String>Strawberry< Item containers and - Demonstrates the ListView and GridView controls. - XAML Drag and drop sample - Demonstrates drag and drop with the ListView control. - XAML Controls Gallery sample - See all the XAML controls in an interactive format.
https://docs.microsoft.com/en-us/windows/uwp/design/controls-and-patterns/listview-and-gridview
2020-08-03T16:10:28
CC-MAIN-2020-34
1596439735812.88
[array(['images/listview-grouped-example-resized-final.png', 'A list view with grouped data'], dtype=object) array(['images/gridview-simple-example-final.png', 'Example of a content library'], dtype=object) array(['images/listview-basic-code-example2.png', 'A simple list view'], dtype=object) array(['images/listview-basic-code-example-final.png', 'A simple list view with ItemsSource set'], dtype=object) array(['images/listview-w-datatemplate1-final.png', 'ListView items with a data template'], dtype=object) array(['images/listview-customclass-datatemplate-final.png', 'ListView custom class items with a data template'], dtype=object) array(['images/listview-simple.png', 'A simple list view'], dtype=object) array(['images/gridview-simple.png', 'A simple grid view'], dtype=object) array(['images/listview-horizontal2-final.png', 'A horizontal list view'], dtype=object) array(['images/listview-itemswrapgrid2-final.png', 'A list view with grid layout'], dtype=object) array(['images/listview-horizontal-groups.png', 'A grouped horizontal list view'], dtype=object)]
docs.microsoft.com
Murano is composed of the following major components: They interact with each other as illustrated in the following diagram: All remote operations on users’ servers, such as software installation and configuration, are carried out through an AMQP queue to the murano-agent. Such communication can easily be configured on a separate instance of AMQP to ensure that the infrastructure and servers are isolated. Besides, Murano uses other OpenStack services to prevent the reimplementation of the existing functionality. Murano interacts with these services using their REST API through their python clients. The external services used by Murano are: Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/murano/rocky/reference/architecture.html
2020-08-03T15:30:52
CC-MAIN-2020-34
1596439735812.88
[]
docs.openstack.org
Configure SSO with AzureAD or AD FS as your Identity Provider This task describes how to set up SSO for Splunk deployments if you have configured AzureAD or ADFS as your Identity Provider (IdP). Prerequisites Verify that your system meets all of the requirements. See Configure single sign-on with SAML. SAML does not support encryption, regardless of IdP.
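If you manage the configuration outside Splunk Web, the SAML settings end up in authentication.conf. The sketch below is illustrative only: the stanza layout and setting names are written from memory and should be verified against the authentication.conf specification for your Splunk version, and the IdP-specific values come from your AzureAD or AD FS federation metadata.

# authentication.conf (illustrative sketch; verify setting names against the spec file)
[authentication]
authType = SAML
authSettings = saml_azuread

[saml_azuread]
entityId = https://splunk.example.com                              # placeholder SP entity ID
idpSSOUrl = https://login.microsoftonline.com/<tenant-id>/saml2    # taken from the IdP metadata
idpCertPath = /opt/splunk/etc/auth/idpCerts/                       # location of the IdP signing certificate(s)
signAuthnRequest = true
signedAssertion = true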
https://docs.splunk.com/Documentation/Splunk/latest/Security/ConfigureSSOAzureADandADFS
2020-08-03T15:49:14
CC-MAIN-2020-34
1596439735812.88
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
An Act to create 302.107 and 950.04 (1v) (vg) of the statutes; Relating to: providing to a victim notification when an offender's extended supervision or parole is revoked. (FE) 2015 Wisconsin Act 354 (PDF: ) 2015 Wisconsin Act 354: LC Act Memo Bill Text (PDF: ) Fiscal Estimates LC Bill Hearing Materials Wisconsin Ethics Commission information 2015 Senate Bill 377 - Hold (Available for Scheduling)
http://docs.legis.wisconsin.gov/2015/proposals/AB652
2020-08-03T15:28:16
CC-MAIN-2020-34
1596439735812.88
[]
docs.legis.wisconsin.gov
This page applies to an earlier version of the AppDynamics App IQ Platform. See the latest version of the documentation. On this page: AppDynamics automatically detects requests to servlet-based entry points and generates business transactions based on the requests. For general information on how to customize automatic transaction detection, see URI Based Entry Points. Frameworks Supported as Servlets or Servlets Filter AppDynamics supports many web frameworks based on servlets or servlet filters. The servlet configuration settings in AppDynamics apply to these frameworks as well as to plain servlets. Frameworks include: - Spring MVC - Wicket - Java Server Faces (JSF) - JRuby - Grails - Groovy - Tapestry - ColdFusion Custom Match Rules for Servlet Transactions Custom match rule let you control how business transactions are generated for Servlet-based requests. To configure custom match rules, in the Transaction Detection tab, click the add icon (+) in the Custom Match Rules panel. Choose Servlet and click Next, and configure the match conditions that select the incoming requests that you want to be associated with the business transaction. Match Conditions Match conditions can be based upon the URI, HTTP method, hostname, servlet name, or other characteristics of the request. For HTTP Parameter conditions, you can add more than one. If you configure more than one HTTP Parameter match criteria, they must all be met by a request to be subject to this business transaction identification and naming rule, as must all conditions you configure for the rule. Some of the options have NOT conditions that you can choose to negate the configured condition. Choose this option in the gear icon next to the condition. Splitting Transactions using Request Data or Payload If you match by URI, you can have the transaction identified (or split) based on values from the request, such as URI patterns, request data, or payload. See URI Based Entry Points for information on naming or splitting transactions by the request elements. When you create the rule, it appears in the Custom Match Rule list, where you can enable, disable, edit or remove the rule. See Split Servlet Transaction by Payload Examples for use case examples. POST Request Body Parameter Matching Considerations You can configure Servlet-based custom match that match POST requests based on HTTP body parameter values. To avoid interfering with the operation of the application, the Java Agent applies some special processing measures and limits on how it accesses the body parameters. This behavior can affect whether requests can be matched by POST body parameters for your particular application. If you are having trouble configuring matching by HTTP parameter in POST requests, you should understand this behavior. Transaction Naming Match Criteria For transaction match criteria, match conditions for HTTP parameters work under these conditions: - If the parameter is a query string in the URL of the POST request, as opposed to the request body. - If the Servlet Request body parameters have already been read and parsed by the application prior to the invocation of the servlet. A Servlet Filter, for example, may do this. Otherwise, custom servlet match rules do not match HTTP parameter values in POST requests. Transaction Splitting For transaction split rules based on parameter matching in POST requests, the Java Agent defers transaction naming for incoming requests until the application servlet has accessed the request parameters. 
The Java Agent considers the parameters "accessed" once the application invokes getParameterMap(), getParameter(), getParameterNames(), or getParameterValues() on the ServletRequest object. If the servlet does not call one of these methods, the agent will not apply the split rule and the transaction will not be named. Exclude Rules for Servlets To prevent specific Servlet methods from being monitored, add a custom exclude rule. The controls for selecting Servlets to exclude are the same as those for custom match rules.
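Returning to the transaction-splitting behavior described above: the hypothetical servlet below reads a POST body parameter through the standard Servlet API. Because it calls getParameter(), the Java Agent can consider the request parameters "accessed" and apply a split rule keyed on that parameter. The class name and the "orderType" parameter are invented for the example.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: reading the "orderType" body parameter via getParameter()
// marks the request parameters as accessed, so a split rule based on orderType can fire.
public class CheckoutServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String orderType = request.getParameter("orderType"); // triggers parameter parsing
        response.setContentType("text/plain");
        response.getWriter().println("Processing order of type: " + orderType);
    }
}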
https://docs.appdynamics.com/pages/viewpage.action?pageId=35456726
2020-08-03T15:24:26
CC-MAIN-2020-34
1596439735812.88
[]
docs.appdynamics.com
When you analyse a website using WebCopy, a link map of the discovered content is built. Much of the advanced functionality of WebCopy requires a link map to be present. If you disable the saving of link map data, some functionality may be impaired. Important If link information is not preserved in a crawler project, or is cleared prior to starting a crawl, features such as only downloading new or updated files will not be available. To toggle the saving of the link map in a project file - From the Project Properties dialog, expand the Advanced category and select the Link Map category - Check or uncheck the Save link information in project field - Optionally, check or uncheck the Include headers option to save HTTP request and response headers Note Saving of headers may cause project files to be much larger, and performance of open/save operations may be affected. Required information, such as content type or content size, is always stored regardless of whether all header data is stored. To toggle the clearing of link maps before analysing a web site - From the Project Properties dialog, expand the Advanced category and select the Link Map category - Check or uncheck the Clear link information before scan See also: - Setting cookies - Setting the web page language - Specifying a User Agent - Specifying accepted content types - Using Keep-Alive
https://docs.cyotek.com/cyowcopy/current/advlinkmaps.html
2020-08-03T15:49:39
CC-MAIN-2020-34
1596439735812.88
[]
docs.cyotek.com
Detail ToolTip A detail tooltip is a popup window that appears while hovering over the master-detail expand button of a master row. The tooltip displays captions of all detail Views associated with the master row. The end-user can click a caption to open the corresponding detail View. The following table lists the main properties affecting element appearance. See Also Feedback
https://docs.devexpress.com/WindowsForms/564/controls-and-libraries/data-grid/visual-elements/master-detail-mode-related-elements/detail-tooltip
2020-08-03T15:19:13
CC-MAIN-2020-34
1596439735812.88
[array(['/WindowsForms/images/visualelems_masterdetail_detailtooltip2807.png', 'VisualElems_MasterDetail_DetailTooltip'], dtype=object) ]
docs.devexpress.com
Target frameworks NuGet uses target framework references in a variety of places to specifically identify and isolate framework-dependent components of a package: - project file: For SDK-style projects, the .csproj contains the target framework references. - . - packages.config: The targetframeworkattribute of a dependency specifies the variant of a package to install. generalized to TxM to allow a single reference to multiple frameworks. The NuGet clients support the frameworks in the table below. Equivalents are shown within brackets []. Note that some tools, such as dotnet, might use variations of canonical TFMs in some files. For example, dotnet pack uses .NETCoreApp2.0 in a .nuspec file rather than netcoreapp2.0. The various NuGet client tools handle these variations properly, but you should always use canonical TFMs when editing files directly. Deprecated frameworks The following frameworks are deprecated. Packages targeting these frameworks should migrate to the indicated replacements. Precedence A number of frameworks are related to and compatible with one another, but not necessarily equivalent: NET Standard .NET library.
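For an SDK-style project, the canonical TFM goes directly into the project file. A minimal sketch (the project is hypothetical; "netstandard2.0" and "net472" are examples of canonical TFMs):

<!-- Minimal SDK-style .csproj sketch. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Single target framework -->
    <TargetFramework>netstandard2.0</TargetFramework>
    <!-- Or multi-target instead (note the plural element name):
    <TargetFrameworks>netstandard2.0;net472</TargetFrameworks>
    -->
  </PropertyGroup>
</Project>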
https://docs.microsoft.com/en-us/nuget/reference/target-frameworks
2020-08-03T16:32:29
CC-MAIN-2020-34
1596439735812.88
[]
docs.microsoft.com
This is a setting hack that can be enabled on the "Settings" page of the Mods List. This hack enables the Lens Flare to work on PC just like the console releases of the game. Showcase Base Game Lens Flare Hack Command Line Arguments This hack is affected by certain Command Line Arguments for the Mod Launcher. Version History 1.23.4 Removed an assert when failing to create the occlusion query when using -testing. 1.23.2 Made the locking of the render targets (only used when not using Direct3D 9) read only. 1.23 - Added a description to this hack. - Made this hack work differently (using an occlusion query instead of a lockable render target) when using Direct3D 9. - Also added a -noocclusioncommand line argument to opt out of this. - Also added a -occlusionsleepcommand line argument. 1.20 - Made Lens Flares not get enqueued if their World Sphere is deactivated. - Also added the -deactivatedworldspherelensflarescommand line argument to suppress this behaviour. - This fixes an issue where deactivating a world sphere would not deactivate its lens flare. 1.19 Added this hack.
http://docs.donutteam.com/docs/lucasmodlauncher/hacks/lens-flare
2020-08-03T15:19:28
CC-MAIN-2020-34
1596439735812.88
[array(['/img/lucasmodlauncher/hacks/lens-flare/disabled.png', 'lens flare disabled'], dtype=object) array(['/img/lucasmodlauncher/hacks/lens-flare/enabled.png', 'lens flare enabled'], dtype=object) ]
docs.donutteam.com
TOPICS× Using Translation with AEM Content Fragments AEM 6.3 introduces the ability to translate Content Fragments. Mixed-media assets and Asset collections associated with a Content Fragment are also eligible to be extracted and translated. Content Fragment Translation Use Cases Content Fragments are a recognized content type that AEM will extract to be sent to an external translation service. Several use cases are supported out of the box: - A Content Fragment can be selected directly in the Assets console for language copy and translation - Content Fragments referenced on a Sites page will be copied to the appropriate language folder and extracted for translation when the Sites page is selected for language copy - Inline media assets embedded inside a content fragment are eligible to be extracted and translated. - Asset collections associated with a content fragment are eligible to be extracted and translated Translation Configuration Options The out of the box translation configuration supports several options for translating Content Fragments. By default inline media assets and associated asset collections are NOT translated. To update the translation configuration navigate to . There are four options for translating Content Fragment assets: - Do not translate (default) - Inline Media Assets only - Associated Asset Collections only - Inline Media Assets and Associated Collections
https://docs.adobe.com/content/help/en/experience-manager-learn/sites/content-fragments/content-fragments-translation-feature-video-use.html
2020-08-03T16:13:04
CC-MAIN-2020-34
1596439735812.88
[array(['/content/dam/help/experience-manager-learn.en/help/sites/content-fragments/assets/classic-ui-dialog.png', None], dtype=object) ]
docs.adobe.com
Applying Presentation Server certificate to IT Data Analytics Where to go from here Once you import the Presentation Server certificate into the IT Data Analytics truststore, you can check if you need to import the Presentation Server certificate to any other TrueSight Operations Management components. For more information, see Implementing private certificates in the TrueSight Presentation Server. If you have completed importing Presentation Server certificates into all the relevant component truststores, you can go to Implementing private certificates in TrueSight Operations Management and check the certificate implementation process for other TrueSight Operations Management components.
https://docs.bmc.com/docs/TSOperations/113/applying-presentation-server-certificate-to-it-data-analytics-843620346.html
2020-08-03T15:55:29
CC-MAIN-2020-34
1596439735812.88
[]
docs.bmc.com
The per-application directories ~/.var/app/$FLATPAK_ID and $XDG_RUNTIME_DIR/app/$FLATPAK_ID are always available to the application and are writable. When granting filesystem access beyond these, the recommendations are: - Use portals as an alternative to blanket filesystem access, wherever possible. - Use read-only access wherever possible, using the :ro option. - If some home directory access is absolutely required, use XDG directory access only. The full list of available filesystem options can be found in the Sandbox Permissions Reference. dconf access¶ As of xdg-desktop-portal 1.1.0 and glib 2.60.5 (in the runtime) you do not need direct DConf access in most cases. As of now this glib version is included in org.freedesktop.Platform//19.08 and org.gnome.Platform//3.34. If an application existed prior to these runtimes you can tell Flatpak (>= 1.3.4) to migrate the DConf settings on the host into the sandbox by adding --metadata=X-DConf=migrate-path=/org/example/foo/ to finish-args. The path must be similar to your app-id or it will not be allowed (case is ignored and _ and - are treated as equal). If you are targeting older runtimes or require direct DConf access for other reasons you can use these permissions: --filesystem=xdg-run/dconf --filesystem=~/.config/dconf:ro --talk-name=ca.desrt.dconf --env=DCONF_USER_CONFIG_DIR=.config/dconf With those permissions glib will continue using dconf directly.
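In a flatpak-builder manifest these options would appear under finish-args. A sketch assuming a YAML manifest and a made-up app ID; the individual flags are the ones listed above:

# Relevant part of a hypothetical flatpak-builder manifest.
app-id: org.example.Foo
finish-args:
  # Direct DConf access, needed only on older runtimes:
  - --filesystem=xdg-run/dconf
  - --filesystem=~/.config/dconf:ro
  - --talk-name=ca.desrt.dconf
  - --env=DCONF_USER_CONFIG_DIR=.config/dconf
  # Alternatively, on Flatpak >= 1.3.4, migrate existing host settings instead:
  - --metadata=X-DConf=migrate-path=/org/example/foo/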
https://docs.flatpak.org/en/latest/sandbox-permissions.html
2020-08-03T14:14:23
CC-MAIN-2020-34
1596439735812.88
[]
docs.flatpak.org
Common Crawling Issues Questions about crawling? Check out the top questions we've received from customers below: Does InsightAppSec Crawl SWF/Flash Files? Yes, InsightAppSec can test the client and server traffic crated by Flash. You'll need to record the traffic in the Rapid7 AppSec Traffic Recorder or through a 3rd-party tool. InsightAppSec supports the following formats: - AppSec toolkit Traffic Files (*.trec) - Burp Files (*.xml) - Paros Files (*.txt) - WebScarab Files (conversationlog) - HAR (HTTP Archive) Files (*.har) - Fiddler Files (*.saz) You can upload a traffic file to InsightAppSec by going to Scan Scope > Macro, Traffic & Selenium > Add Traffic File(s). How Can I Increase Scan Speed? In many cases, InsightAppSec will scan duplicate content to avoid missing anything. In some cases, this can create scans that are long. You can modify the following settings in the Advanced Options screen How do I record specific sequences and execute them in the same order? Let's say that you want to record an exact sequence of events, such as filling out the shipping page during the checkout process, and attack that sequence. You can use the Use Sequence option that is available in the Macros section to record and replay events in sequential order. The Use? InsightAppSec has extensive capabilities to allow users to crawl certain parts of an application and not others. You can use whitelists and blacklists to control the parts of the application that you scan. This includes capabilities with wildcards and regular expressions. You can also use Macro, Traffic, and Selenium files to restrict your scan to a specific set of actions on your application. You can read more about this capability in the Scan Scope documentation. of is the difference between the acme.com/* literal match and acme.com/* wildcard? * is a wildcard but it only works as a wildcard if the match type is wildcard, otherwise it is considered. InsightAppSec monitors server responsiveness and slows down or increases its speeds if it looks like InsightAppSec is DOSsing the site. You can manually change InsightAppSec concurrent threads and delays between requests in the GUI while a scan is running to slow InsightAppSec down. Sometimes, sections of a website, such as database queries, can be slower than other sections of a website which will be more CPU intensive for the server hosting the website. So InsightAppSec may be able to scan one section of a site with no problems, but then encounter an area of the site that is slower and causes more failed requests. Will InsightAppSec test pages that are linked to from the website being tested? No, it will not. InsightAppSec follows the guidelines set in the Crawl Restrictions and will not test any pages that you do not configure it to test.
https://docs.rapid7.com/insightappsec/common-crawling-issues/
2020-08-03T15:37:46
CC-MAIN-2020-34
1596439735812.88
[]
docs.rapid7.com
Use this guide along with the Data Tab Configuration guide to configure a Other LDAP-integrated SecureAuth IdP realm. 1. Have an on-premises Other LDAP data store 2. A service account with read access (and optional write access) for SecureAuth IdP 1. In the Membership Connection Settings, select Other LDAP
https://docs.secureauth.com/plugins/viewsource/viewpagesrc.action?pageId=27755312
2020-08-03T16:03:30
CC-MAIN-2020-34
1596439735812.88
[]
docs.secureauth.com
This section is your key to getting started with Unity. It explains the Unity interface, menu items, using assets, creating scenes (a Scene contains the environments and menus of your game; think of each unique Scene file as a unique level in which you place your environments, obstacles, and decorations, essentially designing and building your game in pieces), and publishing builds. When you are finished reading this section, you will understand how Unity works, how to use it effectively, and the steps to put a basic game together. You can install the Unity Editor in the following ways: 2018–06–12 Page amended
https://docs.unity3d.com/2018.4/Documentation/Manual/UnityBasics.html
2020-08-03T16:11:14
CC-MAIN-2020-34
1596439735812.88
[]
docs.unity3d.com
GetSocial Webhooks¶ GetSocial webhooks allow you to receive Smart Invites and Smart Links events in real time, which can be used for attribution. We have 2 events: app_install and referral_data_received. Introduction¶ Webhooks are delivered as HTTP post requests where the request body contains the JSON encoded event. We expect your webhook endpoint to respond with a 200 OK status within 5 seconds. Any other situation is considered to be a failure. When a failure is encountered, we stop sending requests and retry again after a short delay. This delay starts with 1 second and increases gradually up to 1 minute. At that point, we keep retrying every 1 minute. We might stop retrying permanently if your endpoint remains unavailable for an extended period of time (days, typically). Security¶ Webhook requests made by GetSocial servers include a header, X-Getsocial-Signature. You can (and should) use this header to verify the origin and integrity of the payload. The signature is the SHA256 hash of the payload, in lowercase hexits, using the Shared Secret in your GetSocial Dashboard as the key. See Sample Request and Validation below for an example. If you are limiting the calls from external IP addresses to your backend, you have to whitelist the following IP addresses: 54.156.214.206/32 34.227.235.196/32 Event Details¶ App Install Event¶ This event is triggered when a user installs the app as a result of a link click. Referral Data Received Event¶ This event is triggered when the referral data is successfully retrieved on the SDK side. Learn how to receive Referral Data. Register Webhook URL¶ To register a webhook URL: - Login to your account on the GetSocial Dashboard. Go to Data Export section → Webhooks tab. Enter your Webhook URL and click the save icon in the lower right corner. - (Optional) For testing purposes you can use hookbin.com to create a temporary URL which you can use as a webhook. - (Optional) You can use Shared Secret to validate webhook requests. Example is below. Sample Request and Validation¶ Consider the following app_install event. Request HTTP headers: Request Body: Note that the request body is sent as a single line. For demonstration purposes, see a formatted version below: We can verify and parse this with the following code (use the language selection on the sidebar). If you are having trouble validating or parsing the request, please feel free to contact us. We would be happy to help.
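A sketch of the signature check described above, assuming the intended scheme is an HMAC-SHA256 of the raw request body keyed with the Dashboard's Shared Secret and hex-encoded in lowercase (the class and variable names are placeholders, and the surrounding web framework is left out):

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Verifies the X-Getsocial-Signature header against the raw request body.
// Assumes HMAC-SHA256 with the Shared Secret as key, hex-encoded in lowercase.
public final class GetSocialWebhookVerifier {

    public static boolean isValid(String rawBody, String signatureHeader, String sharedSecret)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] digest = mac.doFinal(rawBody.getBytes(StandardCharsets.UTF_8));

        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff)); // lowercase hexits
        }
        // A constant-time comparison (e.g., MessageDigest.isEqual) is preferable in production.
        return hex.toString().equals(signatureHeader);
    }
}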
https://docs.getsocial.im/guides/exporting-data/webhooks/
2018-06-18T01:27:55
CC-MAIN-2018-26
1529267859923.59
[]
docs.getsocial.im
Recognize Unicode escape sequences in string literals .UNICODE . . . [.NOUNICODE] Discussion The .UNICODE preprocessor directives provide a way to create strings with embedded Unicode (UTF‑16) characters by enabling recognition of the Unicode escape sequences in string literals for the rest of the source file. .NOUNICODE turns .UNICODE off. The presence or absence of .UNICODE determines whether the escape sequences are recognized or are interpreted literally, respectively. If .UNICODE is active, a string literal that contains a “\u” followed by a four‑digit hexadecimal number inserts the UTF‑16 character into the string, and the string itself is made up of UTF‑16 characters. For example, the following Unicode string literal creates a 16‑bit Unicode character (UTF‑16) called the double‑prime (") whose value is 2033 hex: strvar = "This is a Unicode \u2033 string literal" If .UNICODE is not active, “\u” is not recognized as an escape sequence, and the string literal is created verbatim, including the “\u” and any digits that follow.
http://docs.synergyde.com/lrm/lrmChap5UNICODE-.NOUNICODE.htm
2018-06-18T02:11:30
CC-MAIN-2018-26
1529267859923.59
[]
docs.synergyde.com
Whenever changes to system records are made, the system can be configured so that a reason for the change has to be entered or selected from a list of reasons. This section of the Configuration Module manual explains how to create the predefined reasons that can be selected. To enforce the use of these reasons, when making changes, is done using the Audit tag options that are available when selecting Security from the Setup, Server Options dropdown menu. To view all the system Reasons, proceed as follows: To add a new Reason to the list, proceed as follows: To change an existing Reason in the list, proceed as follows: To remove an existing Reason from the list, proceed as follows: Permalink: Viewing Details:
http://docs.tnasoftware.com/Configuration_Module/System_Reasons
2018-06-18T02:05:02
CC-MAIN-2018-26
1529267859923.59
[]
docs.tnasoftware.com
Introduction to Exporting Data¶ We believe that it is extremely valuable for you to be able to combine your existing analytics with GetSocial insights. Therefore we provide different export options for different use cases. Check which one makes sense for you and please contact us if you need extra help. - Webhooks: receive Smart Invites and Smart Links events in real time, server to server, to any endpoint you choose. - Integrations: we integrate with other analytics services like Adjust or AppsFlyer, so you can see the power of GetSocial everywhere. - Export Users: export user details and their social behavior in CSV to merge into your backend or any other services. - Export Events: export raw events in CSV to merge into your backend or any other services.
https://docs.getsocial.im/guides/exporting-data/introduction/
2018-06-18T01:23:57
CC-MAIN-2018-26
1529267859923.59
[]
docs.getsocial.im
Create a service connection point with System Center Configuration Manager and Microsoft Intune Applies to: System Center Configuration Manager (Current Branch) When you have created your subscription, you can then install the service connection point site system role that lets you connect to the Intune service. This site system role will push settings and applications to the Intune service. The service connection point sends settings and software deployment information to Configuration Manager and retrieves status and inventory messages from mobile devices. The Configuration Manager service acts as a gateway that communicates with mobile devices and stores settings. Note The service connection point site system role may only be installed on a central administration site or stand-alone primary site. The service connection point must have Internet access. Configure the service connection point role In the Configuration Manager console, click Administration. In the Administration workspace, expand Site Configuration, then click Servers and Site System Roles. Add the Service connection point service connection point role. Then, on the Home tab, in the Server group, click Add Site System Roles to start the Add Site system Roles Wizard. On the System Role Selection page, select Service connection point, and click Next. - Complete the wizard. How does the service connection point authenticate with the Microsoft Intune service? The service connection point extends Configuration Manager by establishing a connection to the cloud-based Intune service that manages mobile devices over the Internet. The service connection point authenticates with the Intune service as follows: When you create an Intune subscription in the Configuration Manager console, the Configuration Manager admin is authenticated by connecting to Azure Active Directory, which redirects to the respective ADFS server to prompt for user name and password. Then, Intune issues a certificate to the tenant. The certificate from step 1 is installed on the service connection point site role and is used to authenticate and authorize all further communication with the Microsoft Intune service.
https://docs.microsoft.com/en-us/sccm/mdm/deploy-use/create-service-connection-point
2018-06-18T01:51:02
CC-MAIN-2018-26
1529267859923.59
[]
docs.microsoft.com
This section describes the setup and configuration steps needed to allow an HP StorageWorks MSA1510i storage system to communicate with ESXi hosts. Procedure - When prompted, enter the default access permissions: User name: root, Password: root - When prompted, set a unique user name and password. - Using the wizard, complete the following actions. - Click Finish to apply the configuration settings. Results Wizards are available for basic configuration tasks only. Use the Manage and Configure tabs to view and change your configuration. What to do next
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-C9890E33-1E45-4104-A401-AF3712D5F09D.html
2018-06-18T01:48:17
CC-MAIN-2018-26
1529267859923.59
[]
docs.vmware.com
The Graylog index model explained¶ Overview¶ Graylog is transparently managing a set of indices to optimize search and analysis operations for speed and low resource utilisation. The system is maintaining an index alias called graylog_deflector which is always pointing to the current write-active index. There is always exactly one index to which new messages are written until the configured rotation criterion (number of documents, index size, or index age) has been reached. A background task checks regularly if the rotation criterion has been reached and a new index is created and prepared when that happens. Once the index is considered to be ready to be written to, the graylog_deflector index alias is atomically switched to the it. That means that all writing nodes can always write to the deflector alias without even knowing what the currently write-active index is. Almost every read operation is performed with a given time range. Because Graylog is only writing sequentially it can keep a cached collection of information about which index starts at what point in time. It selects a lists of indices to query when having a time range provided. If no time range is provided it will search in all indices it knows. Eviction of indices and messages¶ There’s a configuration setting for the maximum number of indices Graylog is managing. Depending on the configured retention strategy, the oldest indices will automatically closed, deleted, or exported when the maximum number of indices has been reached. The deletion is performed by the Graylog master node in a background thread that is continuously comparing the number of indices with the configured maximum. The following index rotation settings are available: - Message count: Rotates the index after a specific number of messages have been written. - Index size: Rotates the index after an approximate size (before optimization) has been reached. - Index time: Rotates the index after a specific time (e. g. 1 hour or 1 week). The following index retention settings are available: - Delete: Delete indices in Elasticsearch to minimize resource consumption. - Close: Close indices in Elasticsearch to reduce resource consumption. - Do nothing - Archive: Commercial feature, see Archiving. Keeping the metadata in synchronisation¶ Graylog will notify you when the stored metadata about index time ranges has run out of sync. This can for example happen when you delete indices by hand or delete messages from already “closed” indices. The system will offer you to just re-generate all time range information. This may take a few seconds but is an easy task for Graylog. You can easily re-build the information yourself after manually deleting indices or doing other changes that might cause synchronisation problems: $ curl -XPOST This will trigger a systemjob: INFO : org.graylog2.system.jobs.SystemJobManager - Submitted SystemJob <ef7057c0-5ae3-11e3-b935-4c8d79f2b596> [org.graylog2.indexer.ranges.RebuildIndexRangesJob] INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Re-calculating index ranges. INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Calculated range of [graylog2_56] in [640ms]. INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Calculated range of [graylog2_18] in [66ms]. ... INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Done calculating index ranges for 88 indices. Took 4744ms. 
INFO : org.graylog2.system.jobs.SystemJobManager - SystemJob <ef7057c0-5ae3-11e3-b935-4c8d79f2b596> [org.graylog2.indexer.ranges.RebuildIndexRangesJob] finished in 4758ms. Manually cycling the deflector¶ Sometimes you might want to cycle the deflector manually and not wait until the configured rotation criterion for in the latest index has been reached. You can do this either via an HTTP request against the REST API of the Graylog master node or via the web interface: $ curl -XPOST This triggers the following log output: INFO : org.graylog2.rest.resources.system.DeflectorResource - Cycling deflector. Reason: REST request. INFO : org.graylog2.indexer.Deflector - Cycling deflector to next index now. INFO : org.graylog2.indexer.Deflector - Cycling from <graylog2_90> to <graylog2_91> INFO : org.graylog2.indexer.Deflector - Creating index target <graylog2_91>... INFO : org.graylog2.indexer.Deflector - Done! INFO : org.graylog2.indexer.Deflector - Pointing deflector to new target index.... INFO : org.graylog2.indexer.Deflector - Flushing old index <graylog2_90>. INFO : org.graylog2.indexer.Deflector - Setting old index <graylog2_90> to read-only. INFO : org.graylog2.system.jobs.SystemJobManager - Submitted SystemJob <a05e0d60-5c34-11e3-8df7-4c8d79f2b596> [org.graylog2.indexer.indices.jobs.OptimizeIndexJob] INFO : org.graylog2.indexer.Deflector - Done! INFO : org.graylog2.indexer.indices.jobs.OptimizeIndexJob - Optimizing index <graylog2_90>. INFO : org.graylog2.system.jobs.SystemJobManager - SystemJob <a05e0d60-5c34-11e3-8df7-4c8d79f2b596> [org.graylog2.indexer.indices.jobs.OptimizeIndexJob] finished in 334ms.
http://docs.graylog.org/en/2.1/pages/index_model.html
2018-06-18T02:21:35
CC-MAIN-2018-26
1529267859923.59
[array(['../_images/index_model_writes.png', '../_images/index_model_writes.png'], dtype=object) array(['../_images/index_model_reads.png', '../_images/index_model_reads.png'], dtype=object) array(['../_images/index_settings.png', '../_images/index_settings.png'], dtype=object) array(['../_images/recalculate_index_ranges_2016.png', '../_images/recalculate_index_ranges_2016.png'], dtype=object)]
docs.graylog.org
Use basic authentication credentials for a MID Server You can enforce basic authentication on each request. Before you begin Role required: admin. The MID Server is not able to communicate through a proxy server if the proxy server supports only NTLM authentication. You can use basic authentication with a proxy server or create an exception for the MID server host. About this task Set basic authentication credentials for SOAP requests. Each SOAP request contains an Authorization header as specified in the Basic Authentication protocol. Note: The setting for enforcing strict security controls how ServiceNow handles these requests. Procedure Navigate to System Properties > Web Services. Select the check box for Require basic authorization for incoming SOAP requests. Click Save. To provide basic authentication credentials for a MID Server, navigate to C:\Program Files\ServiceNow\<MID Server name>\agent and edit the config.xml file.
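As an illustration, the credentials typically live in config.xml as parameter elements. The parameter names below (mid.instance.username and mid.instance.password) are the ones commonly used by the MID Server to authenticate its requests to the instance, but treat them and the surrounding layout as an assumption and confirm them against your own config.xml:

<!-- Excerpt of a hypothetical agent/config.xml; only the connection parameters are shown. -->
<parameters>
  <parameter name="url" value="https://yourinstance.service-now.com/"/>
  <parameter name="mid.instance.username" value="mid.server.user"/>
  <parameter name="mid.instance.password" value="your-password-here"/>
</parameters>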
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/product/mid-server/task/t_UseBasicAuthCredForMIDSvr.html
2018-06-18T02:13:24
CC-MAIN-2018-26
1529267859923.59
[]
docs.servicenow.com
Mitaka Series Release Notes¶ 1.0.0¶ New Features¶ - We no longer support soft-delete in senlin database. Marking an entity as soft-deleted is causing more problems than bringing any convenience. - Action list now can be filtered by its ‘status’ property. - Added support of multi-tenancy for actions. - Added senlin.policy.affinity-v1.0 which can be used to control how VM servers are placed based on nova servergroup settings. - New actions for checking and recovering nodes/clusters are added. - Added new APIs for cluster/node check and recover. - Added support to limit number of clusters per project. - Clusters now have a new ‘RESIZING’ status when its scale is being changed. - The built-in deletion policy can handle cross-region and cross-zone nodes selection. - Supporting engine status check, with senlin-manage command. - Event list can now be filtered by its ‘level’ property. - Added support to multi-tenancy for event resources. - Profile types and policy types are explicitly versioned now. We have the version string appended to the type names for easier detection. - New health check daemon is introduced to do periodical cluster status checking. It collaborates with health policy on cluster monitoring. - Improved action scheduler so that it can pick an action that is READY to be executed from DB. - Added LBaaS health monitor support to load-balancing policy v1.0. - Added support to steal a lock from dead engine. - Added support to multi-tenancy (aka. project_safe checking) when finding resources. - Use ‘sort’ instead of ‘sort_keys’ and ‘sort_dir’ for object sorting. - Nova server profile now supports block device mapping v2 (BDMv2). - Enabled update to the ‘flavor’ of a nova server profile. - Enabled update to the ‘name’ of a nova server profile. - Added profile property checking regarding whether they are updatable. - Support to ‘senlin-manage purge_deleted <age> [<unit>]’ is added. - New abstraction ‘receiver’ has been added as a generic way to notify the senlin engine that something interesting has happened. - An experimental policy for placing nodes across multiple regions. - Removed support to ‘trigger’ abstraction. - Make sure ‘spec’ of a profile is immutable after a profile object is created. The only way to “update” a profile is to create a new one. - Added command ‘senlin-manage service list’ to show the status of engine. - Added command ‘senlin-manage service clean’ to clean the dead engine records. - Add support to update image property of a Nova server. - Added support to updating network properties of a nova server. - Both image ID and image name are supported when creating os.nova.server profile. Upgrade Notes¶ - Database tables have to be recreated as we have removed soft-delete support from both the DB layer and the engine layer. - Senlin API has removed ‘tenant_id’ from its endpoint. This means users have to recreate their keystone endpoints if they have an old installation. - Webhook abstraction is removed. New usage model of webhooks is through the ‘receiver’ abstraction. - Node actions NODE_JOIN and NODE_LEAVE are removed from API surface. - Removed cluster policy enable/disable support. We will use more generic interface cluster policy update for these use cases. - Removed permission property from profiles. We will devise an RBAC mechanism as an alternative. - Status DELETED is removed from clusters and nodes. - Timestamp fields like ‘created_time’ and ‘udpated_time’ are renamed to ‘created_at’ and ‘updated_at’ respectively. 
- As a side-effect of the rework of action dependency, a new table has been added to the database. - Senlin binaries are all made as console script entries. Bug Fixes¶ - When referenced objects are not found in an API request, 400 is returned now. - Reworked action status check so that a cluster action will always return from WAITING status. - Fixed profile type checking error when attaching affinity policy. - Fixed parsing of default values for ‘max_size’ and ‘min_size’ properties of a cluster. - Fixed race condition in service deletion. - Fixed APIs that spawn asynchronous operations to return 202 as status code. - Fixed a bug related to setting the next_index property of a cluster after new nodes have joined or existing nodes have left. - Fixed cluster-list function so that ‘global-project’ can be specified. - Removed useless parameters (‘user’, ‘project’, etc.) from filtering when listing clusters. - Added parameter sanitization for cluster-policy-attach. - Fixed RC/role checking in the setup-service script. - Enforce multi-tenancy checking when a non-admin user attempting to list resources from projects other than that of the requesting user. - Fixed a bug related to using the ‘name’ property of a nova server profile. - Fixed parameter checking when listing resources, such as sort and filters. - Added parameter checking for policy-create API calls. - Added parameter checking for cluster-policy-detach API invocation. - Added parameter checking for cluster-policy-update API invocation. - Reworked action dependency to avoid indefinite waiting problem. - Fixed trust usage when interacting with keystone. This enables senlin to be deployed on a Juno version OpenStack. Other Notes¶ - DB isolation level defaults to READ_COMMITTED in order to solve concurrency problems encountered in action dependency checking. - Added documentation for senlin.policy.deletion-v1.0. - Added configuration option for enforcing name uniqueness. - Added documentation for lb policy, affinity policy, scaling policy, zone placement policy and region placement policy. - Senlin API documentation merged into api-site and published. - Added user documentation for ‘receiver’. - Added developer documentation for ‘receiver’. - Removed documentation for ‘webhook’. - The property ‘priority’ and ‘level’ are removed from policy create/update. - Command senlin-manage purge_deleted is removed. - Ensure there are no underscores (‘_’) in resource names exposed through RESTful API - User documentation for events and actions have been added. - User documentation (including developer docs) are published on official site.
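Several of the notes above refer to properties of the os.nova.server profile (name, flavor, image, networks, block device mapping). A minimal illustrative spec file in the usual Senlin YAML format is shown below; the property values are placeholders and the exact schema should be checked against the Senlin documentation for your release:
type: os.nova.server
version: 1.0
properties:
  name: web-server
  flavor: m1.small
  image: cirros-0.3.4-x86_64
  networks:
    - network: private
Because a profile's spec is immutable after creation (see the notes above), changing any of these values means creating a new profile rather than updating the existing one.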
https://docs.openstack.org/releasenotes/senlin/mitaka.html
2017-02-19T21:07:22
CC-MAIN-2017-09
1487501170253.67
[]
docs.openstack.org
Once a docket template has been set up in the dashboard, the below outlines how this then translates and is used within the FieldWorker App. Workers will log in to the app and select Allocations from the main menu Click on the specific job card that the docket will be for Inside the job card, will be an option for dockets. Click on the docket option to view the docket template(s) that have been added on a client level. To fill out the docket, the worker simply needs to click on the docket and fill out the details. Things to note: If a worker is allocated to a "supervisor" task (supervisor box has been ticked within that task), upon clicking on the docket, it will show other workers listed on the order. To select all workers, click "All" found in the top right hand corner. To be more selective, click on each check box next to the workers on the far right. It will also pre populate worker times based on allocation times in the docket on the next screen. Once all applicable workers are selected, click Next found at the bottom of the screen. The next screen will then show the pre-populated details based off the order and workers can continue filling out the docket. If a worker is not allocated to a "supervisor" task, details will manually be entered in for other workers on the same order by clicking on the Add button beneath the worker group of questions. Once the docket has been completed, hit Get it Signed. By clicking on this, all question fields will be locked/greyed out and a signature name and pad will appear for the client to fill out and sign. If amendments need to be made, click cancel signing and the question fields will unlock and can be amended from there. 2. To submit: Click on send & submit to submit the completed docket. A confirmation message will appear; by clicking on submit this will: Send the digital docket directly to the dashboard If a docket contact has been added, a copy will also be sent directly to that email address. The app will then take you back to the the job card the docket was for. 3. Creating time sheets from a docket. Once a docket has been submitted, workers can then use this to create a time sheet. Workers can then input the docket details to match or use the data pre-populated from the job card. All completed and submitted dockets can be found in the job card, under completed dockets Workers can View docket details (all fields are locked and cannot be edited from the app once signed) Create a time sheet based on details from this docket or keep data from job card. Depending on your workers method of submitting time sheets, the button will either say check out or create time sheet No matter the method, the button will always appear in the bottom right corner of the docket. Once selected, the normal time sheet screen will appear. Details will populate per the job card as usually however the digital docket will automatically attach to the time sheet. Workers are still able to take photos of any other dockets and upload as well by clicking on the Upload docket below the attached docket. Once happy with the time sheet, workers can then hit add to submit into the dashboard.
https://docs.assignar.com/en/articles/2088226-implementing-digital-dockets-fieldworker-app
2021-07-24T08:50:44
CC-MAIN-2021-31
1627046150134.86
[array(['https://downloads.intercomcdn.com/i/o/66182964/7ed2761ef6cf17445bd84cce/screenshot-mobile.assignar.com-2018.07.04-14-54-38.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66183055/4198d6ceef9b1eda454838c3/screenshot-mobile.assignar.com-2018.07.04-14-56-19.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66183327/8b3a3bd398265e2d4f7cac22/screenshot-mobile.assignar.com-2018.07.04-14-50-42.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66238696/245f0c78c3852c6ac7f7300f/screenshot-mobile.assignar.com-2018.07.04-21-23-47.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66239551/6abf9acdbb8860424c367c7d/screenshot-mobile.assignar.com-2018.07.04-21-31-30.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66239267/571ca97acad9418c0b7a9e68/screenshot-mobile.assignar.com-2018.07.04-21-39-35.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66239203/4d05516882f22a0248f82c2a/screenshot-mobile.assignar.com-2018.07.04-21-40-08.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66239235/402c8822b766da908dd8ad94/screenshot-mobile.assignar.com-2018.07.04-21-41-15.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66241859/850e98af256af926ff779e67/Screen+Shot+2018-07-04+at+9.51.30+pm.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66241945/ece98c99b082770dee2e4b4c/Screen+Shot+2018-07-04+at+10.06.16+pm.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66242271/8cb2f10973e1028cb67c7fb7/screenshot-mobile.assignar.com-2018.07.04-22-08-07.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66243232/fe32424d9ab45ce640fd970c/screenshot-mobile.assignar.com-2018.07.04-22-12-19+%281%29.png', None], dtype=object) ]
docs.assignar.com
1 Introduction
This how-to explains how you can configure a list of items and view the details of an item selected in this list. This how-to will teach you how to do the following: - Create a new page - Configure a list view - Configure a data view that shows the details of an item selected in the list view
The how-to describes the following use case: Sales Representatives in your company would like to view a list of opportunity contacts – potential customers. When Sales Representatives click a row in this list, the details of the corresponding opportunity contact are displayed next to the list. Make sure your domain model is configured the following way:
3 Adding the Master Detail Page
You would like to open a page with the opportunity contact list and its details from your home page. Do the following: Open your home page and navigate to the Toolbox > Widgets. Search for the Open Page button and drag and drop it to the page. Open the button properties and follow the steps below: Set Page as an on-click action and click the Page property. In the Select Page dialog box, click New Page. In the Create new page dialog box, fill in the page title. Select the page template by clicking Master Detail in the side bar and choose Master Detail: Click Create. The page is created. In the responsive (Desktop) view, a list is displayed on the left and list item details are displayed on the right:
4 Configuring the List
The page is created, now you need to configure it. First of all, you need to connect data to the list. Do the following: Select the list view and click the Entity option in its properties: In the Select Entity dialog box, select OpportunityContact and confirm your choice by clicking Select. Now the list is connected to the OpportunityContact entity. To display the name of each opportunity contact, do the following: Select the Name text in the list view and open the Properties tab. In the Content property, delete Name and click Add attribute: In the Select Attribute dialog box, choose Name and click Select. Delete the image from the list and the column where this image is placed, as the image displays a user image that does not correspond to the opportunity contacts you are displaying. As the goal of the new page is to merely display data, delete the New button above the list view together with the container it is placed in: Now the list view will display the list of opportunity contacts by their name:
5 Configuring Contact Details
Now you need to configure the opportunity contact details displayed next to the list. The idea is that when you select a name from the list, the details of the selected contact will be displayed. The Master Detail page template which your page is based on has a preconfigured data view that listens to the list view. That means that the data view shows data of the opportunity contact selected in the list view. Now you need to configure widgets inside the data view to show all the details that an opportunity contact has, such as title, name, job title, phone, and email. To display all the details that a contact has, do the following: Delete the empty column and the Edit, Send Email, and Delete buttons inside the data view, as you will only display data, not change it: Double-click the User Details text widget (which is displayed as a data view heading) and rename it to Opportunity Contact Details. Open the Toolbox and search for Radio Buttons, drag and drop it inside the data view above the Name text box.
Open the radio buttons properties and click Data Source > Attribute. In the Select Attribute dialog box, choose Title and click Select: Select the Name text box and click Data Source > Attribute in its properties. In the Select Attribute dialog box, choose Name and click Select. Repeat steps 6 and 7 to set the Phone attribute for the Phonenumber text box, the Email attribute for the Email text box, DateCreated for the Birthday text box, and EstimatedValue for the Bio text box. You lack information on the contact’s job title and status. To add the job title information, open the Toolbox, search for a Text Box, drag and drop it inside the data view below the Name text box: Open the text box properties and click Data Source > Attribute. In the Select Attribute dialog box, choose JobTitle and click Select. To add the information on the opportunity contact’s status, open the Toolbox, search for Radio Buttons, drag and drop it inside the data view below the Estimated Value text box. Open the radio buttons properties and click Data Source > Attribute. In the Select Attribute dialog box, choose Status and click Select. Congratulations! You have a page that displays a list of opportunity contacts and the details of the selected contact: You can now preview your app and test your page. For more information on how to preview your page, see Previewing & Publishing Your App. You can also work on the page details, for example, add a dynamic image to the list to display a profile picture of an opportunity contact next to their name. For more information on dynamic images, see Images & Files.
https://docs.mendix.com/studio-how-to8/pages-how-to-configure-list
2021-07-24T08:04:57
CC-MAIN-2021-31
1627046150134.86
[array(['attachments/pages-how-to-configure-list/configured-page.png', None], dtype=object) array(['attachments/pages-how-to-configure-list/master-details.png', None], dtype=object) array(['attachments/pages-how-to-configure-list/list-configured.png', 'Configured List'], dtype=object) array(['attachments/pages-how-to-configure-list/configured-page.png', 'Configured Page'], dtype=object) ]
docs.mendix.com
Export Lists All data in Performance Analytics can be exported in a number of formats via the download feature. In this document we’ll look at the general process of exporting Performance Analytics data, then we’ll examine the specific example of exporting a list of users that have directly opened a notification or rich message. In this tutorial, you will first learn how to export report data, then follow steps to export specific types of report data. Export Data In this section we look at the general process of exporting Performance Analytics data. - Go to Reports » Performance Analytics. - Select a dashboard and find the report you would like to download. - If you are on a default dashboard, click the ellipsis icon and select Download Data…. If you clicked the tile to open the report or opened the report from a Space, instead click Download Results. - Specify Download Options: - File Format: The format of the download: TXT, Excel Spreadsheet, CSV, JSON, HTML, Markdown, or PNG (Image of Visualization). - Results: With visualization options applied or As displayed in the data table. - Download and save the file. If you’d like to view the data in the browser instead, click Open in Browser, and it will open in a new browser tab. Notifications Generate a list of users that directly opened a specific push notification. - Go to Reports » Performance Analytics. - Select the Revenue dashboard. - In the Messages report, locate the row for the message you want to analyze, then click its value in the Direct Response Count column. - Download the data as described in Export Data. Rich Pages Generate a list of users that directly opened a specific rich message. Go: This configuration is necessary to ensure you capture rich message reads occurring in every time zone.. Go to Reports » Performance Analytics. Select the Revenue dashboard. In the Custom Events Table report, locate the purchased row and click its value in the Event Count column. Click Explore from Here. Make these changes in the left side menu: - In Custom Events » Dimensions, click the row for Platform. - In User IDs » Dimensions, click the row for Device ID to unselect it, and click the row for Ad ID. - Click Filter for Limit Ad Tracking Enabled to add the row to the Filters menu, then set the value of is equal to to “No.” Download the data as described in Export Data. (OptionaL) Create a custom audience on Facebook. Follow Facebook’s instructions to create a Custom Audience or Lookalike Audience using the downloaded file. File must be in TXT or CSV format. Export Audience Lists You can segment your audience using an Uploaded ListA reusable audience list that you create. Uploaded lists are static and updatable. In the API, they are referred to as static lists. . First, follow these steps to download a CSV of device identifiers. Second, upload the list of users to your project, using the file you just downloaded. This example downloads all your Daily Active Users, - Go to Reports » Performance Analytics. - Select the Overview dashboard. - In the Daily Active Users report, click its value. - Click Explore from Here. - Download the data as described in Export Data, using these values: - File Format: CSV - Values: Unformatted - Limit: All Results Categories
https://docs.airship.com/guides/messaging/user-guide/data/analytics/export/
2021-07-24T08:44:58
CC-MAIN-2021-31
1627046150134.86
[]
docs.airship.com
Delete the Application when you are sure you no longer need it. Clicking on Delete Application will not delete your application if you have workflows in the application. If your Application contains workflows in the Workflow Editor, then when you click on Delete Application, you will see the following prompt. Click on View Workflows to view and delete your workflows in the application. To delete the workflows in your application, you must first delete all the pipelines (CD Pipeline, CI Pipeline, Linked CI Pipeline, or External CI Pipeline, if there are any). After you have deleted all the pipelines in the workflow, you can delete that particular workflow. Similarly, delete all the workflows in the application. Now, click on Delete Application to delete the application.
https://docs.devtron.ai/user-guide/deleting-application
2021-07-24T07:49:12
CC-MAIN-2021-31
1627046150134.86
[]
docs.devtron.ai
Getting Started
Can I Schedule a Demo? Yes, we love demos! We have great pre-recorded videos here. Additionally, for any specific questions we have docs on all HelpDesk+ features.
Connecting your Jira and Slack accounts: Learn how we connect to existing accounts and create and manage JSD customer accounts with HelpDesk+.
Understand user types: Understanding users and permissions.
Create a new request: Creating a support ticket from Slack is easy and we provide a number of ways to do this.
Add to channels: Add/remove your bot from public and private channels.
https://docs.helpdeskplus.ai/getting-started
2021-07-24T06:54:53
CC-MAIN-2021-31
1627046150134.86
[array(['https://files.helpdocs.io/so6zv82ryw/other/1615308187245/avatar-grady.jpeg', 'Brian Mohr'], dtype=object) array(['https://files.helpdocs.io/so6zv82ryw/other/1615308187245/avatar-grady.jpeg', 'Brian Mohr'], dtype=object) array(['https://files.helpdocs.io/so6zv82ryw/other/1615308187245/avatar-grady.jpeg', 'Brian Mohr'], dtype=object) array(['https://files.helpdocs.io/so6zv82ryw/other/1615308187245/avatar-grady.jpeg', 'Brian Mohr'], dtype=object) array(['https://files.helpdocs.io/so6zv82ryw/other/1615308187245/avatar-grady.jpeg', 'Brian Mohr'], dtype=object) ]
docs.helpdeskplus.ai
Interface EventContextElement
- All Known Subinterfaces: MappingKey<PBM,M>
public interface EventContextElement
An atomic element of context when an event occurs: a mapped type, or an index, or a field path, ...
Method Detail
render
String render()
- Returns: A human-readable representation of this context. The representation should use brief, natural language to refer to objects rather than class names, e.g. "index 'myIndexName'" rather than "ElasticsearchIndexManager{name = 'myIndexName'}". The representation may change without prior notice in new versions of Hibernate Search: callers should not try to parse it.
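To make the contract concrete, here is a minimal, illustrative implementation of the interface. The class and field names are invented for the example; only the package and the render() signature come from the Javadoc above:
import org.hibernate.search.util.common.reporting.EventContextElement;

// Hypothetical context element that identifies an index by name.
public final class IndexNameContextElement implements EventContextElement {

    private final String indexName;

    public IndexNameContextElement(String indexName) {
        this.indexName = indexName;
    }

    @Override
    public String render() {
        // Brief, natural-language representation, as recommended by the contract;
        // callers should treat it as opaque text and not parse it.
        return "index '" + indexName + "'";
    }
}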
https://docs.jboss.org/hibernate/search/6.0/api/org/hibernate/search/util/common/reporting/EventContextElement.html
2021-07-24T07:42:21
CC-MAIN-2021-31
1627046150134.86
[]
docs.jboss.org
To interact with external systems (such as SCADA), the server implements its data access features via OPC UA. OPC UA is a modern technology for communication in the fields of industrial automation, the Internet of Things (IoT), and more. The technology provides encryption of transmitted data as well as authentication and authorization of users. The available features and OPC UA settings are described below.
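As an illustration of what an external consumer looks like, the sketch below uses the open-source python-opcua client library. The endpoint URL, credentials, and node identifier are placeholders, not values documented for Monokot Server:
from opcua import Client  # pip install opcua

client = Client("opc.tcp://monokot-host.example.com:4840")  # assumed endpoint
client.set_user("operator")        # only if the server requires authentication
client.set_password("secret")
client.connect()
try:
    node = client.get_node("ns=2;s=Tags.Temperature")  # assumed node id
    print("Current value:", node.get_value())
finally:
    client.disconnect()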
https://docs.monokot.io/hc/en-us/articles/360034374312-Overview
2021-07-24T07:17:20
CC-MAIN-2021-31
1627046150134.86
[]
docs.monokot.io
Store Pages In this article: What can I use Pages for? How to edit Pages on the Navigation module Re-order Pages on the Navigation module Add a Page to the Navigation module Remove a Page from the Navigation module What are Pages? The Store Pages add an additional page or section to your store, separate from your product storefront. This feature allows you to add more information and depth to your store. These can be used to tell customers more about yourself and your business. Plus, it's a convenient way for customers to reach out to you, such as through the Contact page. These pages are shown in your Navigation module, or store menu, next to the store logo. What can I use Pages for? We offer a Contact Page to help get you started with your store. But, you can always add any customized Page that suits your store and brand! Here are some examples of what you can do with your Pages: - Terms - You can create Terms and Conditions for your site and products. You can require that customers agree to these Terms before they can purchase. The URL slug to this Page must be "/terms". See how to make a Terms page in the next section! We recommend that you save or store your Terms content for your own records, as well as leaving it available on your store for your customers. - Testimonials - You can screenshot, quote, embed, or otherwise display your customer reviews. If you're offering a workout plan or diet plan, you could even display submitted customer results of Before and After. - Feedback - Create a Google Form or other survey and add the link to your Store page. You can encourage your customers to participate. - Specific Product Explanations - If you're using the default About page to describe yourself as a creator, or your business, you can create a more specific page to talk about your products. If you're offering a subscription, it may be helpful to explain what they'll be getting each interval. Or if you deliver customized services, you could detail the process. - FAQ - If you have a lot of commonly asked questions, you can help both you and your customer by creating a list of these questions with answers. - Download Help - If you're offering large files, a variety of file types, or if you find customers need a little extra help, you can offer some tips on a Download Help page. Feel free to view or share our content as well, read more here. - Blog Content - If you have a blog that you'd like customers to visit, read, and follow, try including some article links and embed some content. - YouTube Content - You can embed a YouTube video directly on your page or provide links to encourage customers to watch and subscribe. If you're selling e-book with recipes, you can bet your customers will want to follow your cooking on YouTube! How to edit Store Page content All pages have a rich text editor, which means you can customize each page by adding: - Links - Embedded content - Images - Horizontal lines - <p>, <img>, <h1>, <h2> embed codes Add a new Page - Navigate to your Store Customizer - Select +Add page - Fill out the Title and URL slug - Use +Add module to create content - Select Publish In this example, we'll show how to make a Terms page. If you require that your customers agree to the Terms before they purchase, you'll need to make sure to use the slug of "/terms". Edit an existing Page - Open your Store Customizer - Select the page you want to edit. 
- Click on your existing Page modules or select +Add Module - Make your edits - Click Publish How to delete a page Deleted pages can not be recovered! If you're unsure of whether or not you want to delete the page, you can make the page "hidden", by removing it from your Navigation. To delete a Page from your store: - Open your Store Customizer - Select a Page from the left-hand menu - Click Delete page How to edit Pages on the Navigation module The menu on your Store, where the Pages are displayed is called your Navigation module. With this module, you can list your Pages, Categories, Products, and Links. Re-order Pages on the Navigation module - Open your Store Customizer - Select the Navigation module - Drag and drop the Pages in your left-hand menu - Click Publish Add a Page to the Navigation module New Pages are automatically added to your Navigation module. But, if you've deleted a Page link from your Navigation and need to reconnect them, you can do so by using the following steps. - Open your Store Customizer - Select the Navigation module - Select +Add link - Enter the link title - Choose the Page from the drop-down menu - Click Save link - Click Publish Remove a Page from the Navigation module - Open your Store Customizer - Select the Navigation module - Select the pen icon on the Page - Select the Delete link - Click Publish FAQs about Store pages A: Yes! It's possible to embed iframes or other videos. See an example here. We recommend that you save or store your Terms content for your own records, as well as leaving it available on your store for your customers.
https://docs.sellfy.com/article/181-store-pages
2021-07-24T08:15:40
CC-MAIN-2021-31
1627046150134.86
[]
docs.sellfy.com
If you have an empty/first application, you will be asked by the assistant to add a device when you log in. To do so, follow these steps: 1. Select Use a tracker or smartphone 2. Select your device model from the list 3. If your device has already generated traffic to the server, you will see the connection down below. You can click on the box and it will turn blue. 4. Click on the Connect button to finally add it. Congratulations! Now your new device is added to the platform!
https://docs.assignar.com/en/articles/4892740-adding-your-first-device-using-the-assistant
2021-07-24T08:01:02
CC-MAIN-2021-31
1627046150134.86
[array(['https://downloads.intercomcdn.com/i/o/298694989/891a49594b8dfd4b4d4d989b/adddevice.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/298695121/d44e59bcc98b0cb2d6ef3866/connect_tracker.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/298695494/c17da6c41e81b505de86e360/connect_tracker_2.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/298697120/3871afb5adf0e5a82584e10e/tracker_online.png', None], dtype=object) ]
docs.assignar.com
Creating a Testimonial
Creating a Testimonial is a pretty straightforward process. Use the “Testimonials” Custom Post Type to add your Testimonials.
- In the WordPress Admin Area, go to the “Testimonials” post type.
The Single Testimonials Item
- The Single Testimonials item has no single post view, so it can be left as a draft.
- The Screen options: enable / disable what you want to have visible in this custom post type.
Once all settings and content are in place, publish the item for the testimonials list to appear.
Note: Testimonials, since they don’t have a single post view mode, do not have design options.
https://docs.rtthemes.com/document/creating-a-testimonial/
2021-07-24T07:52:44
CC-MAIN-2021-31
1627046150134.86
[]
docs.rtthemes.com
Install and Configure SSM Agent on Amazon EC2 Windows Instances
SSM Agent is installed by default on instances created from Windows Server 2016 and Windows Server 2019 Amazon Machine Images (AMIs), and on instances created from Windows Server 2003-2012 R2 AMIs published in November 2016 or later. Release notes for the agent are published on GitHub. If necessary, you can manually download and install the latest version of SSM Agent on your Amazon EC2 Windows instance by using the following procedure.
Important: This procedure applies to installing or reinstalling SSM Agent on an Amazon EC2 Windows instance. If you need to install the agent on an on-premises server or a virtual machine (VM) so it can be used with Systems Manager, see Install SSM Agent for a Hybrid Environment (Windows).
Download the latest version of the agent installer, then run the downloaded AmazonSSMAgentSetup.exe file to install SSM Agent. Start or restart SSM Agent.
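For reference, the manual steps can be scripted. The following PowerShell sketch is illustrative only: the download URL shown is the commonly documented location for the latest agent installer and the service name is AmazonSSMAgent, but confirm both against the current AWS documentation for your Region before relying on them.
$url = "https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/windows_amd64/AmazonSSMAgentSetup.exe"
Invoke-WebRequest -Uri $url -OutFile "$env:USERPROFILE\Desktop\AmazonSSMAgentSetup.exe"
# Run the downloaded installer (interactively, or with its silent switch if available), then restart the agent:
Restart-Service AmazonSSMAgent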
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-win.html
2019-06-15T23:01:31
CC-MAIN-2019-26
1560627997501.61
[]
docs.aws.amazon.com
FIND
Returns the starting position of one text string within another text string. FIND is case-sensitive.
Syntax
FIND(<find_text>, <within_text>[, [<start_num>][, <NotFoundValue>]])
Parameters
- find_text: The text you want to find.
- within_text: The text containing the text you want to find.
- start_num: (Optional) The character position in within_text at which to start searching.
- NotFoundValue: (Optional) The value to return when the text is not found.
Return value
Number that shows the starting point of the text string you want to find.
Remarks
FIND does not support wildcards. To use wildcards, use SEARCH.
Example
The following formula finds the position of the first letter of the product designation, BMX, in the string that contains the product description.
=FIND("BMX","line of BMX racing goods")
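As a further illustration, the optional start_num and NotFoundValue arguments let you control where the search starts and what is returned when the text is absent. The table and column names below are placeholders for your own model:
=FIND("BMX", 'Product'[ProductDescription], 1, -1)
This returns the position of "BMX" within each product description, or -1 (instead of an error) for rows where the text does not occur.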
https://docs.microsoft.com/en-us/dax/find-function-dax
2019-06-15T23:18:23
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
Let's assume that you have a configuration that sends the request to the Stock Quote service and changes the response value when the symbol is WSO2 or CRF. Let's also assume that you want to temporarily change the configuration so that if the symbol is CRF, the ESB just sends the message back to the client without sending it to the Stock Quote service or performing any additional processing. To achieve this, you add the Respond mediator at the beginning of the CRF case, as sketched below. All configuration after the Respond mediator is ignored, so you can safely leave the rest of the CRF case configuration intact, allowing you to easily revert to the original behavior in the future simply by removing the Respond mediator.
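A minimal sketch of such a switch/case configuration follows. The mediator and switch elements use standard Synapse syntax, but the XPath expression, namespace, and surrounding sequence are placeholders you would adapt to your own proxy or sequence:
<switch source="//m0:symbol" xmlns:m0="http://services.samples">
   <case regex="CRF">
      <!-- Temporarily short-circuit: return the message to the client immediately -->
      <respond/>
      <!-- The original CRF mediation below is left intact but never reached -->
   </case>
   <case regex="WSO2">
      <!-- existing WSO2 handling -->
   </case>
   <default>
      <!-- forward the request to the Stock Quote service as before -->
   </default>
</switch>
Removing the <respond/> element restores the original behavior.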
https://docs.wso2.com/display/ESB481/Respond+Mediator
2019-06-15T23:27:05
CC-MAIN-2019-26
1560627997501.61
[]
docs.wso2.com
Choose the one you like best or the one you would like to use for your new theme. Download and extract the theme somewhere on your computer. After you've extracted the theme, right click the folder and click Kloudstores. A new browser tab should be opened pointing to the preview address. That's it. You can now update the resources in the template and refresh to see the changes. To start the preview again later, right click the theme folder and run Kloudstores; your theme will be available for preview at the same address.
http://docs.kloudstores.com/kloudstores-tools-win/
2019-06-15T22:51:25
CC-MAIN-2019-26
1560627997501.61
[]
docs.kloudstores.com
Installing BMC Digital Workplace in a cluster This topic describes the process to install BMC Digital Workplace in a cluster. Other options are described in Performing the BMC Digital Workplace installation. Before you begin Complete the steps described in Preparing for installation of Digital Workplace. In addition to steps that are required for all installations, some steps are required for high availability. To install BMC Digital Workplace in a cluster Install BMC Digital Workplace on the primary server. To use the GUI wizard, see Installing BMC Digital Workplace on a single server. To use the silent installer, see Installing BMC Digital Workplace silently. Specify the In a New Cluster option during installation. See Installation worksheets for Digital Workplace. Install BMC Digital Workplace on the secondary server. Specify the In an Existing Cluster option during installation. See Installation worksheets for Digital Workplace. - Create your MyIT administrator account, as described in Setting up administrator authentication. On all nodes, make sure that API token and secret values are the same in the connect-dwp.properties file located at apacheFolder\Tomcat\external-conf\. Start the DWP Application service for all BMC Digital Workplace servers in the cluster. Where to go from here Verifying the BMC Digital Workplace installation i thought the form "SHR:SHR_KeyStore" (point 4) doesn't matter any more since the mongo db was removed Is the import of "myit-chg-91.def definition file" into 1811 really still necessary? Hello Stefan, You are correct about both of the points. Thank you for the comments. The content is updated as per your feedback. The changes will become public later this week.
https://docs.bmc.com/docs/digitalworkplacebasic/current/installing-bmc-digital-workplace-and-smart-it-in-server-group-825201750.html
2019-06-15T23:59:04
CC-MAIN-2019-26
1560627997501.61
[]
docs.bmc.com
Connect PagerDuty to Datadog in order to trigger and resolve incidents by mentioning @pagerduty in your post. Once you have PagerDuty integrated, you can check out Datadog's custom PagerDuty Incident Trends.
The PagerDuty integration does not include any metrics. Your PagerDuty Triggered/Resolved events appear in your Event Stream. The PagerDuty integration does not include any service checks.
You must include the PagerDuty notification in the {{#is_recovery}} context of your Say what's happening section of your monitor as follows:
{{#is_recovery}} This notification will only occur if the monitor resolves. If @pagerduty-trigger was triggered for the alert, it'll be resolved as well {{/is_recovery}}
Use @pagerduty-[serviceName] in your monitor message. If you start typing it in your monitor Say what's happening section, you should see it autocomplete. Datadog has an upper limit on your monitor notification lengths sent to PagerDuty. The limit is 1024 characters.
https://docs.datadoghq.com/integrations/pagerduty/
2019-06-15T23:01:39
CC-MAIN-2019-26
1560627997501.61
[]
docs.datadoghq.com
Resource events
Change the resource event color: Each event type is represented with a specific color. The PPS admin can change the colors at any time.
https://docs.servicenow.com/bundle/kingston-it-business-management/page/product/resource-management/concept/c_ResourceEvents.html
2019-06-15T23:16:00
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
ServiceNow Store - IT Operations Management
IT Operations Management pattern content enhances Discovery and Service Mapping. Visit the ServiceNow Store website to view all the available apps and for information about submitting requests to the store.
Apigee Edge discovery: Discovery finds Apigee Edge Enterprise edition versions 4.x.x using the Apigee pattern.
Amazon DynamoDB discovery: Discovery and Service Mapping use the Amazon AWS DynamoDB pattern to find components of DynamoDB.
AWS API gateway discovery: Discovery and Service Mapping can find AWS API gateways and connections to other entities.
AWS Lambda discovery: Discovery and Service Mapping can find and map Lambda functions that run in your AWS cloud.
AWS S3 discovery: The Amazon AWS S3 pattern uses a set of REST API calls to find public and non-public storage buckets of Amazon Simple Storage Service.
Azure Application Gateway discovery: Discovery uses the Azure Application Gateway HD pattern for discovering this product, while Service Mapping discovers Application Gateway using the Azure Application Gateway TD (LBS) pattern.
Azure Functions discovery: Discovery and Service Mapping use the Microsoft Azure Functions pattern for discovering the Azure Functions service and mapping this service in the context of application services.
ColdFusion discovery: Discovery finds Adobe ColdFusion servers and the instances of ColdFusion applications running on them. Only the 2016 version of ColdFusion is supported.
EMC Isilon discovery: Discovery uses the EMC Isilon pattern to find components of Dell EMC Isilon.
Google Cloud Platform discovery: Discovery finds Google Cloud Platform (GCP) API v1 components using the Google Cloud Platform pattern.
IBM Cloud Platform discovery: Discovery finds IBM Cloud Platform components (Softlayer API v3 and v3.1 and Bluemix API v2) using the IBM Cloud Platform pattern.
IBM Informix Dynamic Server discovery: Discovery and Service Mapping can find and map the Informix Dynamic Server.
Kubernetes discovery: Discovery finds Kubernetes versions 1.5, 1.6, 1.7, 1.8, 1.9, and 1.10 using the Kubernetes pattern.
Kubernetes event discovery: Discovery finds Kubernetes events and frequently updates the CMDB to reflect the dynamic Kubernetes environment. Discovery finds Kubernetes events for versions 1.5, 1.6, 1.7, 1.8, 1.9, and 1.10 using the Kubernetes Event pattern.
Oracle GoldenGate discovery: Discovery and Service Mapping find Oracle GoldenGate version 12c components using the Oracle GoldenGate pattern.
Pivotal Cloud Foundry discovery: Discovery finds Pivotal Cloud Foundry (PCF) version 2.4 components using the Pivotal Cloud Foundry pattern.
Pure Storage FlashBlade discovery: Discovery uses the FlashBlade Pure Storage pattern to find FlashBlade components.
Red Hat JBoss Fuse discovery: Discovery and Service Mapping can find and map the Fuse application server using the JBoss Fuse pattern.
Veritas Cluster Server: Discovery uses the Unix Cluster – VERITAS Cluster pattern to find Veritas Cluster Server components.
https://docs.servicenow.com/bundle/store-it-operations-management/page/product/itom/concept/store-it-operations-management.html
2019-06-15T23:17:33
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
Table: Get Rows Node The Data Table: Get Rows Node allows for querying your application’s data tables and storing the results of the query on your workflow’s payload. Node Properties The Data Table: Get Rows Node takes a number of parameters, most of which are optional. Required Fields - Table ID Template: The table on which the query should be performed. You may select one of your application’s tables or enter a string template, which should resolve to a valid table ID. - Result Path: A payload path defined at the bottom of the editor for where the result of the query will be stored. This result will include the query itself, the items in the query, and some additional meta information; or, the result will include an error if the query is invalid. Query Fields. If you selected a data table directly instead of using a template, there will be a section detailing all of the columns in the table and their data types to assist you in building your query. The query section is optional. If you do not include any queries, all rows will be returned and then limited by the filter fields defined below. However, if you do include any piece of a query (e.g. the Property), you must also include the other two pieces (the Operator and the Value). Query Option Fields Optionally, you may also include some parameters for sorting, limiting and paginating the results of the query. All of these fields are templatable. - Limit Template: must be a number between 1 and 1000, or a template that resolves to such a number. Default is 1000. - Offset Template: must be a number greater than or equal to 0, or a template that resolves to such a number. This is similar to requesting a specific page of results, except you are defining the number of records to skip instead of a page to return. Default is 0. - Sort By Template: is the field by which the results should be sorted. If using a template, this should resolve to one of your table’s column names or one of the default columns. Default is id. - Sort Direction Template: must be either asc, descor a template that resolves to one of those values. Default is asc. - Fields To Include Template: allows you to control the exact columns returned for the resulting rows. By default, all columns are returned. You can restrict to specific columns by providing a comma deliminited list of column names. Node Example Given a result path of rowsResult, a successful row query result might look like: { ... "rowsResult": { "totalCount": 8, "count": 2, "sortColumn": "id", "sortDirection": "asc", "offset": 0, "limit": 2, "query": {}, "items": [ { "id": "5ae087427d9cda0007f35520", "createdAt":"2018-04-25T13:48:50.327Z", "updatedAt":"2018-04-25T13:48:50.327Z", "myColumn": "value1" }, { "id": "5ae0874821d0160005aa731c", "createdAt":"2018-04-25T13:48:50.327Z", "updatedAt":"2018-04-25T13:48:50.327Z", "myColumn": "value2" } ] } ... } Node Errors Given a response path of rowsResult, a failed row query result might look like: { ... "rowsResult": { "error": { "type": "NotFound", "message": "Data Table was not found" } }, ... } There are two common types of errors that can occur with this node: - NotFound: This means that the data table you requested to query not found. It may have been deleted elsewhere, or you may be dynamically resolving to a data table ID that does not exist. - Validation: This means there is an issue with your query or query options. For example, you might be referencing a column that does not exist, or using an operator not valid for the type of column. 
In the case of a validation error, the message property will have more details on the particular source of the error.
http://docs.prerelease.losant.com/workflows/data/table-get-rows/
2019-06-15T23:51:32
CC-MAIN-2019-26
1560627997501.61
[array(['/images/workflows/data/get-rows-overview.png', 'Get Rows Node Get Rows Node'], dtype=object) array(['/images/workflows/data/get-rows-query.png', 'Get Rows Query Get Rows Query'], dtype=object) array(['/images/workflows/data/get-rows-filter.png', 'Get Rows Query Option Fields Get Rows Filter'], dtype=object)]
docs.prerelease.losant.com
We take the utmost care in ensuring that changes to our APIs do not break your integrations. We consider the following changes to be backward-compatible:
- Adding new API resources/endpoints.
- Adding new, optional request fields/parameters to existing APIs.
- Adding new fields to existing API responses.
- Adding new, optional HTTP request headers.
- Adding new HTTP response headers.
- Increasing the length of existing string field values.
- Changing the format of identifiers including changing or removing prefixes.
- Adding new webhook event types (you will need to explicitly opt into these).
Note that the above also applies to changes in our webhook event schemas. In the event that a breaking change is completely unavoidable (e.g., for compliance reasons), we will contact you in advance to ensure that you have sufficient time to update your integration.
June 6, 2019
- The response to requesting a payment now includes two new optional fields: processing.retrieval_reference_number and processing.acquirer_transaction_id.
- The response to retrieving payment actions now includes two new optional fields: processing.acquirer_reference_number and processing.acquirer_transaction_id.
- The payment_approved, payment_declined, card_verification and card_verification_declined webhooks have two new optional fields: processing.retrieval_reference_number and processing.acquirer_transaction_id.
- The payment_captured webhook notification has two new optional fields: processing.acquirer_reference_number and processing.acquirer_transaction_id.
May 16, 2019
The version of 3D Secure used for authentication is now provided in the 3ds.version field of the payment retrieval response.
March 22, 2019
Payment actions are now idempotent and therefore allow you to safely retry capture, refund and void requests. Find out more.
January 21, 2019
The response to retrieving a payment using a session ID will now include an action object, which provides a summary of the actions performed for that payment.
January 10, 2019
The payment_approved, payment_declined, card_verified, and card_verification_declined webhook notifications now include an approved flag. This allows you to easily know if the authorization or capture was a success without having to use the status or response code.
January 9, 2019
An approved flag has been added to the payment retrieval response, bringing it in line with the payment creation response. Primarily used in the redirection flows (3-D Secure and APMs), this allows you to easily know if the authorization or capture was a success without having to use the status or response code.
November 14, 2018
- Our payments endpoint is now idempotent, allowing you to safely retry payments without a duplicate payment taking place. Find out more.
- We now provide you with the scheme transaction ID (scheme_id) in the response when creating a payment.
October 24, 2018
Support for payments using pre-decrypted Apple Pay tokens has been added to the payments endpoint.
October 18, 2018
- Support for network token payments has been added to the payments endpoint.
- Support for 3-D Secure payments where the cardholder is authenticated using a third-party MPI has been added to the payments endpoint.
- The ECI value that the payment was authorized with is now provided in the response to 3-D Secure and Visa token payments.
October 11, 2018
We now provide the payment_account_reference for digital wallet and Visa token payments.
This is a reference to the underlying card and allows you to know if two tokens reference the same underlying card. August 14, 2018 We’ve updated the Events API so you can search using payment_id and reference. This will allow you to easily find events related to a particular payment. Find out more. June 26, 2018 previousChargeId now supports Visa’s scheme transaction ID. Find out more. April 2, 2018 To support the new requirements from Visa and Mastercard for payments using stored card details, a new cardOnFile and a new previousChargeId field must be included in charge requests where applicable. Find out more. March 21, 2018 Financial institutions now need to provide the recipientDetails in their charge requests when processing any domestic UK transactions. Find out more. February 2, 2018 We have introduced new chargeback and retrieval webhooks to help keep you up to date on your payments. You can subscribe to them by updating your configuration through our API or via the Hub. Thanks for using Checkout.com. If you need any help or support, then message our support team at [email protected].
https://docs.checkout.com/docs/changelog
2019-06-15T23:32:04
CC-MAIN-2019-26
1560627997501.61
[]
docs.checkout.com
Instance hierarchies
In an instance hierarchy, each non-production instance has a parent instance. Instances that have the same parent instance are peer instances. The shared parent instance becomes the central hub, or repository, and all peer instances synchronize to it.
https://docs.servicenow.com/bundle/kingston-application-development/page/build/team-development/concept/c_InstanceHierarchies.html
2019-06-15T23:12:17
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
Known and corrected issues
https://docs.bmc.com/docs/PATROL4BSA/82/known-and-corrected-issues-489034830.html
2019-06-16T00:01:43
CC-MAIN-2019-26
1560627997501.61
[]
docs.bmc.com
Customizing Visio Web Drawings in the Visio Web Access Web Part Applies to: SharePoint Server 2010 In this article Getting Started Programming the Visio Web Access Web Part Creating a Web Parts Page Adding a Visio Web Access Web Part to the Web Parts Page Adding a Content Editor Web Part to the Web Parts Page Visio Services ECMAScript API Objects in the Visio Services ECMAScript API Samples in the SharePoint 2010 SDK Note The information in this topic applies to SharePoint 2013 as well as to SharePoint 2010. The samples provided in the SharePoint 2010 SDK also work in SharePoint 2013, but are not included in the SharePoint 2013 SDK. Visio Services in Microsoft SharePoint Server 2010 enables you to load, display, and interact programmatically with Microsoft Visio 2010 documents that are hosted in an instance of the Visio Web Access Web Part on a Microsoft SharePoint Server 2010 page. This article provides information about how to add a Visio Web Access Web Part to a SharePoint Server 2010 Web Parts page, display a Visio Web drawing in the Web Part, and interact with that drawing programmatically by using the Visio ServicesJavaScript API. Getting Started Programming the Visio Web Access Web Part Before your solution can interact programmatically with a Visio Web drawing on a SharePoint Server 2010 Web Parts page, you must add a Visio Web Access Web Part to the page, open a Visio drawing published as a .vdw file in the Web Part, and add a Content Editor Web Part to the page to contain your ECMAScript (JavaScript, JScript) code. This procedure assumes you have the appropriate administrative rights as a page designer in SharePoint Server 2010. To get started programming the Visio Web Access Web Part In Visio, create the drawing you want to display, and then save it to a SharePoint document library as a .vdw file. Create the JavaScript (.js) file that contains the code you want to use to interact with the Web drawing, and save it to the same document library that holds the .vdw file. Create a SharePoint Server 2010 Web Parts page to display your drawing and contain your code. Add a Visio Web Access Web Part to the page, and display the Web drawing in the Web Part. Add a Content Editor Web Part to the page, and link it to the JavaScript file you created earlier. Refresh the page in your browser. The following sections provide more detail about some of these steps. You can find information about publishing Visio drawings as .vdw files in Visio Help. You can create an JavaScript file in Microsoft Visual Studio 2010 or any other text or code editor. Note There are many books and online articles available that provide general guidelines for coding in JavaScript, which is beyond the scope of this article. Creating a Web Parts Page After you have published your Visio drawing as a .vdw file, saved it to a document library, and created your JavaScript (.js) file and saved it to the same library, the next step is to create a Web Parts page. To create a Web Parts page In the SharePoint site where you want to post your drawing, on the Site Actions menu, click More Options. Under Pages and Sites, click Web Part Page. On the New Web Part page, in the Name box, type a file name for the page. Choose a layout template and the location where you want to save the page file, and then click Create. 
Adding a Visio Web Access Web Part to the Web Parts Page
Before you can interact programmatically with a Web drawing on a SharePoint Server 2010 Web Parts page, you must add a Visio Web Access Web Part to the Web Parts page you created, and open a Visio drawing published as a .vdw file in the Web Part.
To add a Visio Web Access Web Part to a Web Parts page
In the Server ribbon on the SharePoint Server 2010 Web Parts page, click Edit Page. In the zone where you want to place the Web Part, click Add a Web Part. In the Categories list, click Business Data. Click Visio Web Access, and then click Add. Click the arrow next to Web Part Menu, and then click Edit Web Part. Type the URL of the Web drawing (.vdw file) you want to open, and then click OK.
Adding a Content Editor Web Part to the Web Parts Page
The Content Editor Web Part serves two purposes: it holds your JavaScript code, and it provides a display and control interface that enables you to interact in real time with the .vdw file in the Visio Web Access Web Part.
To add a Content Editor Web Part to a Web Parts page
If the page is not already in edit mode, on the Server ribbon on the SharePoint Server 2010 Web Parts page, click Edit Page. In the zone where you want to place the Content Editor Web Part, click Add a Web Part. In the Categories list, click Media and Content. In the Web Parts list, click Content Editor, and then click Add. Click the arrow next to Content Editor Web Part Menu, and then click Edit Web Part. Type the URL of the .js file you want to open, and then click OK. On the ribbon, click Stop Editing.
Visio Services ECMAScript API
The JavaScript object model in Visio Services gives you programmatic access to Visio drawings displayed as .vdw files in the Visio Web Access Web Part. Using the Visio Services JavaScript object model, you can access shape data, hyperlinks, and shape bounding box coordinates. You can also create mashups that target specific diagram pages, select and highlight shapes, place markup overlays on the diagram, respond to mouse events, and change the panning and zooming properties of the viewport. (A mashup is an application that enables you to combine functionality or data from multiple sources into a single, integrated service, application, or medium.) As with many JavaScript APIs, the Visio Services ECMAScript API is event-based. To program the Visio Web Access Web Part, you write handlers that call functions in response to events raised in the diagram.
Objects in the Visio Services ECMAScript API
The Visio Services JavaScript API contains only four objects and their respective members. In addition, the Visio Services JavaScript API contains four enumerations:
Vwa.DisplayMode Enumeration
Vwa.OverlayType Enumeration
Vwa.HorizontalAlignment Enumeration
Vwa.VerticalAlignment Enumeration
VwaControl Object
Sys.Application.add_load(onApplicationLoad)
var webPartElementID = "WebPartWPQ3";
var vwaControl;
function onApplicationLoad() {
    try {
        vwaControl = new Vwa.VwaControl(webPartElementID);
        vwaControl.addHandler("diagramcomplete", onDiagramComplete);
    }
    catch (err) {
    }
}
The onDiagramComplete handler registered above is defined subsequently:
function onDiagramComplete() {
    try {
        vwaPage = vwaControl.getActivePage();
        vwaShapes = vwaPage.getShapes();
        vwaShape = vwaShapes.getItemAtIndex(0);
        vwaControl.addHandler("shapeselectionchanged", onShapeSelectionChanged);
    }
    catch (err) {
    }
}
This code example shows how to work with the active page, its shapes, and selection events once the diagram has finished rendering.
Page Object
Shape Object
Samples in the SharePoint 2010 SDK
Server 2010\Samples\Visio Services.
Annotations Sample. Custom Error Messages Sample. Mouse Interaction Sample. See Also Concepts Code Sample: Custom Error Messages Code Sample: Mouse Interaction Objects in the Visio Services JavaScript API Visio Web Access Web Part in Visio Services Samples
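The handler-wiring fragments quoted earlier in this article arrive somewhat garbled by extraction, so here is a consolidated, hedged sketch of the same pattern: hook the Sys.Application load event, create the Vwa.VwaControl against the Web Part's element ID, and register the diagramcomplete and shapeselectionchanged handlers. The element ID "WebPartWPQ3" and the body of onShapeSelectionChanged are assumptions (the ID varies per page, and the original handler body did not survive extraction), so treat this as illustrative rather than the article's exact sample.

var webPartElementID = "WebPartWPQ3"; // ID of the Visio Web Access Web Part on this page (assumption)
var vwaControl, vwaPage, vwaShapes, vwaShape;

Sys.Application.add_load(onApplicationLoad);

function onApplicationLoad() {
    try {
        // Create the control wrapper and wait for the diagram to finish rendering.
        vwaControl = new Vwa.VwaControl(webPartElementID);
        vwaControl.addHandler("diagramcomplete", onDiagramComplete);
    } catch (err) {
    }
}

function onDiagramComplete() {
    try {
        // Grab the active page and its shapes once the diagram is ready.
        vwaPage = vwaControl.getActivePage();
        vwaShapes = vwaPage.getShapes();
        vwaShape = vwaShapes.getItemAtIndex(0);
        // React to the user selecting shapes in the drawing.
        vwaControl.addHandler("shapeselectionchanged", onShapeSelectionChanged);
    } catch (err) {
    }
}

function onShapeSelectionChanged(source, args) {
    // Placeholder handler; the original article's implementation did not survive extraction.
}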
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ff394649(v=office.14)
2019-06-15T23:15:07
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
Release Notes Table of Contents PLATFORM FEATURES The Labeeb IoT platform provides the following features: - Secure sending and retrieving of data through support for HTTPS and SSL/TLS protocols. - Support for connections from IoT devices through MQTT, MQTT over WebSocket, an HTTP RESTful interface, and CoAP. - Storage in SQL and NoSQL databases. - Analytics to retrieve and aggregate data from our big data store. - Business Rules Manager. - RESTful API to build your applications and manage your devices. RELEASE 4.0 This release mainly introduces a new UI/UX with a dynamic dashboard that users can build, and support for a new communication protocol to connect IoT devices. NEW FEATURES - [LBBMZM-422] - Labeeb IoT Web Portal migration and enhancements - [LBBMZM-423] - Support for user-defined dashboards - [LBBMZM-521] - DCM support for JSON over HTTP RELEASE 3.1 This release mainly introduces enhancements to the rules engine and support for sending/storing multimedia. NEW FEATURES - [LBBMZM-285] - Add capability to share data with external sources (external MQTT brokers, SQL databases, REST web services and SMS) - [LBBMZM-409] - Support for multimedia unstructured data - [LBBMZM-419] - Usage restriction alerts via email BUG FIXES - [LBBMZM-446] - Signup issues when enterprise name has capital letters Documentation History
https://docs.qmic.com/display/LIOTD/Labeeb+IoT+Documentation
2019-06-15T22:46:03
CC-MAIN-2019-26
1560627997501.61
[]
docs.qmic.com
Delete filters Related Concepts: Personal lists. Related Topics: Configure the list layout.
https://docs.servicenow.com/bundle/istanbul-platform-user-interface/page/use/using-lists/task/t_DeletingFilters.html
2019-06-15T23:08:54
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
Emergency Console If you have lost SSH access to your Cloud server, you can access it through the emergency console to see what the problem is and fix it. Activating the console To activate the console, navigate to the “Emergency Console” tab on your server’s management page, and then click the “Activate console” button. Once the console is activated, you can connect to it via SSH: $ ssh [email protected] Note: the password to provide is the one you chose during the initial creation of your server. Once you are done using the emergency console, you can deactivate it from the same page within your account. Troubleshooting The emergency console only works while the VPS is running, and the console version depends on the kernel version. To get a working emergency console, you may have to update the console version to match the kernel: on the Gandi.net control panel, go to the advanced mode of the system disk and select the console that corresponds to your kernel from the list below: 2.6.18 | xvc0 2.6.27 | tty1 2.6.32 | hvc0 2.6.36 | hvc0 3.2 | hvc0 3.10 | hvc0 grub | hvc0 raw | hvc0 Restart the VPS to apply the change; once the right console is selected, you will be able to log in to your virtual server. Escape commands and SysRq The escape commands are listed by typing ~? after a new line (press Enter first): root@ai2:~# ~? The SysRq commands are sent by pressing CTRL + O (the letter o, not zero) followed by a command letter; for example, H prints the help: root@ai2:~# SysRq : HELP : loglevel0-8 reBoot tErm Full kIll saK aLlcpus showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks Unmount shoW-blocked-tasks Replace H with the uppercase letter of the wanted command, or with a number for the log level. For example, to restart your VPS, press CTRL + O then B. SSH Key Fingerprint The SSH key fingerprint for console.gandi.net is: 2d:e6:c0:48:5c:41:3b:95:81:0c:16:c1:32:a6:ef:8e
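If you want to check that fingerprint yourself before trusting the host, one way to do it with standard OpenSSH tools (assuming OpenSSH 6.8 or later, which provides the -E option) is:

$ ssh-keyscan console.gandi.net > gandi_console_hostkey
$ ssh-keygen -l -E md5 -f gandi_console_hostkey
# Compare the MD5 fingerprint printed here with the one published above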
https://docs.gandi.net/en/cloud/common_operations/emergency_console.html
2019-06-15T22:44:30
CC-MAIN-2019-26
1560627997501.61
[]
docs.gandi.net
Wait for Condition Timed Out PROBLEM You may encounter the "Wait for condition has timed out" error after your test performs a click action. Failure Information: Wait for condition has timed out InnerException: System.TimeoutException: Wait for condition has timed out at ArtOfTest.WebAii.Core.Browser.WaitUntilReady() This indicates the test timed out waiting for the browser to return to a "ready" state after the click command was sent to it. SOLUTION This can often be overcome by setting the applicable step's SimulateRealClick property to True. Find it under the Behavior heading in the Properties pane. ![SimulateRealClick][1] When set to True, Test Studio does not wait for the browser to return to a "ready" state, and it does not automatically refresh its local copy of the DOM. Instead, it sends the click command to the browser (as a live user would) and immediately moves on to the next test step. [1]: /img/troubleshooting-guide/test-execution-problems-tg/wait-for-condition-timed-out/fig1.png
https://docs.telerik.com/teststudio/troubleshooting-guide/test-execution-problems-tg/wait-for-condition-timed-out
2019-06-15T23:41:13
CC-MAIN-2019-26
1560627997501.61
[]
docs.telerik.com
Pass A Pass state ("Type": "Pass") passes its input to its output, without performing work. Pass states are useful when constructing and debugging state machines. In addition to the common state fields, Pass states allow the following fields. Result (Optional) Treated as the output of a virtual task to be passed to the next state, and filtered as specified by the ResultPath field (if present). ResultPath (Optional) Specifies where (in the input) to place the "output" of the virtual task specified in Result. The input is further filtered as specified by the OutputPath field (if present) before being used as the state's output. For more information, see Input and Output Processing. Parameters (Optional) Creates a collection of key-value pairs that will be passed as input. Values can be static, or selected from the input with a path. For more information, see InputPath and Parameters. For example, suppose a Pass state uses Result and ResultPath to inject fixed coordinates into its input (a hedged sketch of such a state definition follows this example), and the input is: { "georefOf": "Home" } Then the output would be: { "georefOf": "Home", "coords": { "x-datum": 0.381018, "y-datum": 622.2269926397355 } }
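The original example's state definition did not survive extraction, so the following is a reconstruction of a Pass state that would produce the output shown above. The state name and the use of "End": true are assumptions; the Result values and ResultPath are inferred from the input/output pair, so treat this as a sketch rather than the canonical AWS sample.

"InjectCoordinates": {
  "Type": "Pass",
  "Result": {
    "x-datum": 0.381018,
    "y-datum": 622.2269926397355
  },
  "ResultPath": "$.coords",
  "End": true
}

With ResultPath set to "$.coords", the Result object is grafted onto the state's input under the coords key instead of replacing it, which is exactly the merge shown in the output above.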
https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-pass-state.html
2019-06-15T23:48:17
CC-MAIN-2019-26
1560627997501.61
[]
docs.aws.amazon.com
LRT User Manual English user guide for all modules of the full LRT suite and Link Detox 01 Introduction What… more » Things you might not find in other Products Anchor Text Breakdown You can view the anchor text by count or by the metric of your choice. Click on a specific anchor text and you will see the filtered results in the link detail table below. Go after the links with the best anchor text! Filter by… more » Work Philosophy LRT is a report-oriented system which allows all data to be crawled on demand. This way, you will always see the real-time data. Taking all data into consideration results in a highly precise analysis that may take some time to complete, depending on… more » Structure of this manual Examples In order to present the tools and their corresponding functions in an understandable and simple way, we are using the following URLs as examples to perform the analysis: Main domain (“our domain”): “Competitors”:… more »
https://docs.linkresearchtools.com/en/01-introduction/
2019-06-15T22:37:05
CC-MAIN-2019-26
1560627997501.61
[]
docs.linkresearchtools.com
22. Adapting R2 to your needs¶ Or how you can optimize R2 for your specific data analysis 22.1. Scope¶ - This tutorial describes the adaptable settings within R2. These are basically all items under the My Settings menu-item. Through these you can personalize the use of R2 - First a couple of regular settings will be treated: changing colors, setting parameters - Next we’ll delve into the practical adaptation of R2; uploading your dataset, adding your personal genesets (categories), creating/exporting /uploading your own tracks and maintaining a user community 22.2. Step 1: Adapt your settings¶ Personalizing R2 starts with selecting the My Settings menu item (Figure 1). First item under there is the general ‘My Settings’; choose that. Figure 1: Personalizing R2: the My Settings menu-item In the User Defined Settings window several parameters for analyses in R2 can be adapted to your preferences. For most analyses these are set to appropriate values, but of course you want to set your favorite dataset here! Figure 2: The User Defined Settings window; most parameters are appropriate; you want to change your preferential dataset however. Next item in the My Settings submenu are the ‘Megasampler Presets’ (Figure 1), these are of relevance when you’ve built a specific Preset in an analysis Across Datasets. ( (chapter: Multiple datasets overview with Megasampler). The ‘Upload New Dataset’-, ‘Categories’-, ‘Tracks’- and ‘Communities’-menu-items will be elaborated further upon later in this tutorial. The ‘Timeseries colors’ allows you to set the colors for the graphs of specific experiments in a series (Figure 3). Figure3: Setting time seriescolors 22.3. Step 2: Upload your data¶ The extensive functionality of R2 can be best experienced when you have your own data available. In order to do so it first has to be uploaded and processed. Click the Upload New Dataset (Figure 1), the upload form appears (Figure 4). This has to be filled out according to MIAME guidelines as (amongst others) used at the gene expression database GEO (). If you have any specific annotations or wishes or microarray samples other than Affymetrix you’d best contact r2 support directly ([email protected]). If you would like to see a publicly accessible dataset in R2, then send an email to [email protected] with a link to the microarray data, or in the case of a Gene Expression Omnibus dataset, the GSE**** identifier, and we will take care of the rest and send a notification email. Figure4: Dataset upload form in R2; fill out the form in a MIAME GEO compliantway. 22.4. Step 3: Create your own genesets; a.k.a. categories¶ Another way in which R2 can be adapted to your specific needs is by introducing your own set of genes, which are called categories in R2; hover over the ‘Category’-item and select ‘Build Category’ (Figure 5) Figure 5: Categories related menu-items; select Build Category to make your own. The ‘Custom Category’ window pops up (Figure 6). Default this is set to upload from a text file formatted as a single column of official gene symbols (obtained from NCBI gene). The uploaded data can be stored temporary (24 hrs) or in your personal tracks database. We’re now going to use the input text field for data upload. Select the ‘Input Box’ option in the ‘how’ dropdown box and select ‘personal track’ in the ‘Where’ dropdown. Figure 6: Building a Custom Category: Uploading a file. The input box allows you to paste a bunch of genes to upload as a category for use in analyses in R2. 
In the example a set of genes, mutated in tumors are pasted. Selecting the ‘personal track’ option guarantees that this set will remain available for you (for people in your community, see later in this tutorial). Click “next” to upload the set (Figure 7), you’ll receive a message when everything has succeeded. Your set of genes is now available as a Category for all analyses within R2. Go back to the main page to see where you can use this set. Figure 7: Using the Input Box to upload your category of genes. We’re going to lookup your category, an example is available in the GeneSet view. In the main menu in Field 3 select ‘View a Geneset’ and click “next”( Figure 8) Figure 8: Using a category; select View Geneset In the GeneSetView your Category is privately available for yourself for similar analyses as with any other public geneset present in R2. Select My GeneCategories to choose from your categories (currently only one of course) #: and click Next (Figure 9). Figure 9: Selecting your genecategories In the next window you can specify which geneset you want to view. The Category ‘Changed Genes’ we just made above is now available (Figure 10). For now we end here, later on we’ll see the category again in the context of #:Tracks. Figure 10: Your geneset is available as category. We now return to the My Settings menu to find out how we can manage the categories we just build. From the My Settings menu hover over ‘Categories’ and click ‘Manage Categories’. Figure 11: The Category Manager The Category Manager (Figure 11) allows you to keep track of the categories; sets of genes you’ve stored within R2. Existing Categories can be adapted or deleted. New Categories can be based on existing ones, and to keep track of large amounts of categories these can be stored in collections. As an example we’re going to update the Category we just made. Click the ‘Copy/Delete/Rename/Edit Categories’ button. Figure 12: Select the procedure for a Category; in this case ‘update In the next screen you’ll be asked what type of procedure is needed for your category of choice; in our case we want to keep the category and update it. Select it and click ‘Execute’. Figure 13: The Category we just built can be adapted in all details. The details of the Category we built in the former steps are available for adaptation. In this way you can keep track and adapt the sets you use for your analyses 22.5. Step 4: Tracks in R2: create your own data annotation¶ Another important feature in R2 that can be adapted to your needs is the Tracks feature. Tracks in R2 give you the opportunity to divide your samples in groups for further analysis. They can be created as a result of analyses within R2 and stored, or you can create them yourself based on sample characteristics. We’ll first start with a K-means analysis that results in a division of the samples in two groups, for more about this analysis see the available tutorials on this subject) . On the main page of R2 select the K-means analysis in Field 3 (Figure 14) Figure 14: Selecting a K-means analysis In the settings window for the K-means analysis (Figure 15) you can choose the Category created above to cluster the current set of samples. In our case this is called stud06-ChangedGenes. Set the number of draws to 10x10, click ‘next’ Figure 15: Settings for K-means; the Category built above is available for clustering The resulting clustering in two groups might not be ultimately convincing (Figure 16), but for our testing purposes this is alright. 
What is important is that the resulting groups can be stored as a track; click the button ‘store as a track’. Figure 16: Clustering result of the Neuroblastoma dataset with the Category built in the former steps R2 now shows all samples as a long table with radio buttons indicating which group each sample belongs to. These can be adapted if you want to. Scroll down the window to find the fields that have to be set in order to store this as a track (Figure 17). You may want to change the group names into something more informative, and potentially also change the name to something you could easily relate to. Figure 17: Storing the current groups as Track; Track name has been adapted, the track will be stored in our personal track database. Clicking the Build set button will store this as a track. In the custom tracks manager we can adapt this track again. From the My Settings menu select ‘Manage Custom Tracks’ (Figure 18). Figure 18: Selecting the Manage Custom Tracks In the next screen keep the default selection; your current dataset. Tracks are, of course, defined based on a specific dataset; for each dataset you can store your own tracks. Click ‘Continue’. Figure 19: Tracks are defined per dataset; keep the current selection. In the next screen you’re able to adapt the Track we just generated. Of interest in here is the option “drawtrack”, which will result in the display of the information underneath the YY-plots. The tracks can also be assigned to collections to make large sets of tracks manageable. We leave the deletion of the track to the imagination of the reader. Now we’ll pay attention to the default tracks for this dataset. Figure 20: The track we just generated can be adapted from here. For a start set the Drawtrack propery to ‘yes’; we want to see this track in the graphs we create! Select Manage Default Tracks from the My Settings menu (Figure 21) Figure 21: Selecting the Default Tracks Manager In the next screen the dataset has to be defined; keep the defaults and click Continue. You’ll end up in the Default Tracks Manager (Figure 22). Basically all annotation provided with this dataset is available as a track, we’ll select the additional annotations of gender and recurrence as visible, they’ll show up in further analyses. Be sure to click the ‘Update Tracks’ button for these changes to take effect. When collections of tracks are used these we’ll show up conveniently as separate groups of tracks under the graph. Figure 22: Selecting the default tracks for this dataset 22.6. Step 5: Upload your own tracks¶ R2 also allows you to build your own tracks from scratch. You’ll be able to assign each sample to a group of your choice. To illustrate this select ‘Build Custom Track’ in ‘My Settings’ (Figure 21). The Custom Track window appears. R2 also provides the possibility to upload a custom track from a prefab text file; we’ll shortly show this; click ‘Upload a Track’ (Figure 23). Figure 23: The Custom Track dataset choice window. In the Upload Custom Track window you can either select a tab delimited txt file built with a tool like Excel, or alternatively paste tab or semicolon delimited text in the paste-box; that provides R2 with the proper assignment of each sample to a specific value. Based on that, R2 creates the groups for you. Using this procedure you can create tracks with as many groups as you like. If you intend to create a track with a limited number of groups, then an easier way is provided through the user interface, which we will now try. 
Click back button in your browser to return to Figure 23 Figure 24: Uploading a track described in a text file; for each sample a description has to be provided. In the Custom Track window(Figure 23) the number of groups desired has to be given in advance. We try 3. The Gene/PS option allows you to order the samples based on the expression of a gene, and make sub-groups that range in expression. Leave that field open and click Submit In the next window a convenient grouping of all annotation parameters and there values is available to create groups. In this example we select the low grade(1,2,3) vs high grade(4) vs special (4s) tumor types in the Neuroblastoma dataset as known from the INSS classification. Tick the appropriate boxes in the appropriate group columns. It is also convenient to recapitulate the resulting groups in a separate column so tick that box also (Figure 25). Figure 25: In the inss row the stage 1-2-3 tumors are selected to form group 1, stage 4 forms group 2 and stage 4s group 3 in a new track. Click “next”, all samples appear in a table with check boxes to assign them to the appropriate group. Scroll down to create the annotation for these groups (). Names have been adapted, show track is set to yes, the track is set to be stored a personal track and colors per group have been adapted. Click ‘Build set’ to store the set, you’ll receive a message accordingly in the next window. Of course you now finally want to see all our track manipulation in an actual graph. Go to the R2 main page again to see how the data of a gene will be plotted using the new tracks. Figure 26: Setting the custom track properties. - A frequently used approach is using a track based on bins of values. To avoid labour intensive excel usage you can also use the expression treshhold option from the pulldown menu. Each time an expression level has been entered, a new box is generated for the next value. Click next to save the track. Figure 27: Creating bins. Select View a gene in groups in Field 3 of the main page, type MYCN as gene name in Field 4 (Figure 27). Click ‘next’. Figure 28: Selecting a gene to view in the newly created tracks. The track created in the Custom Track manager is available for selection: u-lowgradvs4vs4s. Note that the other Track created in the K-means analysis, is also available here. Choose it and click ‘Next’. Figure 29: Select the Track created in the Custom track manager; u-lowgradvs4vs4s The different groups we created as part of our track in the previous steps are available for selection. We want to see them all, keep the settings and click ‘Next’. Figure 30: The groups created in the track are available for selection The expression of MYCN is plotted in the different groups of the track that was created (Figure 30). Note that the other track of mutated genes has a large overlap with the stage 4 group. There is also overlap with the “recurrence_or_progression” Default Track that we set to visible. Figure 31: Tracks created are visualized underneath the graph Another convenient option from the “custom track manager” is the export function which enables you to manipulate your tracks manually outside R2. This could be of use when your want to share tracks with other users or create new custom tracks. One reason you want to use the export function is to fix the ordering of your sample when generating a heatmap (see Error: Reference source not found) . Make sure you already have a personal custom track (Not a temporary track, 24h ). 
Select Manage default Tracks from the My Settings menu (Figure 21) and click next. Here select the dataset of interest , only datasets which have a corresponding personalized track are represented in the pulldown menu. Click the “Copy/delete/rename/export Tracks” button. Here select the personal track , “export” operation and r2_track at “export as” . Click execute” and download link with the track name can be loaded by clicking the right mouse button. 22.7. Step 6: Cooperate through R2: sharing tracks, creating communities¶ Cooperation is of great importance in scientific research. You probably want to share the tracks created above with other people in your group. For this reason R2 features the Communities. Creating a community is done by clicking ‘Community’ in the ‘My Settings’ menu (Figure 31). Figure 32: Community in the My Settings menu Since this will be the first time in the community section, there are no communities yet; click the ‘Start a new Community’ link (Figure 32). Figure 33: Starting a community In the Community window a name has to be set and a short description for people invited as members for this group (Figure 33). Through a community you can share GeneCategories, Tracks and Settings. Figure 34: Setting the Community group name and description. Click ‘Next’; you’ll be notified that the group has been created; return to the Communities Center by clicking the Community link again in the My Settings menu (Figure 32). The TestGroup has been created (next to the already existing MyTestGroup for this user). Click the link to start adding users (Figure 35). Figure 35: The available Communities for this user You have to add users by their R2 username; we’ll add user “pietmolenaar”. He’ll receive a message in the R2 startup page as soon as he logs on the next time. Click “next” to add the user. ‘Figure 36: Add a user by their R2 user name R2 returns with a message that the user has been invited, he or she has to accept your invitation first before he will see what you are sharing. Figure 37: R2 return message; user is invited; but not yet visible The perspective of the invited user after logon; he or she can accept the invitation (Figure 37). ‘Figure 38: The invited user receives a notification on the main page where he or she can accept the membership of the group When the invitation has been accepted the user is available in this community. When we add a category, track or preset the next time, it is possible to make this available to this community. Figure 39: The user is available in the TestGroup. When a Category is created there is now a possibility to make it available to a Community (Figure 39) Managing the tracks, gene categories and megasampler presets is done in a similar way as has been shown in the user tracks and user categories at the beginning of this tutorial. pietmolenaar, as a member of this group, can manage the tracks that have been shared with him via his default track manager Figure 40: As an example here the creation of a category and the assignment to a Community.
https://r2-tutorials.readthedocs.io/en/latest/Adapting_R2.html
2019-06-15T23:02:30
CC-MAIN-2019-26
1560627997501.61
[array(['_images/AdaptingR2_Trackproperties.png', 'Figure 26: Setting the custom track properties.'], dtype=object) array(['_images/AdaptingR2_Trackdescribed_bin.png', 'Figure 27: Creating bins.'], dtype=object) ]
r2-tutorials.readthedocs.io
Because the RelatedDataField uses different controls to display related items in the backend and widgets in the frontend, you can configure the related data field to use a custom widget for both. To replace the frontend widget with a custom one, do one of the following: Register the custom widget via full type name If you choose this approach, the custom widget you create must inherit from SimpleView and IRelatedDataView. You must also override LayoutTemplatePath. You have access to the controls through the GetControl method of the current container, and you bind the controls on PreRender using the extension methods exposed for related data. Calling this.DisplayRelatedData() configures the current view, and you can retrieve the related items using the GetRelatedItems method (a hedged code-behind skeleton is sketched at the end of this article). Code-behind Widget markup NOTE: If the widget does not implement ContentView or DynamicContentView, or if it does not have a ControlDefinition property, the widget must be replaced from the widget template inserted on the page. Register the custom widget via virtual path to the widget If you choose this approach, the custom widget you create must inherit from IRelatedDataView:
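The following is a minimal, hedged sketch of what such a code-behind could look like. Only SimpleView, IRelatedDataView, LayoutTemplatePath, GetControl, DisplayRelatedData, and GetRelatedItems come from the text above; the class name, namespaces, control ID, and template path are assumptions for illustration, and if IRelatedDataView declares additional members in your Sitefinity version they must be implemented as well.

using System;
using System.Web.UI.WebControls;
using Telerik.Sitefinity.Web.UI;

namespace SitefinityWebApp.Widgets
{
    // Hypothetical custom frontend widget for a related data field.
    public class MyRelatedDataWidget : SimpleView, IRelatedDataView
    {
        // Path to the widget template; this virtual path is an assumption.
        protected override string LayoutTemplatePath
        {
            get { return "~/Widgets/MyRelatedDataWidget.ascx"; }
        }

        // A repeater defined in the template, retrieved via GetControl on the container.
        protected virtual Repeater RelatedItemsList
        {
            get { return this.Container.GetControl<Repeater>("relatedItemsList", true); }
        }

        protected override void InitializeControls(GenericContainer container)
        {
            // Defer binding to PreRender, as described in the text above.
            this.PreRender += this.HandlePreRender;
        }

        private void HandlePreRender(object sender, EventArgs e)
        {
            // Configure the view for related data, then bind the related items.
            this.DisplayRelatedData();
            this.RelatedItemsList.DataSource = this.GetRelatedItems();
            this.RelatedItemsList.DataBind();
        }
    }
}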
https://docs.sitefinity.com/82/example-related-datarelated-data-and-custom-widgets
2018-08-14T10:34:44
CC-MAIN-2018-34
1534221209021.21
[]
docs.sitefinity.com
Remote Development: Configuring Your Repository After installing the client, the next step is to update the configuration files in your code repository. Update the Dockerfile The Dockerfile needs to be updated so that the container supports remote development:

FROM nginx
COPY . /usr/share/nginx/html
RUN apt-get update && apt-get install -y rsync && apt-get clean
RUN ln -s /usr/share/nginx/html /app

This update adds two RUN instructions. The first installs rsync using the apt-get package manager. The second adds a symlink “/app” to point to the application code so that the remote client knows where to deploy updates to. You will then need to push this update to your code repository. Note If you are using one of the ContinuousPipe images then rsync will already be configured for installation. Update the continuous-pipe.yml If you are using the continuous-pipe.yml configuration from previous steps in the quick start guide, there is no further configuration required so you can go ahead and create a remote environment. However, if you have progressed to adding pipelines to your configuration you will need to ensure that the pipeline conditions are not so restrictive that they prevent a tide running for a remote development branch. Remote development branches are by convention in the format “cpdev*“. The following pipeline configuration will allow “cpdev*” branch pushes to trigger a tide:

tasks:
    images:
        build:
            services:
                web:
                    image: docker.io/pswaine/hello-world
    deployment:
        deploy:
            cluster: hello-world
            services:
                web:
                    specification:
                        accessibility:
                            from_external: true

pipelines:
    - name: Production
      condition: 'code_reference.branch in ["uat", "production"]'
    - name: Remote
      condition: 'code_reference.branch matches "/^cpdev/"'

You will then need to push this update to your code repository.
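Returning to the Dockerfile change above: as a quick local sanity check before pushing, you can build the image and confirm that rsync and the /app symlink are present. The image tag below is just a placeholder.

$ docker build -t my-app-remote-dev .
$ docker run --rm my-app-remote-dev rsync --version
$ docker run --rm my-app-remote-dev ls -l /app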
https://docs.continuouspipe.io/quick-start/remote-development-configuring-your-repository/
2018-08-14T10:16:14
CC-MAIN-2018-34
1534221209021.21
[]
docs.continuouspipe.io
Add a role to an existing role When you add a new role to an existing role for a user, the user inherits the access that is granted by the new role. Open the existing role and click Edit in the Contains Roles related list. Use the slushbucket to add one or more roles to the existing role. Click Save. The users with the existing role inherit the access that is granted by the new role. Related TasksCreate a role
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/roles/task/t_AddARoleToAnExistingRole.html
2018-08-14T10:41:44
CC-MAIN-2018-34
1534221209021.21
[]
docs.servicenow.com
12.17.5 ElevCodeEnum Specifies values for the type of local or permanent reference datum for vertical gravity-based (i.e., elevation and vertical depth) and measured depth coordinates within the context of a well. This list includes local points (e.g., kelly bushing) used as a datum and vertical reference datums (e.g., mean sea level).
http://docs.energistics.org/WITSML/WITSML_TOPICS/WITSML-500-283-0-R-sv2000.html
2019-03-18T21:28:34
CC-MAIN-2019-13
1552912201707.53
[]
docs.energistics.org
8.5.104.08 Chat Server Release Notes What's New This release includes only resolved issues. Resolved Issues This release contains the following resolved issues: If Chat Server is switched from backup mode to primary mode, Chat Server no longer exits if inactivity control is enabled and the alert timer expires. (ESR-11548) Upgrade Notes No special procedure is required to upgrade to release 8.5.104.08.
https://docs.genesys.com/Documentation/RN/latest/mm-cht85rn/mm-cht8510408
2019-03-18T21:24:58
CC-MAIN-2019-13
1552912201707.53
[]
docs.genesys.com
This section describes how to create presentations. There are three types of presentation you can create in BrightAuthor: BrightAuthor Version 4.7 - Full-screen presentations: A looping playlist of media–images, videos, HTML pages, etc.–that occupies the entire presentation display. - Multi-zone presentations: Content can occupy different zones on the screen, allowing you to display multiple images, videos, HTML pages, and text widgets in a customizable layout. - Interactive presentations: Playback, whether full-screen or multi-zone, is responsive to interactive events such as button presses, GPIO input, or UDP commands. Besides image, video, audio, and HTML files, presentations can also feature a number of BrightAuthor states, such as Live Text, Media RSS feeds, and HDMI input.
http://docs.brightsign.biz/display/DOC/Creating+Presentations
2019-03-18T21:47:18
CC-MAIN-2019-13
1552912201707.53
[]
docs.brightsign.biz
You can use the Protection window to create and manage mirror relationships, vault relationships, and mirror and vault relationships and to display details about these relationships. The Protection window does not display load-sharing (LS) relationships and transition data protection (TDP) relationships. System Manager does not display any storage virtual machine (SVM) that is configured for disaster recovery (DR) in the Create Protection Relationship dialog box. For a vault relationship, mirror and vault relationship, or version-flexible mirror relationship, you can modify the relationship type by modifying the policy type.
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-950/GUID-29712FD0-B93A-42E3-B844-7B208EE37225.html
2019-03-18T21:54:21
CC-MAIN-2019-13
1552912201707.53
[]
docs.netapp.com
Job concurrency in Lifecycle Manager Select the concurrency level at which jobs are deployed in LCM. Jobs run faster when more nodes are deployed concurrently during a job. Because nodes become unavailable at certain times during the deployment process, higher concurrency levels can interrupt service due to the inability of the cluster to respond to queries. The concurrency level options in the LCM job dialogs provide granularity for this tradeoff. Several concurrency levels are available when running jobs at the cluster or datacenter level. Tip: Run jobs during off-peak hours if using a concurrency level that potentially or definitely interrupts service. Hover on a list option to view its tooltip description.
https://docs.datastax.com/en/opscenter/6.5/opsc/LCM/opscLCMjobConcurrency.html
2019-03-18T22:26:54
CC-MAIN-2019-13
1552912201707.53
[array(['../images/screenshots/jobConcurLevelTips.png', 'Concurrency level options and tooltips for LCM jobs'], dtype=object) ]
docs.datastax.com
Microsoft Azure web hosting options Microsoft Azure provides three main cloud solutions where you can host your Kentico applications. Each of them has its own benefits and drawbacks, and they differ mainly in the amount of control they offer you over the site and machine. - Web Apps – least control, easy management - Cloud Services – moderate control, moderate management - Virtual Machines – most control, demanding management You can run Kentico in all three hosting platforms. When deciding the particular platform which you want to use for your project on, consider the amount of control you will need, as well as the amount of maintenance work you are prepared to spend on maintaining the machine infrastructure. Azure web hosting options Azure Web Apps Azure Web Apps are quick to create and easy to set up and manage. However, they do not offer as many customization options as the other two options. You can deploy your application to Web Apps through several ways (FTP, Git, Visual Studio, etc.). However, you will never have direct server access. Projects can be either websites or web applications, and can run on multiple versions of .NET. Deployment of changes is done via FTP or Web Deploy, so the changes are easily rolled out and applied to the application. This makes Kentico development a fast and simple process where changes can be completed and deployed from within Visual Studio. You can also utilize an integration to an online source control (Visual Studio Online, GitHub, Bitbucket, etc.). Azure Cloud Services Cloud Services was the first option to which you could deploy your application before the other options were introduced. The main disadvantage of this platform is that you need to set your application as Kentico Azure project (or convert a web application project to Azure project) prior to deployment. Additionally, you need to package your application as a snapshot and then deploy it to the cloud. Azure pulls from this snapshot to create the application. This creates a scenario where every change to the application requires a full deployment. Therefore, we believe that this model is now outdated and that you should rather consider using Azure Web Apps because of a quicker deployment process. Azure Virtual Machines With Azure Virtual Machines, you can choose from a gallery of images for the virtual machine operating system. Microsoft provides options for Windows, Linux, Ubuntu, CoreOS, and others. Developers have the option to choose nearly any configuration of SQL Server, BizTalk Server, or even Visual Studio for a cloud-hosted development machine. Kentico applications that require a lot of customization may benefit from a Virtual Machines deployment because the developer has the ability to control every aspect of the environment. Changes to the code can be deployed for example using Visual Studio or through other ways. Additionally, complex services and integrations can be easily managed via remote desktop. Also, legacy Kentico applications that cannot be converted to a web application can easily be set up on the desired OS and managed like an on-premise solution. Comparison from Kentico perspective Conclusion We recommend that you consider Azure Web Apps as the first cloud option for your applications. Only if you necessarily need a remote desktop access to the computing machine or you need the Windows Services and startup tasks to be running, should you consider using the Azure Virtual Machines or Azure Cloud Services instead. 
Other useful and related sources Official Microsoft Azure documentation Microsoft Azure services comparison DevNet articles Deploying Kentico to Microsoft Azure – Know your web hosting options Deploying Kentico to Microsoft Azure – Know your database options
https://docs.kentico.com/k11/running-kentico-on-microsoft-azure/microsoft-azure-web-hosting-options
2019-03-18T22:31:18
CC-MAIN-2019-13
1552912201707.53
[]
docs.kentico.com
Tamper Switch Interface Callbacks Detailed Description These callbacks are contributed by the Tamper Switch Interface plugin. Function Documentation This function is called whenever the tamper switch detects that it has entered the enclosure, thereby activating tamper monitoring. This function is called when the plugin detects that the enclosure has been opened.
https://docs.silabs.com/thread/2.8/group-tamper-switch-callbacks
2019-03-18T21:49:00
CC-MAIN-2019-13
1552912201707.53
[]
docs.silabs.com
To configure OnCommand Insight (OCI) for user authentication and authorization from an LDAP server, you must be defined in the LDAP server as the OnCommand Insight server administrator. You must know the user and group attributes that have been configured for Insight in your LDAP domain. OnCommand Insight supports LDAP and LDAPS via a Microsoft Active Directory server. Additional LDAP implementations may work but have not been qualified with Insight. This procedure assumes that you are using Microsoft Active Directory Version 2 or 3 LDAP (Lightweight Directory Access Protocol). LDAP users display along with the locally defined users in the users list. The LDAP server is entered as a URL of the form ldap://<ldap-server-address>:port or, if using the default port, ldap://<ldap-server-address> When entering multiple LDAP servers in this field, ensure that the correct port number is used in each entry. The directory lookup domain is entered in the form DC=<enterprise>,DC=com
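As a concrete illustration, a secure LDAPS server entry and a lookup domain might look like the following. The host name, port, and domain components are placeholders for this sketch, not values from the original procedure.

ldaps://ad01.example.com:636
DC=example,DC=com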
http://docs.netapp.com/oci-73/topic/com.netapp.doc.oci-acg/GUID-4D594A29-2B6B-4AD2-BE78-E7E8620ADB38.html
2019-03-18T21:55:23
CC-MAIN-2019-13
1552912201707.53
[]
docs.netapp.com
Example: Services Using AWS PrivateLink and VPC Peering An AWS PrivateLink service provider configures instances running services in their VPC, with a Network Load Balancer as the front end. Use intra-region VPC Peering (VPCs are in the same Region) and inter-region VPC Peering (VPCs are in different Regions) with AWS PrivateLink to allow private access to consumers. The service consumer or the service provider can complete the configuration. For more information, see the following examples.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peer-region-example.html
2019-03-18T22:07:17
CC-MAIN-2019-13
1552912201707.53
[]
docs.aws.amazon.com
Roles installed with Special Handling Notes The user roles included with the Special Handling Notes plugin (com.sn_shn).
sn_shn.admin: Can read, create, update, and delete special handling notes. This role contains the sn_shn.editor role. The sn_customerservice_manager role contains the sn_shn.admin role.
sn_shn.editor: Can read and update special handling notes. This role contains the sn_shn.user role. The sn_customerservice_agent role contains the sn_shn.editor role.
sn_shn.user: Can read special handling notes.
https://docs.servicenow.com/bundle/jakarta-customer-service-management/page/product/customer-service-management/reference/r_RolesInstWSpecHandlingNotes.html
2019-03-18T22:22:14
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
Contents IT Service Management Previous Topic Next Topic Creating execution plan tasks Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Creating execution plan tasks An execution plan contains one or more task templates. Each task template defines work that can be completed by a specific fulfillment group. Execution plans are associated to catalog items. When the relevant catalog item is requested, these task templates are used to generate tasks. That is, tasks to be performed as part of the request fulfillment process for that requested item. Each generated task within that requested item is assigned a catalog task number. Example The execution plan for an executive desktop computer catalog item could define the following task templates in the execution plan: Obtain managerial approval Order hardware Install standard corporate applications Deliver computer to requester When this catalog item is ordered, the following request, requested item, and tasks are then created: Request REQ0002 -- 1 PC Requested Item ITEM0004 -- 1 X executive desktop Catalog Task0001 -- Obtain managerial approval Catalog Task0002 -- Order hardware Catalog Task0003 -- Install standard corporate application Catalog Task0004 -- Deliver computer to requester Define task templates Each execution plan contains one or more task templates that define actions that must be taken to fulfill a request. About this task After creating the execution plan, define these task templates. When the relevant catalog item is ordered, request tasks are generated for that requested item, based on this information. To define an execution plan task template: Procedure Navigate to Service Catalog > Catalog Policy > Execution Plans. Open an execution plan. In the Execution Plan Tasks related list, click New. Fill in the fields on the Execution Plan Task form (see table). Click Submit. Figure 1. Cat DP Task1 Table 1. Defining Task Templates Field Description Name Name of the task. This name becomes the created task name. Fulfillment group The group that performs the task. Whenever a user requests a catalog item associated with this execution plan, the task is automatically assigned to the fulfillment group. Leave blank if automatic assignment to a group is not required. Assigned to The individual who performs the task. Leave blank if automatic assignment to a user is not required. SLA The service level agreement that applies to catalog items associated with this execution plan. Note: This field is normally left blank, as the functionality was superseded by the service level management functionality. Delivery plan The parent execution plan for this task. Order A number representing the task sequence in the execution plan. It is good practice to "leave gaps" between order numbers (for example, 100, 200, 300). That way you can insert new tasks without changing the order number of existing tasks. (If the order for several execution plan tasks is the same, each of these tasks starts at the same time.) Delivery time Amount of time the task is expected to take. This value becomes a component of the overall time to complete the execution plan. Condition Condition under which the task is performed (if the condition is not met, the task is skipped). Short description Brief description of the task activity. This information populates the created task short description field. Instructions Details of the activities to be performed for the task. 
This information populates the created task description field. Work notes A journal field for entering comments about the task template. Note: this information is separate from the created task work notes field. Conditions to run execution plan Figure 2. Applying Conditions to Execution Plan Tasks In this example, the Deliver to IT Labs step does not run if the request itself is in Atlanta. There is no need to deliver something to the IT lab if it is already there. Condition scripts to run tasks Administrators can use a condition script in addition to or instead of any condition to determine whether a task runs. Note: If you are using both a condition (via the condition field) and a condition script, both must be true before the task runs. To use a script, you must configure the Execution Plan Task form to add the "Condition script" field. If the script returns true, the task runs. If the script returns false, the task does not run. Ensure that you add the variable used in the script to the execution plan task. Figure 3. Use Condition Scripts
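For illustration only, a condition script matching the Atlanta example above might look like the following. The variable name location and the way the requested item exposes it are assumptions rather than details from the original article, so adjust them to your own catalog item; the only documented behavior is that the task runs when the script returns true.

// Hypothetical condition script for the "Deliver to IT Labs" task:
// skip delivery when the request is already in Atlanta.
current.variables.location != 'Atlanta';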
https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/service-catalog-management/concept/c_CreatingExecutionPlanTasks.html
2019-03-18T22:31:26
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
Setting up your Integration in Salesforce This page hosts all the guides for administrators to configure the integration in Salesforce: - Setting up a Connection to Jira - Configuring Visualforce Components - Configuring Lightning Experience Components - Configuring Email Notifications to Notify Salesforce Users - Understanding Salesforce API Call Usage
https://docs.servicerocket.com/salesforce-jira/administrator-guide/setting-up-your-integration-in-salesforce
2019-03-18T21:41:48
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicerocket.com
You must add SSL VPN server settings to enable SSL on an NSX Edge interface. Procedure - In the SSL VPN-Plus tab, click Server Settings in the left panel. - Click Change. - Select the IPv4 or IPv6 address. - Edit the port number if required. This port number is required to configure the installation package. - Select one or more encryption methods or ciphers. Note: Backward compatibility issues may occur with some browsers if any of the following GCM ciphers is configured on the SSL VPN server: AES128-GCM-SHA256 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-RSA-AES256-GCM-SHA384 - (Optional) From the Server Certificates table, select the server certificate that you want to add. - Click OK.
https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.3/com.vmware.nsx.admin.doc/GUID-7077E765-4041-4AA8-AAA1-709E8DAA59F7.html
2019-03-18T21:36:17
CC-MAIN-2019-13
1552912201707.53
[]
docs.vmware.com
You can configure the FC protocol and the FCoE protocol on the storage virtual machine (SVM) for SAN hosts. LIFs are created on the most suitable adapters and are assigned to port sets to ensure data path redundancy. Based on your requirements, you can configure either the FC protocol or the FCoE protocols, or both the protocols by using System Manager. If the protocols are not allowed on the SVM, you can use the Edit Storage Virtual Machine window to enable the protocols for the SVM. The data LIFs and port sets are created with the specified configuration. The LIFs are distributed accordingly among the port sets. The FCP service is started if all of the LIFs are successfully created for at least one protocol. If LIF creation fails, you can create the LIFs and start the FCP service from the FC/FCoE window.
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-950/GUID-A8D48BB6-ADBF-446D-A0B8-8F3A43D8FCF5.html
2019-03-18T22:09:46
CC-MAIN-2019-13
1552912201707.53
[]
docs.netapp.com
Getting Started with AWS Data Pipeline: - ShellCommandActivity Reads the input log file and counts the number of errors. - S3DataNode (input) The S3 bucket that contains the input log file. - S3DataNode (output) The S3 bucket for the output. - Ec2Resource The compute resource that AWS Data Pipeline uses to perform the activity. Note that if you have a large amount of log file data, you can configure your pipeline to use an EMR cluster to process the files instead of an EC2 instance. - Schedule Defines that the activity is performed every 15 minutes for an hour. Create the Pipeline. Under Pipeline Configuration, leave logging enabled. Choose the folder icon under S3 location for logs, select one of your buckets or folders, and then choose Select. If you prefer, you can disable logging instead. Under Security/Access, leave IAM roles set to Default. Click Activate. If you prefer, you can choose Edit in Architect to modify this pipeline. For example, you can add preconditions. Monitor the Running Pipeline After you activate your pipeline, you are taken to the Execution details page where you can monitor the progress of your pipeline. To monitor the progress of your pipeline Click Update or press F5 to update the status displayed. Tip If there are no runs listed, ensure that Start (in UTC) and End (in UTC) cover the scheduled start and end of your pipeline, and then click Update. When the status of every object in your pipeline is FINISHED, your pipeline has successfully completed the scheduled tasks. If your pipeline doesn't complete successfully, check your pipeline settings for issues. For more information about troubleshooting failed or incomplete instance runs of your pipeline, see Resolving Common Problems. View the Output. Delete the Pipeline.
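To make the objects described at the top of this walkthrough more concrete, here is a heavily abridged, hedged sketch of how they might appear in a pipeline definition file. Object IDs, the bucket paths, and the grep command are placeholders, and the field names are written from memory of the pipeline definition file syntax, so verify them against the AWS Data Pipeline reference before reuse.

{
  "objects": [
    { "id": "Every15Minutes", "type": "Schedule", "period": "15 Minutes",
      "startDateTime": "2019-01-01T00:00:00", "occurrences": "4" },
    { "id": "InputLogs", "type": "S3DataNode",
      "directoryPath": "s3://my-input-bucket/logs" },
    { "id": "ErrorCounts", "type": "S3DataNode",
      "directoryPath": "s3://my-output-bucket/error-counts" },
    { "id": "MyEC2Resource", "type": "Ec2Resource",
      "instanceType": "t1.micro", "terminateAfter": "30 Minutes" },
    { "id": "CountErrors", "type": "ShellCommandActivity",
      "command": "grep -c ERROR ${INPUT1_STAGING_DIR}/* > ${OUTPUT1_STAGING_DIR}/error-count.txt",
      "stage": "true",
      "input": { "ref": "InputLogs" },
      "output": { "ref": "ErrorCounts" },
      "runsOn": { "ref": "MyEC2Resource" },
      "schedule": { "ref": "Every15Minutes" } }
  ]
}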
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-getting-started.html
2019-03-18T22:11:32
CC-MAIN-2019-13
1552912201707.53
[]
docs.aws.amazon.com
Rules and actions You can create rules on Citrix Analytics to help you perform actions on user accounts when unusual or suspicious activities occur. Rules let you automate the process of applying actions such as disable a user, add users to a watchlist, and so on. When you apply these rules, the action is applied immediately after an anomalous event occurs and the rule condition is met. You can also manually take actions on user accounts with anomalous activities. What are Rules?What are Rules? A rule can be defined as a set of conditions that must be met for an action to be executed. A rule can contain a single condition and one or more actions. You can create a rule with multiple actions that can be applied to a user’s account. Conditions such as Risk score and Risk score change are global conditions. Global conditions can be applied to a specific user for a specific data source. You can keep a watch on user accounts that show unusual activity. Other conditions are specific to data sources and their risk indicators. For example, if your organization works with sensitive data, you might want to restrict the amount of data shared or accessed by users internally. But if you have a large organization, it wouldn’t be feasible for a single administrator to manage and monitor many users. You can create a rule wherein, anyone who shares sensitive data excessively can be added to a watchlist or have their account disabled immediately. Note Rules with identical conditions return an error. In such a scenario, users see the following error: “(Name of the rule created) has the same condition. Modify condition and try again.” What are Actions?What are Actions? Actions help you respond to suspicious events and prevent future anomalous events from occurring. You can take action on user accounts that display unusual or suspicious behavior. You can either configure rules, Citrix Smart Tools Agent is required to perform the Log Off User and the Start Session Recording actions. Configure rules and actionsConfigure rules and actions For example, following the steps below, you can create an “excessive file sharing” rule. Using this rule, when a user in your organization shares an unusually large amount of data, the share links are automatically expired. You are notified when a user shares data that exceeds that user’s normal behavior. By applying the “excessive file sharing” rule, and taking immediate action, you can prevent data exfiltration from any user’s account. To create a rule, do the following: After signing in to Citrix Analytics, on the toolbar, go to Settings > Rules. On the Rules dashboard, click Create Rule. From the IF THE FOLLOWING CONDITION IS MET list box, select the risk indicator condition upon which you want an action applied. From the THEN DO THE FOLLOWING list box, select one or more actions and click Apply. In the Rule Name text box, provide a name and enable the rule using the toggle button provided. Click Create Rule. Apply an action manuallyApply an action manually Consider a user, Sallie Linville who shares excessive files from her Content Collaboration account. To monitor her account since her behavior is unusual, you can use the Notify administrator(s) action. To apply the above mentioned action to the user manually, you must: Navigate to the Sallie Linville’s profile and select the appropriate risk indicator. From the Actions menu, select the Notify administrator(s) action and click Apply. 
Due to the unusual and suspicious activity on Sallie Linville’s account, an email notification is sent to all Citrix Cloud administrators to monitor her account. The action applied is added to her risk timeline, and the action details are displayed on the right pane of the risk timeline page. Manage rules You can view the Rules dashboard to manage all the rules created on Citrix Analytics to monitor and identify inconsistencies on your network. On the Rules dashboard, you can: View the list of rules Details of the rule Name of the rule Status – Enabled or disabled. Duration of the rule – Number of days the rule has been active or inactive. Hits – The number of times the rule is triggered. Modified – Timestamp, shown only if the rule has been modified. Delete the rule To delete a rule, select the rule you want to delete and click Delete. Alternatively, click the rule’s name to be directed to the Modify Rule page and click Delete Rule there. In the dialog, confirm your request to delete the rule. - Click on a rule’s name to view more details. You can also modify the rule when you click on its name. Other modifications that can be done are as follows: Change the name of the rule. Conditions of the rule. The actions to be applied. Enable or disable the rule. Delete the rule. Note - If you don’t want to delete your rule, you can choose to disable the rule. - To re-enable the rule on the Rules dashboard, do the following: - On the Rules dashboard, click the Status slider button to green. - On the Modify Rule page, click the Enabled slider button at the bottom of the page.
https://docs.citrix.com/en-us/citrix-analytics/security-analytics/rules-and-actions.html
2019-03-18T22:31:34
CC-MAIN-2019-13
1552912201707.53
[array(['/en-us/citrix-analytics/media/create-rule.png', 'Create rule'], dtype=object) array(['/en-us/citrix-analytics/media/create-rule-error.png', 'Create rule error'], dtype=object) array(['/en-us/citrix-analytics/media/action-list.png', 'Action list'], dtype=object) array(['/en-us/citrix-analytics/media/action-applied.png', 'Action applied'], dtype=object) ]
docs.citrix.com
Removing a field map DataStax Apache Kafka™ Connector processing fails if any mapped Kafka topic fields are missing from the record. If a field is missing or if the schema change is known ahead, remove the topic-column mapping. Warning: DataStax database table schema have PRIMARY KEY columns that uniquely identify and partition rows. See CQL data modeling. A NULL is not allowed in a PRIMARY KEY column. Therefore, each PRIMARY KEY column must have a field-column map. Procedure - On a DataStax database node, use the CQLSH DESCRIBE TABLE command to verify that the field is not a PRIMARY KEY (PK) column.Note: If the field is a PK column, map a different field to the column. - Remove the field to column mapping from the configuration. See Mapping kafka topics to DataStax Enterprise database tables. - Update the configuration with the new settings. See Updating the DataStax Apache Kafka Connector configuration. - Start the Kafka producer that had the schema change.
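To make the procedure concrete, suppose a hypothetical topic field is no longer produced and was mapped to a non-key column. After confirming with DESCRIBE TABLE that the column is not part of the PRIMARY KEY, you would drop just that pair from the connector's mapping string. The keyspace, table, topic, and field names below are placeholders, and the exact mapping property format should be checked against the connector documentation for your version.

cqlsh> DESC TABLE my_ks.readings;
-- confirm which columns form the PRIMARY KEY before removing a mapping

Before (the "humidity" field is still mapped):
topic.sensor_data.my_ks.readings.mapping = "sensor_id=value.id, reading_time=value.ts, temperature=value.temp, humidity=value.humidity"

After (the mapping for the removed field is dropped):
topic.sensor_data.my_ks.readings.mapping = "sensor_id=value.id, reading_time=value.ts, temperature=value.temp"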
https://docs.datastax.com/en/kafka/doc/kafka/operations/kafkaRemoveField.html
2019-03-18T22:20:46
CC-MAIN-2019-13
1552912201707.53
[]
docs.datastax.com