Dataset schema: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars).
If a serious error occurs in TE when processing transport imports, a system error will be generated. This stops import processing for the affected import queue until the issue is resolved. Errors can be caused by issues such as:
- RFC connection problems
- Authorisation issues
- Import process failures
- Transport file access issues
- Program terminations
- BW renaming issues
- Manual steps not complete
These can be viewed in the System & RFC Errors option. A list of all system errors will be presented, and further information about the reason for an error can be seen by highlighting the relevant item. Once the issue is resolved, the import status for the affected transport will need to be reset to allow imports to resume.
http://docs.basistechnologies.com/activecontrol-user-guide/7.0/en/topic/system-errors
2019-11-12T09:38:28
CC-MAIN-2019-47
1573496664808.68
[array(['https://manula.r.sizr.io/large/user/3588/img/systemrfcerrors.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/3588/img/expresso-windows-gui-system-errors-2.png', None], dtype=object) ]
docs.basistechnologies.com
Filter
The filter in the Statistics manages the contents of the reports. The combination of conditions configured in the Filter is a global condition for all reports in the Statistics, which means the conditions stay the same as you navigate between the reports. It works in the same way as other filters in our interface, but the Statistics filter has its peculiarities:
- The reports will not be created without the Period filter. This means you have to specify the dates for which you want to see the statistics.
- You can set an exclude condition for the Statistics filters. For example, to display the statistics on all offers except 003.RU: open the filter by offers, check All, uncheck the offer 003.RU, and click Apply.
To rebuild the report using the filter, click Apply. If you click Reset, all previously set conditions will be deleted and the Period condition will return to its default value of 30 days.
Period
The period you set in the Filter is applied to all your reports until you change it manually. To set the period, we have created the Calendar. In the Calendar, you set the period for which the statistics will be displayed. You can:
1. Select the date by clicking the calendar. Here you select an absolute date, which means it is saved as you set it: tomorrow, the day after tomorrow, and in a month you will still see the statistics for the dates you selected.
2. Select the period from the suggested ones (a day, 7 days, 30 days, etc.). Here you select a relative period, which means each period is counted from the current date. For example, if you have selected 7 days and look at the statistics today, the data is displayed for the last seven days; if you look at the statistics tomorrow, the period is counted from tomorrow and covers the last seven days.
3. Enter the date manually. This date is also absolute (see the explanation in item 1).
You can also use the Compare feature to see the statistics for two selected periods at the same time. You can select the period for comparison from the ones suggested earlier, or specify it manually.
My Reports
You probably have favorite combinations of Filter conditions that you use often. For example, you often analyze your traffic in Russia for the last week. So that you do not have to configure the same settings every time, we have added a feature that allows you to save your favorite reports. You can save as many reports as you want. All of them are displayed in My Reports. To view a saved report, just click it.
Save Report
Tip: When you configure and save filters using the Save Report function, the report in which you saved the filter is saved along with the filters. Therefore, we recommend selecting the most convenient report for each filter you save.
To save your favorite combination of filters:
- Go to the report you use most often for traffic analysis.
- You can also set up the Optional parameter, Dynamics, and the type of event date, if required; these parameters will also be saved.
- Set up the conditions: the period in the calendar and the selection by filters.
- Click the Save report button, then name the filter you are saving.
- The report that you have just saved will appear in the My Reports tab.
http://userdocs.cityads.com/docs/udocs/en/latest/content/statistics/filter.html
2019-11-12T08:18:46
CC-MAIN-2019-47
1573496664808.68
[]
userdocs.cityads.com
Software upgrade
You can use the Software Upgrade option to upgrade your NetScaler SD-WAN Center software to the latest version. The software upgrade process places NetScaler SD-WAN Center into maintenance mode. If a database migration is required, this process can take several hours. During this time, no statistics data is collected from the Virtual WAN, and all NetScaler SD-WAN Center functionality is unavailable.
Important: Running the upgrade during maintenance hours is recommended.
Note: Download the appropriate NetScaler SD-WAN Center software package to your local computer. You can download this package from the Downloads page.
To upload and install a new version of the NetScaler SD-WAN Center software:
1. In the NetScaler SD-WAN Center web interface, click the Administration tab.
2. Click Global Settings and then click Software Upgrade.
3. Click Browse to open a file browser, and select the software package you want to upload.
4. Click Upload to upload the selected software package to the current NetScaler SD-WAN Center virtual machine.
5. After the upload completes, click Install.
6. When prompted to confirm, click Install.
7. In the dialog box that appears, select the I accept the End User License Agreement checkbox, and then click Install.
https://docs.citrix.com/en-us/netscaler-sd-wan-center/10/administration/how-to-perform-software-upgrade.html
2019-11-12T10:00:11
CC-MAIN-2019-47
1573496664808.68
[]
docs.citrix.com
1.3. About S3 Clusters
Virtuozzo Infrastructure Platform allows you to export cluster disk space to customers in the form of S3-like object-based storage. The storage implements an Amazon S3-like API, which is one of the most common object storage APIs, so end users can work with Virtuozzo Infrastructure Platform the same way they work with Amazon S3. You can keep using your usual S3 applications and continue working with them after migrating data from Amazon S3 to Virtuozzo Infrastructure Platform. More details on S3 clusters are provided in the Administrator's Guide and the Administrator's Command Line Guide. Version 3.0.3 — Sep 24, 2019
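Because the storage exposes an S3-compatible API, standard S3 tooling can usually be pointed at it by overriding the endpoint URL. Below is a minimal sketch using the boto3 Python client; the endpoint URL, credentials, and bucket name are hypothetical placeholders for illustration, not values from this documentation.

import boto3

# Point a standard S3 client at the S3-compatible endpoint of the cluster.
# Endpoint, credentials, and bucket name below are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",   # S3 endpoint exposed by the cluster
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")                      # create a bucket
s3.put_object(Bucket="demo-bucket", Key="hello.txt",        # upload an object
              Body=b"hello from an S3-compatible cluster")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])    # list buckets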
https://docs.virtuozzo.com/virtuozzo_infrastructure_platform_3_staas_integration_guide/introduction/about-s3-clusters.html
2019-11-12T08:50:22
CC-MAIN-2019-47
1573496664808.68
[]
docs.virtuozzo.com
MQTT
To use MQTT, some other computer or software needs to act as an MQTT "broker". MQTT is designed for connections with remote locations where a small code footprint is required or network bandwidth is limited. For example, it has been used in sensors communicating to a broker via satellite link, over occasional dial-up connections with healthcare providers, in Facebook Messaging, and in a range of home automation and small-device scenarios. It is also ideal for mobile applications because of its small size, low power usage, minimized data packets, and efficient distribution of information to one or many receivers.
The model is as follows: one computer is the message broker, and the other computers are message clients. Each client can publish messages to the broker and receive messages from it. A client first creates a topic by sending the topic name to the broker, and then posts (publishes) messages with that topic to the broker. Other clients register their interest in a topic (subscribe) with the broker. If a client's registered topic matches the topic of another client's posted messages, that client receives all the messages for that topic. No message history is kept, so all history is lost.
The MQTT Client DAT is a client that can post and receive messages, but it needs to be connected to a broker server. MQTT is released under the Eclipse Public License 1.0. The source code repository for MQTT is here.
See also: MQTT Client DAT, MQTT home page, MQTT in Wikipedia, TCP/IP DAT, PAHO-MQTT independent Python client library.
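Since the text points to the Paho MQTT Python client, here is a minimal publish/subscribe sketch using the paho-mqtt 1.x API. The broker hostname and topic are placeholder assumptions; any reachable MQTT broker (for example a local Mosquitto instance) would work.

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical broker host
TOPIC = "sensors/temperature"   # hypothetical topic name

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Called for every message whose topic matches a subscription.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)

client.publish(TOPIC, "21.5")   # publish a sample reading
client.loop_forever()           # process network traffic and callbacks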
https://docs.derivative.ca/index.php?title=MQTT&oldid=9357
2019-11-12T09:43:12
CC-MAIN-2019-47
1573496664808.68
[]
docs.derivative.ca
The USS Roosevelt (DDG-80) Commissioned on October 14, 2000 in a ceremony at her home port in Mayport, Florida, the USS Franklin and Eleanor Roosevelt (DDG-80) is an Arleigh Burke-class Aegis guided missile destroyer. The USS Franklin and Eleanor Roosevelt was named after America's 32nd president and his wife to honor Franklin Roosevelt's achievements as Commander-in-Chief and Eleanor Roosevelt's commitment and contributions to securing worldwide human rights. Franklin and Eleanor Roosevelt's granddaughter, Nancy Roosevelt Ireland, was the ship's sponsor and officially christened her.
http://docs.fdrlibrary.marist.edu/ussroos.shtml
2017-07-20T14:37:18
CC-MAIN-2017-30
1500549423222.65
[]
docs.fdrlibrary.marist.edu
Starting or Exiting XenCenter
To start a XenCenter session, do one of the following:
If XenCenter was configured in an earlier session to restore your server connections on startup and a master password was set, you will be prompted to enter this password before continuing. See Store Your Server Connection State to find out more about how to set your server reconnection preferences. Note that it is possible to run only one XenCenter session per user.
To exit the current XenCenter session: on the File menu, click Exit. Any servers and VMs that are running when you exit will continue running after the XenCenter application window closes. If there are any XenCenter tasks running, such as importing or exporting VMs, you will be warned when you try to exit. You can choose to exit anyway, in which case unfinished tasks may not complete successfully, or wait until the unfinished tasks have completed.
http://docs.citrix.com/en-us/xencenter/6-5/xs-xc-intro-welcome/xs-xc-intro-start.html
2017-07-20T14:29:45
CC-MAIN-2017-30
1500549423222.65
[]
docs.citrix.com
This is an overview of how scheduling works in nova from Pike onwards. For information on the scheduler itself, refer to Filter Scheduler. For an overview of why we've changed how the scheduler works, refer to Scheduler Evolution. The scheduling process is described below. Note This is current as of the 16.0.0 Pike release. Any mention of alternative hosts passed between the scheduler and conductor(s) is future work.
https://docs.openstack.org/nova/queens/reference/scheduling.html
2020-09-19T00:39:40
CC-MAIN-2020-40
1600400189264.5
[]
docs.openstack.org
TileLayout Overview
The Blazor TileLayout is based on the two-dimensional CSS grid and is able to display content in tiles. They can be dragged around and rearranged as desired by the user. The tiles can also be resized to change the way they span across the rows and columns. This allows you to build customizable dashboards for your end users, whose state they can save.
This article contains the following sections:
- First Steps
- Component Reference
- Appearance Settings
- Tile Contents
First Steps
To create a basic dashboard with the TileLayout component:
- Add the TelerikTileLayout tag.
- Set the desired number of Columns for the layout.
- Optionally, configure the Width, Height, ColumnWidth and/or RowHeight to define the desired dimensions for the layout and the base size for the individual tiles. Read more in the Appearance Settings section below.
- Under its TileLayoutItems tag, add TileLayoutItem instances whose Content tag you can populate with the desired content.
- Optionally, set the RowSpan and ColSpan parameters of the tiles to values larger than 1 to increase their size in the grid.
- Optionally, set the Resizable and Reorderable parameters to true to allow the user to alter the layout. Read more about storing it in the State article.
Basic Tile Layout with its core features
<TelerikTileLayout Columns="3"
                   ColumnWidth="200px"
                   RowHeight="150px"
                   Resizable="true">
</TelerikTileLayout>
The result from the code snippet above
Component Reference
You can use the component reference to get or set its state.
<TelerikTileLayout @ref="@TileLayoutRef">
</TelerikTileLayout>
@code{
    TelerikTileLayout TileLayoutRef { get; set; }
}
Appearance Settings
We recommend that you get familiar with the concept of a CSS Grid Layout first - the TileLayout component is based on it for its underlying implementation and core properties. The main features that the component exposes are divided into two levels:
Main Element
The main element defines the number of Columns, the Width and Height of the layout, as well as the ColumnWidth and RowHeight. The ColumnWidth and RowHeight define the maximum dimensions for each column and row of the main layout. As the overall component dimensions change (e.g., because of different viewports), the column and row heights might decrease to provide even distribution. A single tile can span more than one column or row.
Generally, you should use settings that allow the desired number of columns and rows (depending on their width and height) to fit in the set width and height of the entire component. You do not, however, have to set Width and Height - the main measure is Columns, and it will suffice to create a layout. Since the Tile Layout is a block element, its width defaults to auto in the browser, and the actual width is distributed evenly between the number of Columns. Setting Height="100%" can let the component take up its parent dimensions in terms of height as well.
If the width and height dimensions are insufficient to accommodate the defined row height and column width that the current tiles create, the actual row height and/or column width will decrease so that the appointed number of columns fit in the available width and the existing number of rows fit in the available height.
Columns, Width and Height have no default values. ColumnWidth and RowHeight default to 1fr.
There are two other settings you should take into account if you set explicit dimensions on the main element - the ColumnSpacing and RowSpacing. They are CSS units that define the gaps between the individual columns and rows and count towards the total dimensions of the component. They default to 16px.
Lastly, you can also set the Class parameter that renders at the main wrapping element of the tile layout so you can cascade custom CSS rules through it.
Individual Tiles
Each tile provides settings that define how many columns and rows it takes up - the ColSpan and RowSpan parameters. It also provides a Class parameter so you can cascade CSS rules through it.
Tile Contents
To set the tile contents, you have the following options:
- The HeaderText is a parameter on the individual tile that renders a simple string in its header portion.
- The HeaderTemplate tag lets you define custom content, including components, in the header portion of the tile.
- The Content is a RenderFragment where you put the content of the tiles - it can range from simple text to complex components.
Examples of setting content in tiles
<TelerikTileLayout ColumnWidth="200px"
                   RowHeight="150px"
                   Width="700px"
                   Columns="3"
                   Resizable="true"
                   Reorderable="true">
    <TileLayoutItems>
        <TileLayoutItem HeaderText="Simple Header Text, no content">
        </TileLayoutItem>
        <TileLayoutItem HeaderText="Simple Header Text, some content" ColSpan="2">
            <Content>You can put components in the tiles too.</Content>
        </TileLayoutItem>
        <TileLayoutItem ColSpan="2">
            <HeaderTemplate>
                <strong>Bold</strong> header from a template
            </HeaderTemplate>
            <Content><p>As with other render fragments, you can put <strong>any</strong> content here</p></Content>
        </TileLayoutItem>
    </TileLayoutItems>
</TelerikTileLayout>
The result from the code snippet above
Content Scrollbars
The Tile Layout component targets modern web development and thus responsive dimensions for the content. Therefore, we expect that most content will have width: 100%; height: 100%; so that it can stretch according to the size of the tile that the end user chooses.
If you want to change that (for example, because you have certain content that requires dimensions set in px), you can use the Class of the individual tile and choose the required setting for the overflow CSS rule of the .k-card-body element in that particular tile.
Content scrollbars and overflow behavior in the Tile Layout
<TelerikTileLayout ColumnWidth="300px"
                   RowHeight="150px"
                   Columns="3"
                   Resizable="true"
                   Reorderable="true">
    <TileLayoutItems>
        <TileLayoutItem HeaderText="Responsive Content">
            <Content>
                <div style="width: 100%; height: 100%; background: cyan;">Resize this tile - my size fits it</div>
            </Content>
        </TileLayoutItem>
        <TileLayoutItem HeaderText="Static Content">
            <Content>
                <div style="width: 300px; height: 300px; background: yellow;">I will be cut off by default</div>
            </Content>
        </TileLayoutItem>
        <TileLayoutItem HeaderText="Custom Scrollbars" Class="tile-with-overflow">
            <Content>
                <div style="width: 300px; height: 300px; background: yellow;">I am contained in the tile and produce scrollbars</div>
            </Content>
        </TileLayoutItem>
    </TileLayoutItems>
</TelerikTileLayout>
<style>
    .tile-with-overflow .k-card-body {
        overflow: scroll; /* choose a value that fits your needs */
    }
</style>
The result from the code snippet above
https://docs.telerik.com/blazor-ui/components/tilelayout/overview
2020-09-19T00:30:21
CC-MAIN-2020-40
1600400189264.5
[array(['images/tilelayout-overview.png', 'TileLayout first look'], dtype=object) array(['images/tilelayout-tile-content.png', 'tile content settings'], dtype=object) array(['images/tile-content-scrollbars.png', 'Content scrollbar behavior and customization'], dtype=object)]
docs.telerik.com
How to Animate a Cut-out Character
Harmony provides some great tools for animating cut-out character models. You can animate a character simply by using the Transform tool to move the drawings and pegs that constitute its cut-out model. With animated keyframes, basic interpolations are automatically done by Harmony. The timing of the movement can be adjusted to create life-like motion, and drawings can be swapped at any point during interpolations, allowing you to combine movement and drawing changes to create frame-perfect cut-out animations.
We will create a simple cut-out animation by making your character's first pose on the first frame, its second pose on a later frame, and letting Harmony interpolate.
- In the first frame, you have your first pose.
- Apply: Applies the easing parameters to the selected keyframes.
- Apply/Previous: Applies the easing parameters to the selected keyframes and then selects the previous keyframe in the timeline.
- Apply/Next: Applies the easing parameters to the selected keyframes and then selects the next keyframe in the timeline.
- Close: Closes the dialog box. If you did not apply the modifications, they will be cancelled.
Navigating Layers
Since cut-out animation often involves complex models with hierarchies, learning shortcuts to easily navigate between layers can save a lot of time. One very useful trick to learn is the Center on Selection keyboard shortcut. This shortcut allows you to navigate directly to the selected layer in the Timeline view. Hence, you can use the Camera view to visually select the layer you wish to animate, then use the Center on Selection shortcut to find the layer in the Timeline view, instead of going through your layer list to locate it.
- In the Timeline view, collapse all your layers.
- In the Tools toolbar, select the Transform tool.
- In the Camera view, select any layer, preferably one that is deep in your character model's hierarchy.
- Click on the Timeline view's tab, or anywhere in the Timeline view that will not change your selection. This is because the shortcut only works when the Timeline view is in focus.
- Press O. The Timeline view is now focused on the selected layer, and all its parents have been automatically expanded.
This shortcut can also be used in the Node view. If your selection is inside a group, it will navigate to the inside of this group and center the Node view on the selected node.
If you use this shortcut frequently, you might find that having to click on the Timeline or Node view first is inconvenient. Otherwise, Harmony also features keyboard shortcuts to quickly change your selection from the currently selected layer to its parent, its child, or to one of its siblings, allowing you to quickly navigate your character's hierarchy from one layer to one of its related layers.
- In the Tools toolbar, select the Transform tool.
- In the Transform Tool Properties view, make sure the Peg Selection mode is deselected.
- In the Camera or Timeline view, select a layer or object attached to a hierarchy.
- From the top menu, select Animation > Select Parent or press B to select the parent layer. Select Animation > Select Child or press Shift + B to select the child layer.
https://docs.toonboom.com/help/harmony-14/advanced/getting-started-guide/cut-out.html
2020-09-18T22:51:48
CC-MAIN-2020-40
1600400189264.5
[array(['../Resources/Images/HAR/Stage/Cut-out/an_animationtitleimage.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/HAR11_Animation_001.png', 'Collapse Layers Collapse Layers'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/HAR11_Animation_002.png', 'Select First KeyFrame Select First KeyFrame'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/an_004_basicanimation_002.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/an_004_basicanimation_003.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/HAR11_Animation_005.png', 'Select Frame for Second Pose Select Frame for Second Pose'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/an_004_basicanimation_005.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/HAr11_Easing_001.png', 'Easing - Select Keyframes Easing - Select Keyframes'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/HAr11_Easing_002.png', 'Set Ease for Mulitple Parameters Set Ease for Mulitple Parameters'], dtype=object) array(['../Resources/Images/EDU/HAR/Student/Steps/an2_setease_003.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/an_005_updownhierarchy_001.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/an_005_updownhierarchy_002.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/Steps/an_005_updownhierarchy_003.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/select-siblings.png', None], dtype=object) ]
docs.toonboom.com
Ingress control is a core concept in Kubernetes that is implemented by a third-party proxy. Contour is a Kubernetes ingress controller that uses the Envoy edge and service proxy. Tanzu Kubernetes Grid includes signed binaries for Contour and Envoy, which you can deploy on Tanzu Kubernetes clusters to provide ingress control services on those clusters. For general information about ingress control, see Ingress Controllers in the Kubernetes documentation.
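Once an ingress controller such as Contour is running in a cluster, HTTP routing is expressed through standard Kubernetes Ingress resources. The sketch below uses the official Kubernetes Python client to create a minimal Ingress; the namespace, hostname, backend service name, and the "contour" ingress class name are hypothetical placeholders, not values taken from this documentation.

from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. a Tanzu Kubernetes cluster context).
config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="demo-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="contour",  # assumption: class name used by the Contour deployment
        rules=[
            client.V1IngressRule(
                host="demo.example.com",  # hypothetical hostname
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="demo-service",  # hypothetical backend Service
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)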
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.1/vmware-tanzu-kubernetes-grid-11/GUID-manage-instance-ingress-contour.html
2020-09-19T00:42:00
CC-MAIN-2020-40
1600400189264.5
[]
docs.vmware.com
Role based access control
Permissions that an Ambari-level administrator assigns to each user or group define each role. Use these tables to determine what permissions each role includes. For example, a user with any role can view metrics, but only an Ambari Administrator can create a new Ambari-managed cluster. Each table evaluates the permissions against the roles Cluster User, Service Operator, Service Administrator, Cluster Operator, Cluster Administrator, and Ambari Administrator.
Table 1. Service-Level Permissions: View metrics; View status information; View configurations; Compare configurations; View service alerts; Start, stop, or restart service; Decommission or recommission; Run service checks; Turn maintenance mode on or off; Perform service-specific tasks; Modify configurations; Manage configuration groups; Move to another host; Enable HA; Enable or disable service alerts; Add service to cluster.
Table 2. Host-Level Permissions: View metrics; View status information; View configuration; Turn maintenance mode on or off; Install components; Add or delete hosts.
Table 3. Cluster-Level Permissions: View metrics; View status information; View configuration; View stack version details; View alerts; Enable or disable alerts; Enable or disable Kerberos; Upgrade or downgrade stack.
Table 4. Ambari-Level Permissions: Create new clusters; Set service users and groups; Rename clusters; Manage users; Manage groups; Manage Ambari Views; Assign permission and roles; Manage stack versions; Edit stack repository URLs.
Parent topic: Managing cluster roles
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/administering-ambari/content/amb_roles_and_authorizations.html
2020-09-19T00:04:49
CC-MAIN-2020-40
1600400189264.5
[]
docs.cloudera.com
The number of physics shapes for the Sprite. A physics shape is a cyclic sequence of line segments between points that defines the outline of the Sprite used for physics. Since the Sprite can have holes and discontinuous parts, its outline is not necessarily defined by a single physics shape.
https://docs.unity3d.com/2019.2/Documentation/ScriptReference/Sprite.GetPhysicsShapeCount.html
2020-09-18T22:37:12
CC-MAIN-2020-40
1600400189264.5
[]
docs.unity3d.com
Hybrid Information-Centric Networking
Hybrid Information-Centric Networking (hICN) is a network architecture that makes use of IPv6 or IPv4 to realize location-independent communications. It is largely inspired by the pioneering work of Van Jacobson on Content-Centric Networking, which was a clean-slate architecture, whereas hICN is based on the Internet protocol and easy to deploy in today's networks and applications. hICN brings many-to-many communications, multi-homing, multi-path, multi-source, and group communications to the Internet protocol without replicated unicast. The project implements novel transport protocols, with a socket API, for real-time and capacity-seeking applications. A scalable stack is available based on VPP, and a client stack is provided to support any mobile and desktop operating system.
- Getting started
- Core library
- VPP Plugin
- Introduction
- Quick start
- Using hICN plugin
- Getting started
- Setup the host for VPP
- Configure VPP
- Start VPP
- Configure hICN plugin
- Example: consumer and producer ping
- Example: packet generator
- Transport library
- Portable forwarder
- Face manager
- Overview
- Developing a new interface
- Control plane support
- NETCONF/YANG
- Release note
- Routing plugin for VPP and FRRouting for OSPF6
- Telemetry
- Utility applications
- Introduction
- Using hICN utils applications
- Executables
- Client/Server benchmarking using hiperf
- Applications
https://hicn.readthedocs.io/en/latest/
2020-09-19T00:14:52
CC-MAIN-2020-40
1600400189264.5
[]
hicn.readthedocs.io
Troubleshooting in SQL Server Transactional Replication on *very* busy Subscribers + WinDbg + WPR & WPA
Hello again! Moving forward to the subject, I worked on a specific situation where replication latency increased above acceptable values. The scenario was the following:
- Publisher to Distributor | Log Reader Delivery Latency = Ok
- Distributor to Subscriber | Distributor Delivery Latency = Nok
- MSdistribution_history was throwing state 2 messages (distrib agent reader thread waits on writer thread)
- REPL stored procedures *sp_MSins/upd/del_*** were executing fast (at the microsecond scale)
At this point I had all the evidence that the issue was indeed writing to the subscriber, but the replication stored procedures were executing fast at the subscriber and the volume of data being replicated hadn't exponentially changed! So... I was intrigued. Basically, replication was telling me that the SQL Server engine was too slow, but SQL Server was telling me it was executing its stuff fast enough.
Well... Let's take full memory dumps of DISTRIB.exe and check whether there is anything interesting to find. I took 2 dumps using procdump with a 5-minute delay between them.
Looking at the debugger and analyzing the dump, an exercise has to be done to guess the functionality of the functions based on their names (ok... I have to admit, I miss having source code access and private symbols back when I was at Microsoft to better troubleshoot, but... it can still be done using the public symbols to a certain degree of correctness).
One of the first things I do while analyzing dumps is to use "!runaway", "!runaway 2" and "!runaway 4" between the 2 dumps. The main idea is to analyze the delta of user time and kernel time for each thread and see if anything looks wrong. Nothing looked wrong in this 5-minute delta; I can see that threads 0, 5 and 6 had work to do, with more weight on threads 0 and 6. Well, this is a sign they are working.
Let's check what threads 0 and 6 are doing, since these are the ones with the higher User Mode Time and Kernel Mode Time.
Thread 0 is likely related to the reader thread - sp_MSget_repl_commands, which pulls data from the distribution database and stores it in an internal queue. The call stack looks good.
Going to thread 6, this is likely related to the writer thread, basically running the sp_MSins/upd/del procedures in the subscriber database (writing queued commands). The call stack looks good.
So at this stage, I can't see any issue with the Distrib.exe (calls rdistcom.dll) process, and I also can't see any issue with the duration of the replication stored procedures. But while I was analyzing sys.dm_exec_procedure_stats I noticed that the cached time and the last execution time were almost the same! This moved the investigation into classic "plan cache internals" mode, and memory pressure was quickly found. In this case custom replication stored procedures are being used, and although their logic is reduced to a minimum, they take too much time to compile (sys.dm_exec_cached_plans is your friend here; use it and look for the compile time).
Compile time > 1.000.000 microseconds
Execution time < 100 microseconds
Well, there are already many blogs and official Microsoft documentation about plan cache internals and memory pressure, and also about replication troubleshooting.
Driven by curiosity, I ran a WPR trace in a test environment, just to see what writer thread latency would look like under WPA; here it is below (I basically modified the replication procedures by adding a WAITFOR DELAY to simulate slow procedure execution):
The resolution for this situation was around changing memory parameters to cover a very specific scenario.
Hope you find this reading interesting. All the best!
Paulo Condeça.
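To reproduce the check that surfaced the problem, comparing when a replication procedure's plan was cached against when it last ran, a small query against sys.dm_exec_procedure_stats on the subscriber works. The sketch below uses Python with pyodbc; the connection string is a placeholder, and filtering on the sp_MS prefix is an assumption based on the replication procedure naming convention mentioned above.

import pyodbc

# Placeholder connection string for the subscriber instance/database.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=subscriber-host;DATABASE=SubscriberDB;Trusted_Connection=yes;"
)

QUERY = """
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.cached_time,
       ps.last_execution_time,
       ps.execution_count
FROM sys.dm_exec_procedure_stats AS ps
WHERE OBJECT_NAME(ps.object_id, ps.database_id) LIKE 'sp_MS%'
ORDER BY ps.cached_time DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, cached, last_exec, count in conn.cursor().execute(QUERY):
        # A cached_time very close to last_execution_time despite frequent executions
        # hints that plans are being evicted and recompiled (memory pressure).
        print(f"{name}: cached={cached} last_exec={last_exec} executions={count}")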
https://docs.microsoft.com/en-us/archive/blogs/paulocondeca/troubleshooting-in-sql-server-transactional-replication-on-very-busy-subscribers-winddbg-wpr-wpa
2020-09-19T00:18:51
CC-MAIN-2020-40
1600400189264.5
[array(['https://msdnshared.blob.core.windows.net/media/2017/05/72.jpg', '7'], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/2017/05/8.jpg', '8'], dtype=object) ]
docs.microsoft.com
There are three main modes in Editor: Basic, Advanced and Expert. By default it will open in Basic mode. Users can change the mode by clicking on the dropdown bar in the complexity section of the Home Menu. Basic mode is the first mode. Editor defaults to basic mode. Basic mode allows users to do simple flow control and create simple scenarios. Basic mode includes: Advanced mode contains more nodes and ways to interact with nodes. Advanced mode allows more experienced users to harness more powerful properties of Modest3D to create complex scenarios. Advanced mode includes: Expert mode is best used by people who have experience with programming, as it contains features allowing users to harness to the fullest extent the power of Modest3D, but at the loss of simplicity. Expert mode includes: Programmer mode is the most advanced version of Editor. It is not available by default, as it it too complex for users to use without extensive backgrounds in programming. To enable Programmer Mode, open up the settings windows, and type 1-3-3-7. This will create a new option under modes, called Programmer Mode. Select it to continue in Programmer mode. Programmer Mode allows you to access the Engine behind Modest3D, and use the Editor in more ways than most users, and features considered too complex or unintuitive for an average user. Most conventions in most .NET languages are supported, such as Lists, loops, etc. In programmer mode, the context menu is essentially useless when it comes to picking things out in the menus, because of the vast feature set. Instead, you should get acquainted with typing what you need, after right-clicking. This will search all nodes in programmer mode, allowing you to locate needed nodes with ease.
https://docs.modest3d.com/Editor_Documentation/Basic,_Advanced_and_Expert_Modes
2020-09-19T00:16:50
CC-MAIN-2020-40
1600400189264.5
[]
docs.modest3d.com
Surface Duo management overview
Commercial customers can manage Surface Duo using any of various Enterprise Mobility Management (EMM) solutions, each of which provides a consistent set of cloud-based device management capabilities, whether managing employee-owned or company-owned devices. You can manage Duo via the Microsoft EMM solution, which uses a unified console, Microsoft Endpoint Manager, and extensible components like Microsoft Intune. Alternatively, you can use any EMM provider in Google's Android ecosystem. In some cases, third-party EMM solutions provide additional support to meet specific scenarios that may be useful depending on your environment. To compare EMM solutions, refer to the Android Enterprise Solutions Directory.
Endpoint Manager with Intune lets you manage Duo with the latest mobile device management policies as well as earlier technologies such as Exchange ActiveSync. If you already use Exchange ActiveSync settings to manage mobile devices, you can apply those settings to Duo devices with Intune using an Email device-configuration profile. For more information, see Add email settings to devices using Intune. Profiles, the primary means of managing devices in Intune, provide default settings that you can customize to meet the needs of your organization.
https://docs.microsoft.com/en-us/surface-duo/surface-duo-manage
2020-09-19T00:57:28
CC-MAIN-2020-40
1600400189264.5
[]
docs.microsoft.com
When models and objects are introduced in the setup node, the developer is able to position, rotate, highlight, adjust colors, control visibility, scale and rename the object. By introducing an object in the setup node, you are setting the model or object into the start position for the lesson. A scene model has many properties that can be modified from the setup node. Effects are the visual changes you can make to your model. You can pick an effect type from the Effect Type dropdown menu: To change the color used on the effects you can check the effect color checkbox and choose a color from the color picker. You can also highlight your model by checking the is Highlighted checkbox and choose a highlight color that will be separate from the effect color. A highlighted object will always be visible through any overlaying objects.
https://docs.modest3d.com/Editor_Documentation/Nodes/Setup_Node/Models_Or_Objects
2020-09-19T00:15:20
CC-MAIN-2020-40
1600400189264.5
[]
docs.modest3d.com
- 11 Dec 2019, 9:50
- Updated: 11 Dec 2019, 11:32
PAYDAY lender PiggyBank has gone bust, leaving thousands of customers in limbo over repayments and compensation.
The business specialised in lending between £100 and £1,000 to people with poor credit, charging sky-high interest rates of up to 1,698.1 per cent APR. The short-term loans had to be repaid over periods of between a week and five months.
According to its website, the business went into administration on 5 December. It was last reported to have around 45,000 customers on its books.
The struggling firm had already been temporarily banned from lending money in July this year over concerns that it was lending irresponsibly.
It is the latest in a string of payday lenders entering administration following the demise of one of the UK's biggest short-term lenders, Wonga, in August last year.
Are you due a payday loan refund?
Millions of payday loan customers may be due refunds. Compensation or a refund is often offered where a loan was mis-sold or where affordability checks were not strict enough. Here is everything you need to know:
- Customers who have paid off payday loan debts can still claim. Even if you have paid off your debts, you may still be able to get a refund if you struggled to repay the money at the time.
- If you are still paying off payday loan debts you can still complain. You can complain if you have struggled to make repayments. If your complaint is successful, it could reduce the amount you owe.
- You can still claim if the firm no longer exists. Larger firms such as Wonga and QuickQuid no longer operate, but that does not mean you cannot get some money back. Customers can still make complaints about companies that no longer operate, although they will have to apply directly to the administration firms and are less likely to receive a refund. Even so, if a complaint is successful and debts are still owed, it could mean having to pay back less, so it is still worth complaining.
Lender 247Moneybox closed up shop last week, and QuickQuid, WageDayAdvance and Juo Loans called it a day earlier this year, plunging millions of customers into financial uncertainty.
Customers who still owe PiggyBank money are advised to continue making their repayments as normal. Otherwise, they risk damaging their credit rating or being hit with extra costs as a penalty for late or missed payments.
People who have already submitted compensation claims, and those who are yet to, will be added to a long list of creditors that are owed money. Both are unlikely to get a payout, as larger lenders such as banks and investors will be paid first.
How to claim compensation from payday lenders
If you think you are due compensation from a payday lender, here is how to claim, according to money writer DebtCamel:
You need to show that you could not afford to take out the loan at the time you borrowed it. If taking out the loan meant you could not pay your bills or other debts, then you were irresponsibly lent to. You may also be entitled to compensation if you had any late repayments, or if you took out back-to-back loans, because this shows that you really could not afford to take out a new one.
Check back through your emails, bank statements and credit report for evidence.
You will need to write a formal complaint letter to each lender explaining how you were irresponsibly lent to, and include the evidence. You will need to mention "unaffordable loans" and ask for a refund of the interest and charges you paid, plus the 8 per cent Ombudsman interest on top.
Make copies of all the evidence before sending, in case anything happens to it. Also ask for the loan to be removed from your credit record.
There is a letter template here.
Wait up to eight weeks to hear back from them. If you are not happy with the answer, or they do not get back to you, contact the Financial Ombudsman.
Wonga customers in the same position have said that they have since received compensation payouts, even after it went bust, although these have been much smaller than expected.
People are also being urged to stay alert to scammers who may be trying to take advantage of the firm's demise. Customers are advised to ignore emails and phone calls that ask them to change the bank account they normally make repayments to.
https://docs.securtime.in/index.php/2020/09/04/piggybank-switches-into-management-making-1000s-of/
2020-09-18T22:39:21
CC-MAIN-2020-40
1600400189264.5
[]
docs.securtime.in
Static and Workspaces Directories
Last Updated: May 2020
1. Create Static Directory
Static files include all files in the public or static directories of Tethys Portal and apps; examples include JavaScript, CSS, and images. As the name implies, static files are not dynamically generated and can be served directly by NGINX, which will be able to do so much more efficiently than the Daphne-Django server could. You will need to collect all of the static files into one directory for NGINX to be able to host them more easily. This can be done as follows:
Get the value of the static directory from the STATIC_ROOT setting:
tethys settings --get STATIC_ROOT
Tip: You may set the STATIC_ROOT variable to point at whichever directory you would like as follows:
tethys settings --set STATIC_ROOT /my/custom/static/directory
Create the static directory if it does not already exist:
sudo mkdir -p <STATIC_ROOT>
sudo chown -R $USER <STATIC_ROOT>
Note: Replace <STATIC_ROOT> with the value returned by the previous command (see step 1.1).
Collect the static files to the STATIC_ROOT location:
tethys manage collectstatic
2. Create App Workspaces Directory
The app workspaces directory is one location where all app workspaces are collected. The app workspaces typically store files that are generated by the app while it is being used, and often this data needs to be preserved. Collecting all of the workspaces of apps in a single location makes it easier to provision storage for the workspaces and back up the data contained therein. Set up the app workspaces directory as follows:
Get the value of the workspaces directory from the TETHYS_WORKSPACES_ROOT setting:
tethys settings --get TETHYS_WORKSPACES_ROOT
Tip: You may set the TETHYS_WORKSPACES_ROOT variable to point at whichever directory you would like as follows:
tethys settings --set TETHYS_WORKSPACES_ROOT /my/custom/workspaces/directory
Create the workspaces directory if it does not already exist:
sudo mkdir -p <TETHYS_WORKSPACES_ROOT>
sudo chown -R $USER <TETHYS_WORKSPACES_ROOT>
Note: Replace <TETHYS_WORKSPACES_ROOT> with the value returned by the previous command (see step 2.1).
Collect the app workspaces to the TETHYS_WORKSPACES_ROOT location:
tethys manage collectworkspaces
Tip: The TETHYS_WORKSPACES_ROOT directory is one of the recommended directories to back up.
Tip: You can collect both the static files and the app workspaces with a single command:
tethys manage collectall
http://docs.tethysplatform.org/en/latest/installation/production/configuration/basic/static_and_workspaces.html
2020-09-18T22:20:44
CC-MAIN-2020-40
1600400189264.5
[]
docs.tethysplatform.org
Repositories Analytics
- Introduced in GitLab Premium 13.4.
- It's deployed behind a feature flag, enabled by default.
- It's enabled on GitLab.com.
- It's recommended for production use.
- For GitLab self-managed instances, GitLab administrators can opt to disable it.
Enable or disable Repositories Analytics
Repositories Analytics is under development but ready for production use. It is deployed behind a feature flag that is enabled by default. GitLab administrators with access to the GitLab Rails console can opt to disable it.
To enable it:
Feature.enable(:group_coverage_reports)
To disable it:
Feature.disable(:group_coverage_reports)
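For administrators who prefer scripting over the Rails console, GitLab also exposes an instance-level features REST API that can set a feature flag by name. The sketch below is a hedged illustration using Python and requests; it assumes the features API is available in your GitLab version and accepts this flag name, and the instance URL and admin token are placeholders.

import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
ADMIN_TOKEN = "YOUR_ADMIN_TOKEN"            # placeholder admin personal access token
FLAG = "group_coverage_reports"

def set_feature_flag(enabled: bool) -> None:
    # POST /api/v4/features/:name with value=true|false toggles an instance feature flag.
    resp = requests.post(
        f"{GITLAB_URL}/api/v4/features/{FLAG}",
        headers={"PRIVATE-TOKEN": ADMIN_TOKEN},
        data={"value": "true" if enabled else "false"},
    )
    resp.raise_for_status()
    print(resp.json())

set_feature_flag(False)  # disable Repositories Analytics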
https://docs.gitlab.com/ee/user/group/repositories_analytics/index.html
2020-09-19T01:17:01
CC-MAIN-2020-40
1600400189264.5
[]
docs.gitlab.com
9.1.04.001: Patch 1 for BMC Remedy AR System
Patch 1 for BMC Remedy AR System 9.1.04 is available from the FTP sites that are listed in Knowledge Article 000164912. You must log in with your BMC Support ID to access the knowledge article. Download the ARS9104Patch001_9.1.04.001.zip file.
Note: If you open this patch package in Windows Explorer, the package contents may not be visible because of the compression algorithm, and some compressed files might not open. If you face this issue, run a third-party utility such as 7-Zip to extract the package contents.
Where to go from here
- For information about applying the patch, see How to apply the patch.
- For information about the issues corrected in this patch, see Known and corrected issues.
- Upgrading to Remedy with Smart IT 2.0
- Installing Remedy with Smart IT 2.0
Hello, I am busy with an upgrade from 9.0.01 to 18.08. Do I also have to run this patch?
Hi Phindiwe Moshoele, you can upgrade from 9.0.01 to 18.08 without applying any patches. For the supported upgrade paths, see Planning an upgrade in the Remedy ITSM Deployment 18.08 documentation. Regards, Sirisha
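If Windows Explorer cannot show the archive contents, any tool that understands the zip format will do; besides 7-Zip, a few lines with Python's standard zipfile module can extract the package. The destination directory below is a placeholder, not a path from this documentation.

import zipfile
from pathlib import Path

package = Path("ARS9104Patch001_9.1.04.001.zip")  # the downloaded patch package
dest = Path("ARS9104Patch001_extracted")          # placeholder extraction directory

with zipfile.ZipFile(package) as archive:
    archive.extractall(dest)                      # extract all files from the package
    print(f"Extracted {len(archive.namelist())} files to {dest}")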
https://docs.bmc.com/docs/brid91/en/9-1-04-001-patch-1-for-bmc-remedy-ar-system-9-1-04-825209874.html
2020-11-23T20:07:39
CC-MAIN-2020-50
1606141164142.1
[]
docs.bmc.com
Set-AzVMPlan
Sets the Marketplace plan information on a virtual machine.
Syntax
Set-AzVMPlan [-VM] <PSVirtualMachine> [-Name] <String> [[-Product] <String>] [[-PromotionCode] <String>] [[-Publisher] <String>] [-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
Description
The Set-AzVMPlan cmdlet sets the Azure Marketplace plan information for a virtual machine. Before a Marketplace image can be deployed through the command line, programmatic access must be enabled or the virtual machine must be deployed by using the Azure portal.
Parameters
-DefaultProfile: The credentials, account, tenant, and subscription used for communication with Azure.
-Name: Specifies the name of the image from the Marketplace. This is the same value that is returned by the Get-AzVMImageSku cmdlet. For more information about how to find image information, see Navigating and Selecting Azure Virtual Machine images with PowerShell and the Azure CLI in the Microsoft Azure documentation.
-Product: Specifies the product of the image from the Marketplace. This is the same information as the Offer value of the imageReference element.
-PromotionCode: Specifies a promotion code.
-Publisher: Specifies the publisher of the image. You can find this information by using the Get-AzVMImagePublisher cmdlet.
-VM: Specifies the virtual machine object for which to set a Marketplace plan. You can use the Get-AzVM cmdlet to obtain a virtual machine object. You can use the New-AzVMConfig cmdlet to create a virtual machine object.
https://docs.microsoft.com/en-us/powershell/module/az.compute/set-azvmplan?view=azps-5.1.0
2020-11-23T19:01:29
CC-MAIN-2020-50
1606141164142.1
[]
docs.microsoft.com
In September 2016, Gartner analysts published a report on application designs that unintentionally expose services to vast amounts of cyber threats. The analysts explain how typical network designs are not built for a complex and interconnected world of applications that live in the cloud. While the public Internet is a "cesspool" of attacks, digital businesses require more interconnectedness than ever before. Attackers who discover services often find vulnerabilities in applications and application programming interfaces (APIs) that bypass firewalls and intrusion prevention systems (IPS). Attackers will target services, users of the services, or both. Services and applications need to be insulated from the dangers of the public Internet using logical isolation of applications with technologies such as software-defined perimeters (SDPs).
The Fourth Industrial Revolution is the ongoing automation of traditional manufacturing and industrial practices, using modern smart technology. Industry 4.0 is now characterized by dispersed and remote edge deployments with limited human input for protection, thereby requiring even more protection with SDP capabilities. While analysts believe the best line of defense is to completely isolate endpoints from the Internet, turnkey solutions to solve the problem at scale are rare in the market.
Xaptum's ENF secures critical Industry 4.0 sectors such as transportation, energy, and water. These customers often have tens of thousands of edge devices in dispersed and remote locations, potentially running over an untrusted communication path. To enable them to isolate critical assets from the public Internet, Xaptum has deployed a robust, scalable, secure, multi-tenant, global overlay network. Some of the critical benefits are as follows: The ENF uses default-deny firewall rules in isolating remote IP endpoints to mitigate risks of lateral attacks. It provides customers with the macro/microsegmentation capabilities they need to securely accelerate deployments. The overlay provides the same "black cloud" invisibility. Its endpoint-centric firewall enables Zero Trust Networking at scale. Zero Touch Provisioning expedites onboarding, while the use of only familiar networking tools eliminates the need to learn new vendor-specific SDP software or configuration language. As a standards-compliant IP (layer 3) network, the ENF can tunnel multiple industrial automation protocols and is compatible with all device manufacturers, software vendors, and cloud hosts. There are no SDKs or agent platform lock-ins, and clients require only standard TLS & crypto libraries.
To learn more about the solution's interoperability, please refer to the Concepts article.
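The page notes that clients need only standard TLS and crypto libraries rather than a vendor SDK. As a hedged illustration of that claim, the sketch below opens a plain TLS connection using only Python's standard library; the hostname and port are placeholders, and any ENF-specific provisioning or addressing steps are assumptions outside the scope of this snippet.

import socket
import ssl

HOST = "edge-gateway.example.com"  # placeholder endpoint, not an actual ENF address
PORT = 443

# Standard-library TLS: create a default client context (verifies the server
# certificate against the system trust store) and wrap a TCP socket with it.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated:", tls_sock.version(), tls_sock.cipher())
        tls_sock.sendall(b"hello over TLS\n")   # application payload goes here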
https://docs.xaptum.com/concepts/use_cases/industry_4.0/
2020-11-23T18:34:20
CC-MAIN-2020-50
1606141164142.1
[]
docs.xaptum.com
Help Merchant Account Provider
From Payment Processing Software Library
CreditLine Payment Processing Software Bank Help.
Contents
What would you like to do?
- Contact Live Help: All end user support and sales are provided by the CreditLine Dealers.
- Setup Processor: See CreditLine Processor Setup.
- Troubleshoot Merchant Account: See CreditLine Troubleshooting for information on contacting processor support and troubleshooting techniques.
- Change Merchant Account: The license is attached to the merchant account. Please contact your CreditLine Dealer for support.
- Contact Processor PC Support: See the Processor Support Contact List.
- Find Out If Your Software Is Secure and Compliant: Please see the Security Guide and CreditLine Compliance.
- Find Answers: Enter a keyword into the search box on the left of this page, browse through the CreditLine FAQ, or try the Categories on the right.
http://docs.911software.com/credit_card_processing_software/index.php?title=Help_Merchant_Account_Provider
2020-11-23T18:40:57
CC-MAIN-2020-50
1606141164142.1
[]
docs.911software.com
After you've figured out who to include in your smart list, learn how to manage the people in it - this includes finding, merging, deleting, and more. - Add Person to Blocklist - Create a Person Manually - Delete People in a Smart List or List - Export People to Excel from a List or Smart List - Filter Activity Types in the Activity Log of a Person - Find All People in a Revenue Stage - Find Duplicate People with Custom Logic - Find and Merge Duplicate People - Database Dashboard - Locate the Activity Log for a Person - Understanding Anonymous Activity and People - Use Members of List in a Smart List - Using the Person Detail Page - Use Quick Find in a List or Smart List
https://docs.marketo.com/display/public/DOCS/Managing+People+in+Smart+Lists
2020-11-23T18:53:29
CC-MAIN-2020-50
1606141164142.1
[]
docs.marketo.com
.Cmdbuffsize R/O Property (32-bit Platforms.
https://docs.tibbo.com/taiko/sock_cmdbuffsize
2020-11-23T19:05:22
CC-MAIN-2020-50
1606141164142.1
[]
docs.tibbo.com
Using Jackrabbit
If you maintain a bigger website, it might make sense to use Jackrabbit instead of the doctrine-dbal implementation of Jackalope. Jackrabbit performs better in many cases, and as a bonus it also supports versioning of content.
Installation
Use the following command to install the Jackrabbit adapter:
composer require jackalope/jackalope-jackrabbit
In addition to the previous command, you also have to make sure that Jackrabbit is running on your server.
Configuration
Change the config/packages/sulu_document_manager.yaml file to something similar to the configuration below. Mind that it is recommended to pass the URL via an environment variable, which can e.g. be set in your .env file.
parameters:
    env(JACKRABBIT_URL): ''
    env(PHPCR_USER): 'admin'
    env(PHPCR_PASSWORD): 'admin'

sulu_document_manager:
    sessions:
        default:
            backend:
                type: jackrabbit
                url: "%env(JACKRABBIT_URL)%"
                parameters:
                    "jackalope.jackrabbit_version": "%env(JACKRABBIT_VERSION)%"
            workspace: "%env(PHPCR_WORKSPACE)%"
            username: "%env(PHPCR_USER)%"
            password: "%env(PHPCR_PASSWORD)%"
        live:
            backend:
                type: jackrabbit
                url: "%env(JACKRABBIT_URL)%"
                parameters:
                    "jackalope.jackrabbit_version": "%env(JACKRABBIT_VERSION)%"
            workspace: "%env(PHPCR_WORKSPACE)%_live"
            username: "%env(PHPCR_USER)%"
            password: "%env(PHPCR_PASSWORD)%"
Note: The PHPCR_WORKSPACE is something similar to a database name, so it is best practice to give it a similar value, for example su_myproject, in your .env files. The JACKRABBIT_URL needs to point to your Jackrabbit backend; the default URL depends on your OS and Jackrabbit version. The JACKRABBIT_VERSION allows you to enable additional functionality such as UTF-8 support for storing emoticons 🐣. You can use a curl request against your Jackrabbit backend to gather its version:
curl -XGET
Migration
In order to migrate from doctrine-dbal to Jackrabbit, you have to export your data before changing the configuration:
bin/adminconsole doctrine:phpcr:workspace:export -p /cmf cmf.xml
bin/websiteconsole doctrine:phpcr:workspace:export -p /cmf cmf_live.xml
bin/adminconsole doctrine:phpcr:workspace:export -p /jcr:versions jcr.xml
Then change the configuration as explained in the Configuration section above, and execute the following commands to initialize the Jackrabbit workspaces for Sulu:
bin/adminconsole cache:clear
bin/adminconsole sulu:document:initialize
Now execute these commands to clear any previously existing data (first make sure that you really don't need this data anymore):
bin/adminconsole doctrine:phpcr:node:remove /cmf
bin/websiteconsole doctrine:phpcr:node:remove /cmf
# the following command can fail if the node does not exist; ignore the error in that case:
bin/adminconsole doctrine:phpcr:node:remove /jcr:versions
After that you can import the exported data from doctrine-dbal into Jackrabbit by running the following commands:
bin/adminconsole doctrine:phpcr:workspace:import -p / cmf.xml
bin/websiteconsole doctrine:phpcr:workspace:import -p / cmf_live.xml
bin/adminconsole doctrine:phpcr:workspace:import -p / jcr.xml
https://docs.sulu.io/en/2.2/cookbook/jackrabbit.html
2020-11-23T19:12:34
CC-MAIN-2020-50
1606141164142.1
[]
docs.sulu.io
With the URL Content Redirection feature, URL content can be redirected from the client machine to a remote desktop or published application (client-to-agent redirection), or from a remote desktop or published application to the client machine (agent-to-client redirection). For example, an end user can click a link in the native Microsoft Word application on the client and the link opens in the remote Internet Explorer application, or an end user can click a link in the remote Internet Explorer application and the link opens in a native browser on the client machine. Any number of protocols can be configured for redirection, including HTTP, mailto, and callto. - Web browsers - You can type or click a URL in the following browsers and have that URL redirected. - Internet Explorer 9, 10, and 11 - 64-bit or 32-bit Chrome 60.0.3112.101, Official Build or later - URL Content Redirection does not work for links clicked from inside Windows 10 universal apps, including the Microsoft Edge Browser. - Client system - To use URL Content Redirection with the Chrome browser, you must enable the VMware Horizon URL Content Redirection Helper extension for Chrome. This extension is installed, but is not enabled, when you connect to a Connection Server instance on which URL Content Redirection rules are configured. To enable the extension, restart Chrome after you connect to the Connection Server instance and click Enable Extension when Chrome prompts you to enable the extension. - The first time a URL is redirected from the Chrome browser, you are prompted to open the URL in Horizon Client. You must click Open VMware Horizon Client for URL content redirection to occur. If you select the Remember my choice for VMware Horizon Client links check box, this prompt does not appear again. - Remote desktop or published application - A Horizon administrator must enable URL Content Redirection when Horizon Agent is installed. For information, see the Setting Up Virtual Desktops in Horizon and Setting Up Published Desktops and Applications in Horizon documents. - To use URL Content Redirection with the Chrome browser, a Horizon administrator must install and enable the VMware Horizon URL Content Redirection Helper extension on the Windows agent machine. For information, see the Configuring Remote Desktop Features in Horizon document. A Horizon administrator must also configure settings that specify how Horizon Client redirects URL content from the client to a remote desktop or published application, or how Horizon Agent redirects URL content from a remote desktop or published application to the client. For complete information, see the "Configuring URL Content Redirection" topic in the Configuring Remote Desktop Features in Horizon document.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Mac/2006/horizon-client-mac-installation/GUID-253EEE07-F59D-40EB-B784-DE0D6D7EEA51.html
2020-11-23T19:59:40
CC-MAIN-2020-50
1606141164142.1
[]
docs.vmware.com
Dock Bar Access the Dock Bar Theme Options via Appearance > Customize > Dock Bar or via Appearance > Theme Options > Dock Bar. All of the highlighted areas above are examples of the Dock Bar. The Dock Bar can be positioned in a number of areas and contain Social Icons, Shopping Cart, Search, Menu and the Information Panel Widget. This option controls the position of the Dock Bar. See the diagram above for examples of the various positions. Options with the Float setting will set the Dock Bar to float over page content. This option sets how the Dock Panels are displayed. Dock Panels are the drop-down panels that appear when various Dock Icons are clicked, e.g. Shopping Cart, Search, Menu. This option will set the display of the Search Icon within the Dock Bar. For Desktop Only: Control the display of the Information Panel. To add content to this area, add widgets to WordPress Admin > Appearance > Widgets > Information Panel. If you have set the Info Dock Panel option to Dock Icon, you can choose which icon represents it here. Visit the Font Awesome site to choose the icon you want. Enter the Class only, e.g. fa-info. If you have set the Menu In Dock Bar option to On (see here for where to set that option), you can choose which icon represents the Menu here. Visit the Font Awesome site to choose the icon you want. Enter the Class only, e.g. fa-home.
http://you.docs.acoda.com/tag/dockbar/
2020-11-23T19:58:21
CC-MAIN-2020-50
1606141164142.1
[]
you.docs.acoda.com
You upgrade a host interface card (HIC) to increase the number of host ports or to change host protocols. If you are upgrading HICs in a duplex configuration, repeat all steps to remove the other controller canister, remove the HIC, install the new HIC, and replace the second controller canister.
https://docs.netapp.com/ess-11/topic/com.netapp.doc.e-5700-sysmaint/GUID-6E5BBE30-3D6A-4873-923E-43F80AE31A75.html?lang=en
2020-11-23T18:33:36
CC-MAIN-2020-50
1606141164142.1
[]
docs.netapp.com
Depending on whether you are installing Unified Manager on virtual infrastructure or on a physical system, it must meet certain minimum requirements. You must not set any memory limits on the VM where Unified Manager is deployed, and you must not enable any features (for example, ballooning) that hinder the software from utilizing the allocated memory on the system. Additionally, there is a limit to the number of nodes that a single instance of Unified Manager can monitor before you need to install a second instance of Unified Manager. See the Best Practices Guide for more details. Technical Report 4621: Unified Manager Best Practices Guide Memory-page swapping negatively impacts the performance of the system and the management application. Competing for CPU resources that are unavailable because of overall host utilization can degrade performance. The physical or virtual system on which you install Unified Manager must be used exclusively for Unified Manager and must not be shared with other applications. Other applications might consume system resources and can drastically reduce the performance of Unified Manager. If you plan to use the Unified Manager backup and restore feature, you must. The physical system or virtual system on which you install Unified Manager must. You can mount /opt/netapp or /opt/netapp/data on a NAS or SAN device. Note that using remote mount points may cause scaling issues. If you do use a remote mount point, ensure that your SAN or NAS network has sufficient capacity to meet the I/O needs of Unified Manager. This capacity will vary and may increase based on the number of clusters and storage objects you are monitoring. If you have mounted /opt/netapp or /opt/netapp/data from anywhere other than the root file system, and you have SELinux enabled in your environment, you must set the correct context for the mounted directories. See the topic SELinux requirements for mounting /opt/netapp or /opt/netapp/data on an NFS or CIFS share for information about setting the correct SELinux context.
https://docs.netapp.com/ocum-95/topic/com.netapp.doc.onc-um-isg/GUID-DBFB20B9-24F5-482E-A93F-706B66C64AB9.html?lang=en
2020-11-23T19:22:47
CC-MAIN-2020-50
1606141164142.1
[]
docs.netapp.com
Socket Behavior in the HTTP Mode When in the HTTP mode, the socket behaves differently compared to the normal data mode. Incoming connection rejection As was explained in Accepting Incoming Connections, if your device "decides" to reject an incoming TCP connection, it will send out a reset TCP packet. This way, the other host is instantly notified of the rejection. Rules are different for HTTP sockets: If there is an incoming TCP connection to the web server of the device (incoming HTTP connection request), and if your application has one or more sockets that are configured to accept this connection, and if all such sockets are already occupied, then the system will not reply to the requesting host at all. If there is an incoming HTTP connection request, and if your application has no HTTP sockets configured to accept this connection, then the system will still respond with a reset. This behavior allows your application to get away with fewer HTTP sockets. Here is why. If all HTTP sockets are busy and your application sends out a reset, the browser will show a "connection reset" message. If, however, your device does not reply at all, the browser will wait, and resend its request later. Browsers are "patient" — they will typically try several times before giving up. If any HTTP sockets are freed up during this wait, the repeat request will be accepted and the browser will get its page. Therefore, very few HTTP sockets can handle a large number of page requests in a sequential manner and with few rejections. Other differences All incoming data is still stored in the TX buffer (yes, TX buffer). This data, however, is not passed to your program but, instead, is interpreted as an HTTP request. This HTTP request must be properly formatted. The sock object supports GET and POST commands. The RX buffer is not used at all and does not have to be allocated any memory. GET and POST commands can optionally contain "request variables". These are stored into the VAR buffer from which your program can read them out later. No on_sock_data_arrival event is generated when the HTTP request string is received (remember, this data goes into the TX buffer, not the RX buffer). Once the entire request has been received, the socket prepares and starts to output the reply. Your program has no control over this output until a BASIC code fragment is encountered in the HTTP file. The on_sock_data_sent event cannot be used either. When a code fragment is encountered in the HTTP file, control is passed to it, and your program can then perform the desired action, e.g. generate some dynamic HTML content. When this fragment is entered, sock.num is automatically set to the correct socket number. Once the HTTP reply has been sent to the client, the socket will automatically close the connection, as is normal socket behavior for HTTP. A special property, sock.httpnoclose, allows you to change this default behavior and leave the connection open.
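The difference between a reset and a silent drop can be observed from the client side with any ordinary TCP client. The following Python sketch is only an illustration of the retry pattern a browser follows; the device IP address is a placeholder and the timeouts are arbitrary.

# Sketch: client-side retry loop mirroring what a browser does when the device
# silently ignores a connection attempt (all HTTP sockets busy).
# The device address below is a placeholder.
import socket
import time

DEVICE = ("192.168.1.95", 80)

for attempt in range(1, 4):
    try:
        with socket.create_connection(DEVICE, timeout=5) as s:
            s.sendall(b"GET / HTTP/1.0\r\n\r\n")
            print(s.recv(1024).decode(errors="replace"))
            break
    except (socket.timeout, ConnectionRefusedError) as exc:
        # timeout: the request was silently ignored (all HTTP sockets busy);
        # ConnectionRefusedError: the device replied with a TCP reset.
        print(f"attempt {attempt} failed: {exc!r}; retrying")
        time.sleep(2)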
https://docs.tibbo.com/taiko/sock_http_behav
2020-11-23T20:14:14
CC-MAIN-2020-50
1606141164142.1
[]
docs.tibbo.com
Licensing for server groups Because servers in a server group use the same database, they share licenses. Each AR System server must have its own AR Server license key, but the server group feature shares all other BMC product licenses with all of the servers in the group. Because a product license is stored in the database shared by all the servers, you only need to add it once; this registers it for all servers in the group. To add a server license, see Adding or removing licenses. All other license types, such as all types of Fixed and Floating user licenses and application licenses, are stored in the database and are therefore shared by all servers in the server group. You can add these other product licenses at any time. However, for every AR System server except the first one, the AR Server license must be added prior to installing that server.
https://docs.bmc.com/docs/ars2002/licensing-for-server-groups-909634043.html
2020-11-23T20:25:58
CC-MAIN-2020-50
1606141164142.1
[]
docs.bmc.com
Updating Group Membership Server group membership is changed by means of the PUT /pools/default/serverGroups HTTP method and URI. Description This changes the membership of the groups that currently exist for the specified cluster. It does not permit the creation, removal, or renaming of groups. Every existing node must be specified once, and thereby assigned to an existing group. Groups to which no nodes are assigned must be specified as empty. This node-to-group assignment must be specified as a JSON document: when curl is used, the document can be specified either explicitly, as a parameter-value on the command-line; or as a .json file. The request is transactional: it either succeeds completely, or fails without impact. See Server Group Awareness, for a conceptual overview of groups. Curl Syntax curl -d @<jsonInput> -X PUT -u <administrator>:<password> http://<host>:<port>/pools/default/serverGroups?rev=<number> The syntax includes the following: jsonInput. The new configuration of nodes and groups, provided as a JSON document. When curl is used, the document can be provided either explicitly, as a parameter-value on the command-line; or as a .json file. See below for information on format. rev=<number>. The revision number for the existing configuration. This number can be retrieved by means of the GET /pools/default/serverGroups HTTP method and URI: see Getting Server Group Information. Note that this number changes whenever the configuration changes. Node-to-Group Assignment The following JSON document assigns all three of a cluster’s nodes to the first of its two groups, thereby leaving the second group empty. { "groups": [ { "nodes": [ { "otpNode": "[email protected]" }, { "otpNode": "[email protected]" }, { "otpNode": "[email protected]" } ], "uri": "/pools/default/serverGroups/0" }, { "nodes": [], "uri": "/pools/default/serverGroups/<group-2-uuid>" } ] } The value of the groups key is an array, each of whose elements is an object that corresponds to one of the server groups for the cluster. Each group must be specified with the following: nodes. An array of the nodes in the group. Each node is specified with the key otpNode and the value ns_1@node-ip-address. uri. The URI for the server group. This can be retrieved with the GET /pools/default/serverGroups HTTP method and URI: see Getting Server Group Information. Each URI is terminated with the unique UUID string for the group: for the default group, Group 1, this is always 0. Optionally, the name of the group can be specified. This must correspond exactly to the current, established name of the group. This method cannot be used to change that name. Responses Success gives 200 OK, and returns an empty array. Malformed JSON, or failure to address all existing nodes, groups and group-names accurately, gives 400 BAD REQUEST, and returns a Bad input array, containing the submitted JSON document. Failure to authenticate gives 401 Unauthorized. Examples In the following examples, the cluster is considered to have two groups defined: Group 1 contains nodes 10.143.190.101 and 10.143.190.102. Group 2 contains the node 10.143.190.103. The JSON document shown above, in Node-to-Group Assignment, can thus be used to place all nodes in Group 1, leaving Group 2 empty. The JSON document can be specified as a file, as follows: curl -u Administrator:password -d @<jsonInput> -X PUT http://10.143.190.101:8091/pools/default/serverGroups?rev=<number> Alternatively, the JSON document can be provided explicitly, on the command line: curl -u Administrator:password -X PUT http://10.143.190.101:8091/pools/default/serverGroups?rev=<number> -d '{ "groups": [ { "nodes": [ { "otpNode": "[email protected]" }, { "otpNode": "[email protected]" }, { "otpNode": "[email protected]" } ], "uri": "/pools/default/serverGroups/0" }, { "nodes": [], "uri": "/pools/default/serverGroups/<group-2-uuid>" } ] }' Each of these commands moves the node 10.143.190.103 into Group 1. To check results, use the GET /pools/default/serverGroups HTTP method and URI: see Getting Server Group Information.
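The same call can be scripted with any HTTP client. The following Python sketch (using the requests library) first performs the GET request to obtain the current revision number and group URIs, then submits a modified assignment that moves every node into the first group. The host address and credentials are placeholder assumptions taken from the examples above and must be replaced with values from your own cluster.

# Sketch: reassign nodes to server groups via the Couchbase REST API.
# Host, port and credentials below are placeholders.
import json
import requests

BASE = "http://10.143.190.101:8091"
AUTH = ("Administrator", "password")

# 1. Read the current configuration to get the revision URI and group URIs.
current = requests.get(BASE + "/pools/default/serverGroups", auth=AUTH).json()
rev_uri = current["uri"]  # e.g. /pools/default/serverGroups?rev=<number>

# 2. Build the new node-to-group assignment: move every node into the first group.
all_nodes = [{"otpNode": node["otpNode"]}
             for group in current["groups"]
             for node in group["nodes"]]
assignment = {"groups": []}
for i, group in enumerate(current["groups"]):
    assignment["groups"].append({
        "uri": group["uri"],
        "nodes": all_nodes if i == 0 else [],  # empty groups must still be listed
    })

# 3. Submit the assignment; it either succeeds completely or fails without impact.
resp = requests.put(BASE + rev_uri, auth=AUTH, data=json.dumps(assignment))
resp.raise_for_status()
print("Server group membership updated")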
See Also See Getting Server Group Information for getting information on the current node-to-group configuration for the server. See Server Group Awareness, for a conceptual overview of groups. See Manage Groups, for examples of managing groups by means of Couchbase Web Console. See Adding Servers to Server Groups, for information on adding nodes to groups.
https://docs.couchbase.com/server/6.5/rest-api/rest-servergroup-put-membership.html
2020-11-23T18:58:37
CC-MAIN-2020-50
1606141164142.1
[]
docs.couchbase.com
CString cannot call CString.Format() on itself. The call will fail if the CString object itself is offered as a parameter, which can lead to unpredictable results. Use an intermediate temporary CString to avoid the issue. In the following noncompliant example, the defect is triggered because CString str is also used in the parameter list of its own Format() call: CString str = "Some Data"; str.Format("%s%d", str, 123);
https://docs.roguewave.com/en/klocwork/current/cxx.func.cstring.format
2020-11-23T19:04:05
CC-MAIN-2020-50
1606141164142.1
[]
docs.roguewave.com
Connecting External Flash IC The EM510W platform includes the fd. object. For this to work, an external flash IC must be connected to the EM510. As shown in the schematic diagram below, this flash IC is ATMEL AT45DB041. Since the EM510 has a dedicated flash memory configuration, the flash IC will be used exclusively by the fd. object and provide 512KB (4 Mbits) of storage. For the fd. object to work, it must be enabled first. Do this through the Project Settings -> Customize dialog.
https://docs.tibbo.com/taiko/em510_ext_flash
2020-11-23T19:49:39
CC-MAIN-2020-50
1606141164142.1
[array(['em510_to_flash_ic.png', 'em510_to_flash_ic'], dtype=object)]
docs.tibbo.com
This topic describes how to configure a password policy in your instance of Open edX. By default, Open edX imposes a minimal password complexity policy for all users who log in to the LMS or Studio. Under the default password complexity policy, passwords must contain 2 to 75 characters and cannot be similar to the user’s username or email address. You can substitute your own password policy for the default policy. To replace the default password policy with your own, add your password validators to the AUTH_PASSWORD_VALIDATORS configuration key in the lms.yml configuration file. For details, see Configuring a Password Validator. An Open edX password validator is a Python class that specifies how user passwords are validated. You can use whatever criteria you choose to establish a password policy for your Open edX instance. You can create your own custom password validator, or import one or more password validators from password_policy_validators in the edx-platform repository on GitHub. Those password validators include minimum length, maximum length, user attribute similarity, minimum alphabetic, minimum numeric, minimum uppercase, minimum lowercase, minimum punctuation, and minimum symbols. For more information, see also the Django password validation documentation. To configure your Open edX instance to use a particular password validator, add your password validator to the list in the AUTH_PASSWORD_VALIDATORS configuration key in the lms.yml configuration file. For example, to add a password validator named MyPasswordValidatorClass, add an entry like this to the lms.yml configuration file. "AUTH_PASSWORD_VALIDATORS": [ { "NAME": "path.to.file.MyPasswordValidatorClass" } ]
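As an illustration, a minimal custom validator could look like the sketch below. It follows the standard Django password-validator interface (a validate method that raises ValidationError and a get_help_text method); the module path, class name, and the specific rule enforced are only assumptions for the example and must match whatever you reference in AUTH_PASSWORD_VALIDATORS.

# path/to/file.py -- a minimal sketch of a custom password validator.
# The rule enforced here (require at least one digit) is only an example.
from django.core.exceptions import ValidationError
from django.utils.translation import gettext as _


class MyPasswordValidatorClass:
    """Reject passwords that do not contain at least one digit."""

    def validate(self, password, user=None):
        if not any(ch.isdigit() for ch in password):
            raise ValidationError(
                _("This password must contain at least one digit."),
                code="password_no_digit",
            )

    def get_help_text(self):
        return _("Your password must contain at least one digit.")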
https://edx.readthedocs.io/projects/edx-installing-configuring-and-running/en/open-release-juniper.master/configuration/password.html
2020-11-23T19:39:31
CC-MAIN-2020-50
1606141164142.1
[]
edx.readthedocs.io
Hills Clothesline Installation Greater Western Sydney For Hills Clothesline Installation in Greater Western Sydney, We Have What You Need! If you're in need of Hills Clothesline Installation Greater Western Sydney, Lifestyle Clotheslines offers fast and efficient clothesline and washing line installation services throughout the Greater Western Sydney area. The clothesline installation experts at Lifestyle Clotheslines can take care of your clothesline installation within Sydney for your convenience, so feel free to give us a call today.
https://docs.lifestyleclotheslines.com.au/article/783-hills-clothesline-installation-greater-western-sydney
2020-11-23T20:00:33
CC-MAIN-2020-50
1606141164142.1
[]
docs.lifestyleclotheslines.com.au
This section describes Open edX installation options and the components that each option includes. More details about the various options are at the Open edX Installation Options page on the edX wiki. There are three virtual machine options, which install the Open edX software in a virtual Ubuntu machine. If you prefer, you can install into an Ubuntu machine of your own using the Native installation. You can install the Open edX developer stack (devstack), the Open edX full stack (fullstack), or the Open edX analytics developer stack (analytics devstack). Devstack is a Vagrant instance designed for local development. Devstack has the same system requirements as Fullstack. This allows you to discover and fix system configuration issues early in development. Devstack simplifies certain production settings to make development more convenient. For example, nginx and gunicorn are disabled in devstack; devstack uses Django’s runserver instead. For information about devstack and other installation and configuration options from edX and the Open edX community, see the Open edX Installation Options page on the edX wiki. Note Because of the large number of dependencies needed to develop extensions to Open edX Insights, a separate development environment is available to support Analytics development. For more information, see Installing and Starting Analytics Devstack. For more information about Vagrant, see the Vagrant documentation. Fullstack is a Vagrant instance designed for installing all Open edX services on a single server in a production-like configuration. Fullstack is a pre-packaged Native installation running in a Vagrant virtual machine. For information about fullstack and other installation and configuration options from edX and the Open edX community, see the Open edX Installation Options page on the edX wiki. For more information about Vagrant, see the Vagrant documentation. All installations include the following Open edX components: Devstack, fullstack and native installations also include: Fullstack and native also include the following Open edX components: Analytics devstack also includes the following Open edX components: When you install devstack, fullstack, or analytics devstack you can customize the environment. This section provides information about configuration options for Open edX virtual machines. If you are installing an Open edX virtual machine on a Linux or Mac computer, you must configure your installation to use the preview feature in Open edX Studio. In the /etc/hosts file, add the following line: 192.168.33.10 preview.localhost You can customize the location of the Open edX source code that gets cloned when you provision a devstack. You may want to do this to have the Open edX virtual machine work with source code that already exists on your computer. By default, the source code location is the directory in which you run vagrant up. To change this location, use the VAGRANT_MOUNT_BASE environment variable to set the base directory for the edx-platform and cs_comments_service source code directories.
https://edx.readthedocs.io/projects/edx-installing-configuring-and-running/en/open-release-ficus.master/installation/installation_options.html
2020-11-23T19:59:48
CC-MAIN-2020-50
1606141164142.1
[]
edx.readthedocs.io
Setting up SalesForce for Engagement Scoring (Account and Contact Records) One of the primary uses of CaliberMind is to monitor all of the engagement across all your platforms and channels - and roll it up to the Contact and Account level. Once you have that information in CaliberMind, the next best practice is to push those data values down into SalesForce so they are available directly on your accounts, contacts, leads, opportunities, campaigns and other objects. The following article walks you through which objects CaliberMind can push values to, how to set them up, and what fields should be created in SalesForce to support this new information. Note - it is assumed at this point that you already understand how to set up custom fields on objects within SalesForce. There are two primary objects that you will probably want to have CaliberMind push data down to. These are the Account and Contact objects. Since CaliberMind is scoring all events and then rolling them up to the Contact and then the Account level, these two objects are the obvious place to push those values. SalesForce Account Object - The following fields should be added to your Account Object. This will enable CaliberMind to automatically push data at a regular frequency and update these values directly. SalesForce Contact Object - The following fields should be added to your Contact Object. This will enable CaliberMind to automatically push data at a regular frequency and update these values directly. Once you have these fields set up in SalesForce, you can configure CaliberMind to push the fields down on a schedule from the CaliberMind Workflow. You want to use the Create/Update records in SalesForce flow from the flow builder in CaliberMind.
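For reference, the kind of update the CaliberMind workflow performs can also be reproduced manually through the Salesforce API, which is sometimes useful for testing the new fields. The sketch below uses the simple-salesforce Python library; the credentials, record IDs, and the CM_Engagement_Score__c field name are placeholder assumptions, so substitute the API names of the custom fields you actually created.

# Sketch: writing an engagement score into custom Salesforce fields.
# Assumes a custom field CM_Engagement_Score__c exists on Contact and Account;
# credentials and record IDs below are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="[email protected]",
    password="your-password",
    security_token="your-security-token",
)

# Update a single contact's engagement score.
sf.Contact.update("0035e00000XXXXXAAA", {"CM_Engagement_Score__c": 87})

# Update the rolled-up score on the parent account.
sf.Account.update("0015e00000YYYYYAAA", {"CM_Engagement_Score__c": 342})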
https://docs.calibermind.com/article/sweyr8x0t9-setting-up-sales-force-for-engagement-scoring
2020-11-23T19:12:37
CC-MAIN-2020-50
1606141164142.1
[]
docs.calibermind.com
This article covers various levels of consistency guarantees with regards to: - receiving messages - updating user data - sending messages It does not discuss the transaction isolation aspect. Transaction isolation applies only to the process of updating the user data, it does not affect the overall coordination and failure handling. Transactions Four levels of guarantees with regards to message processing are offered. A level's availability depends on the selected transport. Transaction levels supported by NServiceBus transports The implementation details for each transport are discussed in the dedicated documentation sections. They can be accessed by clicking the links with the transport name in the following table: Transaction scope (Distributed transaction) In this mode the transport receive operation is wrapped in a TransactionScope. Other operations inside this scope, both sending messages and manipulating data, are guaranteed to be executed (eventually) as a whole or rolled back as a whole. If required, the transaction is escalated to a distributed transaction (following two-phase commit protocol coordinated by MSDTC) if both the transport and the persistence support it. A fully distributed transaction is not always required, for example when using SQL Server transport with SQL persistence, both using the same database connection string. In this case the ADO.NET driver guarantees that everything happens inside a single database transaction and ACID guarantees are held for the whole processing. Transaction scope mode is enabled by default for the transports that support it (i.e. MSMQ and SQL Server transport). It can be enabled explicitly with the following code: var transport = endpointConfiguration.UseTransport<MyTransport>(); transport.Transactions(TransportTransactionMode.TransactionScope); Atomicity and consistency guarantees In this mode handlers will execute inside a TransactionScope created by the transport. This means that all the data updates and queue operations are all committed or all rolled back. A distributed transaction between the queueing system and the persistent storage guarantees atomic commits but guarantees only eventual consistency. Consider a system using MSMQ transport and RavenDB persistence implementing the following message exchange scenario with a saga that models a simple order lifecycle: OrderSagareceives a StartOrdermessage - New OrderSagaDatainstance is created and stored in RavenDB. OrderSagasends VerifyPaymentmessage to PaymentService. - NServiceBus completes the distributed transaction and the DTC instructs MSMQ and RavenDB resource managers to commit their local transactions. StartOrdermessage is removed from the input queue and VerifyPaymentis immediately sent to PaymentService. - RavenDB acknowledges the transaction commit and begins writing OrderSagaDatato disk. PaymentServicereceives VerifyPaymentmessage and immediately responds with a CompleteOrdermessage to the originating OrderSaga. OrderSagareceives the CompleteOrdermessage and attempts to complete the saga. OrderSagaqueries RavenDB to find the OrderSagaDatainstance to complete. - RavenDB has not finished writing OrderSagaDatato disk and returns an empty result set. OrderSagafails to complete. In the example above the TransactionScope guarantees atomicity for the OrderSaga: consuming the incoming StartOrder message, storing OrderSagaData in RavenDB and sending the outgoing VerifyPayment message are committed as one atomic operation. 
The saga data may not be immediately available for reading even though the incoming message has already been processed. OrderSaga is thus only eventually consistent. The CompleteOrder message needs to be retried until RavenDB successfully returns an OrderSagaData instance. Transport transaction - Receive Only In this mode the receive operation is wrapped in a transport's native transaction. This mode guarantees that the message is not permanently deleted from the incoming queue until at least one processing attempt (including storing user data and sending out messages) is finished successfully. See also recoverability for more details on retries. Messages that are required to be sent immediately should use the immediate dispatch option which bypasses batching. Use the following code to use this mode: var transport = endpointConfiguration.UseTransport<MyTransport>(); transport.Transactions(TransportTransactionMode.ReceiveOnly); Consistency guarantees In this mode some (or all) handlers might get invoked multiple times and partial results might be visible: - partial updates - where one handler succeeded updating its data but the other didn't - partial sends - where some of the messages have been sent but others not When using this mode all handlers must be idempotent, i.e. the result needs to be consistent from a business perspective even when the message is processed more than once. See the Outbox section below for details on how NServiceBus can handle idempotency at the infrastructure level. Transport transaction - Sends atomic with Receive Some transports support enlisting outgoing operations in the current receive transaction. This prevents messages being sent to downstream endpoints during retries. Use the following code to use this mode: var transport = endpointConfiguration.UseTransport<MyTransport>(); transport.Transactions(TransportTransactionMode.SendsAtomicWithReceive); Consistency guarantees This mode has the same consistency guarantees as the Receive Only mode, but additionally it prevents occurrence of ghost messages since all outgoing operations are atomic with the ongoing receive operation. Unreliable (Transactions Disabled) Disabling transactions is generally not recommended, because it might lead to message loss. It might be considered if losing some messages is not problematic and if the messages get outdated quickly, e.g. when sending readings from sensors at regular intervals. var transport = endpointConfiguration.UseTransport<MyTransport>(); transport.Transactions(TransportTransactionMode.None); Outbox The Outbox feature provides idempotency at the infrastructure level and allows running in transport transaction mode while still getting the same semantics as Transaction scope mode. exactly-once delivery. When using the Outbox, any messages resulting from processing a given received message are not sent immediately but rather stored in the persistence database and pushed out after the handling logic is done. This mechanism ensures that the handling logic can only succeed once so there is no need to design for idempotency. Avoiding partial updates In transaction modes lower than TransactionScope there is a risk of partial updates because one handler might succeed in updating business data while another handler fails. To avoid this configure NServiceBus to wrap all handlers in a TransactionScope that will act as a unit of work and make sure that there are no partial updates. 
Use the following code to enable a wrapping scope: var unitOfWorkSettings = endpointConfiguration.UnitOfWork(); unitOfWorkSettings.WrapHandlersInATransactionScope(); Controlling transaction scope options The following options for transaction scopes used during message processing can be configured. Isolation level NServiceBus will by default use the ReadCommitted isolation level. Change the isolation level using: var transport = endpointConfiguration.UseTransport<MyTransport>(); transport.Transactions(TransportTransactionMode.TransactionScope); transport.TransactionScopeOptions( isolationLevel: IsolationLevel.RepeatableRead); The only recommended isolation levels to use with the TransactionScope guarantee are ReadCommitted and RepeatableRead. Using lower isolation levels may lead to subtle errors in certain configurations that are hard to troubleshoot. Transaction timeout NServiceBus will use the default transaction timeout of the machine the endpoint is running on. Change the transaction timeout using: var transport = endpointConfiguration.UseTransport<MyTransport>(); transport.Transactions(TransportTransactionMode.TransactionScope); transport.TransactionScopeOptions( timeout: TimeSpan.FromSeconds(30)); Alternatively, set it via a config file using the Timeout property of the DefaultSettingsSection.
https://docs.particular.net/transports/transactions?version=core_7.2
2020-11-23T19:14:38
CC-MAIN-2020-50
1606141164142.1
[]
docs.particular.net
Before proceeding to configure the Service Principal Name field, ensure that you have configured the following on your Active Directory server: - Configure the server principal name for the host or service in the Active Directory Domain Services (AD DS). - On the Windows server, start the Server Manager. - Create an AD user account. - Configure the Service Principal Name (SPN) for the domain account by running the ktpass command. For more information, refer to. Example usageTo configure the SPN for the domain account of the DDNS service, run the following command: ktpass -out ddns.keytab -princ DNS/[email protected] -mapuser [email protected] -pass P@ssword456 -crypto RC4-HMAC-NT -kvno 1 -ptype KRB5_NT_PRINCIPAL Example usageTo configure the SPN for the domain account of the client host, run the following command: ktpass -out ddns-client.keytab -princ host/[email protected] -mapuser [email protected] -pass P@ssword123456 -crypto RC4-HMAC-NT -kvno 1 -ptype KRB5_NT_PRINCIPAL In this cast, a client host will need the ddns-client.keytab file to be able to send GSS-TSIG updates using the nsupdate command. - Configure the Active Directory server to send DDNS for its own resource records. - Ensure that "Active Directory Domain Controller Updates" permissions are added and allowed on the Distributed DDNS workflow for the domain managed by Active Directory. - Ensure that the Active Directory domain name appears in the allowed list of Manage Domain Controllers. - Ensure that "Security Client Updates" permissions are added and allowed on the Distributed DDNS workflow for the domain managed by Active Directory and the network (reverse zone) of Active Directory. - Ensure that the IP address of the Primary DNS server is configured to the network adapter of the Active Directory server. - To test if the Active Directory server sends updates for its DNS records, run the following commands: - To verify that Active Directory sends host record and PTR record updates: ipconfig /registerdns - To verify that Active Directory sends other DNS updates: net stop netlogon net start netlogon To successfully send DDNS updates to a Distributed DDNS Service Node with Active Directory, ensure that you have configured the following: - The Distributed DDNS Service Node must be added to the Service Node tab. - Ensure that DDNS service is running. - If you are using Anycast, ensure that Anycast service is configured and running. - Use nsupdate to send DDNS updates. For more information, refer to. To successfully send GSS-TSIG updates to a Distributed DDNS Service Node, ensure that you have configured the following: - Ensure that the krb5-user package is installed on the BDDS. If it is not installed, you can install it using the following command: apt-get install krb5-user - Configure KDC for every domain that clients will send updates to. - Manually add the Kerberos Realm configuration to the /etc/krb5.conf file on the BDDS. The following section must be updated: "[realms] <REALM_NAME> = { kdc = <kdc_address> admin_server = <kdc_address> default_domain = <domain_name> } [domain_realm] .<domain_name> = <REALM_NAME> <domain_name> = <REALM_NAME>"The section should look as follows: "[realms] EXAMPLE.COM = { kdc = 192.168.56.101 admin_server = 192.168.56.101 default_domain = example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM" - Ensure that the domain account for the client has been created on the Active Directory server and that the SPN is configured for that account. 
- Transfer the client keytab file to the client machine. - Run the following command to get the Kerberos ticket from the KDC before sending updates: kinit -kt client.keytab <client_spn> For example: kinit -kt ddns-client.keytab host/[email protected] Use nsupdate -g to send DDNS updates.
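If you want to script this flow, the kinit and nsupdate -g calls can be wrapped in a small helper such as the Python sketch below. It is an illustration only; the keytab file, principal, DNS server address, zone, and record values are assumptions that must be replaced with your own.

# Sketch: obtain a Kerberos ticket and send a GSS-TSIG (secure) DDNS update.
# Keytab, principal, server, zone and record below are placeholders.
import subprocess

KEYTAB = "ddns-client.keytab"
PRINCIPAL = "host/[email protected]"

# 1. Get a Kerberos ticket from the KDC using the client keytab.
subprocess.run(["kinit", "-kt", KEYTAB, PRINCIPAL], check=True)

# 2. Feed an update script to nsupdate -g (GSS-TSIG mode).
update_script = """\
server 192.168.56.10
zone otherzone.net
update delete client1.otherzone.net A
update add client1.otherzone.net 300 A 192.168.56.50
send
"""
subprocess.run(["nsupdate", "-g"], input=update_script, text=True, check=True)
print("Secure DDNS update sent")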
https://docs.bluecatnetworks.com/r/BlueCat-Distributed-DDNS-Administration-Guide/Reference-Active-Directory-service-configuration/22.1
2022-06-25T04:29:57
CC-MAIN-2022-27
1656103034170.1
[]
docs.bluecatnetworks.com
How to perform Bulk Action in Media Library Media Library now lets you perform a bulk action; with this, you can either unselect all or remove all of the selected media assets. To learn your way around it, follow the steps and information below. Step 1: Navigate to Media Library Step 2: Select the Folder with the media assets you want to perform the bulk action on (as shown in the image below) Step 3: Select at least 2 images to enable bulk action Step 4: Bulk Action A. Select all the media assets as shown in the image below: Note: 40 media assets are loaded at a time, so clicking the button will select all 40 visible items. The button text will then change and ask you to select the media assets that are not currently visible as well. As in the screenshot above, you already have 40 assets selected, but clicking "Select All 80 items" will select all the media assets in the folder. Now you have the option to either clear the selection or perform the bulk action. B. Perform Bulk Action From bulk action, you can either clear the selection or remove all of the selected media assets. Step 5: Bulk Remove Selected Assets as shown in the image below A confirmation box will appear asking you to either archive the media assets or delete them permanently. Select your preference. After successfully performing the action, the folder will be empty.
https://docs.contentstudio.io/article/890-how-to-perform-bulk-action-in-media-library
2022-06-25T04:03:10
CC-MAIN-2022-27
1656103034170.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/618e5e3712c07c18afde74ce/file-ypPEDhHn4Y.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/618e621512c07c18afde74e4/file-LiJDbyZPSX.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/618e629012c07c18afde74e8/file-GrZO0qSg7p.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/6193493b64e42a671b637415/file-wluIK09cGo.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/61934a7264e42a671b637418/file-9A84C1GbbO.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/61934bac12c07c18afde82e7/file-B58OPCaxwM.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/619351b464e42a671b637427/file-xh9ZrRz67Q.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/619359c49ccf62287e5f6be6/file-UBgRPL5dp6.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/61935a3defc78d0553e5a830/file-7RP0UDathx.png', None], dtype=object) ]
docs.contentstudio.io
How the Social Inbox is going to help Android users. Content Studio has finally introduced the much-awaited Social Inbox feature for the android app. Previously this feature was available only in the web app. The Content Studio android app lets you handle conversations from Instagram, Twitter, and Facebook on your mobile. With this, you can support your customers faster and have more successful conversations without needing to start your PC or laptop. The Social Inbox works exactly the same on android devices as it does on the web application. The only limitation right now is that the web app handles two types of customer interactions, conversations and posts, while the android app currently handles only conversations for Instagram, Twitter, and Facebook. Below are some of the social inbox features within the android app: 1- By going to the inbox section on your mobile, you can see the list of your inbox chats as shown in the image below. 2- Above the inbox chat you can see the list of tabs that include: Unassigned, Mine, Assigned, Marked as done, Archived, and All. You can go through this doc to know their exact functions. 3- At the top right corner, you can simply click on the icon highlighted in red to filter your accounts based on your needs. It will show all of your connected accounts as shown in the image, and you can select the accounts that you need by clicking on them. 4- You can also search for conversations in the inbox chat. But do note that conversations are only searched on the basis of username. 5- You can archive a chat or mark it as done by clicking the corresponding options. 6- You can also link your account and view the various chats on Facebook and Twitter. 7- You can also send and receive messages and images (from camera and gallery) as shown in the images below. Imp: You can only handle conversations in the mobile app, while on the web application you can handle both types of customer interactions, i.e. posts and conversations.
https://docs.contentstudio.io/article/920-how-social-inbox-going-to-help-android-users
2022-06-25T05:43:14
CC-MAIN-2022-27
1656103034170.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/621889c3efb7ce7c73443cbd/file-FAE39sJNWQ.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/62188ca2528a5515a2fcc829/file-Pu1lAgL6Rs.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/62188d3cefb7ce7c73443cca/file-XEdnSTsZHl.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/62188156efb7ce7c73443ca3/file-oThSMyJbYd.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/6218ad651173d072c69fbc82/file-rsIiV38dCk.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/62188d75efb7ce7c73443ccc/file-j0JrICzWaT.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/62188dd9528a5515a2fcc838/file-spWEykwPFt.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/62188b6baca5bb2b753c6544/file-Dc7oHPRdRq.png', None], dtype=object) ]
docs.contentstudio.io
This Task will read the contents of the specified file and provide the result as a data output. The following Output property can be connected to a relevant Input of another Task (for example, the Constant Value Input of the Drive Constant Value Task) to store the result in the Specification.
https://docs.driveworkspro.com/Topic/ReadTextFile
2022-06-25T03:55:26
CC-MAIN-2022-27
1656103034170.1
[]
docs.driveworkspro.com
NOS enables you to do the following - Analyze data stored on an external object storage - Read data in CSV, JSON, or Parquet format from an external object storage - Join or aggregate external data to relational data stored in Advanced SQL Engine - Query cold data offloaded to an external object storage - Load data from an external object storage into the database using one SQL request - Write Advanced SQL Engine data to an external object storage. The data to be written can come from a table, derived results, another object store, QueryGrid federated query, and so on. - Foreign Tables - Users with CREATE TABLE privilege can create a foreign table inside the database, point this virtual table to an external storage location, and use SQL to translate the external data into a form useful for business. - Using a foreign table in Advanced SQL Engine gives you the ability to: - Load external data to the database - Join external data to data stored in the database - Filter the data - Use views to simplify how the data appears to your users - Use Delta Lake manifest files - Data read through a foreign table is not automatically stored on disk and the data can only be seen by that query. Data can be loaded into the database by accessing a foreign table using these commands: CREATE TABLE AS ... WITH DATA, CREATE TABLE AS … FROM READ_NOS, and INSERT ... SELECT. - READ_NOS - READ_NOS allows you to do the following: - Perform an ad hoc query on all data formats with the data in-place on an external object storage - List all the objects and path structure of an object storage - List the object store - Discover the schema of the data - Read CSV, JSON, and Parquet data - Bypass creating a foreign table in the Advanced SQL Engine - Load data into the database with INSERT … SELECT where the select references READ_NOS - Use a foreign table to query data stored by READ_NOS - Use Delta Lake manifest files Writing data to an external object storage: - WRITE_NOS - WRITE_NOS allows you to write data from database tables to external object storage and store it in Parquet format. Data stored by WRITE_NOS can be queried using a foreign table and READ_NOS. WRITE_NOS allows you to do the following: - Extract selected or all columns from an Advanced SQL Engine table or from derived results and write to an external object storage in Parquet data format. - Write to Teradata-supported external object storage, such as Amazon S3. - Load data into the database with INSERT ... SELECT where the select references WRITE_NOS - Use a foreign table to query data stored by WRITE_NOS Supported External Object Storage Platforms At the time of printing of this guide, the following external object storage platforms are supported: - Amazon S3 - Microsoft Azure Blob storage - Azure Data Lake Storage Gen2 - Google Cloud Storage - Hitachi Content Platform - MinIO - Dell EMC/ECS - NetApp StorageGRID - IBM Cloud Object Store (IBM COS) - Scality Supported Compression Formats External data may arrive from an object in a compressed format. If that is the case, the data will be decompressed inside the Advanced SQL Engine, but only after decryption has been completed on the object store before being transmitted. GZIP is the only compression format supported for both JSON and CSV. Snappy is supported for Parquet. The database recognizes the ".gz" suffix on the incoming files and performs the decompression automatically. Note, compression may bring some trade-offs, such as CPU overhead versus reduced needed Bandwidth amongst others. 
Encryption To encrypt files written to object store, configure the destination bucket to encrypt all objects using server-side encryption. Server-side encryption at the bucket level is supported by WRITE_NOS, READ_NOS, and foreign tables. Note, all data is transmitted between the Vantage platform and the external object store using TLS encryption, independent of whether the data is encrypted at rest in the object store.
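As a small illustration of the foreign-table workflow described above, the following Python sketch uses the teradatasql driver to query external data through an existing foreign table and then materialize it with CREATE TABLE AS ... WITH DATA. The host, credentials, and the sales_ext table are assumptions for the example; the DDL for defining the foreign table over your object store is not shown here.

# Sketch: querying a foreign table and loading its data into a local table.
# Host, credentials, and table names are placeholders; the foreign table
# sales_ext is assumed to already point at CSV/JSON/Parquet data in an
# external object store.
import teradatasql

with teradatasql.connect(host="vantage.example.com",
                         user="dbc", password="dbc") as con:
    cur = con.cursor()

    # Ad hoc query: filter and aggregate the external data in place.
    cur.execute("SELECT region, SUM(amount) FROM sales_ext GROUP BY region")
    for region, total in cur.fetchall():
        print(region, total)

    # Load the external data into the database in one request
    # (CREATE TABLE AS ... WITH DATA, as described above).
    cur.execute(
        "CREATE TABLE sales_local AS (SELECT * FROM sales_ext) WITH DATA"
    )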
https://docs.teradata.com/r/Teradata-VantageTM-Native-Object-Store-Getting-Started-Guide/July-2021/Welcome-to-Native-Object-Store/NOS-Functionality
2022-06-25T04:49:37
CC-MAIN-2022-27
1656103034170.1
[]
docs.teradata.com
Prerequisite: Subscribe to Vantage on AWS through an As-a-Service contract. - Go to the Teradata Vantage on AWS landing page and enter your contact information. - Enter your system configuration. - Set your preferred Maintenance & Backup Schedule. - Under Account information, select Yes if you are working with any Teradata representatives and enter their contact information. - Agree to the privacy policy and submit the form.
https://docs.teradata.com/r/Teradata-VantageTM-on-AWS-Getting-Started-Guide/November-2020/Create-and-Activate-Accounts/Creating-Your-Vantage-on-AWS-Account
2022-06-25T03:53:15
CC-MAIN-2022-27
1656103034170.1
[]
docs.teradata.com
Zend\Soap\Client Zend\Soap\Client Constructor The Zend\Soap\Client constructor takes two parameters: - $wsdl - the URI of a WSDL file. - $options - options to create the SOAP client object. The most commonly used options include: - 'soap_version' ('soapVersion') - the SOAP version to use (SOAP_1_1 or SOAP_1_2). - 'classmap' ('classMap') - a map of WSDL types to PHP classes.
https://zf2.readthedocs.io/en/release-2.2.3/modules/zend.soap.client.html
2022-06-25T04:28:27
CC-MAIN-2022-27
1656103034170.1
[]
zf2.readthedocs.io
Tabris.js CLI A command line tool to create, build and serve Tabris.js apps. Installation npm install -g tabris-cli Commands tabris init Creates a new Tabris.js app in the current directory. See: Quick Start Guide - Tabris.js Documentation tabris serve [options] Starts a server the Tabris.js developer app can be pointed to. If a build script is present in package.json, it is executed beforehand. When serving a Tabris.js 3.x app the log output of the app will be displayed in the terminal. options -p [path], --project [path] The directory to serve the Tabris.js app from. Needs to contain a package.json and installed “tabris” module. When omitted, the current working directory is served. -m [module], --main [module] Override the “main” field of package.json. The argument must be a valid module id relative to the project root, e.g. “dist/main.js”. -a, --auto-reload Auto reload the application when a source file is modified. -i, --interactive Enable interactive console for JavaScript input. -l, --log-requests Log requests made by the app. -w, --watch Execute the watch instead of the build script given in the package.json of the app. The watch script can be a long-running task. A prewatch script will be executed before watch and before the server is started. --no-intro Do not print the available external URLs or QR code to the console, only the port. They can still be viewed by opening the given port in a browser. --qrcode-renderer Choose a renderer for the printed QR code. utf8(default): based on UTF-8 characters. terminal: based on background color customization of the cursor color. Use “terminal” if the QR code presentation breaks due to the font used in the terminal. --external [url] Uses the given string as the advertised public URL, to the exclusion of all other. Must include protocol and port. Should be used with --port to ensure the actual port matches this one. --port [port] Changes the port the HTTP server listens to. Causes an error if the port is not available. Keyboard Shortcuts While serving a Tabris.js 3.x App there a various shortcuts available that to help with the development process. CTRL + K Prints an overview of available keys combinations. This message is also printed the first time an app connects to the CLI. CTRL + C Terminates the server and exits the CLI. CTRL + R Reloads the currently served app. CTRL + P Choose to print: - u: an XML summary of the current UI state using console.dirxml(). - s: the contents of localStorage/ secureStorage. CTRL + T Toggles the visibility of the developer toolbar. Tabris.js 3.4 and later only. CTRL + X Removes all content from the app’s localStorage and (on iOS) secureStorage. CTRL + S Saves a .json file containing all current content of the app’s localStorage and (on iOS) secureStorage on the developer machine. You will be prompted for the file name/path of the target file. The path is relative to the current working directory and autocompletion via the tab key is supported. CTRL + L Loads a .json file (as created by CTRL + S) and writes its content in to the app’s localStorage/ secureStorage. All previous content will be removed. You will be prompted for the file name/path of the source file. The path is relative to the current working directory and autocompletion via the tab key is supported. Note that you can not load storage data created on an Android device to an iOS device, or vice versa. This is because secureStorage only exists on iOS. 
Environment variable TABRIS_CLI_SERVER_LOG Set TABRIS_CLI_SERVER_LOG=true to log requests to the internal HTTP server of the CLI. Useful for debugging connection issues during app sideloading. tabris build [options] <platform> [cordova-platform-opts] Builds a Tabris.js app for the given platform. To speed up the build, pre-compiled build artifacts are kept in a build cache and are reused in subsequent builds. To clean up the build cache, e.g. after updating Cordova plug-ins, run tabris clean. See: Building a Tabris.js app - Tabris.js Documentation See: Common tabris run and tabris build parameters tabris run [options] <platform> [cordova-platform-opts] Builds a Tabris.js app and runs it on a connected device or emulator. See: Building a Tabris.js app - Tabris.js Documentation See: Common tabris run and tabris build parameters options --target <id> The ID of the target device to deploy the app to. See --list-targets. --list-targets Show a list of available targets to use with --target <id>. tabris clean Removes build artifacts. Common tabris run and tabris build parameters platform ios or android. options Default options: --debug --cordova-build-config=./build.json Note: when neither --emulator nor --device is specified and a device is connected, the app will be built for a device. If no device is connected, the app will be built for an emulator. --variables <replacements> Comma-separated list of variable replacements in config.xml. --variables FOO=bar,BAK=baz will replace all occurrences of $FOO and $BAK in config.xml with respectively bar and baz. Note: per default all environment variables are replaced in config.xml. To prevent that, use the --no-replace-env-vars option. --cordova-build-config <path> Path to a build configuration file passed to Cordova. Relative to the cordova/ directory. See Cordova platform documentation (iOS, Android) for more information about the file format. You may want to include this file in .gitignore since it may contain sensitive information. --debug Perform a debug build. Used for development. --release Perform a release build. Used when building apps for the marketplace of their platform. --emulator Build the app for an emulator. --device Build the app for a device. --no-replace-env-vars Do not replace environment variables in config.xml. See --variables documentation for more information about variable replacement in config.xml. --verbose Print more verbose output. cordova-platform-opts Platform-specific options passed to Cordova. See: Platform-specific options - Cordova CLI Reference
http://docs.tabris.com/latest/tabris-cli.html
2022-06-25T04:37:51
CC-MAIN-2022-27
1656103034170.1
[]
docs.tabris.com
AssociateCustomerGateway Associates a customer gateway with a device and, optionally, with a link. If you specify a link, it must be associated with the specified device. You can only associate customer gateways that are connected to a VPN attachment on a transit gateway or core network registered in your global network. When you register a transit gateway or core network, customer gateways that are connected to the transit gateway are automatically included in the global network. To list customer gateways that are connected to a transit gateway, use the DescribeVpnConnections EC2 API and filter by transit-gateway-id. You cannot associate a customer gateway with more than one device and link. Request Syntax POST /global-networks/globalNetworkId/customer-gateway-associations HTTP/1.1 Content-type: application/json { "CustomerGatewayArn": "string", "DeviceId": "string", "LinkId": "string" } Request Body The request accepts the following data in JSON format. - CustomerGatewayArn The Amazon Resource Name (ARN) of the customer gateway. Type: String Length Constraints: Minimum length of 0. Maximum length of 500. Pattern: [\s\S]* Required: Yes - DeviceId The ID of the device. Type: String Length Constraints: Minimum length of 0. Maximum length of 50. Pattern: [\s\S]* Required: Yes - LinkId The ID of the link. Type: String Length Constraints: Minimum length of 0. Maximum length of 50. Pattern: [\s\S]* Required: No Response Syntax HTTP/1.1 200 Content-type: application/json { "CustomerGatewayAssociation": { "CustomerGatewayArn": "string", "DeviceId": "string", "GlobalNetworkId": "string", "LinkId": "string", "State": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - CustomerGatewayAssociation The customer gateway association. Type: CustomerGatewayAssociation object
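The operation can also be called through the AWS SDKs; for example, the following Python sketch uses boto3. The global network ID, ARN, device ID, and link ID values are placeholders, and the LinkId argument can be omitted when no link is being associated.

# Sketch: associate a customer gateway with a device (and optionally a link)
# using boto3. All IDs and the ARN below are placeholders.
import boto3

# AWS Network Manager API calls are made against the us-west-2 endpoint.
client = boto3.client("networkmanager", region_name="us-west-2")

response = client.associate_customer_gateway(
    GlobalNetworkId="global-network-01231231231231231",
    CustomerGatewayArn="arn:aws:ec2:us-west-2:123456789012:customer-gateway/cgw-0123456789abcdef0",
    DeviceId="device-07f6fd08867abc123",
    LinkId="link-11112222aaaabbbb1",  # optional; omit if not associating a link
)

print(response["CustomerGatewayAssociation"]["State"])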
https://docs.aws.amazon.com/networkmanager/latest/APIReference/API_AssociateCustomerGateway.html
2022-06-25T05:26:25
CC-MAIN-2022-27
1656103034170.1
[]
docs.aws.amazon.com
Field & Object Mapping in Salesforce Field Mapping: telling DayBack about your Salesforce objects Configuring DayBack to work with your Salesforce objects is done almost entirely by filling in the Field Mapping form. The Field Mapping form will describe the fields available in DayBack and a couple of required fields you may need to add to any custom objects. Feel free to disable any fields you don't need or aren't sure about; if there is no "enabled" checkbox beside a field, then the field is required. You complete the calendar settings by mapping the fields DayBack needs to know about in order to show your object (such as "Activity") in the calendar. DayBack only requires a couple of fields in your object, and these are likely only an issue for custom objects; even there you probably already have these fields in your object: - Start - the starting date or date/time of the object. If you only have one date field in your object, like a due date, use that field here. - Display - this is the field (or fields) that shows up when you see the event in the calendar: it's the name of your event, like "Meeting with Tim". - Title - this may be the same field as the first field you choose in Display, but this is the editable title that shows in the popover when you go to edit an item in the calendar. The reason DayBack offers both Display and Title is that you may want to make a formula field for Display to display some concatenated information about the event on the calendar, leaving Title for a field you can actually edit.
https://docs.dayback.com/article/29-field-mapping
2022-06-25T04:59:03
CC-MAIN-2022-27
1656103034170.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568d5975c69791436155c1b3/images/5ec96094042863474d1b4340/file-NJokZcyQfj.png', None], dtype=object) ]
docs.dayback.com
This article does not yet show and describe the graphical user interface of Checkmk version 2.0.0. We will update this article as soon as possible. 1. Introduction Regular expressions – regexes for short – are used in Checkmk for specifying service names, and they are used in many other functions as well. They are character strings serving as templates that (match) or (do not match) strings in specific texts. Regexes can be employed for many practical tasks, for example, to formulate flexible rules that affect all services whose names include foo or bar. Regexes are often confused with search patterns for file names, because both use the special characters * and ?. These so-called globbing patterns however have a quite different syntax, and are not nearly as powerful as the regular expressions. If you are uncertain whether a regular expression is allowed in a particular situation, activate the online help for advice. In this article we will explain the most important uses for regular expressions – but by no means all of them. When the options shown here are insufficient for your needs, for further reference below you can find more comprehensive information. And of course there is always the internet. 1.1. Normal characters and the point With regular expressions it is always a question of a template – the expression – matching a specific text – e.g, a service name. A template can include a string of special characters that have 'magic' significances. All normal characters in the expression simply match themselves. Checkmk does not distinguish between capital and non-capital letters. The CPU load expression thus matches the text CPU load as well as the text cpu LoAd. Note: for entry fields where – without regular expressions – an exact match is required (mainly with host names), case sensitivity will always be essential! The most important special character is the . point. It matches any single character: Example: 1.2. Using a backslash to mask special characters Since the point matches everything, it naturally follows that it also matches a point. Should you wish to explicitly match a point, then the point must be masked by a \ backslash (escape). This similarly applies to all other special characters, as we shall see. These are: \ . * + ? { } ( ) [ ] | & ^ and $. With the \ backslash the following special character is interpreted as normal character. 1.3. Repeating characters One will very often want to define that any string of characters may appear somewhere in an expression. In regexes this is coded with .* (point asterisk). This is actually only a special case. The asterisk can represent any character, which can appear any number of times in a search text. An empty sequence is also a valid sequence. This means that .* matches any character string and that * matches the preceeding character any number of times: The + is almost the same as *, but it allows no empty sequences. The leading character must occur at least once: Should you wish to restrict the number of repetitions, for this purpose there is a syntax with braces with which a precise number or a range can be specified: A question mark is the abbreviation for {0,1} – i.e. something that appears once, or never. It thus designates the preceding character as optional: 1.4. Character classes, numerals and letters Character classes allow situations such as 'a numeral must occur here'. To this end set all permitted characters in square brackets. You can also enter ranges with a minus sign. 
Note: The sequence in ASCII-character sets applies here. For example, [abc] specifically stands for one of the letters a, b or c, and [0-9] for any digit – both can be combined. A negation for all of these is also possible: adding a ^ as the first character in the brackets thus allows [^abc] to stand for any character except for a, b or c. Character classes can of course be combined with other operations. Here are some abstract examples: The following are a few practical examples: Note: If you need one or the other of the characters - or ] you will need to use a trick. Simply code a - directly at the end of the class – as shown in the preceding example. With this it will be clear to the regex interpreter that it can’t be a sequence. Code the square brackets as the first character in the class. Since no empty classes are permitted it will be interpreted as a normal character. A class with precisely these two characters will look like this: []-]. 1.5. Beginning and end, prefix, suffix and infix When comparing regular expressions with service names and other elements, Checkmk always verifies that the expression matches at the beginning of the text. The reason is that this is what you usually need. A rule in which the terms CPU and core are specified for services thus applies to all services whose name begins with one of these terms: This is described as a prefix match. Should you require an exact match, this can be accomplished by appending a $. This effectively matches the end of the text. It is sufficient if the expression matches at any location in the text – a so-called infix match. This is achieved by prepending the familiar .*: An exception to the rule that Checkmk always uses a prefix match is the Event Console (EC), which always works with an infix match – so that only containedness is checked. Here, by prefixing ^, a match for the beginning can be forced – a prefix match in other words. 1.6. Alternatives With a | vertical bar – an OR-link – you can define alternatives: 1|2|3 thus matches 1, 2 or 3. If the alternatives are required in the middle of an expression, enclose them in brackets '()'. 1.7. Match groups In the Event Console, in Business Intelligence (BI) and also in Bulk renaming of hosts there is the possibility of referring to text components that are found in the original text. For this, patterns in regular expressions are marked with brackets. The text component that matches the first bracketed expression will be available in the substitution as \1, the second expression as \2, etc. The image below shows such a rename. All host names that match the regular expression server-(.*)\.local will be substituted with \1.servers.local. In doing so the \1 represents the exact text that will be 'captured' by the .* in the brackets: In a concrete example, server-lnx02.local will be renamed to lnx02.servers.local. Groups can of course also be combined with the repetition operators *, +, ? and {… }. Thus for example the expression (/local)?/share matches /local/share as well as /share. 2. Table of all special characters Here is a summary of all of the special characters as described above and the functions performed by the regular expressions as used in Checkmk: The following characters must be escaped with a backslash if they are to be explicitly used: \ . * + ? { } ( ) [ ] | & ^ $ 3. If you’d like to learn the full details Back in the '60s, Ken Thompson, one of the inventors of UNIX, had already developed the first regular expressions in their current form – including today’s standard Unix command grep.
Since then countless extensions and dialects have been derived from standard expressions – including extended regexes, Perl-compatible regexes and a very similar variant in Python. Under Filters in views Checkmk utilises POSIX extended regular expressions (extended REs). These are analysed in the monitoring core, which is written in C, using the regex functions of the C library. A complete reference for this subject can be found in the Linux man page for regex(7): OMD[mysite]:~$ man 7 regex In all other locations all of Python’s other options for regular expressions are additionally available. These apply to, among others, the configuration rules, the Event Console and Business Intelligence (BI). The Python regexes are an enhancement of the extended REs, and they are very similar to those from Perl. They support, e.g., the so-called negative lookahead, a non-greedy asterisk *, or a forced differentiation between upper and lower cases. The detailed options for these regexes can be found in the Python online help for the re module: OMD[mysite]:~$ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import re >>> help(re) Help on module re: NAME re - Support for regular expressions (RE). FILE /usr/lib/python2.7/re.py MODULE DOCS docs.python.org/library/re DESCRIPTION
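As a small illustration of the difference between the prefix match used for service names and the infix match used by the Event Console, here is a sketch using Python's re module (note that, unlike Checkmk's service matching, Python is case-sensitive by default; add re.IGNORECASE to mimic Checkmk exactly):

>>> import re
>>> bool(re.match(r"CPU|core", "CPU utilization"))   # prefix match: the expression must match at the start
True
>>> bool(re.match(r"CPU|core", "Number of cores"))
False
>>> bool(re.search(r"CPU|core", "Number of cores"))  # infix match: a hit anywhere in the text is enough
True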
https://docs.checkmk.com/latest/en/regexes.html
2021-05-06T06:22:28
CC-MAIN-2021-21
1620243988741.20
[array(['../images/regexes_servicematch.png', 'regexes servicematch'], dtype=object) array(['../images/bulk_renaming_regex.jpg', 'bulk renaming regex'], dtype=object) ]
docs.checkmk.com
Upgrading. Requirements¶ Migration Steps¶ Distro packages¶ If you are using distro packages, they should handle the upgrade process themselves. You may need to perform some changes after the upgrade is done; please check the distro package's documentation for those instructions. Docker images¶ If you are using container images, please refer to their documentation or release notes for steps to upgrade to a new version. Virtualenv Install¶ If you are following the virtualenv installation guide from here, these are the steps to upgrade to a newer version of Mailman Core. First, you need to stop the running Mailman services before the upgrade to make sure that you don't end up in a bad state. You can stop the two systemd services, for Core and Mailman-Web: $ sudo systemctl stop mailman3 $ sudo systemctl stop mailmanweb Then you need to switch to the mailman user and activate the virtualenv: $ sudo su mailman $ source /opt/mailman/venv/bin/activate Finally, upgrade your packages: (venv) $ pip install -U mailman postorius django-mailman3 hyperkitty mailman-web Warning It is important to upgrade all the packages together since they depend on each other for critical functionality. There are no compatibility guarantees with a new version of one package and an old version of another package. We often update the minimum version required for a dependency in the packages' metadata, but that doesn't always result in new versions being installed. There is also no direct dependency between the Django packages and the mailman package, but they still expect the latest version of Mailman Core. Post upgrade, you need to do a bunch of other tasks. First would be to make sure that your database schema is updated to handle the new versions: (venv) $ mailman-web migrate Then, make sure that all the static files, like stylesheets, JavaScript, images etc., are copied to a single place. This is required to have the CSS and JavaScript displayed correctly: (venv) $ mailman-web compress (venv) $ mailman-web collectstatic Update your i18n translation caches to make sure that your installation can pick up any new changes to strings and their translations: (venv) $ mailman-web compilemessages Finally, you can re-start the services to bring up the Web Interface and Mailman Core: $ sudo systemctl start mailman3 $ sudo systemctl start mailmanweb Upgrade to 3.2.0¶ These are specific instructions to upgrade to Mailman Core 3.2.0 or 1.2.0 for the mailman-web components (like Postorius and Hyperkitty) from an older version.
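After the services are back up, it can be worth double-checking that the expected versions are actually installed. A sketch, run inside the activated virtualenv:

(venv) $ pip show mailman postorius hyperkitty | grep -E 'Name|Version'
(venv) $ mailman info

mailman info prints the Core version and configuration it is running with, which also confirms that the mailman command still starts correctly after the upgrade.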
https://docs.mailman3.org/en/latest/upgrade-3.2.html
2021-05-06T05:52:42
CC-MAIN-2021-21
1620243988741.20
[]
docs.mailman3.org
directoryDrop Boolean (default: false). When set, the property allows you to drop only folders for upload. Files cannot be uploaded. In browsers which do not support the directoryDrop feature, the behavior falls back to the default file drop. Example <div> <input name="files" id="files" type="file" /> <div class="dropZoneElement">Drag and drop file here</div> </div> <script> $(document).ready(function() { $("#files").kendoUpload({ async: { saveUrl: "", removeUrl: "" }, directoryDrop: true }); }); </script>
https://docs.telerik.com/kendo-ui/api/javascript/ui/upload/configuration/directorydrop
2021-05-06T06:21:09
CC-MAIN-2021-21
1620243988741.20
[]
docs.telerik.com
Import content¶ To. Creating the template file¶ - From the Admin menu go to Content - Click on Import next to the content type you want to create the template file for, e.g. Taxon description - Click on the Download link and open the file in Excel. You will see the different Scratchpad fields as column headers. Required fields are shown in Red - Fill the template file with your data and save Create the pre-populated template file¶ - From the Admin menu go to Content - Click Import next to the content type into which you want to import data, e.g. Taxon description. - From the Maximum number of rows drop down menu select the option that best matches your file. (e.g. for a excel file with 3500 rows, choose 5000) - Browse for the file and click on the Import button - View your imported data by clicking on the respective tab for the content type in the Main menu or by clicking on Content in the Admin menu and then on View next to the respective content type Other Important Information¶ - Always download an up-to-date template file - the Excel spreadsheets can be used for most content and are dynamically generated. This means that if you add fields you will need to use a new template. - Be patient with importing data - the Excel file needs to be uploaded, parsed, then saved in your Scratchpad. Upload speeds are usually much slower than download speeds, so depending on your internet connection this may take some time (especially for large files >1MB). Importing medium-sized (3000-6000 term) taxonomies with rich data can take 5-15 minutes. - Keep the browser window open when running an import - if you close the browser window the import will stop. - If a taxonomy imports in the wrong order, try running the import again - if you have defined parent child relationships and a child is imported before its parent, it will be placed at the root of a taxonomy. Running an import again will update the taxonomy and the hierarchical relationship should now be correct. - Use GUIDs if you have any - A GUID is a global unique identifier for a record/node. GUIDs can be used to compare/synchronize different databases. Adding a GUID is not required, you only need it if your records/nodes were generated from an established database and you want to be able to update your data from this database at a later stage. Note that the GUID really has to be globally unique, at least across the whole Scratchpad. So it is not enough just add a number. Better is a combination like “Species2000-1”.
https://scratchpads.readthedocs.io/en/latest/import/content.html
2021-05-06T06:06:46
CC-MAIN-2021-21
1620243988741.20
[array(['../_images/ImportTaxDescr.jpg', '../_images/ImportTaxDescr.jpg'], dtype=object) array(['../_images/ImportTaxDescrTemplate.jpg', '../_images/ImportTaxDescrTemplate.jpg'], dtype=object) array(['../_images/AdminTaxonDesc.jpg', '../_images/AdminTaxonDesc.jpg'], dtype=object) ]
scratchpads.readthedocs.io
OASIS Darwin Information Typing Architecture (DITA) TC Kristen James Eberlein ([email protected]), Eberlein Consulting LLC This document is part of a work product that also includes: Lightweight DITA (LwDITA) is a simplified version of document was last revised or approved by the OASIS Darwin Information Typing Architecture (DITA) TC on the above date. The level of approval is also listed above. Check the “Latest version” location noted above for possible later revisions of this document. TC members should send comments on this document to the TC’s email list. Others should send comments to the TC’s public comment list, after subscribing to it by following the instructions at the “Send A Comment” button on the TC’s web page at. When referencing this note, the following citation format should be used:. 3 What is Lightweight DITA? 3.2 Support for non-XML formats 3.3 Development of LwDITA tools and applications 4 Lightweight DITA design 4.1 Components of the LwDITA topic 4.2 Components of the LwDITA map 4.3 Stricter content model 4.4 Subset of reuse mechanisms 4.5 New multimedia components 4.6 LwDITA specialization 5 LwDITA authoring formats 5.1.2 Example of an XDITA topic 5.1.3 Example of an XDITA map 5.2.2 Example of an HDITA topic 5.2.3 Example of an HDITA map 5.3.2 Examples of MDITA topics 5.3.3 Example of an MDITA map 5.4 Authoring cross-format content with LwDITA 5.4.1 Cross-format example: XDITA map 5.4.2 Cross-format example: XDITA topic 5.4.3 Cross-format example: HDITA topic 5.4.4 Cross-format example: MDITA topic Appendix A LwDITA components Appendix A.1 DITA 1.3 element types in LwDITA Appendix A.2 New element types Appendix A.3 DITA 1.3 attributes in LwDITA Appendix B Acknowledgments Appendix C Revision history Lightweight DITA (LwDITA) is a simplified version of the Darwin Information Typing Architecture committee note covers the following points: Lightweight DITA is a work in progress. This committee note outlines the current plans in order to gain design clarity and receive feedback from potential users and implementers. Please note that details might change between the publication of this committee note and the actual release of the Lightweight DITA standard. The following are references to external documents or resources that readers of this document might find useful. This section provides information about terminology and how it is used in this committee note. @data-conref, that are used in HDITA and the extended profile of MDITA to use DITA features such as conref and keyref. @idattribute on the root element, prolog metadata, and optional use of HTML element types. Lightweight DITA is a standards-based alternative for situations in which DITA 1.3 would be too complex or for communities that do not use XML as an authoring platform. DITA 1.3 is a mature architecture with a deep set of advanced features. This maturity can be intimidating for those considering adoption, especially for simple scenarios. While simplified versions of DITA exist, most are vendor-developed and proprietary. A standards-based lightweight alternative will enable the DITA community to offer a common starting point for simple DITA scenarios that remains fully compatible with DITA 1.3. Some authoring communities have strong ties to specific formats, such as Markdown or HTML. While these formats do not have the same expressiveness as XML, they bring with them a set of tools and practices that can be a natural fit with a DITA ecosystem. 
Lightweight DITA defines a lower-function level of interchange and mappings for HTML5 and Markdown, thus becoming the first version of DITA to be truly cross-format —allowing authoring and delivery in a mix of native formats that are all mapped to a common semantic standard. The Lightweight DITA subcommittee began work by identifying key authoring communities that were interested in the benefits that LwDITA could provide; it then identified scenarios including cross-format authoring and reuse. LwDITA represents common ground for the functionality that is needed by the following authoring communities: learning and training, software documentation authored by subject matter experts (SMEs), and marketing content. LwDITA is a proposed standard for expressing simplified DITA documents in XML, HTML5, and Markdown. The core goals of LwDITA are the following: LwDITA is not a replacement for DITA 1.3. Organizations and teams that are already using DITA are encouraged to explore LwDITA, but they are not the primary audience for this proposed lightweight standard. Organizations and individuals that have not adopted DITA, either because XML is not a tool used in their professional communities or they are not familiar with information typing, can rely on LwDITA as their introduction to structured authoring and content reuse. LwDITA is intended to be a conforming subset of DITA 1.3. In order to make this possible, the DITA Technical Committee will release a new multimedia domain for use with DITA 1.3. DITA 1.3 has more power (and thus complexity) than is needed in some authoring situations. LwDITA provides a simpler alternative. While LwDITA supports core features in the DITA standard – semantic tagging, topic orientation, content reuse, conditional processing, and specialization – LwDITA deliberately limits itself to generic structures that are highly applicable across many industries. This results in a much smaller standard in terms of element types, attributes, features, and complexity. Conference presentations and practitioners' blogs occasionally describe DITA as an intimidating grammar with too many document and element types. In the base edition, DITA 1.3 has three document types and 189 element types. In contrast, LwDITA has two document types and 48 element types. 39 of the element types are defined in DITA 1.3, and the other 9 are multimedia element types that are part of a forthcoming domain intended for use with DITA 1.3. This pragmatic design has benefits for both small and large projects, as well as new and existing DITA implementations. Compared to DITA 1.3, the learning curve for LwDITA will be shorter, and implementing LwDITA might involve less change management and, as a result, lower costs. LwDITA adds support for structured authoring in HTML5 and Markdown. New forms of non-XML structured authoring have gained popularity. Authors use the extended semantic markup of HTML5 to create structured documents for the Web. Many in industry and academia have adopted plain text languages like Markdown. In its initial release, LwDITA has three authoring formats: These authoring formats will enable and enhance collaboration across divisional silos. Engineers can author in Markdown, marketing writers can author in HTML5, and technical writers and others familiar with DITA can author in XML. Documents authored in the various authoring formats can be aggregated and published as a single document collection. They also can easily integrate into DITA 1.3 collections. 
These three authoring formats do not represent a final version of LwDITA. In the future, based on community interest and development resources, LwDITA might add mappings, for example, between DITA and JSON, AsciiDoc, or MS Word. The XDITA and HDITA content models are designed to be functionally equivalent to each other, while MDITA is a compatible subset. XDITA and HDITA conform with the OASIS DITA and W3C HTML5 standards, respectively. In its core profile, MDITA aligns with the GitHub Flavored Markdown specification. In its extended profile, MDITA can incorporate YAML front matter headers and HDITA element types and attributes to overcome Markdown limitations as a language for authoring structured and reusable content. The DITA Technical Committee hopes that LwDITA will make it easier for companies to develop inexpensive tools for authoring, aggregating, and publishing LwDITA content. DITA 1.3, with its many elements and advanced features, makes it difficult for companies to implement new authoring and publishing systems. In contrast, the simplified and predictable structure of LwDITA ought to remove many of the barriers that stand in the way of the development of new tools, both commercial and open-source. LwDITA is designed to have a smaller element set, a stricter content model, and fewer reuse mechanisms than DITA 1.3. However, LwDITA also includes new components that provide increased multimedia support. LwDITA uses a subset of the topic element types that are available in DITA 1.3. The subset was carefully chosen to include only the most basic constructions that are needed to structure information effectively. The Lightweight DITA subcommittee considered the needs of diverse industries and sectors (including education, engineering, healthcare, and marketing) when selecting topic elements for LwDITA. The selected subset contains the following document components: For a complete list of the DITA 1.3 element types that are included in LwDITA and their availability in the authoring formats, see DITA 1.3 element types in LwDITA. LwDITA uses a subset of the map element types that are available in DITA 1.3. The selected subset contains the following map components: For a complete list of the DITA 1.3 element types that are included in LwDITA and their availability in the authoring formats, see DITA 1.3 element types in LwDITA. LwDITA has a much stricter content model than DITA 1.3. This ensures a predictable markup structure in topics that simplifies reuse, transformations, style sheet logic, and tools development. This strict content model minimizes authoring decisions by presenting limited choices for elements and attributes. This model, however, depends on a few strict rules. For example, in XDITA and HDITA, with a few exceptions, all text must be within paragraph elements. Exceptions are the description, short description, and title elements. Within paragraphs, the following can appear: In DITA 1.3, the following markup is valid: <section>Compatible light bulbs include the following: <ul> <li>Compact Fluorescent</li> <li>Light Emitting Diode</li> </ul> </section> In contrast, in XDITA the following markup must be used: <section> <p>Compatible light bulbs include the following:</p> <ul> <li> <p>Compact Fluorescent</p> </li> <li> <p>Light Emitting Diode</p> </li> </ul> </section> Note that all text is wrapped in <p> elements. 
This restriction of mixed content in block elements simplifies tool development for processing LwDITA content, and it also enables easier content reuse, as authors can conref paragraphs from most of the block contexts that are available in LwDITA. LwDITA offers a subset of the reuse mechanisms that are available in DITA 1.3. @props(in XDITA) or @data-props(in HDITA and MDITA extended profile). The @conref (in XDITA) or @data-conref (in HDITA and MDITA extended profile) attribute is available on the following document components: The content reference mechanism is not available in the MDITA core profile. @keyref(in XDITA) or @data-keyref(in HDITA) or [keyref](in MDITA extended profile) attribute can be used on phrase (XDITA) or span (HDITA). It is also available on links, alternative text, and data. @keyrefon phrase (XDITA) or span (HDITA). This design simplifies the DITA authoring experience, as there are no choices to be made. To reuse block-level content, authors will use @conref. For phrase-level content, authors will use @keyref. For a complete list of the DITA 1.3 attributes that are included in LwDITA, see DITA 1.3 attributes in LwDITA. LwDITA adds new element types for multimedia content. These element types are compatible with HTML5; they are part of a forthcoming domain intended for use with DITA 1.3. For years, authors have used different approaches to embed multimedia content in DITA-based deliverables for the Web. The DITA 1.3 specification recommends the <object> element type to include multimedia content in a topic, pointing out that it corresponds to the <object> element type in HTML. However, HTML5 introduced direct element types for audio and video. LwDITA updates the XML-to-HTML element type correspondence and introduces the following multimedia components, which are specialized from the DITA 1.3 <object> and <param> element types: These multimedia components are not available in the MDITA core profile; they must be expressed in raw HDITA syntax as part of the MDITA extended profile. The DITA Technical Committee is working on a multimedia domain add-on for DITA 1.3 that would include some of these element types to maintain compatibility between DITA and LwDITA. LwDITA follows the same specialization architecture as DITA 1.3, although there are some limitations and special rules. Because LwDITA is a proposed standard that spans multiple authoring formats, coordination of the same specialization rules across markup languages poses some unique challenges. Not all LwDITA formats will support specialization to the same degree. <training-video>element type that is specialized from the DITA 1.3 element type <object>. They must specialize it from the XDITA element type <video>. A general recommendation for LwDITA specializations is to keep in mind the lightweight nature of the proposed standard and avoid complicated content structures. Authors who need robust specialization for complex scenarios should use DITA 1.3. Although many of the LwDITA elements and workflows proposed in this document are still experimental, tools already exist to support organizations who want to explore using LwDITA. The DITA Technical Committee expects that the release of Lightweight DITA as an OASIS standard will lead to a rapid increase in the number of commercial and open-source tools that provide support for LwDITA. The Lightweight DITA subcommittee maintains a wiki page with a list of LwDITA tools and resources. 
The page can be accessed at Tool developers interested in having resources listed on the wiki page should email the Lightweight DITA subcommittee at [email protected] This section lists the element types and attributes that are available in LwDITA. This topic lists the DITA 1.3 element types that are available in LwDITA. It also lists how to represent them in XDITA, HDITA, and MDITA. This topic lists the new XML element types that are part of LwDITA and how to represent them in XDITA and HDITA. These new element types are not available in the MDITA core profile and, if needed, can be represented with their HDITA equivalents as part of the MDITA extended profile. This topic lists the DITA 1.3 attributes that are available in LwDITA and how to represent them in XDITA and HDITA. With the exception of key reference, attributes are not available in the MDITA core profile. In the MDITA extended profile, you can express attributes using their HDITA representation. In an MDITA core-profile topic, a key reference is represented using the GitHub Flavored Markdown syntax for shortcut reference links: [key-value]. There is no equivalent for content reference in the MDITA core profile. The following individuals participated in the creation of this document and are gratefully acknowledged. In addition, the OASIS DITA Technical Committee also would like to recognize the following people for their insights and support: The following table contains information about revisions to this document. <dlentry>cannot be mapped directly to HTML5, an author can preserve the structure and attributes of a definition list in HDITA and MDITA with custom data attributes <img>is always treated as an inline element; an <img>inside a <fig>is treated as a block element
http://docs.oasis-open.org/dita/LwDITA/v1.0/cn01/LwDITA-v1.0-cn01.html
2021-05-06T07:01:24
CC-MAIN-2021-21
1620243988741.20
[]
docs.oasis-open.org
Amazon EKS add-on container image addresses When you deploy Amazon EKS add-ons such as the AWS Load Balancer Controller, the VPC CNI plug-in, kube-proxy, CoreDNS, or storage drivers, you pull an image from an Amazon ECR repository. The image name and tag are listed in the topics for each add-on. The following table contains a list of Regions and the addresses you can use to pull images from.
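For example, to see which registry and image a cluster is currently pulling for an add-on, you can inspect the corresponding workload with kubectl (shown here for the VPC CNI plug-in, whose DaemonSet is named aws-node, and for CoreDNS); the account and Region in the address you see will depend on your cluster:

kubectl describe daemonset aws-node --namespace kube-system | grep Image
kubectl describe deployment coredns --namespace kube-system | grep Image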
https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html
2021-05-06T07:48:21
CC-MAIN-2021-21
1620243988741.20
[]
docs.aws.amazon.com
If you double click on a lander or offer node in your visual diagram, you will see an option called "This page needs all the accumulated URL Params": By default, it's off. If you enable it, then, when a visitor reaches that page, FunnelFlux will append to that page URL all the parameters that you had in your funnel link. For example: If your funnel link is: yourdomain.com/?flux_fts=Xhjdsh22&flux_cpe=1&yourparam1=abc&yourparam2=def And if you enable this option on a lander that you have set up at yourdomain.com/lander1.php Then when a visitor of that funnel reaches this specific lander node, FunnelFlux will load it at this URL: yourdomain.com/lander1.php?yourparam1=abc&yourparam2=def In addition to the custom tokens that are passed from your funnel link as in the example above, you can also accumulate URL params inside your campaigns and funnels by using the "Accumulate These URL Params" advanced settings: These will add extra parameters only for traffic within this funnel. This can be useful if you plan to use specific information within this funnel, but do not want to have these parameters hard-coded into the URL of the lander in lander config. Note that the extra parameter data is all stored in the Accumulated URL Params buffer. You will see in other documents how you can access and edit this buffer to change data passing in your funnel in interesting ways.
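If your lander then needs to read those accumulated values, it can do so with standard query-string parsing. For example, a small client-side sketch (the headline element ID is hypothetical):

<script>
  // Read the parameters FunnelFlux appended to the lander URL
  var params = new URLSearchParams(window.location.search);
  var param1 = params.get('yourparam1'); // "abc" in the example above
  var param2 = params.get('yourparam2'); // "def"
  if (param1) {
    document.getElementById('headline').textContent = 'Offers for ' + param1;
  }
</script>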
https://docs.funnelflux.com/en/articles/1230712-what-are-accumulated-url-parameters
2021-05-06T07:29:34
CC-MAIN-2021-21
1620243988741.20
[]
docs.funnelflux.com
Tracking without cookies can be useful for a number of scenarios: Traffic coming from Facebook mobile browser/iOS - there can be achieved with FunnelFlux's no-redirect javascript tracking and ties in closely with the session ID tracking discussed here. How to implement it Drop your usual FunnelFlux universal tracking JS on your page. Near the end of the code, alter the noCookies option to true: fflux.track({ timeOnPage: false, timeOnPageResolution: 3000, noCookies: true, tokenInjection: { intoUrl: false, intoForms: { selector: null }, intoLinks: { selector: null }, tokens: {} } }); Doing this will do two things: The JS will no longer set a cookie when processing (hence no need to ask for consent under GDPR) The JS will automatically append flux_sess=THE_IDto the URL of the current page, i.e. it turns on intoUrl for just that parameter You can then use your CTA action links on your page as usual. When the action links load, FunnelFlux will do various checks to identify the user: Cookies Flux_sess appended to the link/URL being loaded Flux_sess in the referrer In this case it will find flux_sess=something in the referrer, will identify the user and know where to send them next - all without cookies! NOTE: Because it is using the refererr, be sure not to set attributes like rel=noreferrer on links as you will block the refererr passing and break this. Using direct passing instead of referrers If you don't want flux_sess=xxx appearing in the browser URL you can instead append the session ID to your action links (and any others) directly, by using the intoLinks function: fflux.track({ timeOnPage: false, timeOnPageResolution: 3000, noCookies: true, tokenInjection: { intoUrl: false, intoForms: { selector: null }, intoLinks: { selector: 'a.something' }, tokens: { flux_sess: '{session-id}' } } }); If you set intoLinks to anything non-null, we assume this is because you don't want the session ID in the URL and so this gets turned off - i.e. noCookies = true will turn on intoUrl unless you have already set session ID to be appended by intoLinks. You could also turn on intoUrl if you wanted, but its unlikely that the referrer is going to provide any assistance if the URLs being loaded have the session ID passed to them directly.
https://docs.funnelflux.com/en/articles/1963633-using-cookieless-tracking
2021-05-06T07:11:52
CC-MAIN-2021-21
1620243988741.20
[]
docs.funnelflux.com
scipy.stats.iqr¶ scipy.stats.iqr(x, axis=None, rng=(25, 75), scale=1.0, nan_policy='propagate', interpolation='linear', keepdims=False) Compute the interquartile range of the data along the specified axis. - Parameters - x : array_like Input array or object that can be converted to an array. - axis : int or sequence of int, optional Axis along which the range is computed. The default is to compute the IQR for the entire array. - rng : Two-element sequence containing floats in range of [0,100], optional Percentiles over which to compute the range. Each must be between 0 and 100, inclusive. The default is the true IQR: (25, 75). The order of the elements is not important. - scale : scalar or str, optional The numerical value of scale will be divided out of the final result. The following string values are recognized: ‘raw’ : No scaling, just return the raw IQR. Deprecated! Use scale=1 instead. ‘normal’ : Scale by \(2 \sqrt{2} erf^{-1}(\frac{1}{2}) \approx 1.349\). The default is 1.0. The use of scale=’raw’ is deprecated. Array-like scale is also allowed, as long as it broadcasts correctly to the output such that out / scale is a valid operation. The output dimensions depend on the input array, x, the axis argument, and the keepdims flag. - nan_policy : {‘propagate’, ‘raise’, ‘omit’}, optional Defines how to handle when input contains nan. The default is ‘propagate’. - interpolation : {‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}, optional Specifies the interpolation method to use when the percentile boundaries lie between two data points i and j. The following options are available (default is ‘linear’): ‘linear’: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j. ‘lower’: i. ‘higher’: j. ‘nearest’: i or j whichever is nearest. ‘midpoint’: (i + j) / 2. - keepdims : bool, optional If this is set to True, the reduced axes are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array x. - Returns - iqr : scalar or ndarray If axis=None, a scalar is returned.
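A short usage sketch, consistent with the default rng=(25, 75) and linear interpolation described above:

>>> import numpy as np
>>> from scipy.stats import iqr
>>> x = np.array([[10, 7, 4], [3, 2, 1]])
>>> iqr(x)
4.0
>>> iqr(x, axis=0)
array([3.5, 2.5, 1.5])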
https://docs.scipy.org/doc/scipy-1.5.3/reference/generated/scipy.stats.iqr.html
2021-05-06T07:33:56
CC-MAIN-2021-21
1620243988741.20
[]
docs.scipy.org
Entity Abstract¶ This page will describe the Entity Abstract class. This class is the root of all Entity classes. Entity classes are used as containers for return values from various API endpoints. For example, the Article API will return an Article Entity, the Discussion API will return a Discussion Entity, and so on. It is important to note that an API class will never return an Entity class directly. Rather, it will return a Swader\Diffbot\Entity\EntityIterator, an iterable container with all the Entities inside. The container, however, is configured in such a way that executing get methods on it directly will forward those calls to the first Entity in its dataset. See Swader\Diffbot\Entity\EntityIterator. __construct¶ - Swader\Diffbot\Abstracts\Entity:: __construct(array $data)¶ This class takes a single argument during construction, an array of data. This data is then turned into gettable information by means of getters, both direct and magic. Some getters do additional processing of the data in order to make it more useful to the user. getData¶ __call¶ - Swader\Diffbot\Abstracts\Entity:: __call()¶ Magic method for resolving undefined getters (and only getters). If the method being called starts with get, the remainder of its name will be turned into a key to search inside the $data property (see getData). Once the call is identified as a getter call, __get is invoked (see below).
https://diffbot-php-client-docs.readthedocs.io/en/latest/abstract-entity.html
2021-05-06T07:05:20
CC-MAIN-2021-21
1620243988741.20
[]
diffbot-php-client-docs.readthedocs.io
Adaptive transport Adaptive transport is a data transport mechanism for Citrix Virtual Apps and Desktops. It is faster, more scalable, improves application interactivity, and is more interactive on challenging long-haul WAN and internet connections. For more information about adaptive transport, see Adaptive transport. Enable adaptive transportEnable adaptive transport In Citrix Studio, verify that the HDX Adaptive Transport policy is set to Preferred or Diagnostic mode. Preferred is selected by default. - Preferred: Adaptive transport over Enlightened Data Transport (EDT) is used when possible, with fallback to TCP. - Diagnostic mode: EDT is forced on and fallback to TCP is disabled. Disable adaptive transportDisable adaptive transport To disable adaptive transport, set the HDX Adaptive Transport policy to Off in Citrix Studio. Check whether adaptive transport is enabledCheck whether adaptive transport is enabled To check whether UDP listeners are running, run the following command. netstat -an | grep "1494\|2598" In normal circumstances, the output is similar to the following. udp 0 0 0.0.0.0:2598 0.0.0.0:* udp 0 0 :::1494 :::* EDT MTU discoveryEDT MTU discovery EDT automatically determines the Maximum Transmission Unit (MTU) when establishing a session. Doing so prevents EDT packet fragmentation that might result in performance degradation or failure to establish a session. Minimum requirements: - Linux VDA 2012 - Citrix Workspace app 1911 for Windows - Citrix ADC: - 13.0.52.24 - 12.1.56.22 - Session reliability must be enabled If using client platforms or versions that do not support this feature, see Knowledge Center article CTX231821 for details about how to configure a custom EDT MTU that is appropriate for your environment.. Enable or disable EDT MTU discovery on the VDA EDT MTU discovery is disabled by default. To enable EDT MTU discovery, set the MtuDiscoveryregistry key by using the following command, restart the VDA, and wait for the VDA to register: /opt/Citrix/VDA/bin/ctxreg create -k "HKLM\System\CurrentControlSet\Control\Terminal Server\Wds\icawd" -t "REG_DWORD" -v "MtuDiscovery" -d "0x00000001" --force To disable EDT MTU discovery, delete the MtuDiscoveryregistry value. This setting is machine-wide and affects all sessions connecting from a supported client. Control EDT MTU discovery on the client You can control EDT MTU discovery selectively on the client by adding the MtuDiscovery parameter in the ICA file. To disable the feature, set the following under the Application section: MtuDiscovery=Off To re-enable the feature, remove the MtuDiscovery parameter from the ICA file. IMPORTANT: For this ICA file parameter to work, enable EDT MTU discovery on the VDA. If EDT MTU discovery is not enabled on the VDA, the ICA file parameter has no effect.
https://docs.citrix.com/en-us/linux-virtual-delivery-agent/current-release/configuration/adaptive-transport.html
2021-05-06T07:11:31
CC-MAIN-2021-21
1620243988741.20
[array(['/en-us/linux-virtual-delivery-agent/current-release/media/diagnostic-mode.jpg', 'Image of diagnostic mode'], dtype=object) ]
docs.citrix.com
What is a custom user ID The custom User ID is an identifier that should be set for each of your users. It will allow you to use our Segment API. Anything that can identify a user uniquely will work. It can be: - A unique ID you are using in your login system. - An email address. - A username. - Otherwise, you will need to create a stable identifier for your app. The most common way to do that is to generate an RFC 4122 UUID. We suggest you hash this information before setting it. Why use a custom user ID We strongly recommend providing this custom user identifier to be able to use our Herow Segment API to target specific user profiles from your database (e.g. users who have a premium plan) and engage them with specific content while creating a campaign on Herow. Without this custom user ID, it might be difficult to match user identifiers between your database and the Herow database -to know more about which user IDs we manage, read our API Segment documentation-. Setting custom user ID To set a custom user ID, you should make the following call as soon as the user is logged in: HerowInitializer.getInstance().setCustomId("hashUserEmail"); If the user logs out, you can use the removeCustomId method. HerowInitializer.getInstance().removeCustomId();
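For example, a minimal sketch of hashing an email with SHA-256 before passing it to the SDK (the hashing scheme is up to you; only setCustomId comes from the Herow SDK):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

try {
    String email = "[email protected]"; // illustrative value
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    byte[] hash = digest.digest(email.toLowerCase().getBytes(StandardCharsets.UTF_8));
    StringBuilder hex = new StringBuilder();
    for (byte b : hash) {
        hex.append(String.format("%02x", b));
    }
    // Hand the stable, non-reversible identifier to Herow
    HerowInitializer.getInstance().setCustomId(hex.toString());
} catch (NoSuchAlgorithmException e) {
    // SHA-256 is available on all Android API levels, so this should not happen
}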
https://docs.herow.io/sdk/6.3/android/setting-custom-id.html
2021-05-06T07:46:31
CC-MAIN-2021-21
1620243988741.20
[]
docs.herow.io
Log patterns are the fastest way to discover value in log data without searching. Log data is high volume telemetry with a low value per individual record. Searching can quickly lead to logs that provide a root cause explanation, but most data is repetitive and hard to contextualize when browsing. Patterns can make log data discoverable without spending a lot of time reading through low value data. one.newrelic.com > Logs > Patterns: Use patterns as the basis for alerts when the frequency of important data changes, or for configuring drop rules to get rid of unnecessary repetitive data. Technical overview Log patterns functionality applies machine learning to normalize and group log messages that are consistent in format but variable in content. These grouped messages can be sorted, making it easy to find the most frequent or rarest sets of logs in your environment. Use patterns as the basis for alerts when the frequency of important data changes, or to configure drop rules to get rid of unnecessary repetitive data. Log patterns use advanced clustering algorithms to group together similar log messages automatically. With patterns, you can: - Orient more quickly through millions of logs. - Reduce the time it takes to identify unusual behavior in your log estate. - Monitor the frequency of known patterns over time to focus your energy on what matters, and exclude what's irrelevant. Availability The ability to configure this feature is dependent on role-based permissions. If you see Patterns are turned off in your Log management Patterns UI, click the Configure Patterns button and enable it. If you don't see patterns within 30 minutes of enabling the feature, there may be a lack of data with a message attribute for the system to create a pattern from. To start examining patterns: - Go to one.newrelic.com > Log management, and use the account picker dropdown to select the target account where you want to explore patterns. - In the left navigation of the Log management UI, click Patterns. The main log UI changes to show patterns that match the query in the query bar. one.newrelic.com > Log management > Log patterns: The line chart shows the top 5 patterns over time. Use the time picker and query bar to adjust the results. Explore log patterns By default the log patterns UI first shows the most frequent occurrence of patterns. To sort to show the rarest patterns first, click the Count column. You can also use the query bar or attributes bar to filter your log patterns. Clicking a specific log message will open the log message details panel you're familiar with from the Logs management page. Explore logs with no pattern The Logs with no pattern tab groups all recent log messages in your account that were not clustered into a known pattern yet. These log messages don't represent any problem or flaw in the system; they have no pattern because they are too new to have been processed by the machine learning system. This makes them valuable to explore when you want to understand what has recently changed in your environment. one.newrelic.com > Log management > Log patterns: New Relic's log patterns feature automatically groups logs without a matching pattern. For example: - Are any of these logs tied to a recent problem? This is a quick way to discover unique log data that is appearing for the first time in your environment. - Does your log data have a new format? 
Sometimes the logs don't represent a problem, but a new format of log data that deviates from the data model you expect your applications to follow. Catching these logs early gives you the opportunity to ask developers to correct any deviations in their log output. The more consistent people are in the way log data is generated, the easier it becomes to use logs across a diverse set of teams. Masked attributes and wildcards Parts of the log messages in patterns are classified as variables and are substituted by masked attributes. The masking process supports and improves the clustering phase by allowing the algorithm to ignore changing details and focus on the repetitive structure. Masked attributes include: date_time ip url uuid Masked attributes are highlighted and are easy to identify, as shown in the following example. one.newrelic.com > Log management > Log patterns: Here is an example of a pattern that has masked attributes. Log patterns extract other less trivial variables that don't belong to any masked attribute. These variables are indicated as wildcards *. one.newrelic.com > Log management > Log patterns: Here is an example of how wildcards * group variables. Troubleshooting Here are a few reasons why you might have patterns enabled but not see any pattern data. If you're sure none of the items below are true, get help from support.newrelic.com. - No data has arrived in the timeframe you're observing. Try expanding the time range you're viewing with the time picker. - It's been less than 24 hours since patterns were enabled in the account. This means the ML model may not be generated for the account yet. - None of the data coming in has a messagefield. Patterns will only be generated for values in the messagefield of a log record. If your logs don't contain message, there will be no data. Put the platform to work with patterns Patterns are a value that is enriched onto the existing log message as a new attribute named newrelic.logPattern. Anything you can do with logs generally can be done with log patterns, such as: - Build your own dashboards with patterns, to monitor a specific pattern or group of patterns you care about. - Create alerts for patterns by adding NRQL alerts. - Use baseline alert conditions to detect anomalies in known log patterns.。
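As a sketch, a NRQL query along the following lines could back such a chart or serve as the starting point for a NRQL alert condition; the pattern string is a placeholder you would copy from the Patterns UI:

SELECT count(*) FROM Log WHERE `newrelic.logPattern` = '<pattern copied from the UI>' TIMESERIES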
https://docs.newrelic.com/jp/docs/logs/log-management/ui-data/find-unusual-logs-log-patterns/
2021-05-06T07:10:09
CC-MAIN-2021-21
1620243988741.20
[]
docs.newrelic.com
Mercury¶ This section contains a series of tutorials on Mercury. These tutorials are aimed at people who want to use Mercury without using Margo or Thallium (e.g., when using another threading library). Note that it is not necessary for Margo and Thallium users to follow these tutorials in order to use Margo and Thallium.
https://mochi.readthedocs.io/en/latest/mercury.html
2021-05-06T07:14:15
CC-MAIN-2021-21
1620243988741.20
[]
mochi.readthedocs.io
Sculpting Tools¶ For Grease Pencil sculpt modes, each brush type is exposed as a tool; the brush can be changed in the Tool Settings. See Brush for more information. - Smooth - Annotate Draw free-hand annotation. - Annotate Line Draw straight line annotation. - Annotate Polygon Draw a polygon annotation. - Annotate Eraser Erase previously drawn annotations.
https://docs.blender.org/manual/en/latest/grease_pencil/modes/sculpting/tools.html
2021-05-06T05:43:39
CC-MAIN-2021-21
1620243988741.20
[array(['../../../_images/grease-pencil_modes_sculpting_tools_brushes.png', '../../../_images/grease-pencil_modes_sculpting_tools_brushes.png'], dtype=object) ]
docs.blender.org
Creating a JBoss 7.1 service offering This topic describes the tasks that you must perform in BMC Cloud Lifecycle Management to create the service offering that the end user can then use to provision the application infrastructure (for example, an OS and an application package). Note Although services and service offerings are bundled with the zipkits, you might want to use the procedures to create a service and service offering to create or edit these artifacts. It includes the following topics: To create the service and the service offering In the BMC Cloud Lifecycle Management Administrator console, you must add a service and a service offering. For JBoss service blueprint, create a service and service offering based on the available deployment model. The JBoss service blueprint supports the following service deployment models: To create a service - From the BMC Cloud Lifecycle Management Administration Console, click the vertical Workspaces menu on the left side of the window, and click Service Catalog. - In the Service Catalog, click Create a New Service. - Enter the service name. - For Type, select a service type. - Business service — Services that customers use and that show the customer view of services, such as email or an online store. Technical service — Supporting IT and infrastructure resources required to support business services that are not visible to customers, such as servers, applications, and network CIs. Note After you select the type and save the service, you cannot change the type. - Enter a description of the service. - Do one of the following actions: - To create the service offering, click Apply. - To create the service offering later, click Save to save your selections and close the window.. - Click Create a New Service Offering. In the General Information tab, define the options described in the following table. - Add a Base Customer Price to define the amount charged to the customer for the service offering. - Add a Base Deployment Cost to define the amount that it costs to provide the service offering. - Click Apply. This action activates the Options tab. You now also can create a requestable offering (for example, a request definition or a post-deploy action). For additional information, see Creating a requestable offering in the BMC Cloud Lifecycle Management online technical documentation. To make the provisioning request - Access Workspaces > Service Instances to display the Service Instances workspace, and click New Service Request. - In the New Service Request dialog box, click the server provisioning service you want to display in the Submit Request dialog box. - Enter the data in the required fields to complete the request for an instance of the service request. You can click Next to review the details. - Click Submit. The request is added to the Pending Activity list in the Service Instances window. The request status is displayed in the Pending Activity list of the Service Instances window. You can double-click on the service request to see its detailed information. For more detailed procedures, see Requesting cloud services and Requesting cloud services in the legacy console. To validate the provisioned components After provisioning the blueprint, you can validate the JBoss along with MySQL components setup in your environment. Where to go next Once you have created the service offering, the cloud end user can request a service offering from the My Cloud Services Console. 
To view a list of tasks the cloud end user can perform to manage your cloud services, see Managing cloud resources in the BMC Cloud Lifecycle Management online technical documentation.
https://docs.bmc.com/docs/cloudlifecyclemanagement/45/configuring-infrastructure-application-zipkits/jboss-7-1-zipkit/creating-a-jboss-7-1-service-offering
2021-05-06T05:46:34
CC-MAIN-2021-21
1620243988741.20
[]
docs.bmc.com
Ever noticed this super cool feature FunnelFlux has called the shortcut palette? It's easy to miss - you can see the shortcut in the footer of your user interface... So what on earth does it do? Well, if you have ever used something like SublimeText it will be quite familiar. The shortcut palette is a way for you to quickly access functions without having to navigate to different pages and wait for things to load. Let's say you're on the offers page and you realise you want to make an offer source. Hit Ctrl+Alt+P and that will bring up the shortcut palette: Now, start writing offer and it will filter the results: Now you can click "Create Offer Sources" to bring up the dialog for this. Note that each item has its own dedicated shortcut as well. Once you fill in the offer source details, you can click save then close the dialog. Now you can hit Ctrl+Alt+P again, write offer, click create offer and create an offer under that newly created offer source -- all without leaving the page you are on, e.g. the funnel editor. For those of you who are keyboard ninjas, this will help you save time with basic add/edit functions since you don't need to wait on navigating around the UI. Pretty cool eh 😉
https://docs.funnelflux.com/en/articles/1772197-using-the-shortcut-palette
2021-05-06T07:27:01
CC-MAIN-2021-21
1620243988741.20
[]
docs.funnelflux.com
The FunnelFlux javascript tracking provides some information about the visiting user that can be grabbed and used for whatever purpose. We will be expanding the JS over time to provide more information - such as user country, ISP, etc., so that you can use this information in your landers without needing to pass it in the URL or rely on external providers. The javascript's useful output When the FunnelFlux JS loads it generates an iFrame on the page with the id _ffq_track . This iFrame contains a message with data about the visitor - a typical message may look like this (though it would not have nice formatting like this): window.parent.postMessage({ "idHit": "325721182056347732", "hitTimestamp": 1529576310, "entranceTimestamp": 1529576310, "hitCost": 0, "idVisitor": "325721182056347732", "visitorTagIds": null, "idIncomingTraffic": "325721182056347732", "idTrafficSource": "17", "idFunnel": "203212058743938454", "idNode": 203744182158054903, "nodeType": 2, "timeOnPage": null, "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36", "mainLanguage": "en-US", "otherLanguages": null, "referrer": "", "bIsNewVisitor": true, "bFilteredTraffic": null, "bFunnelEntrance": true, "aTrackingFields": [], "frameId": "_ffq_track_", "ok": 1 }, '*'); So, right now it contains the visitor hit ID, their visitor ID, the funnel and node ID, a list of any tags, and so on. Accessing and using this data The easiest way to access and use this data is by adding some javascript to "listen" for this message then do something. Importantly, code to do this should come before the FunnelFlux JS -- no point trying to listen for something after it has happened right? Here is example code to retrieve visitor ID: <script type="text/javascript"> window.addEventListener("message", function(e) { if( e.data.ok && e.data.frameId == '_ffq_track_' ) { var ffVisitorID = e.data.idVisitor; } }, false); </script> The above code creates a new variable ffVisitorID and sets this to whatever is in the tracking JS message that spawns when the page loads. If you wanted to directly set some HTML properties you could do so in the JS as well: <script type="text/javascript"> window.addEventListener("message", function(e) { if( e.data.ok && e.data.frameId == '_ffq_track_' ) { document.getElementById('visitorid').value = e.data.idVisitor; } }, false); </script> The above could be used to set some hidden form field with ID visitorid to have the FunnelFlux visitor ID in it -- useful if you want to pass visitor ID's to email systems through their opt-in forms. Getting data provided by tokens Ever wanted to get tracker data like country name, device model, ISP etc. without having to use redirects and pass these in the URL? Yep, can do that. You can use any token in the JS to add the corresponding output to the iFrame so you can use it in your page code. Simply update your JS to include relevant tokens: fflux.track({ timeOnPage: false, timeOnPageResolution: 3000, noCookies: false, tokenInjection: { intoUrl: false, intoForms: { selector: null }, intoLinks: { selector: null }, tokens: { 'country': '{location-countryname}', 'isp': '{connection-isp}' } } }); Now the postMessage output will include a flux_inject section that contains these tokens. 
Example: "flux_inject": { "intoUrl": false, "intoForms": { "selector": null }, "intoLinks": { "selector": null }, "tokens": { "country": "United States", "isp": "Comcast LLC" } } You can then access all these tokens using the same approach as earlier: <script type="text/javascript"> window.addEventListener("message", function(e) { if( e.data.ok && e.data.frameId == '_ffq_track_' ) { var userCountry = e.data.flux_inject.tokens.country; var userIsp = e.data.flux_inject.tokens.isp; } }, false); </script> If you want to use this data dynamically in page text, we suggest wrapping the code that makes those updates inside the above code (i.e. after the var definition parts) so that they all execute in order once the postMessage is available, to make the process as fast and efficient as possible. What we plan to add There is a general trend of moving away from redirection-based tracking, which will make it increasingly difficult to rely on passing data in URLs with tokens. But, dynamic pages that use a passed model, ISP, and tracking field parameters are quite common in the marketing industry - we will try to make the JS as competent at providing these as a redirect. Our current list of data to add includes: User location (continent/country/city) (DONE) User ISP / mobile carrier (DONE) Device type, model, manufacturer (DONE) Is located in EU country? (yes/no) Is known datacenter? (yes/no) Previous node ID Previous lander node visited (if applicable) Session ID We removed IP from the response as part of making the tracking system better align with GDPR - we may add it back in the future with it automatically anonymised for EU countries though at this stage have no plans to add it back.
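As a rough illustration of the "use this data dynamically in page text" suggestion above, here is a minimal sketch that writes the injected country token into a page element. The element id geo-text and the fallback string are assumptions made for this example, not part of FunnelFlux itself; only the message fields shown earlier (ok, frameId, flux_inject.tokens) come from the documentation.

<script type="text/javascript">
window.addEventListener("message", function(e) {
  if (e.data.ok && e.data.frameId == '_ffq_track_') {
    // Pull the injected token, falling back to a generic label
    var userCountry = (e.data.flux_inject && e.data.flux_inject.tokens.country) || 'your country';
    var el = document.getElementById('geo-text'); // hypothetical element on the lander
    if (el) {
      el.textContent = 'Special offer for visitors from ' + userCountry + '!';
    }
  }
}, false);
</script>

As with the earlier snippets, this listener should be placed before the FunnelFlux tracking JS so it is registered before the postMessage fires.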
https://docs.funnelflux.com/en/articles/2041534-getting-data-from-the-funnelflux-javascript
2021-05-06T06:57:39
CC-MAIN-2021-21
1620243988741.20
[]
docs.funnelflux.com
Senate Record of Committee Proceedings Committee on Labor and Government Reform Senate Bill 424 Relating to: leave from employment for the purpose of serving as an organ donor. By Senators Lassa, Lasee, Bewley, L. Taylor, C. Larson, Ringhand, Harris Dodd and Hansen; cosponsored by Representatives Mason, Jacque, Ohnstad, Johnson, Milroy, Ballweg, Horlacher, Subeck, C. Taylor, Sinicki, Spreitzer, Kolste, Considine and Riemer. December 03, 2015 Referred to Committee on Labor and Government Reform April 07, 2016 Failed to pass pursuant to Senate Joint Resolution 1 ______________________________ Mike Mikalsen Committee Clerk
https://docs.legis.wisconsin.gov/2015/related/records/senate/labor_and_government_reform/1239365
2021-05-06T07:57:41
CC-MAIN-2021-21
1620243988741.20
[]
docs.legis.wisconsin.gov
Teams interoperability Important Functionality described in this document is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews. Important To enable/disable Teams tenant interoperability, complete this form. Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing. Teams interoperability allows you to create custom applications that connect users to Teams meetings. Users of your custom applications don't need to have Azure Active Directory identities or Teams licenses to experience this capability. This is ideal for bringing employees (who may be familiar with Teams) and external users (using a custom application experience) together into a seamless meeting experience. For example: - Employees use Teams to schedule a meeting - Meeting details are shared with external users through your custom application. - Using Graph API Your custom Communication Services application uses the Microsoft Graph APIs to access meeting details to be shared. - Using other options For example, your meeting link can be copied from your calendar in Microsoft Teams. - External users use your custom application to join the Teams meeting (via the Communication Services Calling and Chat SDKs) The high-level architecture for this use-case looks like this: While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, chat, and screen sharing capabilities. Meeting chat will be accessible to your custom application user while they're in the call. They won't be able to send or receive messages before joining or after leaving the call. When a Communication Services user joins the Teams meeting, the display name provided through the Calling SDK will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the Teams security guide to configure capabilities available to anonymous users. Communication Services Teams Interop is currently in private preview. When generally available, Communication Services users will be treated like "External access users". Learn more about external access in Call, chat, and collaborate with people outside your organization in Microsoft Teams. Communication Services users can join scheduled Teams meetings as long as anonymous joins are enabled in the meeting settings. If the meeting is scheduled for a channel, Communication Services users will not be able to join the chat or send and receive messages. Teams in Government Clouds (GCC) Azure Communication Services interoperability isn't compatible with Teams deployments using Microsoft 365 government clouds (GCC) at this time.
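To sketch the external-user flow described above, the snippet below shows how a custom JavaScript client might join a scheduled Teams meeting by its link using the Communication Services Calling SDK. The package names, the AzureCommunicationTokenCredential and CallClient classes, and the join({ meetingLink }) shape reflect the public JavaScript SDK at the time of writing and may differ in your SDK version; the access token and meeting link are placeholders you would obtain from your own identity service and from Teams (for example via the Microsoft Graph APIs mentioned above).

import { CallClient } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

async function joinTeamsMeeting(userAccessToken, teamsMeetingLink) {
  // Credential for a Communication Services user (no AAD identity or Teams license needed)
  const tokenCredential = new AzureCommunicationTokenCredential(userAccessToken);

  const callClient = new CallClient();
  const callAgent = await callClient.createCallAgent(tokenCredential, {
    displayName: "External guest" // shown to Teams participants in the meeting roster
  });

  // Join the Teams meeting by its join link (anonymous join must be allowed in meeting settings)
  const call = callAgent.join({ meetingLink: teamsMeetingLink });
  return call;
}

Treat this as an outline of the flow rather than a complete application: a production client would also wire up device management, call state events, and the Chat SDK for in-meeting chat.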
https://docs.microsoft.com/en-us/azure/communication-services/concepts/teams-interop
2021-05-06T08:13:11
CC-MAIN-2021-21
1620243988741.20
[array(['media/call-flows/teams-interop.png', 'Architecture for Teams interop'], dtype=object)]
docs.microsoft.com
A complete map of ConnectionOptions to values. The credentials that a session uses to authenticate itself. When running the Diffusion Client in a browser context, access to the Buffer api is made available through {@link diffusion.buffer}. Alias for the JSON interface to keep compatibility with old TypeScript definitions Alias for the JSONDataType interface to keep compatibility with old TypeScript definitions A Result represents a promise for the result of an async operation. It implements the full ES6 Promise specification and is in all respects equivalent to a Promise. Adapted from Alias for the Options interface to keep compatibility with old TypeScript definitions Permissions that are applied on a path Permissions that are applied on a path the error that occurred Access to the Buffer API that is packaged with diffusion. This can be used in browsers that don't have a native Buffer class. It allows the creation of buffers for use with binary datatypes. Allocates a new buffer containing the given {str}. String to store in buffer. encoding to use, optional. Default is 'utf8' Allocates a new buffer of {size} octets. count of octets to allocate. Allocates a new buffer containing the given {array} of octets. The octets to store. Produces a Buffer backed by the same allocated memory as the given {ArrayBuffer}. The ArrayBuffer with which to share memory. Allocates a new buffer containing the given {array} of octets. The octets to store. Copies the passed {buffer} data onto a new {Buffer} instance. The buffer to copy. This is the number of bytes used to determine the size of pre-allocated, internal Buffer instances used for pooling. This value may be modified. Allocates a new buffer of {size} octets. count of octets to allocate. if specified, buffer will be initialized by calling buf.fill(fill). If parameter is omitted, buffer will be filled with zeros. encoding used for call to buf.fill while initalizing Allocates a new buffer of {size} octets, leaving memory not initialized, so the contents of the newly created Buffer are unknown and may contain sensitive data. count of octets to allocate Allocates a new non-pooled buffer of {size} octets, leaving memory not initialized, so the contents of the newly created Buffer are unknown and may contain sensitive data. count of octets to allocate Gives the actual byte length of a string. encoding defaults to 'utf8'. This is not the same as String.prototype.length since that returns the number of characters in a string. string to test. (TypedArray is also allowed, but it is only available starting ES2017) encoding used to evaluate (defaults to 'utf8') The same as buf1.compare(buf2).. An array of Buffer objects to concatenate Total length of the buffers when concatenated. If totalLength is not provided, it is read from the buffers in the list. However, this adds an additional loop to the function, so it is faster to provide the length explicitly. When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray. The optional {byteOffset} and {length} arguments specify a memory range within the {arrayBuffer} that will be shared by the Buffer. The .buffer property of a TypedArray or a new ArrayBuffer() Creates a new Buffer using the passed {data} data to create a new Buffer Creates a new Buffer containing the given JavaScript string {str}. If provided, the {encoding} parameter identifies the character encoding. If not provided, {encoding} defaults to 'utf8'. 
Returns true if {obj} is a Buffer object to test. Returns true if {encoding} is a valid encoding argument. Valid string encodings in Node 0.12: 'ascii'|'utf8'|'utf16le'|'ucs2'(alias of 'utf16le')|'base64'|'binary'(deprecated)|'hex' string to test. The build version of this client library Access to PropertyKeys Access to the datatypes namespace The ErrorReason enum Valid TopicSpecification keys Access to the locks namespace Access to the selectors namespace Access to the topicUpdate namespace Access to the topics namespace The version of this client library in the form major.minor.patch Connect to a specified Diffusion server. This will return a Result that will complete successfully if a session can be connected, or fail if an error was encountered. If the result is successful, the fulfilled handler will be called with a Session instance. This session will be in a connected state and may be used for subsequent API calls. If the result fails, the rejected handler will be called with an error reason. If sessionName and workerJs is supplied, then the call will create a shared session inside a shared WebWorker. If a shared session with that name already exists, this function will return an instance of the existing session. Shared sessions can only be created when running in a browser environment that supports the SharedWorker. For more information regarding shared sessions, see connectShared. Example: diffusion.connect('example.server.com').then(function(session) { // Connected with a session console.log('Connected!', session); }, function(error) { // Connection failed console.log('Failed to connect', error); }); the options to construct the session with. If a string is supplied, it will be interpreted as the host option. the name of the shared session the location of the diffusion worker script a Result for this operation Connect to a specified Diffusion server using a shared WebWorker session. This will return a Result that will complete successfully if a session can be connected, or fail if an error was encountered. Shared sessions can only be created when running in a browser environment that supports the SharedWorker. Shared sessions are identified by a name. If a shared session with that name already exists, this function will return an instance of the existing session. Otherwise the call will fail. Sessions can only be shared across scripts originating from a single domain. Otherwise the browser will create independent shared workers resulting in one shared session for each domain. The shared session will stay alive as long as there is an open browser tab that initiated a connection through the shared session. When the last tab is closed the shared worker holding the shared session will be terminated. The shared session is also expected to be terminated when the only tab holding a reference to the session is reloaded or experiences a page navigation. The exact behavior may be browser dependent. The workerJs argument must be set to the URL of the diffusion-worker.js supplied with the JavaScript distribution. The same-origin policy of the shared worker requires the calling script and the diffusion-worker.js to reside on the same domain. If the result is successful, the fulfilled handler will be called with a Session instance. This session will be in a connected state and may be used for subsequent API calls. If the result fails, the rejected handler will be called with an error reason. 
Example: diffusion.connectShared('some-session', 'diffusion-worker.js') .then(function(session) { // Connected with a session console.log('Connected!', session); }, function(error) { // Connection failed console.log('Failed to connect', error); }); the name of the shared session the location of the diffusion worker script a Result for this operation Escapes special characters in a string that is to be used within a topic property or a session filter. This is a convenience method which inserts an escape character '' before any of the special characters ' " or . the string to be escaped the string value with escape characters inserted as appropriate Set the level of logging used by Diffusion. This will default to silent. Log levels are strings that represent different degrees of information to be logged. Available options are: the log level to use a set of roles a string representing the supplied roles, formatted as required by the $Roles session property the string with quoted roles separated by whitespace or commas set of roles Returns an update constraint factory. update constraint factory Value returned by Session.getPrincipal if no principal name is associated with the session. Dictionary containing standard session property keys Example: // Get the ALL_FIXED_PROPERTIES key var props = diffusion.clients.PropertyKeys.ALL_FIXED_PROPERTIES; Example: // Get all user and fixed properties var props = diffusion.clients.PropertyKeys.ALL_FIXED_PROPERTIES .concat(diffusion.clients.PropertyKeys.ALL_USER_PROPERTIES); Enum representing the reason that the session has been closed. Example: diffusion.connect(case diffusion.clients.CloseReason.CLOSED_BY_CLIENT: // Do something case diffusion.clients.CloseReason.ACCESS_DENIED: // Do something else ... } });).then(function(session) , function(err) { switch(err) { The connection attempt failed due to a security restraint or due to invalid credentials. The client requested close. Not recoverable. The session has been closed by the server, or another session using the ClientControl feature. The client could not establish a connection to the server. An error was thrown while the client was attempting to connect to the server. There was an error parsing the handshake response. The client received a handshake response from the server but the response was malformed and could not be deserialised. The connection handshake was rejected by the server. The server responded with an unknown error code when the client attempted to connect. The client detected that the connection was idle. The client has not received a ping message from the server for an extended period of time. The connection request was rejected because the license limit was reached. Loss of messages from the client has been detected. For example, whilst waiting for the arrival of missing messages in a sequence of messages a timeout has occurred. HTTP based transports use multiple TCP connections. This can cause the messages to be received out of order. To reorder the messages those sent to the server may contain a sequence number indicating the correct order. If a message is received out of order there is a short time for the earlier messages to be received. If the messages are not received in this time the client is closed. Missing, invalid or duplicate sequence numbers will also close the client for this reason. This cannot be recovered from as the client and the server are in inconsistent states. The handshake response contained an incompatible protocol version. 
The server does not support the client's protocol version. Whilst disconnected, the client explicitly aborted a reconnect attempt. There was an unexpected error with the network connection. The underlying transport (Websocket, XHR) received an error that could not be handled. Enum containing reason codes used to report error conditions. Some common ErrorReason values are defined as global constants. More specific reasons may be defined by individual features. Example: // Handle an error from the server session.addStream('foo', diffusion.datatypes.string()).on('error', function(e) { if (e == diffusion.errors.ACCESS_DENIED) { // Handle authorisation error } else { // Log the problem console.log(e); } }); The request was rejected because the caller has insufficient permissions. An application callback threw an exception. Check logs for more information. A cluster operation failed because partition ownership changed during processing. This is a transient error that occurs while the cluster is recovering from failure. The session can retry the operation. A cluster operation failed to be routed to a server within the cluster due to a communication failure, or the server that owns a partition is not currently known. This is a transient error that occurs while the cluster is recovering from failure. The session can retry the operation. Communication with the server failed. A conflicting registration exists on the server. An operation failed due to using an incompatible data type. A topic update could not be performed because the topic is managed by a component (for example,fan-out) which prohibits external updates. An operation failed because invalid data was received. An invalid path was supplied. An operation failed because an addressed session could not be found. A recipient session has rejected a received message request. Communication with the server failed because a service request timed out. Communication with the server failed because the session is closed. A conflicting registration exists on the same branch of the topic tree. A sent message was not handled by the specified recipient. The request was rejected because the requested operation is unsupported for this caller. The reason that a topic could not be added. Example: session.topics.add('foo').then(function() { ... }, function(err) { switch (err) { case diffusion.topics.TopicAddFailReason.EXISTS: ... case diffusion.topics.TopicAddFailReason.INVALID_PATH: ... } }); When trying to create the topic the cluster was migrating the partition that owns the topic. The correct owner could not be identified and the request failed. This is a transient failure for the duration of the partition migration. Adding the topic failed because of a license limit. The topic already exists with the same details. Adding the topic failed because a topic is already bound to the specified path but the caller does not have the rights to manage it. This can be because the topic is being managed by a component with exclusive control over the topic, such as fan-out and thus the caller will not be able to update or remove the topic. If the caller has suitable permissions then it could still subscribe to the topic, but the topic's specification may be different from that requested. The topic already exists, with different details. Adding the topic failed because an incompatible topic owned is already bound to the parent path. The topic could not be initialised, supplied value may be of the wrong format. 
Deprecation notice This value is associated only with removed methods that allow the specification of an initial value when creating a topic. It will be removed in a future release. The topic details are invalid. The supplied topic path is invalid. The topic path is invalid. Invalid permissions to add a topic at the specified path. A referenced topic could not be found. An unexpected error occured while creating the topic. A user supplied class could not be found or instantiated. Deprecation notice This value is associated only with removed methods that create topics. It will be removed in a future release. Enum containing possible Topic Types. Example: // Get a topic type for adding topics var topicType = diffusion.topics.TopicType.JSON; session.topics.add("foo", topicType); Binary Topic. This is a stateful topic that handles data in Binary format. Topic that stores and publishes IEEE 754 double-precision floating point numbers (i.e native JavaScript Numbers). Based on the double data type. Supports null Double values. The topic does not support delta-streams - only complete values are transmitted. Topic that stores and publishes 64-bit integer values. Based on the int64 data type. Values are of the type Int64. Supports null int64 values. Does not support delta-streams - only complete values are transmitted. JSON (JavaScript Object Notation) Topic. This is a stateful topic that handles data in JSON representation. Topic that stores and publishes data in the form of records and fields. Based on the RecordV2 data type. Supports delta-streams. Routing Topic. A functional topic that can point to different target topics for different clients. From the point of view of a client subscribing to such a topic this would be seen as a normal stateful topic but it has no state of its own and cannot be published to. Such a topic may specify a user written Java class which will be invoked to define the mapping of the topic to another data topic when a client subscribes. Alternatively the mapping can be delegated to a control client using the SubscriptionControl feature. Topic that stores and publishes String values. Based on the string data type. Supports null String values. Supports delta-streams. Time Series Topic. A time series is a sequence of events. Each event contains a value and has server-assigned metadata comprised of a sequence number, timestamp, and author. A time series topic allows sessions to access a time series that is maintained by the server. A time series topic has an associated event data type, such as Binary, String, or JSON, that determines the type of value associated with each event. The TIME_SERIES_SUBSCRIPTION_RANGE property configures the range of historic events retained by a time series topic. If the property is not specified, a time series topic will retain the ten most recent events. The TIME_SERIES_SUBSCRIPTION_RANGE property configures a time series topic to send a range of historic events from the end of the time series to new subscribers. This is a convenient way to synchronize new subscribers without requiring the use of a range query. By default, new subscribers will be sent the latest event if delta streams are enabled and no events if delta streams are disabled. See the description of Subscription range in the Session.timeseries time series feature} documentation. The TIME_SERIES_EVENT_VALUE_TYPE property must be provided when creating a time series topic. A topic type that is unsupported by the session. Enum containing reasons that an unsubscription occurred. 
Example: // Use UnsubscribeReason to validate unsubscription notifications session.addStream('>foo', diffusion.datatypes.string()) .on('unsubscribe', function(topic, specification, reason) { switch (reason) { case diffusion.topics.UnsubscribeReason.REMOVED : // Do something if the topic was removed default : // Do something else if the client was explicitly unsubscribed } }); The unsubscription occurred because the session is no longer authorized to access the topic. The server has a significant backlog of messages for the session, and the topic specification has the conflation topic property set to 'unsubscribe'. The session can resubscribe to the topic. The unsubscription is not persisted to the cluster. If the session fails over to a different server it will be resubscribed to the topic. The server or another client unsubscribed this client. The topic was removed The unsubscription was requested by this client. A fallback stream has been unsubscribed or subscribed due to the addition or removal of a stream that selects the topic. The server has re-subscribed this session to the topic. Existing streams are unsubscribed because the topic type and other attributes may have changed. This can happen if a set of servers are configured to use session replication, and the session connected to one server reconnects ('fails over') to a different server. A stream that receives an unsubscription notification with this reason will also receive a subscription notification with the new TopicSpecification. A reason that is unsupported by the session. The reason that a topic could not be updated. Example: session.topics.update('foo', 'bar').then(function() { ... }, function(err) { switch (err) { case diffusion.topics.UpdateFailReason.MISSING_TOPIC: ... case diffusion.topics.UpdateFailReason.EXCLUSIVE_UPDATER_CONFLICT: ... } }); An update to a replicated topic failed because the cluster was repartitioning due to a server starting, stopping, or failing. The session can retry the operation. An attempt has been made to apply a delta to a topic that has not yet has a value specified for it. Attempt to perform a non-exclusive update to a topic branch that already has an update source registered to it. An update could not be performed because the topic is managed by a component (e.g fan-out) that prohibits updates from the caller. The update was of a type that is not compatible with the topic it was submitted for, or the topic does not support updating. The updater used is not active. An update to a replicated topic failed because the cluster was repartitioning due to a server starting, stopping, or failing. The session can retry the operation. The topic being updated does not exist. An update was not performed because the constraint was not satisfied. The update failed, possibly because the content sent with the update was invalid/incompatible with topic type or data format. Action to be taken by the system authentication handler for connection attempts that do not provide a principal name and credentials.
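Tying the connection example and the CloseReason enum together, a hedged sketch of handling a failed connection might look like the following. Only diffusion.connect, the promise callbacks, and the two enum members shown elsewhere on this page (ACCESS_DENIED, CLOSED_BY_CLIENT) are taken from the documentation; the host value and the handling comments are placeholders.

diffusion.connect({ host: 'example.server.com' }).then(function(session) {
    // Connected with a session
    console.log('Connected!', session);
}, function(err) {
    switch (err) {
        case diffusion.clients.CloseReason.ACCESS_DENIED:
            // e.g. prompt the user for credentials again
            break;
        case diffusion.clients.CloseReason.CLOSED_BY_CLIENT:
            // e.g. the application itself aborted the attempt
            break;
        default:
            console.log('Failed to connect', err);
    }
});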
https://docs.pushtechnology.com/docs/6.6.0-preview.2/js/globals.html
2021-05-06T05:44:07
CC-MAIN-2021-21
1620243988741.20
[]
docs.pushtechnology.com
Spike

Getting the Simulator

You can use either RISC-V ISA Simulator or QEMU >= v4.2 shipped with your Linux distribution. If you prefer to build qemu from source, make sure you have the correct target enabled.

git clone
cd qemu
mkdir build
cd build
../configure --prefix=/opt/riscv --target-list=riscv64-softmmu,riscv32-softmmu
make

Building seL4test

repo init -u
repo sync
mkdir cbuild
cd cbuild
../init-build.sh -DPLATFORM=spike -DRISCV64=1
# The default cmake wrapper sets up a default configuration for the target platform.
# To change individual settings, run `ccmake` and change the configuration
# parameters to suit your needs.
ninja
# If your target binaries can be executed in an emulator/simulator, and if
# our build system happens to support that target emulator, then this script
# might work for you:
./simulate

If you plan to use the ./simulate script, please be sure to add the -DSIMULATION=1 argument when running cmake. Generated binaries can be found in the images/ directory. You can also run the tests on the 32-bit spike platform by replacing the -DRISCV64=TRUE option with -DRISCV32=TRUE.
https://docs.sel4.systems/Hardware/spike.html
2021-05-06T07:00:53
CC-MAIN-2021-21
1620243988741.20
[]
docs.sel4.systems
math¶ math. isfinite(x: float)¶ isfinite(float) -> bool Return True if x is neither an infinity nor a NaN, and False otherwise. math. ceil(x: float)¶ ceil(float) -> float Return the ceiling of x as an Integral. This is the smallest integer >= x. math. floor(x: float)¶ floor(float) -> float Return the floor of x as an Integral. This is the largest integer <= x. math. expm1(x: float)¶ expm1(float) -> float Return e raised to the power x, minus 1. expm1 provides a way to compute this quantity to full precision. math. ldexp(x: float, i: int)¶ ldexp(float, int) -> float Returns x multiplied by 2 raised to the power of exponent. math. atan2(y: float, x: float)¶ atan2(float, float) -> float Returns the arc tangent in radians of y/x based on the signs of both values to determine the correct quadrant. math. hypot(x: float, y: float)¶ hypot(float, float) -> float Return the Euclidean norm. This is the length of the vector from the origin to point (x, y). math. copysign(x: float, y: float)¶ copysign(float, float) -> float Return a float with the magnitude (absolute value) of x but the sign of y. math. trunc(x: float)¶ trunc(float) -> float Return the Real value x truncated to an Integral (usually an integer). math. lgamma(x: float)¶ lgamma(float) -> float Return the natural logarithm of the absolute value of the Gamma function at x. math. remainder(x: float, y: float)¶ remainder(float, float) -> float Return the IEEE 754-style remainder of x with respect to y. For finite x and finite nonzero y, this is the difference x - n*y, where n is the closest integer to the exact value of the quotient x / y. If x / y is exactly halfway between two consecutive integers, the nearest even integer is used for n. math. gcd(a: float, b: float)¶ gcd(float, float) -> float returns greatest common divisor of x and y. math. frexp(x: float)¶ frexp(float) -> Tuple[float, int] The returned value is the mantissa and the integer pointed to by exponent is the exponent. The resultant value is x = mantissa * 2 ^ exponent. math. modf(x: float)¶ modf(float) -> Tuple[float, float] The returned value is the fraction component (part after the decimal), and sets integer to the integer component. math. isclose(a: float, b: float)¶ isclose(float, float) -> bool Return True if a is close in value to b, and False otherwise. For the values to be considered close, the difference between them must be smaller than at least one of the tolerances. Unlike python, rel_tol and abs_tol are set to default for now.
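A short usage sketch of a few of the functions above, written in Python-style syntax as used by Seq (the literal values are arbitrary examples; adapt the print calls if your Seq version expects a different form):

import math

print(math.hypot(3.0, 4.0))         # 5.0 - Euclidean norm of the vector (3, 4)
print(math.copysign(2.0, -1.0))     # -2.0 - magnitude of x, sign of y
frac, whole = math.modf(2.75)       # (0.75, 2.0) - fractional and integer parts
mant, exp = math.frexp(8.0)         # 8.0 == mant * 2 ** exp
print(math.isclose(0.1 + 0.2, 0.3)) # True under the default tolerances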
https://docs.seq-lang.org/stdlib/math.html
2021-05-06T07:47:28
CC-MAIN-2021-21
1620243988741.20
[]
docs.seq-lang.org
Creating Main Deformation Chains for Multi-pose Rigs The first step in creating a multi-pose rig is to create the main chain for the additional poses. - In the Timeline view, make sure the time marker is set to the frame displaying your first drawing. - In the Node view, select the drawing layer containing the drawing you want to create the chain for. If you want to create the chain for an arm composed in multiple pieces, you must select only one of the drawing layer, such as the upper arm. The additional pieces will be added afterward. - In the Deformation toolbar, click the Create New Deformation Chain button. A new deformation chain is created and appears in the Transformation Chain drop-down list. - In the Deformation toolbar, select the Rigging tool. - Create your deformer structure—see About Creating Deformations. - If needed, in the Node view, link the new deformation group to the additional pieces, such as the forearm and hand. - Repeat these steps for all the deformation chains that need to be created on the current puppet's view. These chains are now the default deformation chains that will be used on all drawings not using a custom deformation chain.
https://docs.toonboom.com/help/harmony-20/premium/deformation/create-main-deformation-chain-multi-pose-rig.html
2021-05-06T07:29:25
CC-MAIN-2021-21
1620243988741.20
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Deformation/Step/HAR9_03_PoseRig_001.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Deformation/HAR12/H12_Multiple_Poses_Rig-004.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Deformation/Step/HAR9_03_PoseRig_005.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Deformation/HAR12/H12_Multiple_Poses_Rig.png', None], dtype=object) ]
docs.toonboom.com
The Directional Light simulates light that is being emitted from a source that is infinitely far away. This means that all shadows cast by this light will be parallel, making this the ideal choice for simulating sunlight. The Directional Light when placed:
https://docs.unrealengine.com/en-US/BuildingWorlds/LightingAndShadows/LightTypes/Directional/index.html
2021-05-06T07:37:10
CC-MAIN-2021-21
1620243988741.20
[array(['./../../../../../Images/BuildingWorlds/LightingAndShadows/LightTypes/Directional/Directional_LightHeader.jpg', 'Directional_LightHeader.png'], dtype=object) array(['./../../../../../Images/BuildingWorlds/LightingAndShadows/LightTypes/Directional/directional_001.jpg', 'Directional Light'], dtype=object) array(['./../../../../../Images/BuildingWorlds/LightingAndShadows/LightTypes/Directional/directional_002.jpg', 'Directional Light Shadow Frustum'], dtype=object) ]
docs.unrealengine.com
DisassociateFirewallRuleGroup Disassociates a FirewallRuleGroup from a VPC, to remove DNS filtering from the VPC. Request Syntax { "FirewallRuleGroupAssociationId": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - FirewallRuleGroupAssociationId The identifier of the FirewallRuleGroupAssociation. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Required: Yes Response Syntax { "FirewallRuleGroupAssociation": { "Arn": "string", "CreationTime": "string", "CreatorRequestId": "string", "FirewallRuleGroupId": "string", "Id": "string", "ManagedOwnerName": "string", "ModificationTime": "string", "MutationProtection": "string", "Name": "string", "Priority": number, "Status": "string", "StatusMessage": "string", "VpcId": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - FirewallRuleGroupAssociation The firewall rule group association that you just removed. Type: FirewallRuleGroupAssociation:
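For callers using an SDK rather than raw JSON requests, the equivalent call through boto3's Route 53 Resolver client would look roughly like the sketch below. The association ID is a placeholder, and the method and parameter names follow boto3's usual mapping of this API (snake_case method, same parameter name); check your SDK version's reference if in doubt.

import boto3

client = boto3.client("route53resolver")

# Remove the DNS Firewall rule group association from its VPC
response = client.disassociate_firewall_rule_group(
    FirewallRuleGroupAssociationId="rslvr-frgassoc-EXAMPLE11111"
)

# The removed association is echoed back, including its final Status
print(response["FirewallRuleGroupAssociation"]["Status"])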
https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DisassociateFirewallRuleGroup.html
2021-05-06T05:57:39
CC-MAIN-2021-21
1620243988741.20
[]
docs.aws.amazon.com
ListFirewallDomainLists Retrieves the firewall domain lists that you have defined. For each firewall domain list, you can retrieve the domains that are defined for a list by calling ListFirewallDomains. A single call to this list operation might return only a partial list of the domain lists. For information, see MaxResults. Request Syntax { "MaxResults": number, "NextToken": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - 100. { "FirewallDomainLists": [ { "Arn": "string", "CreatorRequestId": "string", "Id": "string", "ManagedOwnerName": "string", "Name": "string" } ], "NextToken": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - FirewallDomainLists A list of the domain lists that you have defined. This might be a partial list of the domain lists that you've defined. For information, see MaxResults. Type: Array of FirewallDomainListMetadata objects -:
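Because a single call may return only a partial list (bounded by MaxResults), client code typically loops until NextToken is absent. A rough boto3 sketch of that pattern follows; the method and parameter names follow boto3's standard mapping of this API, and the MaxResults value is arbitrary.

import boto3

client = boto3.client("route53resolver")

domain_lists = []
kwargs = {"MaxResults": 100}
while True:
    page = client.list_firewall_domain_lists(**kwargs)
    domain_lists.extend(page.get("FirewallDomainLists", []))
    token = page.get("NextToken")
    if not token:
        break
    kwargs["NextToken"] = token

for dl in domain_lists:
    print(dl["Id"], dl["Name"])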
https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_ListFirewallDomainLists.html
2021-05-06T06:38:56
CC-MAIN-2021-21
1620243988741.20
[]
docs.aws.amazon.com
The collation data in glibc is extremely out of date; most locales base their collation rules on an iso14651_t1_common file which has not been updated for probably more than 15 years. Therefore, all characters added in later Unicode versions are missing and not sorted at all, which causes bugs. The iso14651_t1_common file has therefore been updated to the latest version of ISO 14651, which is based on Unicode 9.0.0. Because of additions and changes in the syntax of the new iso14651_t1_common file, updating that file requires changing the collation rules of almost all locales. Because all these collation rules have to be touched anyway, this is a good opportunity to fix bugs in the collation rules and sync them with the collation rules in CLDR.
https://docs.fedoraproject.org/bg/fedora/f28/release-notes/desktop/I18n/
2021-05-06T07:14:36
CC-MAIN-2021-21
1620243988741.20
[]
docs.fedoraproject.org
Azure Identity library samples for JavaScript These sample programs show how to use the JavaScript client libraries for Azure Identity in some common scenarios. Prerequisites The samples are compatible with Node.js >= 8.0.0. You need an Azure subscription and an Azure Key Vault to run these sample programs. To create an AAD application: - Follow Documentation to register a new application in the Azure Active Directory (in the Azure portal). - Note down the CLIENT_IDand TENANT_ID. - In the "Certificates & Secrets" tab, create a secret and note that down. To allow your registered application to access your Key Vault - In the Azure portal, go to your Azure Key Vault. - In the left-side-navbar of your Azure Key Vault in the Azure portal, go to the Access Policiessection, then click the + Add Access Policybutton. - In the Add access policypage, select all the permissions for Keys, Secrets and Certificates. - For the Select principalfield, click on the None selected. A panel will appear at the right of the window. Search for your Azure Active Directory application, click the application on the search results, then click "Select" at the bottom. - Once your application is selected, click the "Add" button. - Click the Savebutton at the top of the Access Policies section of your Key Vault. - For more information on securing your Key Vault: Learn more Adapting the samples to run in the browser may require some additional consideration. For details, please see the package README. Setup To run the samples using the published version of the package: - Install the dependencies using npm: npm install helloWorld.js Alternatively, run a single sample with the correct environment variables set (step 2 is not required if you do this), for example (cross-platform): npx cross-env KEYVAULT_NAME="<key vault name>" AZURE_TENANT_ID="<AAD tenant id>" AZURE_CLIENT_ID="<AAD client id>" AZURE_CLIENT_SECRET="<AAD client secret>" node environmentCredential.js Next Steps Take a look at our API Documentation for more information about the APIs that are available in the clients.
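To give a feel for what the samples themselves do, here is a condensed sketch of the typical pattern: authenticate with DefaultAzureCredential (which picks up the AZURE_TENANT_ID, AZURE_CLIENT_ID and AZURE_CLIENT_SECRET environment variables set above) and use the credential with a Key Vault client. It is an illustrative outline rather than a copy of any one sample file; the secret name and value are placeholders.

const { DefaultAzureCredential } = require("@azure/identity");
const { SecretClient } = require("@azure/keyvault-secrets");

async function main() {
  // Builds the vault URL from the KEYVAULT_NAME environment variable used by the samples
  const vaultUrl = `https://${process.env.KEYVAULT_NAME}.vault.azure.net`;

  // DefaultAzureCredential tries environment variables, managed identity, etc. in order
  const credential = new DefaultAzureCredential();
  const client = new SecretClient(vaultUrl, credential);

  // Round-trip a secret to confirm the AAD application has access to the vault
  await client.setSecret("sample-secret", "sample-value");
  const secret = await client.getSecret("sample-secret");
  console.log(`Retrieved secret: ${secret.name}`);
}

main().catch((err) => {
  console.error("Sample failed:", err);
});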
https://docs.microsoft.com/en-us/samples/azure/azure-sdk-for-js/identity-javascript/
2021-05-06T07:51:15
CC-MAIN-2021-21
1620243988741.20
[]
docs.microsoft.com
Download the tool (exe) and documentation!
https://docs.microsoft.com/en-us/virtualization/community/team-blog/2013/20130522-hyper-v-replica-capacity-planner
2021-05-06T08:15:34
CC-MAIN-2021-21
1620243988741.20
[]
docs.microsoft.com
Get more context for task execution¶ When declaring a task, you can request for more context to be passed to the task function: @app.task(..., pass_context=True) def mytask(context: procrastinate.JobContext, ...): ... This serves multiple purposes. The first one is for introspection, logs, etc. The JobContext object contains all sort of useful information about the worker that executes the job. The other useful feature is that you can pass arbitrary context elements using App.run_worker (or App.run_worker_async) and its additional_context argument. In this case the context the task function receives will have an additional_context attribute corresponding to the elements that were passed: @app.task(pass_context=True) def mytask(context: procrastinate.JobContext): http_session = context.additional_context["http_session"] return await http_session.get("") async with AsyncSession() as http_session: await app.run_worker_async(additional_context={"http_session": http_session}) ... It may not be a good practice to use this additional_context object to share data from tasks to tasks. In order to keep the least surprising behavior, Procrastinate will try to keep modifications of this dictionary in one task from being visible by other tasks: tasks receive a shallow copy of this dict instead of the dict itself. That being said, the values kept in this dict are not processed by Procrastinate. Any task mutating a value inside this dict will impact what all the concurrent and following tasks will read. Note that if you start a worker, providing it an additional_context dict, and then modify the dict, the dict the tasks will receive will also be a shallow copy of the dict at the time the worker started running.
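To make the shallow-copy caveat concrete, here is a small hedged example (the task and key names are illustrative, and the same app object as in the snippets above is assumed). Reassigning a key in one task is invisible to other tasks, because each task receives its own shallow copy of the dict, but mutating a shared object stored inside the dict is visible everywhere:

@app.task(pass_context=True)
def rebind_key(context: procrastinate.JobContext):
    # Only affects this task's own copy of additional_context
    context.additional_context["label"] = "changed"

@app.task(pass_context=True)
def mutate_value(context: procrastinate.JobContext):
    # The list object itself is shared: concurrent and later tasks will see this append
    context.additional_context["seen_jobs"].append("worked")

async def main():
    await app.run_worker_async(
        additional_context={"label": "original", "seen_jobs": []}
    )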
https://procrastinate.readthedocs.io/en/stable/howto/context.html
2022-09-25T05:12:12
CC-MAIN-2022-40
1664030334514.38
[]
procrastinate.readthedocs.io
Started by Ken., September 20, 2018, 04:45:31 PM

Quote
Windows XP, Vista and 7: maximum of four primary partitions has been reached
On a computer with Windows XP, Vista and 7 you can sometimes only choose to give the entire hard disk to Mint, in the installer from Linux Mint. In that case you probably already have the maximum of four primary partitions on the hard disk. Logical partitions have no maximum; primary partitions do. At least in the old fashioned BIOS, because modern Windows 10 computers running on UEFI (provided that the UEFI is in UEFI mode!) don't have this limitation.
The solution is to destroy one of the primary partitions, for example by means of the application GParted on the Mint DVD. This can be a tiny partition, because size doesn't matter: the installer can afterwards retrieve enough space by shrinking another, existing partition. It's simply a matter of reducing the number of the primary partitions.
https://www.docskillz.com/docs/index.php?PHPSESSID=e07f997c120877303bdd073a973b9cab&topic=1410.msg6602
2022-09-25T05:30:14
CC-MAIN-2022-40
1664030334514.38
[]
www.docskillz.com
Started by Ronald, May 26, 2021, 06:16:36 PM

Quote from: Skhilled on May 29, 2021, 09:47:08 AM
The.
https://www.docskillz.com/docs/index.php?PHPSESSID=e07f997c120877303bdd073a973b9cab&topic=1741.msg7756
2022-09-25T05:58:08
CC-MAIN-2022-40
1664030334514.38
[]
www.docskillz.com
Two-Factor Authentication Settings Overview For additional security, administrators can configure two-factor authentication (also referred to as multi-factor authentication) for end users accessing AppsAnywhere. This adds an extra layer of security to the system as end users need to provide a code or additional authentication to access AppsAnywhere resources. There are two methods of configuring two-factor authentication: Users are prompted for two-factor authentication when logging into AppsAnywhere Application launches are configured to prompt for two-factor authentication using Swivel Two-factor authentication on login Using this method, two-factor authentication is not configured directly in AppsAnywhere, instead administrators configure a third-party single sign on method which has two-factor authentication enabled. The most common types are OAuth 2.0 (e.g. Azure AD or ADFS) and SAML 2.0. Administrators will need to refer to their identity provider for details on configuring two-factor authentication in the third-party single sign on system. Once setup, AppsAnywhere can be configured to direct users to the single sign on method when they access the base URL by setting the Action for Unauthenticated Users to redirect to the single sign on URL. Users will then be prompted for two-factor authentication. See Configuring SSO Defaults for more information. Two-factor authentication on applications This is configured in AppsAnywhere Admin once a Swivel API server has been configured. Refer to the Swivel provider documentation for more information on configuring Swivel. Once a Swivel API server has been setup, it can be connected to AppsAnywhere. Navigate to Settings > Two-Factor Authentication Set Enabled 2FA Module to Swivel. Enter the Swivel API URL. Enter the Swivel API Secret. Set the correct Version. To enable two-factor authentication on applications Set the Secure With Two-Factor Authentication? Never will not prompt for two-factor authentication. Always will prompt on every launched. Off-Site Only will only prompt when the user is defined as being off site. Save the application.
https://docs.appsanywhere.com/appsanywhere/2.12/two-factor-authentication-settings
2022-09-25T05:01:44
CC-MAIN-2022-40
1664030334514.38
[]
docs.appsanywhere.com
InfluxDB file system layout The InfluxDB Enterprise file system layout depends on the installation method or containerization platform used to install InfluxDB Enterprise. InfluxDB Enterprise file structure The InfluxDB file structure includes the following: - Data directory - WAL directory - Metastore directory - Hinted handoff directory - InfluxDB Enterprise configuration files Data directory (Data nodes only) Directory path where InfluxDB Enterprise stores time series data (TSM files). To customize this path, use the [data].dir configuration option. WAL directory (Data nodes only) Directory path where InfluxDB Enterprise stores Write Ahead Log (WAL) files. To customize this path, use the [data].wal-dir configuration option. Hinted handoff directory (Data nodes only) Directory path where hinted handoff (HH) queues are stored. To customize this path, use the [hinted-handoff].dir configuration option. Metastore directory Directory path of the InfluxDB Enterprise metastore, which stores information about the cluster, users, databases, retention policies, shards, and continuous queries. On data nodes, the metastore contains information about InfluxDB Enterprise meta nodes. To customize this path, use the [meta].dir configuration option in your data node configuration file. On meta nodes, the metastore contains information about the InfluxDB Enterprise RAFT cluster. To customize this path, use the [meta].dir configuration option in your meta node configuration file. InfluxDB Enterprise configuration files InfluxDB Enterprise stores default data and meta node configuration file on disk. For more information about using InfluxDB Enterprise configuration files, see: File system layout InfluxDB Enterprise supports .deb- and .rpm-based Linux package managers. The file system layout is the same with each. Data node file system layout Data node file system overview - /etc/influxdb/ - influxdb.conf (Data node configuration file) - /var/lib/influxdb/ - data/ - TSM directories and files - hh/ - HH queue files - meta/ - client.json - wal/ - WAL directories and files Meta node file system layout Meta node file system overview - /etc/influxdb/ - influxdb-meta.conf (Meta node configuration file) - /var/lib/influxdb/ - meta/ - peers.json - raft.db - snapshots/ - Snapshot directories and.
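The directory settings above map directly onto entries in the node configuration files. As an illustrative excerpt (not a complete configuration), a data node's influxdb.conf might pin the paths like this, using the package-install defaults shown in the layout above:

[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"

[hinted-handoff]
  dir = "/var/lib/influxdb/hh"

On a meta node, influxdb-meta.conf carries only the metastore path:

[meta]
  dir = "/var/lib/influxdb/meta"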
https://docs.influxdata.com/enterprise_influxdb/v1.10/concepts/file-system-layout/
2022-09-25T05:23:02
CC-MAIN-2022-40
1664030334514.38
[]
docs.influxdata.com
Manage software from FreeBSD ports

New in version 2014.1.0.

Warning: Any build options not passed here assume the default values for the port, and are not just differences from the existing cached options from a previous make config.

Example usage:

security/nmap:
  ports.installed:
    - options:
      - IPV6: off
https://docs.saltproject.io/en/3004/ref/states/all/salt.states.ports.html
2022-09-25T04:06:38
CC-MAIN-2022-40
1664030334514.38
[]
docs.saltproject.io
Extended Advertising Configuration Flags (Extended Advertiser)

Detailed Description

This enum defines configuration flags for the extended advertising.

Macro Definition Documentation

SL_BT_EXTENDED_ADVERTISER_ANONYMOUS_ADVERTISING
Omit advertiser's address from all PDUs (anonymous advertising). The advertising cannot be connectable or scannable if this flag is set.

SL_BT_EXTENDED_ADVERTISER_INCLUDE_TX_POWER
Include the TX power in advertising packets.
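Since these values are bit flags, an application combines them with bitwise OR when building the flags argument for an extended advertiser. A minimal C sketch is shown below; the header name is the usual one for the Silicon Labs Bluetooth stack but may differ by SDK version, and the API call that eventually consumes the flags is left out because it depends on the SDK in use.

#include <stdint.h>
#include "sl_bt_api.h"

// Advertise anonymously and include the TX power level in the PDUs.
// Note: anonymous advertising means the set cannot be connectable or scannable.
uint32_t adv_flags = SL_BT_EXTENDED_ADVERTISER_ANONYMOUS_ADVERTISING
                   | SL_BT_EXTENDED_ADVERTISER_INCLUDE_TX_POWER;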
https://docs.silabs.com/bluetooth/latest/a00042
2022-09-25T04:03:27
CC-MAIN-2022-40
1664030334514.38
[]
docs.silabs.com
About the Splunk Infrastructure Monitoring entity integration in ITSI

The Splunk IT Service Intelligence (ITSI) entity integration with Splunk Infrastructure Monitoring lets you use ITSI monitoring tools to investigate and troubleshoot your AWS, Azure, and GCP instances from Splunk Infrastructure Monitoring. The integration leverages the Splunk Infrastructure Monitoring Add-on, which runs on the search head cluster and provides generating search commands that fetch metrics and event data from your Splunk Infrastructure Monitoring account. For setup instructions, see Integrate Splunk Infrastructure Monitoring with ITSI.

Fetch data with the Splunk Infrastructure Monitoring Add-on

The Splunk Infrastructure Monitoring Add-on brings metrics and event data from Splunk Infrastructure Monitoring into ITSI on demand. The returned data bypasses Splunk indexes and streams directly into the Splunk interface. You can further manipulate the Splunk Infrastructure Monitoring data using Splunk Search Processing Language (SPL) to fit your specific use case. ITSI takes the data and populates the ITSI summary index with the appropriate metrics and events. For more information, see Set up Infrastructure Monitoring.

Add structure to your data with the Content Pack for Splunk Infrastructure Monitoring

When you install the Content Pack for Splunk Infrastructure Monitoring, ITSI entity discovery searches use the Splunk Infrastructure Monitoring Add-on to identify AWS, Azure, and GCP integration instances in your organization. The searches bring your cloud instances into ITSI in the form of entities and associate them with entity types. Each Splunk Infrastructure Monitoring entity contains a navigation link in the entity health dashboard leading back to the corresponding instance within Splunk Infrastructure Monitoring. The content pack automatically creates ITSI services corresponding to each integration type, which include KPIs to monitor critical functions. Once you configure the Splunk Infrastructure Monitoring integration, use the service topology tree included in the content pack to monitor multiple cloud integrations all in one place. The following image shows the populated service topology tree.
https://docs.splunk.com/Documentation/ITSI/4.9.6/Entity/SIMAbout
2022-09-25T06:09:02
CC-MAIN-2022-40
1664030334514.38
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Convert a dashboard to HTML

Splunk users running any version earlier than Splunk Enterprise 8.1.0 or Splunk Cloud Platform 8.0.2004 can convert a Simple XML dashboard to HTML for additional customization. Converting a dashboard to HTML is no longer supported for Enterprise version 8.1.0 and later, and Cloud version 8.0.2004 and later. Use Dashboard Studio to rebuild and configure dashboards. Splunk continues to support HTML pages that are not converted dashboards and HTML panels within SXML dashboards.
https://docs.splunk.com/Documentation/Splunk/7.2.3/Viz/ExportHTML
2022-09-25T06:15:15
CC-MAIN-2022-40
1664030334514.38
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
How to Write a Legally Binding Document For a Contract You may be wondering how to write a legally binding document for a contract. The following article will cover the elements of a legally binding contract, Do-it-yourself contracts, and getting a lawyer to draft your contract. If you have questions or concerns, don’t hesitate to ask us! We’re happy to help. We can even assist you if you have never drafted a contract before. Do-it-yourself contract writing Do-it-yourself contracts are available for purchase and download online. These templates require answers to questions and generate a customized contract. These templates are best used for simple transactions and are tailored to state laws. These do-it-yourself templates are an inexpensive option for creating simple contracts. However, they are not always the best choice for complex transactions. In order to avoid potential pitfalls, you should read these tips carefully. A lawyer’s input is essential for a legally sound contract. While most people rely on attorneys to draft contracts, many people choose to write them themselves. Although this is a great idea for the DIY-er, it is vital to consult with a lawyer when preparing these documents. To make things easier, you can download a sample contract online and substitute the facts. Once you’ve completed the contract, you can then review it to make sure it contains all of the essential terms. Non-binding contracts There are a few important things you need to know about contracts. Firstly, they must be valid. A contract is enforceable if all parties agree to its terms and conditions. It cannot be for illegal acts and must be signed by legal adults. Secondly, it must be based on mutual understanding. Finally, it must not be based on fraud or threats. Here are some important tips for writing a legally binding contract: There are two main types of contracts: binding and non-binding. The difference between the two lies in the purpose of each document. Non-binding documents are used for preliminaries, such as for discussion, so the parties can agree on all terms. However, they do not have the same legal weight as binding contracts. So, it is best to use them sparingly. If you’re writing a legally binding document, make sure it contains the following essential terms: Elements of a legally binding contract There are a number of essential elements to a legally binding contract. A contract must be entered into by competent parties who can fulfill the obligations under the agreement. Minors and people with limited mental capacity cannot enter into a contract. Courts generally find that parties with these limitations lack the legal capacity to enter into a legally binding contract. Although oral contracts are not as enforceable as those made in writing, they must still satisfy certain legal requirements. First, a contract must contain all the elements. Without any one of these elements, the contract may not be enforceable. The elements of a legally binding contract are the same in all jurisdictions, although some differ from jurisdiction to jurisdiction. In general, contracts must contain the following elements. Moreover, the offer should be precise and definitive to ensure that it is legally binding. The offer also states the terms and conditions of the agreement. Getting a lawyer to draft a contract Although anyone can draft a contract legally, certain elements must be present in order for a contract to be legally binding. 
For example, the contract must have both parties’ consent and legal competency, and it must contain the relevant details and clauses. Non-compete clauses are legal, but they don’t hold up in most states. A lawyer can make sure that all of the details are included. A contract is a series of promises that are enforceable by law. Most contracts are written, but a few exceptions do exist. While you can make a contract without a lawyer’s assistance, it is best to seek professional legal counsel. In addition to drafting the contract for you, a lawyer can negotiate better terms for you. While drafting the contract, be sure to discuss your expectations with your lawyer. A contract is a legal document and should be drafted properly. Creating a contract in Acrobat Sign Creating a legally binding document in ACROBAT Sign is a simple, yet powerful way to make agreements and contracts. The E-Sign Act, signed by President Bill Clinton on June 30, 2000, grants electronic signatures the same legal status as handwritten signatures in the United States. The act protects the legal effect of electronic signatures and prevents their denial in court. While signature laws vary by country, most have similar basic principles. Using a legal e-sign solution such as Acrobat Sign ensures compliance with these requirements. E-signature technology has become an important part of the legal system. In addition to the use of electronic signatures to verify a person’s identity, it also protects confidential information. A legal e-signature is a digital sound that is logically associated with a record. Electronic signatures can replace traditional handwritten signatures in virtually every process, including contracts, application forms, and government benefit enrollment forms. How to Write a Legally Binding Document For a Contract Contents
https://authentic-docs.com/how-to-write-a-legally-binding-document-for-a-contract/
2022-09-25T06:13:31
CC-MAIN-2022-40
1664030334514.38
[]
authentic-docs.com
This document explains how to structure and write database migrations for different scenarios you might encounter. For introductory material on migrations, see the topic guide.

Migrations that add unique fields

- Add the field on your model with the default=uuid.uuid4 and unique=True arguments (choose an appropriate default for the type of the field you're adding).
- Run the makemigrations command. This should generate a migration with an AddField operation.
- Generate two empty migration files for the same app by running makemigrations myapp --empty twice. We've renamed the migration files to give them meaningful names in the examples below.
- Copy the AddField operation from the auto-generated migration (the first of the three new files) to the last migration, change AddField to AlterField, and add imports of uuid and models.
- Edit the first (auto-generated) migration file. The generated migration class should look similar to this, for example:

# Generated by Django A.B on YYYY-MM-DD HH:MM
import uuid

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0003_auto_20150129_1705'),
    ]

    operations = [
        migrations.AddField(
            model_name='mymodel',
            name='uuid',
            field=models.UUIDField(default=uuid.uuid4, unique=True),
        ),
    ]

  Change unique=True to null=True – this will create the intermediary null field and defer creating the unique constraint until we've populated unique values on all the rows.
- In the first empty migration file, add a RunPython or RunSQL operation to generate a unique value (UUID in the example) for each existing row. Also add an import of uuid; a sketch of such a data migration is given below.
- Now you can apply the migrations as usual with the migrate command.

Note there is a race condition if you allow objects to be created while this migration is running. Objects created after the AddField and before RunPython will have their original uuids overwritten.

(MySQL's atomic DDL statement support refers to individual statements rather than multiple statements wrapped in a transaction that can be rolled back.)

Changing a ManyToManyField to use a through model
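The RunPython data migration referenced in the unique-fields steps above was not carried over in full, so the following is only a minimal sketch of what such a migration could look like. The myapp app, MyModel model, and uuid field names follow the surrounding examples; the dependency name 0004_add_uuid_field is an assumed placeholder for whatever name makemigrations generated in your project.

# Sketch of the data migration that populates unique UUIDs on existing rows.
import uuid

from django.db import migrations


def gen_uuid(apps, schema_editor):
    # Use the historical model so the migration keeps working if MyModel changes later.
    MyModel = apps.get_model('myapp', 'MyModel')
    for row in MyModel.objects.all():
        row.uuid = uuid.uuid4()
        row.save(update_fields=['uuid'])


class Migration(migrations.Migration):

    dependencies = [
        # Assumed name of the auto-generated AddField migration; adjust to your project.
        ('myapp', '0004_add_uuid_field'),
    ]

    operations = [
        # reverse_code is a no-op so the migration can be reversed without touching the UUIDs.
        migrations.RunPython(gen_uuid, reverse_code=migrations.RunPython.noop),
    ]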
https://django.readthedocs.io/en/3.1.x/howto/writing-migrations.html
2022-09-25T05:16:39
CC-MAIN-2022-40
1664030334514.38
[]
django.readthedocs.io
Delete the Association Between a Classic Key and a Target

When you delete the association between a classic key and a target (cloud KMS), the key is deleted from the cloud KMS, but remains in the Akeyless KMS. The key might not be deleted immediately from the cloud KMS, according to the cloud KMS deletion policy.

The CLI command to delete the association between a classic key and a target is:

akeyless delete-assoc-target-item --name <classic key name> --target-name <target name>

where:
- name: The name of the classic key from which you want to delete a target association.
- target-name: The name of the target whose association with the classic key you want to delete.

The full list of options for this command is:
- -n, --name: *Item name
- --id, --assoc-id: The association id to be deleted. Not required if target name specified
- -t, --target-name: The target name with which association will be deleted
-?
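As a concrete example, deleting the association between a hypothetical classic key named MyClassicKey and a hypothetical target named my-aws-kms-target would look like this (both names are placeholders, not values from this page):

akeyless delete-assoc-target-item --name MyClassicKey --target-name my-aws-kms-target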
https://docs.akeyless.io/docs/delete-the-association-between-a-classic-key-and-a-target
2022-09-25T05:26:03
CC-MAIN-2022-40
1664030334514.38
[]
docs.akeyless.io
Collect Relevant Data

How to collect relevant data in your labeled and unlabeled datasets using Aquarium

Overview

It is no easy task to decide what data you should label next and add to your training set, and determining which data will make the greatest impact on improving your model presents its own set of challenges. By focusing data collection and labeling on the highest-value data, you can get more model improvement in less time and at lower labeling cost than with random sampling.

If you have a large unlabeled dataset, Aquarium's Collection Campaign segment helps you quickly collect the subset you actually want to use—without the need for someone to manually review the entirety of your unlabeled data. Utilizing Collection Campaigns requires setting up a Collection Campaign segment within your dataset. Learn more about organizing your data with Segments.

Aquarium enables you to analyze your datasets to determine whether there are underrepresented areas within your data or areas where your model struggles. Once you have identified these difficult cases, you can group them into a Collection Campaign segment and find more examples similar to them in the unlabeled dataset. You can then send these examples to a labeling provider, use them to retrain your model, and get the most model improvement for the least labeling cost!

In this guide, the main steps we will cover are:
- Navigating a Collection Campaign segment
- Kicking off a similarity search through an unlabeled dataset
- Exporting your newly collected data from Aquarium

Once completed, you should feel comfortable with:
- Searching unlabeled datasets using example data you've collected
- Reviewing the results of the similarity search
- The process of exporting your newly found data

Prerequisites

This guide assumes that you have already found subsets of your data of interest for targeted unlabeled data collection and added them to a Collection Campaign type segment. Your teams can use Aquarium's various views to understand where your training dataset could benefit from additional, targeted data. We have an entire guide dedicated to the process of assessing your data quality here. In summary, Aquarium has tools that can help you find areas of confusion, low model metric scores, and sparse representation in order to target your data collection towards the datapoints that are most helpful to improving your model.

Embeddings must be generated for the unlabeled dataset being searched through. Whatever model you use to generate the new corpus's embeddings must be the same model you used to generate the Issue elements' embeddings. See this guide for uploading unlabeled data correctly using the embedding_version parameter.

In order to follow the step-by-step instructions, this guide assumes you have already:
- Created Collection Campaign segments
- Uploaded a labeled dataset
- Uploaded an unlabeled dataset

Collecting Relevant Data User Guide

This guide runs through the complete flow of setting up a Collection Campaign and collecting new, unlabeled data similar to the data you've previously identified in a Collection Campaign segment.

1. Navigate to Your Collection Campaign Segment

At this point the assumption is that you have already created a Collection Campaign segment. If you need help using Aquarium to assess your data quality and find subsets of data to put into a Collection Campaign segment, check out this guide!
In the top navigation bar in Aquarium, click the "Segments" button to be brought to the Segments page.

Segments Tab

Once on the Segments page, you'll be able to view all your created segments. Navigate to the Data Collection tab to view all of your created Collection Campaign segments.

Data Collection Tab

Once you have selected which Collection Campaign you would like to work with, click on its name in the table to view details about your specific segment.

Detail view of a Collection Campaign segment

2. Click on the "Collected" Tab

For more information regarding a Collection Campaign segment, read here. If you have not run a Collection Campaign on this segment before, your screen will look like this:

You will be able to see text that says, "No samples have been collected yet!"

3. Start the Similarity Search

In the Collected tab, if you have properly uploaded an unlabeled dataset, you will see a dropdown with all of the valid unlabeled datasets that you are able to search through. Depending on your goals for the similarity search, it may make sense to split your unlabeled datasets up in different ways when uploading.

Unlabeled dataset dropdown

Click the button to the right of your chosen unlabeled dataset that says "Calculate Similar Dataset Elements", and you'll see the text below it change to reflect the status of your similarity search. You'll also see a green bar pop up at the top of your screen indicating the similarity search has started.

4. Review Results of the Search

Results can take anywhere from 10 seconds to a couple of minutes as Aquarium compares your subset of data to the indexed unlabeled dataset. Once returned, your screen will look like this:

You can see the total number of results returned and tiles for each result

You can scroll through the returned images and use Sort By Ascending or Descending to view the elements by similarity score. At this point you could export your data and follow step 6 in this guide, but for even better results you may want to take it a step further and refine your search results.

5. Iteratively Refine Search Results

You may want to iteratively refine your search results, and by running your first similarity search in your unlabeled dataset you have taken the first step. Once you have done a first pass on your initial results, you can select an image by clicking on the circle in the top-left corner of each tile, and then choose to accept or discard a frame or crop (the examples shown are at the frame-level view). That element will then show up under the Accepted tab or Discarded tab. Elements added to the Accepted bucket will be used as seed elements when rerunning the similarity search.

Once you have added at least 10 elements to the Accepted bucket and 20 to the Discarded bucket, run another similarity search by clicking the white "Recalculate Similar Dataset Elements" button that will appear. By going through multiple iterations of search and refinement, you can grow the number of relevant examples in the issue and get more relevant collection results after each iteration. Repeat this process as many times as needed to refine your newly collected dataset.

6. Export Your Collected Data

Once you have completed your similarity search and data refinement, Aquarium provides two options for exporting your newly created dataset:
1. Batch export to JSON
2. Use a webhook to export your data directly to a labeling provider
We have separate pages in our docs dedicated to exporting data out of Aquarium. These docs will show you how the exported data is formatted, as well as things like how to set up a webhook with a labeling provider. To access both options, use the dropdown button in the top-right corner of your screen to select which export option you would like to use. Note that if you have not set up a webhook to the labeling provider, that button will be greyed out.

Greyed out export button

GIF demonstrating how to export your data to a JSON file depending on which tab you are in

Within the Unsorted, Accepted, and Discarded tabs, you can also select individual elements to export instead of all of the data contained in the tab. Your download will start immediately; depending on how much data you are exporting it can take a little longer, but it should begin within a few seconds.

And congrats! You have successfully located new targeted subsets of your unlabeled data to label and add into your training set in order to improve model performance! Have questions about other export formats or want to discuss a more custom option to the workflow in this guide? Please feel free to reach out to us here.
https://docs.aquariumlearning.com/aquarium/3.-common-workflows/collect-relevant-data
2022-09-25T05:04:03
CC-MAIN-2022-40
1664030334514.38
[]
docs.aquariumlearning.com
Selecting projects

To assign ClockWork tickets to projects, you will need to select the projects to appear in your User View.
- Select the Menu in ClockWork
- Select Projects
- Press the dropdown menu for established projects and select your project
- Press ‘Add to My Projects’
- You have successfully added a project to your User View when you see the Project Areas appear.

Repeat the steps to add additional projects. Navigate to the Ticket window and select the project from your Project list.
https://docs.eps-office.com/Clockwork/SelectingProjects
2022-09-25T04:29:08
CC-MAIN-2022-40
1664030334514.38
[]
docs.eps-office.com
What was decided upon? (e.g. what has been updated or changed?)

Show Only, Date, Format, Library, Location, Language, Author/Creator, Subject, Journal Title, New Records, Collection

Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma)

Notes from Subject Librarians:
1. Availability – default
2. Library – very important
3. Resource Format – include data, peer reviewed options
4. Date
5. Subject – Note follow-up comment from a librarian: “it occurred to me that maybe people confuse ‘Subject’ in DL with LC subject headings. The facet ‘Subject’ in DL is not subject headings and therefore isn’t as helpful … usually. In my experience, the subjects are too broad. For instance, the Subject facet, Jewish Identity, is not an LC subject heading. The LC subject heading is: Jews –Identity. So my argument would be to drop ‘Subject’ down the priority list.”
6. Author/Creator
7. Language
8. Databases: Put at the bottom for staff use. Not useful for subject librarians.

Agreed that the filters should be on the left side of the page.

Who decided this? (e.g. what unit/group)

User Interface

When was this decided?

Additional information or notes.
https://docs.library.vanderbilt.edu/2018/10/15/facets-filters-see-following-rows-for-specifics-on-each-filter-2/
2022-09-25T05:29:13
CC-MAIN-2022-40
1664030334514.38
[]
docs.library.vanderbilt.edu
An alert correlation policy defines user settings, described below, that are applied when taking first response actions on alerts.

Policy modes

The following policy modes are supported:

Filter criteria setting

This setting helps select the alerts to which the policy applies.

Alert Pattern Actions

There is one alert pattern action available.

Suppress seasonal alerts setting

With this setting, the system suppresses alerts that occur regularly, at around the same time. For example, a high CPU utilization alert that occurs nightly at around 1:00 AM due to a scheduled backup job on a server and usually goes back to the OK state by 1:30 AM.

Alert Attribute Actions

There are two alert attribute actions.

Suppress alerts

With this setting, you can create suppression conditions to suppress alerts that have certain alert attributes. Note that if the alert payload has a source time that is older than the suppression time, the First Response recommendation or suppression is not applied.

Run processes

With this setting, a process definition runs on alerts that are expected. For example, assigning an alert as a user task to an assignee.

Key Considerations

First response considerations:
- If the data is not accurate in the training file, the system uses the learned historical data (Continuous Learning).
- If the alert is suppressed, the run process is not applied. The run process is applied later, only when the alert is unsuppressed.
- Higher priority is given to a policy that is in enabled mode and includes user-defined conditions.

An action can have one or more policies. The priority rule is applied only when one action qualifies for multiple policies. For multiple policies, during run time the system initially checks the policy mode and gives higher priority to the policy with the ON mode. If the policy has user-defined conditions (Suppress for a specific duration), the alert is suppressed accordingly. The system applies the following order of priority when executing a policy:
- Policy modes: ON > Recommend > Observed
- First response conditions: User-defined setting > Training file > Machine learning

Next steps
- Review Training File.
- See Managing First Response Policy.
https://docs.opsramp.com/solutions/alerting/first-response/
2022-09-25T05:39:52
CC-MAIN-2022-40
1664030334514.38
[]
docs.opsramp.com
After you have the latest version of WordPress, save the downloaded Rara Business theme somewhere handy on your computer, as you will be using the included files for the rest of the installation process.

The Rara Business theme download includes:
- A WordPress theme file (in .zip format)— this is the rara-business.zip file that you will upload in the steps below.

Now, to install and activate the theme, follow these steps or the GIF above:
- Go to Appearance > Themes.
- Click on the Add New button.
- Click on Upload Theme.
- Click on “Choose File”, select the “rara-business.zip” file from your computer, and click Open.
- Click Install Now.
- After the theme is installed, click on Activate to use the theme on your website.
https://docs.rarathemes.com/docs/rara-business/theme-installation-activation/how-to-install-activate-rara-business-wordpress-theme/
2022-09-25T05:12:35
CC-MAIN-2022-40
1664030334514.38
[array(['https://docs.rarathemes.com/wp-content/uploads/2019/07/how-to-activate-theme-gif-for-rara-business.gif', 'how to activate theme gif for rara business.gif'], dtype=object) array(['https://docs.rarathemes.com/wp-content/uploads/2019/07/themes-for-rara-business.png', 'themes for rara-business'], dtype=object) array(['https://docs.rarathemes.com/wp-content/uploads/2019/07/add-new-rara-business.png', 'add new rara-business'], dtype=object) array(['https://docs.rarathemes.com/wp-content/uploads/2019/07/upload-theme-rara-business.png', 'upload theme rara-business'], dtype=object) array(['https://docs.rarathemes.com/wp-content/uploads/2019/07/choose-file-rara-business.png', 'choose file rara-business'], dtype=object) array(['https://docs.rarathemes.com/wp-content/uploads/2019/07/install-theme-rara-business.png', 'install theme rara-business'], dtype=object) array(['https://docs.rarathemes.com/wp-content/uploads/2019/07/Page-10-Image-45.png', 'Activate rara business pro'], dtype=object) ]
docs.rarathemes.com
Delete spam reports

DELETE /v3/suppression/spam_reports

Base url:

This endpoint allows you to delete your spam reports. Deleting a spam report will remove the suppression, meaning email will once again be sent to the previously suppressed address. This should be avoided unless a recipient indicates they wish to receive email from you again. You can use our bypass filters to deliver messages to otherwise suppressed addresses when exceptions are required.

There are two options for deleting spam reports:
- You can delete all spam reports by setting the delete_all field to true in the request body.
- You can delete a list of select spam reports by specifying the email addresses in the request body.

Authentication
- API Key

Headers

Request Body
- Indicates if you want to delete all email addresses on the spam report list (the delete_all field described above).
- A list of specific email addresses that you want to remove from the spam report list.
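As a rough illustration of the two options described above, requests along these lines could be used. The delete_all field name comes from the text above; the emails field name, the https://api.sendgrid.com base URL, and the Bearer-token Authorization header are assumptions not stated on this page, so confirm them against SendGrid's API reference before relying on them.

# Sketch: delete all spam reports (delete_all is named in the text above)
curl -X DELETE "https://api.sendgrid.com/v3/suppression/spam_reports" \
  -H "Authorization: Bearer $SENDGRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"delete_all": true}'

# Sketch: delete selected addresses (the "emails" field name is an assumption)
curl -X DELETE "https://api.sendgrid.com/v3/suppression/spam_reports" \
  -H "Authorization: Bearer $SENDGRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"emails": ["spam-report-1@example.com", "spam-report-2@example.com"]}'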
https://docs.sendgrid.com/api-reference/spam-reports-api/delete-spam-reports
2022-09-25T04:26:46
CC-MAIN-2022-40
1664030334514.38
[]
docs.sendgrid.com