Dataset columns:
- content: string (length 0 to 557k)
- url: string (length 16 to 1.78k)
- timestamp: timestamp[ms]
- dump: string (length 9 to 15)
- segment: string (length 13 to 17)
- image_urls: string (length 2 to 55.5k)
- netloc: string (length 7 to 77)
DeconfigureAgent
Scope: All
Intended Audience: CommVault Internal, CE/System Test, Development
Platform: CommServer (Database: CommServ)
Release: 10.0.0
Description: Deconfigure a client or iDataAgent.
Example:
1) Deconfigure a particular iDataAgent of a client: qoperation execscript -sn DeconfigureAgent.sql -si jellyfish -si 'file system'
2) Deconfigure a client and all iDataAgents: qoperation execscript -sn DeconfigureAgent.sql -si jellyfish
Usage: qoperation execscript -sn DeconfigureAgent.sql -si client_name [-si iDataAgent]
script_name: QS_DeconfigureAgent
Force: Deconfigure a client or iDataAgent.
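The readme above only shows the raw command line; as a hedged illustration of scripting it, here is a minimal Python sketch. The client name "jellyfish" and the "file system" agent are the example values from the readme, and the sketch assumes qoperation is on the PATH with an authenticated qlogin session already established.

import subprocess

# Hypothetical wrapper around the documented qscript invocation.
def deconfigure(client, agent=None):
    cmd = ["qoperation", "execscript", "-sn", "DeconfigureAgent.sql", "-si", client]
    if agent:
        cmd += ["-si", agent]          # optional iDataAgent argument, as in the Usage line
    subprocess.run(cmd, check=True)    # raises CalledProcessError if the qscript fails

# Deconfigure a single iDataAgent, then an entire client, as in examples 1) and 2).
deconfigure("jellyfish", "file system")
deconfigure("jellyfish")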
http://docs.snapprotect.com/netapp/v10/article?p=features/cli/qscripts/CommServ.QS_DeconfigureAgent.Readme.html
2022-01-29T02:05:29
CC-MAIN-2022-05
1642320299894.32
[]
docs.snapprotect.com
IsNameDefined
Determines whether there is a logon name defined for this user in this zone.
Syntax: bool IsNameDefined {get; set;}
Property value: Returns true if there is a logon name defined for this user. Set this property to false to clear the logon name.
Exceptions: IsNameDefined throws an InvalidOperationException if the name has not been defined and you attempt to set this property to true.
https://docs.centrify.com/Content/win-prog/HierarchicalUser-IsNameDefined.html
2022-01-29T00:53:42
CC-MAIN-2022-05
1642320299894.32
[]
docs.centrify.com
Citus Documentation Welcome to the documentation for Citus 10.2! Citus is an open source extension to PostgreSQL that transforms Postgres into a distributed database. To scale out Postgres horizontally, Citus employs distributed tables, reference tables, and a distributed SQL query engine. The query engine parallelizes SQL queries across multiple servers in a database cluster to deliver dramatically improved query response times, even for data-intensive applications.
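As a brief, hedged illustration of the concepts named above (distributed tables and reference tables), the sketch below runs Citus' create_distributed_table and create_reference_table functions from Python. The table names, columns, and connection settings are hypothetical and not taken from this page; psycopg2 and a running Citus coordinator are assumed.

import psycopg2

# Connect to the Citus coordinator node (connection details are assumed/hypothetical).
conn = psycopg2.connect("host=localhost dbname=app user=postgres")
conn.autocommit = True
cur = conn.cursor()

# A hypothetical events table, sharded across worker nodes by tenant_id.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        tenant_id int,
        event_id bigint,
        payload jsonb,
        PRIMARY KEY (tenant_id, event_id)
    )
""")
cur.execute("SELECT create_distributed_table('events', 'tenant_id')")

# A small shared lookup table, replicated to every node as a reference table.
cur.execute("CREATE TABLE IF NOT EXISTS plans (plan_id int PRIMARY KEY, name text)")
cur.execute("SELECT create_reference_table('plans')")

cur.close()
conn.close()

Queries that filter and join on tenant_id can then be parallelized by the distributed query engine across the worker nodes.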
https://docs.citusdata.com/en/stable/
2022-01-29T01:40:05
CC-MAIN-2022-05
1642320299894.32
[]
docs.citusdata.com
Custom Fonts
Neuron Themes comes bundled with a large set of 977+ Google fonts and text options, as well as the freedom to select color, line height, letter spacing, text transform, or font size. Nevertheless, we have left the opportunity open to add custom fonts on your own, if that is what you are looking for.
To add your Custom Fonts:
- Go to WP Dashboard > Neuron > Custom Fonts
- Click on Add New
- Type the name of your custom font
- Click on Add Font Variation
Choose the Font Variation options:
- Choose the font's weight. You can select from all the weights: normal, bold, lighter, 200, 300, and more. Make sure to choose a weight that the font supports.
- Decide on the font's style: Normal, Italic, Oblique
- Choose the format you want to upload the font in
- Click on Publish
- You will need to repeat the process for every weight and font style you use.
Formats Available:
- Web Open Font Format (WOFF) - WOFF is a font format used in web pages.
- Web Open Font Format (WOFF 2.0) - implemented in all major browsers, and widely used on production websites. It supports the entirety of the TrueType and OpenType formats.
- TrueType Fonts (TTF) - the basic fonts created by Apple and Microsoft, and the most commonly used format.
- SVG Fonts/Shapes - this format is used to embed glyph information into SVG to display font text.
- Embedded OpenType Fonts (EOT) - this format is a compact form of OpenType fonts supported only by Microsoft Internet Explorer. Make sure to use this format to support IE.
View your Custom Fonts:
- Add a New Post
- Drag in the Heading, or any other widget that supports text
- Go to the Style Tab, then go to Typography
- From the drop-down Font Family menu, you will be able to see the added font at the top of the menu.
https://docs.neuronthemes.com/article/81-custom-fonts
2022-01-29T02:20:42
CC-MAIN-2022-05
1642320299894.32
[]
docs.neuronthemes.com
Templates
A template is a file on disk which can be used to render dynamic data provided by a view. Pyramid offers a number of ways to perform templating tasks out of the box, and provides add-on templating support through a set of bindings packages. Before discussing how built-in templates are used in detail, we'll discuss two ways to render templates within Pyramid in general: directly and via renderer configuration.
Using Templates Directly
Using an asset specification instead of a relative template name is usually a good idea, because calls to render_to_response() using asset specifications will continue to work properly if you move the code containing them to another location. You can use the templating system with which you're most comfortable (Pyramid provides supported Mako bindings, among others).
System Values Used During Rendering
When a template is rendered using render_to_response() or render(), renderers make these names available as top-level template variables.
Templates Used as Renderers via Configuration
A relative template name is assumed to be relative to the directory in which the file that defines the view configuration lives. In this case, this is the directory containing the file that defines the my_view function. Similar renderer configuration can be done imperatively. See Writing View Callables Which Use a Renderer.
See also: Built-in Renderers.
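To make the "directly" case concrete, here is a minimal hedged sketch of a view calling render_to_response() with an asset specification, as recommended above. The package name, template name, and the pyramid_chameleon binding are assumptions, not taken from this page.

from pyramid.renderers import render_to_response

# A minimal sketch; "mypackage:templates/mytemplate.pt" is a hypothetical asset
# specification and assumes a templating binding (e.g. pyramid_chameleon) is configured.
def my_view(request):
    # The dictionary values become top-level names available inside the template.
    return render_to_response('mypackage:templates/mytemplate.pt',
                              {'project': 'demo', 'items': [1, 2, 3]},
                              request=request)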
https://docs.pylonsproject.org/projects/pyramid/en/1.8-branch/narr/templates.html
2022-01-29T01:11:38
CC-MAIN-2022-05
1642320299894.32
[]
docs.pylonsproject.org
Library Properties (Drive)
Use this dialog box to view or modify the drive-related options in a library.
Attributes
- Mark Library/Drive Broken When Error Thresholds Exceeded: When selected, the library or drives will be marked as Broken when the accumulated software and hardware error count exceeds the corresponding threshold values established in the Hardware Maintenance Thresholds dialog box in the Control Panel.
- Verify access path using Serial Number for Drive: When selected, the drive serial number and access path are verified before reading or writing to the media. It is strongly recommended that this option be enabled at all times to prevent the overwriting of data when the drive access path is changed due to hardware configuration changes.
- Check for cleaning media loaded in Drive: When selected, the media is checked to see if it is a cleaning media before performing any other operation on the media. It is strongly recommended that this option be enabled at all times. In most libraries, unselecting this option may result in SCSI failures which require manual intervention to unload the media from the drive.
- Check for Tape Alerts: When selected, error messages provided by the drive manufacturer are recorded in the log files.
- Set drive as needs cleaning on Cyclic Redundancy Check (CRC) errors: When selected, the drive is marked for cleaning when CRC errors are encountered.
- Skip Unload drive for Autoloaders before unmounting media: When selected, indicates that the MediaAgent must skip the unload drive operation for autoloaders before unmounting the media.
- Detect and update media type when media is loaded into the drive: When selected, the MediaAgent automatically detects the correct media type when the media is used the first time. This is useful, for example, if you import mixed media in bulk and they were discovered as a specific media type. (The media type information can be viewed from the Media Properties associated with the specific media.) This option is supported for IBM Ultrium and DLT/SDLT drives.
- Enable Auto Drive Replacement when new device is detected during Mount: When selected, the MediaAgent automatically detects new drives that were replaced, if drive serialization is supported in the library. It is strongly recommended that this option be enabled before replacing drives in libraries that support drive serialization, as it provides a one-touch solution for replacing drives. In some cases the system may not automatically detect the new drives. In such situations, follow the alternate procedures described in Books Online to replace the drives.
- Check for media change in drive every n minute(s): This option is applicable only for stand-alone drives. When selected, indicates how often the system must automatically check the media in the drive to update the media information.
SCSI Reservation
- Use SCSI Reserve for drive operations: When selected, indicates that the MediaAgent must use the drive exclusively during data protection and other operations, using SCSI reservation. This option is useful in the SAN environment where multiple computers may try to access the same drive, resulting in data corruption. Before enabling this option, verify and ensure that the hardware (i.e., drives, SAN switches, etc.) supports this type of operation. Refer to the hardware manufacturer's documentation to see if this operation is supported. If this option is enabled and the hardware does not support this type of operation, subsequent data protection jobs may fail.
- Use SCSI-3 Reserve: Choose this option to enable persistent SCSI-3 reservation to exclusively access the drive during read/write operations.
- Use SCSI-2 Reserve for content resolution: Choose this option to enable SCSI-2 reservation to exclusively access the drive during read/write operations.
Enable Auto-Cleaning
The Auto-Cleaning options are not available for stand-alone drives.
- On sense code: When selected, the cleaning tape is automatically mounted and a drive cleaning operation is performed whenever the hardware indicates that a drive requires cleaning. When cleared, the drive does not get automatically cleaned when the hardware indicates that a drive requires cleaning. As a result, subsequent mount operations in the drive may fail.
- Cleaning thresholds exceeded: When selected, the cleaning tape is automatically mounted and a drive cleaning operation is performed when the accumulated software and hardware errors and the total number of drive usage hours exceed the corresponding threshold values established in the Drive Cleaning tab of the Hardware Maintenance Thresholds dialog box. When cleared, the drive does not get automatically cleaned when the hardware indicates that a drive requires cleaning. As a result, subsequent mount operations in the drive may fail.
- Wait n day(s) after last cleaning: When selected, an automatic drive cleaning operation will not be performed for the specified number of days. When cleared, and if the On sense code and/or thresholds exceeded options are enabled, an automatic drive cleaning operation will be performed whenever those conditions require drive cleaning.
- Continue using drive even if it needs cleaning during restore: When selected, the drive will be used for restore operations even if the drive is marked as needing cleaning, either by sense code or exceeded thresholds. When cleared, the drive will not be used for any operation.
If the library is a stand-alone drive, the Stuck in Drive and Enable Auto Cleaning options are not applicable and hence not displayed.
http://docs.snapprotect.com/netapp/v10/article?p=en-us/universl/library_operations/prop_box/drive.htm
2022-01-29T02:09:05
CC-MAIN-2022-05
1642320299894.32
[]
docs.snapprotect.com
General Notes
The following debug sequence should be used when you have no idea why the system isn't working. If you do have an idea, you may skip unnecessary sections.
Set debug options to "True" in the following Murano configuration files:
/etc/murano-api/murano-api.conf
/etc/murano-conductor/conductor.conf
Stop both the murano-api and murano-conductor services. We will start them one by one from the console.
murano-api
First, the murano-api must be started.
Open a new console and start the murano-api service manually:
murano-api --config-dir /etc/murano-api > /var/log/murano-api-live.log 2>&1 &
tailf /var/log/murano-api-live.log
Open the dashboard, then create and deploy some simple environment. Open the RabbitMQ web console, open your vhost, and ensure that queues were created and there is at least one message. Check the log for errors - there shouldn't be any. Keep the murano-api service running.
murano-conductor
After the murano-api, the murano-conductor should be started.
Open a new console and start the conductor from the console:
muranoconductor --config-dir /etc/murano-conductor > /var/log/murano-conductor-live.log &
tailf /var/log/murano-conductor-live.log
Check that there are no Python exceptions in the log. Some errors like 404 are OK, as the conductor tries to delete an environment that doesn't exist. Check the Heat stack status. It should not be in the FAILED state. If it is, check the Heat and Nova error logs to find the cause. Keep the murano-conductor service running.
Log Files
There are various log files which will help you to debug the system.
Murano Log Files:
/var/log/murano-api.log
/var/log/murano-conductor.log
/var/log/apache2/errors.log
/var/log/httpd/errors.log
Windows Log Files:
C:\Program Files (x86)\CloudBase Solutions\logs\log.txt
C:\Murano\Agent\log.txt
C:\Murano\PowerShell.log
http://murano-docs.github.io/0.2.11/administrators-guide/content/ch04.html
2022-01-29T01:50:23
CC-MAIN-2022-05
1642320299894.32
[]
murano-docs.github.io
Cloud Platform limits the actions you take within the user interface with the teams, roles, and permissions system. In some cases, if you don’t have the permission to take an action, the interface control is present, but disabled. In other cases (for example, the Subscriptions page if you aren’t the Owner or Administrator of the subscription), the control or page isn’t displayed at all. If there is an action you want to take, but can’t, check your roles and permissions in the Manage page of the Cloud Platform user interface. On that page, you can also find the names of the Owner or Administrator of each organization of which you are a member, and the names of all the teams and the team leads. Depending on your goal, you can contact the Owner, Administrator, or Team lead and request the role you need. Sign in to the Cloud Platform user interface and click Manage in the top menu. The My Organizations page displays every organization in which you have a role. Cloud Platform displays each organization on a separate card, which lists the organization Owner and each Administrator for the organization. Look at the card for the organization you want to work on. The card lists each team you’re a member of, and users assigned to the organization. It also lists what role you have for each team. For more detailed information, click Manage to display the Team management page.
https://docs.acquia.com/cloud-platform/access/teams/my-roles/
2022-01-29T01:15:11
CC-MAIN-2022-05
1642320299894.32
[]
docs.acquia.com
Mobile Attribution
Get visibility into each customer acquisition channel and the LTV of each channel.
Features
- App installation attribution: Tracking your app installs from each channel is crucial to your marketing campaign; know which channels are doing better and improve them!
- Revenue per acquisition campaign: Know the lifetime value per user from your marketing channels and which channel is bringing in more valuable users.
- Trace App Install: Track every install of your app to know how your channel is performing.
- Trace deferred deep linking: No matter where your deep link was opened, you'll always know, because our deep links get tracked on mobile or desktop.
- Measure the ROI/sales for influencer campaigns on Snapchat, Instagram Stories, SMS, etc.
- Link Retargeting: with built-in social media/web push retargeting in our tracker URLs (smartDeepLinks).
- Post-install event tracking: Track your users' events post-installation and link them to the acquisition channel.
- Shortened, branded tracking URLs.
Use Cases
- App installation attribution
- App conversion event attribution
- Continue the user flow with Web2App routing: Direct your users to your app even if they don't have it installed yet! With built-in social media retargeting, even those who didn't install the app will see your retargeting ads.
- App install detection: Detects whether the user has your app already installed or not. With deferred deep linking, the user's screen flow will continue to the target screen even if the app was just installed.
- Advertise your app's content in other apps: Smart Deeplinks help you effectively advertise within app content. You can also customize the user's experience with the option to show an App Content Preview on the web to improve installation conversion.
https://docs.appgain.io/mobileattribution/gettingStarted/
2022-01-29T00:45:48
CC-MAIN-2022-05
1642320299894.32
[array(['https://cdn.appgain.io/docs/appgain/mobileAttribution/gettingStarted/1.png', 'enter image description here'], dtype=object) array(['https://cdn.appgain.io/docs/appgain/mobileAttribution/gettingStarted/2.png', 'enter image description here'], dtype=object) array(['https://cdn.appgain.io/docs/appgain/mobileAttribution/gettingStarted/3.png', 'enter image description here'], dtype=object) array(['https://cdn.appgain.io/docs/appgain/mobileAttribution/gettingStarted/4.png', 'enter image description here'], dtype=object) array(['https://cdn.appgain.io/docs/appgain/mobileAttribution/gettingStarted/5.png', 'enter image description here'], dtype=object) ]
docs.appgain.io
How to Create an Exit Intent Popup
Exit intent popups will help you convert users into possible clients at the very last moment, just when they are about to leave the page: capture your users' attention and make them an offer they cannot resist. Converting website visitors is tricky, but we have all the tools and means necessary to create powerful and capable popups that are superior in design as well.
Popup settings:
- Width - 1050px
- Height - Fit to Content
- Close Button - OFF
- Entrance Animation - Slide Up
- Exit Animation - Slide Down
Add the content for the Popup
- Add a section with two columns. From the Content tab, set the content width to the value of 1050 and the column gap to No Gap. Go to the Styles tab and set the background color.
- Drag in the image element and upload the image. Set the image size to full.
- Now click on the first column handle to open its options, set the Vertical Align to Middle, and set the Horizontal Align to Center. Go to the Advanced Tab and add some right and left padding to the value of 80 pixels.
- Drag in a heading element. We will write a catchy copy for the popup.
- Another heading element will need to be placed on the popup, this time for the main content of the popup. Write it and then style it.
- Next, we will drag in a text editor element to show users what our popup is about and introduce them to our irresistible offer.
- Go to the Styles tab and tweak the typography options. From the Advanced Tab, add some bottom padding of 80 pixels.
Add the Form Element to Integrate with the Popup
Layout Tab
- Search for the form element and drag it onto the page. From the Form Fields section, delete two of the fields until we are left with the email type only.
- We will need to hide the title for design purposes and only leave the placeholder text.
- Set the column width of the field to 75% to have the button displayed next to the form field.
- Configure the Submit button: set the column width to 25%.
- From the Action After Submit, we will have our form integrated with MailChimp; once we select it, you will see another section appear on the panel containing further configuration.
Style Tab
- Set the column gap to the value of 0. The row gap will take the same value.
- Field section: Set the background color to transparent. Select a border color; we will go with black this time. We'll unlink the border width and put the value of 1 pixel only for the bottom.
- Set the border radius to the value of 0, and we will tweak the padding as well. Once again, unlink the values and put the value of 12 pixels for the bottom.
- Design the button as well. The background color will be set to transparent. Assign the red color for the text. Tweak the typography options, as per usual. Configure the width value. Set the border radius to the value of 0, and here as well we will tweak the text padding value and assign it to be 12 pixels for the bottom part of the button.
- From the Advanced Tab, add some padding values of 30 pixels for the left and right.
- One last final component to add, and that's another heading element. Type in the content. Once we do that, to have our popup close after a user has clicked on our heading, we will need to assign it dynamic content. Click on the database barrel icon to open the settings, and from the drop-down menu choose Popup. Click on the wrench icon and, under the action, choose popup.
Publish the Popup - Click on Publish after you are done building your popup - Conditions - Assign it to be on the Entire Site, or choose a more specific condition - Triggers - Switch the On-Page Exit Intent ON. - We won’t need to assign any Advanced Rules. - Now every time a user is about to leave your page, you will have a second chance to convert them into possible clients.
https://docs.neuronthemes.com/article/169-how-to-create-an-intent-exit-popup
2022-01-29T00:31:03
CC-MAIN-2022-05
1642320299894.32
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e82138204286364bc97815e/images/5fd3fa2436980410c9123745/file-bYU7jBZZUC.jpg', None], dtype=object) ]
docs.neuronthemes.com
Enabling dynamic child creation in weight mode
In weight mode, when you enable dynamic child creation for a queue, it becomes a parent queue that can have both static and dynamic child queues. You can enable this feature through the YARN Queue Manager UI.
In weight mode, there is no Managed Parent Queue. When you enable the dynamic child creation feature for a queue, it becomes a parent queue that can have both static and dynamic child queues simultaneously. If this feature is not enabled, the queue can only have static child queues. In contrast to a Managed Parent Queue, where the dynamic queue nesting level is limited to one, in weight mode auto dynamic child creation allows you to create 2-level dynamic queues.
By default, dynamic child queues are deleted automatically 5 minutes after the last job on them finishes. You can disable this feature using the Capacity Scheduler Auto Queue Deletion YARN property. For more information, see Disabling Auto Queue Deletion.
- In Cloudera Manager, select the YARN Queue Manager UI. A graphical queue hierarchy is displayed in the Overview tab.
- Find the queue for which you want to enable the auto dynamic child creation feature.
- Select the More Options menu and select Enable Dynamic Child Creation. The Dynamic Child Queue Capacities window is displayed.
- Set the Minimum and Maximum capacities that will be applied to every dynamic child queue under that particular parent queue. Save the minimum and maximum capacity.
If you want to define a placement rule that could lead to dynamically created child queues, ensure that during placement rule creation you check the Create the target queue if it does not exist? property and provide a parent queue for which dynamic child creation is enabled. For more information, see Manage placement rules.
https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/yarn-allocate-resources/topics/yarn-enable-dynamic-auto-child-creation-weight-mode.html
2022-01-29T01:26:44
CC-MAIN-2022-05
1642320299894.32
[]
docs.cloudera.com
How to add a call number to my account
Unlike other phone software, you have the possibility to manage incoming calls as well as outgoing calls. To do so, you need to own a DID number. DID means Direct Inward Dialing. A DID number allows an incoming phone call to reach the direct line of a person without having to go through a switchboard.
As a telecom operator, we can:
- Assign non-geographic numbers (09 numbers) or geographic numbers (01, 02, ...).
- Take over the management of other operators' numbers through portability operations.
You can request a DID number for different reasons:
- You just created your company and you need one or several numbers (switchboard, direct lines for collaborators, ...),
- You manage a hotline activity and need a number to which your clients will direct their calls,
- You conduct phone prospecting operations and want prospects to be able to call you back.
NB: Geographic DID numbers are assigned under certain conditions (quantity, time commitment, ...); ask for more information by using the link below.
To request a DID number, contact Support, stating the type and quantity of numbers desired.
https://docs.web2contact.com/pages/docs/en/manage-my-domain/account/add-a-did-number-to-your-domain.html
2022-01-29T01:00:50
CC-MAIN-2022-05
1642320299894.32
[]
docs.web2contact.com
YARN ResourceManager High Availability
The ResourceManager high availability (HA) feature adds redundancy in the form of an active-standby ResourceManager pair.
The YARN ResourceManager is responsible for tracking the resources in a cluster and scheduling applications (for example, MapReduce jobs). The ResourceManager high availability (HA) feature adds redundancy in the form of an active-standby ResourceManager pair to remove this single point of failure. Furthermore, upon failover from the active ResourceManager to the standby, the applications can resume from the last state saved to the state store; for example, map tasks in a MapReduce job are not run again if a failover to a new active ResourceManager occurs after the completion of the map phase. This allows events such as the following to be handled without any significant performance effect on running applications:
- Unplanned events such as machine crashes
- Planned maintenance events such as software or hardware upgrades on the machine running the ResourceManager
ResourceManager HA requires ZooKeeper and HDFS services to be running.
https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/yarn-high-availability/topics/yarn-resourcemanager-ha-overview.html
2022-01-29T01:55:12
CC-MAIN-2022-05
1642320299894.32
[]
docs.cloudera.com
Logging
Corteza logs most of the operations that have occurred in the system in the action log. The action log user interface provides an overview of events such as users that have registered or logged in, records that have been created, and templates that have been rendered. You can use the action log for debugging and detecting suspicious behavior, as it provides a rich insight into what has occurred.
The action log interface
The action log user interface resides in the Corteza Admin web application.
Listing actions
To list current action log entries: navigate to the action log list, optionally insert the filtering parameters, and click on the search button.
Inspecting actions
To inspect a specific action: navigate to the action log list, optionally insert the filtering parameters, click on the search button, then click on the action you wish to inspect.
https://docs.cortezaproject.org/corteza-docs/2021.9/administrator-guide/logging.html
2022-01-29T02:04:07
CC-MAIN-2022-05
1642320299894.32
[]
docs.cortezaproject.org
Manage Statistics
Statistics on Couchbase Server can be monitored per bucket, per node, per service, and per cluster. By means of Couchbase Web Console, appropriate combinations of statistics can be selected for display, across multiple interactive dashboards.
Understanding Statistics Management
Couchbase Server provides statistics that are updated continuously, and so always represent the current state of the cluster. Statistics refer to buckets, nodes, clusters, and services. Statistics can be viewed by means of Couchbase Web Console, the Couchbase CLI, and the REST API.
Manage Statistics with the UI
At the foot of the Dashboard, numbers are displayed to indicate which cluster-nodes are active, failed-over, pending rebalance, and inactive. The services present on the cluster are also indicated. Near the top, a notification is provided, regarding buckets: Couchbase Web Console organizes statistics by bucket: therefore, until a bucket is added to the cluster, no statistics can be shown. To add a bucket, left-click on one of the options provided by the notification. Either left-click on Buckets to add a custom bucket, or left-click on Sample Buckets to install a sample bucket. The following examples assume that the travel-sample bucket has been installed. The dashboard now appears as follows: The Cluster Overview thus displays animated charts that provide a variety of information on the status of data-management on the cluster. Additional information can be displayed by left-clicking on the Node Resources control, located near the bottom of the screen.
Dashboard Access
All chart-content is provided by bucket. Users whose roles allow them both to access Couchbase Web Console and see administrative details on one or more buckets are able to see the default chart-content for those buckets. For example, the Full Admin, Cluster Admin, Read Only Admin, and Security Admin roles permit display of charts for all buckets defined on the cluster; while the Bucket Admin role permits display of charts only for those buckets to which the role has been applied. Users who can see the default content for some or all buckets can also create their own, customized content for those buckets. Note that customized content is saved on Couchbase Server only on a per-user basis: therefore, for example, when a Full Admin creates customized content, it is visible only to the Full Admin, not to any other user.
Dashboard Controls
In the upper part of the screen, the following controls appear: The control at the left reads Cluster Overview. When left-clicked on, it displays a pull-down menu, as follows: The Couchbase Web Console Dashboard screen can be used to display multiple dashboards in succession, each accessed from this pull-down menu. Currently, the menu provides two dashboards for display. Cluster Overview, which is displayed by default, provides statistics on RAM, operations, memory usage, replication, CPU, and other resource-related areas. All Services provides statistics for services and server-systems. The second control to the right reads, by default, Minute. This control allows selection of the time-granularity for chart-display. Left-click on the control to display a pull-down menu of options: The third control to the right provides a pull-down menu that lists the buckets defined on the cluster. The selected bucket is that in relation to which statistics are currently shown. The current option, travel-sample, is the only option available, since it is the only bucket currently loaded.
The fourth control to the right reads All Server Nodes, and indicates in parentheses the number of nodes currently in the cluster. Left-click on the control to display the individual nodes: The default selection allows data from all server nodes to be displayed simultaneously. By selecting an individual node from the pull-down menu, the displayed data is restricted to that corresponding to the selected node. At the far right of the screen, the Reset control is displayed: Left-clicking on this control provides the following notification: As this indicates, confirming will delete all previously made customisations. Therefore, to keep changes you have made to your dashboard-appearance, left-click on Cancel.
Add a Dashboard
A dashboard can contain groups of charts. The dashboard is first defined; then groups can be added to the dashboard, with charts being added to each group. To define a dashboard, access the New Dashboard control, in the pull-down menu accessed from the first of the controls, at the left of the screen: Left-clicking on the '+' symbol displays an extension to the pull-down: The editable new dashboard field can be used to enter a name for the dashboard being defined. Optionally, a description of the dashboard and its purpose can be added in the optional description… field. The radio buttons towards the bottom allow selection between the options start w/ current charts (in which case the new dashboard’s content will be initialized with whatever charts are already displayed on the screen) and start blank, in which case the new dashboard will initially show no charts at all. To create a new dashboard named Test Dashboard that starts with the existing content, enter data as follows: Left-click on the Save button. The new dashboard is now displayed, as follows: Currently, the dashboard contains no content. Buttons labeled Add Group and Add a Chart appear to the right. Note that the new dashboard is now listed in the pull-down menu:
Add a Group
To add a group of charts to the newly created dashboard, make sure that the new dashboard is selected in the Choose Dashboard field, and then left-click on the Add Group button, at the upper right: This displays the New Group dialog: Add an appropriate name for the group of charts you are creating, and left-click on the Save button: The dashboard is redisplayed, and now appears as follows: The newly defined group Test Group appears on the dashboard. Currently, it contains no charts. Left-click on the Add a Chart button, to the right of the Test Group field: This brings up the Add Chart dialog: The upper area of the dialog is headed Multi-Stat or Multi-Node Chart? It provides two radio buttons: Selecting show separate nodes + single statistic creates a chart that displays a single statistic for each of the nodes in the cluster. This allows easy visual comparisons to be made between the states of the different nodes. This is the default selection. Selecting combine node data + multiple stats per chart creates a chart that displays multiple statistics for the cluster as a whole. This allows easy visual comparisons to be made between different speeds and usage-rates, calculated across all of the nodes. In the middle of the dialog, interactive tabs appear for System, Data, Index, Query, Search, Analytics, Eventing, and XDCR. By left-clicking on any of these, associated statistics are displayed in the lower section of the dialog.
The System tab is selected by default: consequently, the associated statistics CPU, Idle Streaming Requests, Streaming Wakeups, and others are displayed. Each of these statistics is accompanied by a check-box, to permit its selection. Note that the choice made with the upper radio buttons affects the availability of statistics for selection. For example, selecting show separate nodes + single statistic ensures that after a single statistic has been selected, the rest are greyed-out.
Creating a Single-Statistic Chart, Referencing All Nodes Separately
Accepting the default radio button selection, show separate nodes + single statistic, select the CPU statistic from the lower part of the dialog: A tooltip is provided, indicating that the statistic concerns Percentage of CPU in use across all available cores on this server. The choice is confirmed, adjacent to a green checkmark, at the lower left of the dialog. All statistics other than CPU are greyed out. Note that at the upper right, a selector is provided whereby the size of the chart, in its default appearance within the dashboard, can be specified: Leaving the selection as S (for small), left-click on the Save Chart button. The dashboard now appears as follows: The chart created for CPU is now displayed at the left. The chart features a line for each of the two nodes in the cluster. By hovering the mouse-cursor over the corner of the chart, controls can be displayed in the chart’s upper-right corner: The garbage-can icon allows the chart to be deleted: a notification will appear, asking for confirmation. The notepad icon allows the chart to be edited: a dialog named Edit Chart is displayed (note that this dialog is almost identical in appearance to the Add Chart dialog already examined). By hovering the mouse-cursor over the central, data-bearing area of the chart, a pop-up can be displayed, confirming the exact statistic displayed at the cursor-location: As this clarifies, the chart’s blue and orange lines provide the CPU statistic for each of the cluster’s two nodes. To improve readability further, left-click on the chart to maximize it. The appearance is now as follows: Note the vertically minimized version of the chart, which appears at the foot of the display, with the magnifying-glass icon to its left. By clicking on this at a starting-point on the horizontal axis, and dragging the cursor to the left or right, a time-period can be selected, which is then reflected in a redisplay of the main chart. For example: Note also that by accessing the control at the upper-center of the maximized chart, the time-granularity for display can be modified. For example, change hour to minute: The maximized chart now appears as follows: Minimize the chart by left-clicking on the 'X' icon, at the upper-right:
Creating a Chart of Multiple Statistics, Each Representing the Whole Cluster
Left-click on the Add a Chart button. When the Add Chart dialog appears, select the combine node data + multiple stats per chart radio button. Accepting the default System setting, select the CPU, Available RAM, and Swap Used checkboxes: Note that because certain statistics are incompatible with one another, in terms of co-located display, the selection of some may grey out the others, as is the case with Idle Streaming Requests, Streaming Wakeups, and HTTP Request Rate here. Left-click on Save Chart, to save. The dashboard now appears as follows: Left-click on the new chart, to maximize: The chart provides individual lines for CPU, Available RAM, and Swap Used.
The calibration on the left vertical-axis is for CPU percentage; that on the right for megabytes of RAM and swap. From this point, additional charts can be created for the other system services, with different statistic-combinations selected for each. Additional groups of charts can be defined, and multiple dashboard-instances simultaneously maintained.
Manage Statistics with the CLI
On the command-line, statistics can be managed with the cbstats tool. This allows a bucket to be specified as the source of statistics. Port 11210 must be specified. For example, the memory option returns statistics on memory for the specified bucket:
/opt/couchbase/bin/cbstats -b travel-sample -u Administrator -p password \
localhost:11210 memory
If successful, the command returns the following:
bytes: 38010040
ep_blob_num: 31591
ep_blob_overhead: 2159511
ep_item_num: 3584
ep_kv_size: 24495752
ep_max_size: 104857600
ep_mem_high_wat: 89128960
ep_mem_high_wat_percent: 0.85
ep_mem_low_wat: 78643200
ep_mem_low_wat_percent: 0.75
ep_oom_errors: 0
ep_overhead: 5194392
ep_storedval_num: 31591
ep_storedval_overhead: 2159511
ep_storedval_size: 2527280
ep_tmp_oom_errors: 0
ep_value_size: 22306240
mem_used: 38010040
mem_used_estimate: 38010040
mem_used_merge_threshold: 524288
total_allocated_bytes: 67864856
total_fragmentation_bytes: 4220648
total_heap_bytes: 111050752
total_metadata_bytes: 6175864
total_resident_bytes: 103907328
total_retained_bytes: 18448384
The vbucket option returns statistics for all vBuckets for the specified bucket. The output can be filtered, so that a particular vBucket can be examined:
/opt/couchbase/bin/cbstats -b travel-sample -u Administrator -p password \
localhost:11210 vbucket | grep 1014
This produces the following output:
vb_1014: active
Manage Statistics with the REST API
The Couchbase-Server REST API allows statistics to be gathered either from the cluster or from the individual bucket.
Get Cluster Statistics
Cluster statistics can be accessed by means of the /pools/default URI, as follows:
curl -v -X GET -u Administrator:password localhost:8091/pools/default | jq
Note that in this example, output is piped to the jq tool: this formats the output, and so improves readability. A sample of the (extensive) formatted output might appear as follows:
{
  "name": "default",
  "nodes": [
    {
      "systemStats": {
        "cpu_utilization_rate": 12.08791208791209,
        "swap_total": 536866816,
        "swap_used": 218357760,
        "mem_total": 1040723968,
        "mem_free": 194670592,
        "mem_limit": 1040723968,
        "cpu_cores_available": 1
      },
      "interestingStats": {
        "cmd_get": 0,
        "couch_docs_actual_disk_size": 95912798,
        "couch_docs_data_size": 46982656,
        "couch_spatial_data_size": 0,
        "couch_spatial_disk_size": 0,
        "couch_views_actual_disk_size": 0,
        . . .
The full output includes information on:
- Memory and disks: how much space is available in total, how much is currently free, etc.
- Nodes, CPUs, uptime, ports being used, services deployed.
- URIs for important Couchbase Server endpoints, such as rebalance, failOver, ejectNode, and setAutoCompaction.
- Cluster settings, such as viewFragmentationThreshold and indexCompactionMode; and counters for operations such as rebalance and failover.
For more information, see Retrieving Cluster Information.
Get Bucket Statistics
To get statistics for an individual bucket, use the /buckets/<bucket-name>/stats URI. For example:
curl -v GET -u Administrator:password \
| jq
Extracts from the (extensive) formatted output might appear as follows:
{
  "op": {
    "samples": {
      "couch_total_disk_size": [
        95912798,
        95912798,
        . .
], "couch_docs_fragmentation": [ 0, 0, . . ], "couch_views_fragmentation": [ 0, 0, . . ], "hit_ratio": [ 0, 0, . . }, "samplesCount": 60, "isPersistent": true, "lastTStamp": 1553695746640, "interval": 1000 }, "hot_keys": [] } A number of key statistics are thus returned, each applied to each of the specified bucket’s vBuckets. For more information, see Getting Bucket Statistics.
https://docs.couchbase.com/server/current/manage/manage-statistics/manage-statistics.html
2022-01-29T01:00:31
CC-MAIN-2022-05
1642320299894.32
[array(['../_images/manage-statistics/DBaccessDB.png', 'DBaccessDB'], dtype=object) array(['../_images/manage-statistics/DBblankInitial.png', 'DBblankInitial'], dtype=object) array(['../_images/manage-statistics/ClusterOverview.png', 'ClusterOverview'], dtype=object) array(['../_images/manage-statistics/dashboardControls.png', 'dashboardControls'], dtype=object) array(['../_images/manage-statistics/DashboardToggle.png', 'DashboardToggle'], dtype=object) array(['../_images/manage-statistics/timeGranularityOptions.png', 'timeGranularityOptions'], dtype=object) array(['../_images/manage-statistics/dashboardBucketControl.png', 'dashboardBucketControl'], dtype=object) array(['../_images/manage-statistics/allServerNodesPullDown.png', 'allServerNodesPullDown'], dtype=object) array(['../_images/manage-statistics/resetButton.png', 'resetButton'], dtype=object) array(['../_images/manage-statistics/resetDashboardNotiification.png', 'resetDashboardNotiification'], dtype=object) array(['../_images/manage-statistics/clickToAddDashboardOne.png', 'clickToAddDashboardOne'], dtype=object) array(['../_images/manage-statistics/clickToAddDashboardTwo.png', 'clickToAddDashboardTwo'], dtype=object) array(['../_images/manage-statistics/clickToAddDashboardThree.png', 'clickToAddDashboardThree'], dtype=object) array(['../_images/manage-statistics/testDashboardInitialAppearance.png', 'testDashboardInitialAppearance'], dtype=object) array(['../_images/manage-statistics/pullDownMenuWithNewDashboard.png', 'pullDownMenuWithNewDashboard'], dtype=object) array(['../_images/manage-statistics/addGroupButton.png', 'addGroupButton'], dtype=object) array(['../_images/manage-statistics/newGroupDialog.png', 'newGroupDialog'], dtype=object) array(['../_images/manage-statistics/newGroupDialogFilled.png', 'newGroupDialogFilled'], dtype=object) array(['../_images/manage-statistics/dashboardWithInitialGroup.png', 'dashboardWithInitialGroup'], dtype=object) array(['../_images/manage-statistics/clickOnChartAddition.png', 'clickOnChartAddition'], dtype=object) array(['../_images/manage-statistics/addChartDialog.png', 'addChartDialog'], dtype=object) array(['../_images/manage-statistics/addChartDialogCPUselection.png', 'addChartDialogCPUselection'], dtype=object) array(['../_images/manage-statistics/chartSizeSelector.png', 'chartSizeSelector'], dtype=object) array(['../_images/manage-statistics/dashboardWithOneChart.png', 'dashboardWithOneChart'], dtype=object) array(['../_images/manage-statistics/cpuChartWithControlDisplayed.png', 'cpuChartWithControlDisplayed'], dtype=object) array(['../_images/manage-statistics/cpuChartWithPopUpDisplayed.png', 'cpuChartWithPopUpDisplayed'], dtype=object) array(['../_images/manage-statistics/cpuChartMaximized.png', 'cpuChartMaximized'], dtype=object) array(['../_images/manage-statistics/cpuChartMaximizedWithMagnify.png', 'cpuChartMaximizedWithMagnify'], dtype=object) array(['../_images/manage-statistics/changeTimeGranularity.png', 'changeTimeGranularity'], dtype=object) array(['../_images/manage-statistics/cpuChartMaximizedWithMinuteSelection.png', 'cpuChartMaximizedWithMinuteSelection'], dtype=object) array(['../_images/manage-statistics/XiconSelection.png', 'XiconSelection'], dtype=object) array(['../_images/manage-statistics/multStatisticChartSelections.png', 'multStatisticChartSelections'], dtype=object) array(['../_images/manage-statistics/dashboardWithMultiStatisticChartAdded.png', 'dashboardWithMultiStatisticChartAdded'], dtype=object) 
array(['../_images/manage-statistics/multiStatisticChartMaximized.png', 'multiStatisticChartMaximized'], dtype=object) ]
docs.couchbase.com
HACMan Service Kiosk About The HACMan Service Kiosk (name pending) is a useful and easy to use kiosk positioned next to the entrance/exit, and will provide information such as the current time, recent members who have entered, IRC/Telegram viewing, a direct entry system to allow guests to enter, viewing the outer door camera (when implemented), and a race timer for members to gain superiority on the leaderboard! (racing between the outer and inner doors) Last update: August 22, 2020
https://docs.hacman.org.uk/old_wiki_files/Hackspace/HACMan_Service_Kiosk/
2022-01-29T02:20:56
CC-MAIN-2022-05
1642320299894.32
[]
docs.hacman.org.uk
# Registering Tokens with Users
When a user opens their MetaMask, they are shown a variety of assets, including tokens. By default, MetaMask auto-detects some major popular tokens and auto-displays them, but for most tokens, the user will need to add the token themselves. While this is possible using our UI with the Add Token button, that process can be cumbersome, involves the user interacting with contract addresses, and is very error prone. You can greatly improve the security and experience of users adding your token to their MetaMask by taking advantage of the wallet_watchAsset API as defined in EIP-747.
# Code-free Example
Here are a couple of live web applications that let you enter your token details, and then share them with a simple web link:
# Example
If you'd like to integrate suggesting a token into your own web app, you can follow this code snippet to implement it:
const tokenAddress = '0xd00981105e61274c8a5cd5a88fe7e037d935b513';
const tokenSymbol = 'TUT';
const tokenDecimals = 18;
const tokenImage = '';
try {
  // wasAdded is a boolean. Like any RPC method, an error may be thrown.
  const wasAdded = await ethereum.request({
    method: 'wallet_watchAsset',
    params: {
      type: 'ERC20', // Initially only supports ERC20, but eventually more!
      options: {
        address: tokenAddress, // The address that the token is at.
        symbol: tokenSymbol, // A ticker symbol or shorthand, up to 5 chars.
        decimals: tokenDecimals, // The number of decimals in the token
        image: tokenImage, // A string url of the token logo
      },
    },
  });
  if (wasAdded) {
    console.log('Thanks for your interest!');
  } else {
    console.log('Your loss!');
  }
} catch (error) {
  console.log(error);
}
https://docs.metamask.io/guide/registering-your-token.html
2022-01-29T02:15:19
CC-MAIN-2022-05
1642320299894.32
[]
docs.metamask.io
The following figure shows a Stage 0 to Stage 2 boot stack that uses the HSM mode. It reduces the number of steps by distributing the SSK. This figure uses the Zynq® UltraScale+™ MPSoC device to illustrate the stages.
Boot Process
Creating a boot image using HSM mode is similar to creating a boot image using the standard flow with the following BIF file:
all:
{
    [auth_params] ppk_select=1;spk_id=0x8
    [keysrc_encryption] bbram_red_key
    [pskfile] primary.pem
    [sskfile] secondary.pem
    [bootloader, encryption=aes, aeskeyfile=aes.nky, authentication=rsa] fsbl.elf
    [destination_cpu=a53-0, authentication=rsa] hello_a53_0_64.elf
}
Stage 0: Create a boot image using HSM mode
A trusted individual creates the SPK signature using the Primary Secret Key. The SPK signature is calculated over the Authentication Certificate Header, SPK, and SPK ID. To generate a hash for the above, use the following BIF file snippet:
stage 0:
{
    [auth_params] ppk_select=1;spk_id=0x3
    [spkfile] keys/secondary.pub
}
The following is the Bootgen command:
bootgen -arch zynqmp -image stage0.bif -generate_hashes
The output of this command is: secondary.pub.sha384.
Stage 1: Distribute the SPK signature
The trusted individual distributes the SPK signature to the development teams.
openssl rsautl -raw -sign -inkey keys/primary0.pem -in secondary.pub.sha384 > secondary.pub.sha384.sig
The output of this command is: secondary.pub.sha384.sig
Stage 2: Encrypt using AES in FSBL
The development teams use Bootgen to create as many boot images as needed. The development teams use:
- The SPK signature from the trusted individual.
- The Secondary Secret Key (SSK), SPK, and SPK ID.
Stage2:
{
    [keysrc_encryption] bbram_red_key
    [auth_params] ppk_select=1;spk_id=0x3
    [ppkfile] keys/primary.pub
    [sskfile] keys/secondary0.pem
    [spksignature] secondary.pub.sha384.sig
    [bootloader, destination_cpu=a53-0, encryption=aes, aeskeyfile=aes0.nky, authentication=rsa] fsbl.elf
    [destination_cpu=a53-0, authentication=rsa] hello_a53_0_64.elf
}
bootgen -arch zynqmp -image stage2.bif -o final.bin
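To make the command sequence concrete, here is a hedged Python sketch that chains the three documented steps. In practice Stage 0 and Stage 1 run on the trusted individual's machine and Stage 2 on the development team's, so a single script like this is only for illustration; it assumes bootgen and openssl are on the PATH and that the BIF and key files shown above exist.

import subprocess

# Helper that echoes and runs one of the documented commands.
def run(cmd, **kw):
    print('+', ' '.join(cmd))
    subprocess.run(cmd, check=True, **kw)

# Stage 0 (trusted side): generate the SPK hash.
run(['bootgen', '-arch', 'zynqmp', '-image', 'stage0.bif', '-generate_hashes'])

# Stage 1 (trusted side): sign the hash with the primary secret key.
with open('secondary.pub.sha384.sig', 'wb') as sig:
    run(['openssl', 'rsautl', '-raw', '-sign', '-inkey', 'keys/primary0.pem',
         '-in', 'secondary.pub.sha384'], stdout=sig)

# Stage 2 (development side): build the final boot image with the distributed signature.
run(['bootgen', '-arch', 'zynqmp', '-image', 'stage2.bif', '-o', 'final.bin'])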
https://docs.xilinx.com/r/en-US/ug1400-vitis-embedded/Creating-a-Boot-Image-Using-HSM-Mode-PSK-is-not-Shared
2022-01-29T01:41:18
CC-MAIN-2022-05
1642320299894.32
[]
docs.xilinx.com
To select a traffic configuration:
- In the Project Explorer, right-click the hardware platform and select.
- Under Performance Analysis, select Performance Analysis on <filename>.elf.
- You can use the ATG Configuration view to define multiple traffic configurations and select the traffic to be used for the current run.
- The port location is taken from the Hardware handoff file. If no ATG was configured in the design, the ATG Configuration view is empty.
- You can use the ATG Configuration page to add and edit configurations.
- To add a configuration to the list of configurations, click the + button.
- To edit a configuration, select the Configuration: drop-down list to choose the configuration that you want to edit.
- For ease of defining an ATG configuration, you can create Configuration Templates. These templates are saved for the user workspace and can be used across projects for ATG traffic definitions. To create a template, do the following:
  - Click Configure Templates.
  - Click the + button to add a new user-defined configuration template.
  - The newly created template is assigned a Template ID with the pattern "UserDef_*" by default. You can change the ID and also define the rest of the fields.
- You can use these defined templates to define an ATG configuration. To delete a Configuration Template, select it and click the X button.
Tip: In an ATG configuration, to set a port so that it does not have any traffic, set the Template ID for that port to None.
https://docs.xilinx.com/r/en-US/ug1400-vitis-embedded/Selecting-an-ATG-Traffic-Configuration
2022-01-29T02:24:35
CC-MAIN-2022-05
1642320299894.32
[]
docs.xilinx.com
Audit Trail Report
About This Report
The Audit Trail Report displays a list of operations that users performed in the CommCell environment over a specified time range. Each operation that appears in the report is grouped into a playback, or severity, level: critical, high, medium, or low. The length of time that Audit Trail data is retained in the database can vary amongst severity levels. You can configure retention settings for data in each severity level in the CommCell Console Control Panel. For a complete list of operations that are audited and for instructions on configuring the retention settings, see Audit Trail.
For security reasons, this report is available only to CommCell Administrators. Your CommCell user account must be a member of the Master user group and have a CommCell-level association to view this report.
When To Use This Report
You can use this report to:
- Monitor operations in the CommCell computer.
- Track operations performed by particular users.
What This Report Contains
This report presents the data in the following sections:
How To Generate the Report
- On the CommCell Console ribbon, select the Reports tab, and then click Other Reports > Audit Trail. The Report Selection dialog box is displayed.
- On the General tab, in the Playback Level list, select an event level. The report output includes events at the level you select and higher. For example, if you select Medium, then the High and Critical events also appear in the report output.
Example: Verifying Changes to Audit Trail Retention Settings
You can verify any configuration changes listed under Operations Recorded by Audit Trail. For example, you can view changes made to Audit Trail retention settings.
- Change the retention days for any of the Audit Trail severity levels.
- Run the Audit Trail Report at the High playback level.
To see other types of configuration changes, you can run the Audit Trail report at the playback level associated with the action you performed. Any configuration changes that are tracked by Audit Trail will appear in the report.
Additional Options
The following table describes additional operations that you can perform with the reports feature:
http://docs.snapprotect.com/netapp/v10/article?p=features/reports/types/gui_audit_trail.htm
2022-01-29T01:00:33
CC-MAIN-2022-05
1642320299894.32
[]
docs.snapprotect.com
Create a child zone under the parent zone Creating a child zone is similar to creating a parent zone. You select the parent in the left pane, then create and configure the child zone to prepare an environment into which you will migrate existing users and groups. To create a child zone under the parent zone: - Start Access Manager. - In the console tree, expand the Zones node. - Select the top-level parent zone, right-click, then click Create Child Zone. Type a name and description for the child zone, then click Next. For example, if you are organizing by functional group, this zone might be finance or engineering. If you are organizing by data center location, the child zone might be sanfrancisco or seattle. - Click Finish to complete the zone creation. The new zone is listed under the Child Zones node in the left pane.
https://docs.centrify.com/Content/inst-depl/MigratePrepareZoneChildCreate.htm
2022-01-29T02:43:23
CC-MAIN-2022-05
1642320299894.32
[]
docs.centrify.com
Choosing Distribution Column
Citus uses the distribution column in distributed tables to assign table rows to shards. Choosing the distribution column for each table is one of the most important modeling decisions because it determines how data is spread across nodes. If the distribution columns are chosen correctly, then related data will group together on the same physical nodes, making queries fast and adding support for all SQL features. If the columns are chosen incorrectly, the system will run needlessly slowly, and won't be able to support all SQL features across nodes. This section gives distribution column tips for the two most common Citus scenarios. It concludes by going in-depth on "co-location," the desirable grouping of data on nodes.
Multi-Tenant Apps
Best Practices
- Partition distributed tables by a common tenant_id column. For instance, in a SaaS application where tenants are companies, the tenant_id will likely be company_id.
- Convert small cross-tenant tables to reference tables. When multiple tenants share a small table of information, distribute it as a reference table.
- Restrict (filter) all application queries by tenant_id. Each query should request information for one tenant at a time.
Read the Multi-tenant Applications guide for a detailed example of building this kind of application.
Real-Time Apps
While the multi-tenant architecture introduces a hierarchical structure and uses data co-location to route queries per tenant, real-time architectures depend on specific distribution properties of their data to achieve highly parallel processing. We use "entity id" as a term for distribution columns in the real-time model, as opposed to tenant ids in the multi-tenant model. Typical entities are users, hosts, or devices; if the distribution column is chosen poorly, a single node must do a disproportionate amount of work.
Best Practices
- Choose a column with high cardinality as the distribution column. For comparison, a "status" field on an order table with values "new," "paid," and "shipped" is a poor choice of distribution column because it assumes only those few values. The number of distinct values limits the number of shards that can hold the data, and the number of nodes that can process it. Among columns with high cardinality, it is good additionally to choose those that are frequently used in group-by clauses or as join keys.
- Choose a column with even distribution. If you distribute a table on a column skewed to certain common values, then data in the table will tend to accumulate in certain shards. The nodes holding those shards will end up doing more work than other nodes.
- Distribute fact and dimension tables on their common columns. Your fact table can have only one distribution key. Tables that join on another key will not be co-located with the fact table. Choose one dimension to co-locate based on how frequently it is joined and the size of the joining rows.
- Change some dimension tables into reference tables. If a dimension table cannot be co-located with the fact table, you can improve query performance by distributing copies of the dimension table to all of the nodes in the form of a reference table.
Read the Real-Time Dashboards guide for a detailed example of building this kind of application.
Timeseries Data
In a time-series workload, applications query recent information while archiving old information. The most common mistake in modeling timeseries information in Citus is using the timestamp itself as a distribution column.
A hash distribution based on time will distribute times seemingly at random into different shards rather than keeping ranges of time together in shards. However, queries involving time generally reference ranges of time (for example the most recent data), so such a hash distribution would lead to network overhead. Best Practices Do not choose a timestamp as the distribution column. Choose a different distribution column. In a multi-tenant app, use the tenant id, or in a real-time app use the entity id. Use PostgreSQL table partitioning for time instead. Use table partitioning to break a big table of time-ordered data into multiple inherited tables with each containing different time ranges. Distributing a Postgres-partitioned table in Citus creates shards for the inherited tables. Read the Timeseries Data guide for a detailed example of building this kind of application. Table Co-Location Relational databases are the first choice of data store for many applications due to their enormous flexibility and reliability. Historically the one criticism of relational databases is that they can run on only a single machine, which creates inherent limitations when data storage needs outpace server improvements. The solution to rapidly scaling databases is to distribute them, but this creates a performance problem of its own: relational operations such as joins then need to cross the network boundary. Co-location is the practice of dividing data tactically, where one keeps related information on the same machines to enable efficient relational operations, but takes advantage of the horizontal scalability for the whole dataset. The principle of data co-location is that all tables in the database have a common distribution column and are sharded across machines in the same way, such that rows with the same distribution column value are always on the same machine, even across different tables. As long as the distribution column provides a meaningful grouping of data, relational operations can be performed within the groups. Data co-location in Citus for hash-distributed tables The Citus extension for PostgreSQL is unique in being able to form a distributed database of databases. Every node in a Citus cluster is a fully functional PostgreSQL database and Citus adds the experience of a single homogenous database on top. While it does not provide the full functionality of PostgreSQL in a distributed way, in many cases it can take full advantage of features offered by PostgreSQL on a single machine through co-location, including full SQL support, transactions and foreign keys. In Citus a row is stored in a shard if the hash of the value in the distribution column falls within the shard’s hash range. To ensure co-location, shards with the same hash range are always placed on the same node even after rebalance operations, such that equal distribution column values are always on the same node across tables. A distribution column that we’ve found to work well in practice is tenant ID in multi-tenant applications. For example, SaaS applications typically have many tenants, but every query they make is specific to a particular tenant. While one option is providing a database or schema for every tenant, it is often costly and impractical as there can be many operations that span across users (data loading, migrations, aggregations, analytics, schema changes, backups, etc). That becomes harder to manage as the number of tenants grows. 
A practical example of co-location Consider the following tables which might be part of a multi-tenant web analytics SaaS: CREATE TABLE event ( tenant_id int, event_id bigint, page_id int, payload jsonb, primary key (tenant_id, event_id) ); CREATE TABLE page ( tenant_id int, page_id int, path text, primary key (tenant_id, page_id) ); Now we want to answer queries that may be issued by a customer-facing dashboard, such as: “Return the number of visits in the past week for all pages starting with ‘/blog’ in tenant six.” Using Regular PostgreSQL Tables If our data was in a single PostgreSQL node, we could easily express our query using the rich set of relational operations offered by SQL: SELECT page_id, count(event_id) FROM page LEFT JOIN ( SELECT * FROM event WHERE (payload->>'time')::timestamptz >= now() - interval '1 week' ) recent USING (tenant_id, page_id) WHERE tenant_id = 6 AND path LIKE '/blog%' GROUP BY page_id; As long as the working set for this query fits in memory, this is an appropriate solution for many applications since it offers maximum flexibility. However, even if you don’t need to scale yet, it can be useful to consider the implications of scaling out on your data model. Distributing tables by ID As the number of tenants and the data stored for each tenant grows, query times will typically go up as the working set no longer fits in memory or CPU becomes a bottleneck. In this case, we can shard the data across many nodes using Citus. The first and most important choice we need to make when sharding is the distribution column. Let’s start with a naive choice of using event_id for the event table and page_id for the page table: -- naively use event_id and page_id as distribution columns SELECT create_distributed_table('event', 'event_id'); SELECT create_distributed_table('page', 'page_id'); Given that the data is dispersed across different workers, we cannot simply perform a join as we would on a single PostgreSQL node. Instead, we will need to issue two queries: Across all shards of the page table (Q1): SELECT page_id FROM page WHERE path LIKE '/blog%' AND tenant_id = 6; Across all shards of the event table (Q2): SELECT page_id, count(*) AS count FROM event WHERE page_id IN (/*…page IDs from first query…*/) AND tenant_id = 6 AND (payload->>'time')::date >= now() - interval '1 week' GROUP BY page_id ORDER BY count DESC LIMIT 10; Afterwards, the results from the two steps need to be combined by the application. The data required to answer the query is scattered across the shards on the different nodes and each of those shards will need to be queried: In this case the data distribution creates substantial drawbacks: Overhead from querying each shard, running multiple queries Overhead of Q1 returning many rows to the client Q2 becoming very large The need to write queries in multiple steps, combine results, requires changes in the application A potential upside of the relevant data being dispersed is that the queries can be parallelised, which Citus will do. However, this is only beneficial if the amount of work that the query does is substantially greater than the overhead of querying many shards. It’s generally better to avoid doing such heavy lifting directly from the application, for example by pre-aggregating the data. Distributing tables by tenant Looking at our query again, we can see that all the rows that the query needs have one dimension in common: tenant_id. The dashboard will only ever query for a tenant’s own data. 
That means that if data for the same tenant are always co-located on a single PostgreSQL node, our original query could be answered in a single step by that node by performing a join on tenant_id and page_id.

In Citus, rows with the same distribution column value are guaranteed to be on the same node. Each shard in a distributed table effectively has a set of co-located shards from other distributed tables that contain the same distribution column values (data for the same tenant). Starting over, we can create our tables with tenant_id as the distribution column.

-- co-locate tables by using a common distribution column
SELECT create_distributed_table('event', 'tenant_id');
SELECT create_distributed_table('page', 'tenant_id', colocate_with => 'event');

In this case, Citus can answer the same query that you would run on a single PostgreSQL node without modification (Q1):

SELECT page_id, count(event_id)
FROM page
LEFT JOIN (
  SELECT * FROM event
  WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
) recent USING (tenant_id, page_id)
WHERE tenant_id = 6 AND path LIKE '/blog%'
GROUP BY page_id;

Because of the tenant_id filter and join on tenant_id, Citus knows that the entire query can be answered using the set of co-located shards that contain the data for that particular tenant, and the PostgreSQL node can answer the query in a single step, which enables full SQL support.

In some cases, queries and table schemas will require minor modifications to ensure that the tenant_id is always included in unique constraints and join conditions. However, this is usually a straightforward change, and the extensive rewrite that would be required without having co-location is avoided.

While the example above queries just one node because there is a specific tenant_id = 6 filter, co-location also allows us to efficiently perform distributed joins on tenant_id across all nodes, albeit with SQL limitations.

Co-location means better feature support

The full list of Citus features that are unlocked by co-location is:

- Full SQL support for queries on a single set of co-located shards
- Multi-statement transaction support for modifications on a single set of co-located shards
- Aggregation through INSERT..SELECT
- Foreign keys
- Distributed outer joins
- Pushdown CTEs (requires PostgreSQL >=12)

Data co-location is a powerful technique for providing both horizontal scale and support for relational data models. The cost of migrating or building applications using a distributed database that enables relational operations through co-location is often substantially lower than moving to a restrictive data model (e.g. NoSQL) and, unlike a single-node database, it can scale out with the size of your business. For more information about migrating an existing database see Transitioning to a Multi-Tenant Data Model.
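Two of the best practices listed earlier, converting small shared tables to reference tables and using PostgreSQL partitioning for time rather than a time-based distribution column, can be made concrete with a short sketch. The table and column names below are hypothetical and are not part of the example schema above:

-- distribute a small shared dimension table to every node as a reference table
SELECT create_reference_table('categories');

-- keep tenant_id as the distribution column and partition by time instead
CREATE TABLE measurements (
    tenant_id   int,
    recorded_at timestamptz NOT NULL,
    value       double precision
) PARTITION BY RANGE (recorded_at);

SELECT create_distributed_table('measurements', 'tenant_id');

Each time-range partition attached to such a table gets its own shards, so recent data can be queried or old data dropped per partition while co-location by tenant_id is preserved.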
https://docs.citusdata.com/en/latest/sharding/data_modeling.html
2022-01-29T00:42:47
CC-MAIN-2022-05
1642320299894.32
[array(['../_images/mt-colocation.png', 'co-located tables in multi-tenant architecture'], dtype=object) array(['../_images/colocation-shards.png', 'illustration of shard hash ranges'], dtype=object) array(['../_images/colocation-inefficient-queries.png', 'queries 1 and 2 hitting multiple nodes'], dtype=object) array(['../_images/colocation-better-query.png', 'query 1 accessing just one node'], dtype=object)]
docs.citusdata.com
CDS 3.2 Powered by Apache Spark Maven Artifacts

The following table lists the groupId, artifactId, and version required to access the artifacts for CDS 3.2 Powered by Apache Spark:

POM fragment

The following POM fragment shows how to access a CDS 3.2 artifact from a Maven POM. The complete artifact list for this release follows.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.12</artifactId>
  <version>3.2.0.3.2.7170.0-49</version>
  <scope>provided</scope>
</dependency>
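For Gradle-based builds, the same coordinates shown in the POM fragment above can be declared with a compileOnly configuration, which plays roughly the role of Maven's provided scope. This is a sketch built only from the coordinates listed here, not an officially documented Cloudera snippet:

dependencies {
    // provided-style dependency on the CDS 3.2 Spark core artifact
    compileOnly "org.apache.spark:spark-core_2.12:3.2.0.3.2.7170.0-49"
}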
https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/cds-3/topics/spark-cds-30-maven-artifacts.html
2022-01-29T02:00:14
CC-MAIN-2022-05
1642320299894.32
[]
docs.cloudera.com
osm-system namespace, use: osm-prometheus with the relevant Prometheus service name. test namespace and enable OSM namespace monitoring and metrics scraping for the namespace. podinfo deployment and a horizontal pod autoscaler: podinfo deployment. The following podinfo canary custom resource instructs Flagger to: podinfo deployment created earlier, podinfo deployment revision changes, and podinfo deployment will be scaled to zero and the traffic to podinfo.test will be routed to the primary pods. During the canary analysis, the podinfo-canary.test address can be used to target directly the canary pods. podinfo deployment during the canary analysis, Flagger will restart the analysis. kubectl describe output below shows canary rollout failure: podinfo canary custom resource earlier, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed. podinfo-canary.yaml file) and add the following metric. For more information on creating additional custom metrics using OSM metrics, please check the metrics available in OSM.
https://docs.flagger.app/tutorials/osm-progressive-delivery
2022-01-29T00:54:03
CC-MAIN-2022-05
1642320299894.32
[]
docs.flagger.app
For a period of a couple of hours on 5 June we had a number of logic app runs fail due to an invalid certificate error message. It did not affect every run (maybe 1 in 20), but this was still enough to cause some headaches, as it also caused the process which sends error messages to fail too. We checked the Azure status history for the period and there was nothing there which seemed to relate. Does this mean anything to anyone, and is there something we need to do to prevent this? Thanks Craig
https://docs.microsoft.com/en-us/answers/questions/34638/http-request-failed-as-there-is-an-error-getting-a.html
2022-01-29T02:55:08
CC-MAIN-2022-05
1642320299894.32
[array(['/answers/storage/attachments/9814-2020-06-11-14-58-5.png', '9814-2020-06-11-14-58-5.png'], dtype=object) ]
docs.microsoft.com
TFTP

There are several models of phone out there that still only use TFTP for provisioning. Even though they have reached end of life, some of the popular ones are the Cisco 7960 and 7940. You would also need to add the TFTP port to the server firewall, but this should be allowed only to specific IP addresses, as TFTP has no security. We recommend using TFTP only as a last resort, for phones that don't support HTTPS.

Install TFTPD

apt-get install tftpd
service xinetd

Change the configuration: edit /etc/xinetd.d/tftp

Enable TFTP in the FusionPBX GUI

Go to Advanced > Default Settings > Provision. Set Enabled to True and define the path to where the TFTP files will be.

Test TFTP

tftp x.x.x.x
get 000000000000.cnf

See the file getting requested for tftp:

tail -f /var/log/syslog | grep tftp
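For reference, the /etc/xinetd.d/tftp file mentioned above usually looks something like the following. This is a generic xinetd example rather than the exact file shipped by any particular distribution; the server binary path and the TFTP root in server_args vary, and the root directory should match the provisioning path you configure in Default Settings.

service tftp
{
    # UDP datagram service on port 69
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    # -s chroots the daemon into the TFTP root (adjust the path)
    server_args = -s /var/lib/tftpboot
    disable     = no
}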
https://docs.techlacom.com/en/latest/additional_information/tftp.html
2022-01-29T01:23:11
CC-MAIN-2022-05
1642320299894.32
[array(['../_images/fusionpbx_tftp.jpg', '../_images/fusionpbx_tftp.jpg'], dtype=object) ]
docs.techlacom.com
Display Specific User Profile with a shortcode

If you want to display a specific profile with the Profile Form on a page, you can use the following code snippet:

Sample Usage: [um_embed_profile user_id="123" form_id="3"]

user_id - This attribute is a user ID
form_id - This attribute is a Profile Form ID
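If you need to render the same shortcode from a theme template rather than from the post editor, WordPress's standard do_shortcode() function can be used. The IDs below are placeholders:

<?php
// Output the profile of user 123 using Profile Form 3 (placeholder IDs).
echo do_shortcode( '[um_embed_profile user_id="123" form_id="3"]' );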
https://docs.ultimatemember.com/article/1628-display-specific-profile-with-shortcode
2022-01-29T01:42:33
CC-MAIN-2022-05
1642320299894.32
[]
docs.ultimatemember.com
LiveViewer

This device was created for backward compatibility with former graphical applications used at ESRF by the diagnostic group for the monitoring of the electron beam. It is no longer maintained. Instead we recommend using the video API provided via the main device LimaCCDs. Nevertheless, you will find here the list of the available properties, attributes and commands.
https://lima1.readthedocs.io/en/v1.9.12/applications/tango/python/doc/plugins/liveviewer.html
2022-01-29T01:17:51
CC-MAIN-2022-05
1642320299894.32
[]
lima1.readthedocs.io
Manually forward email to Help Scout Every now and then, you might get an email in your inbox that needs to be created as a conversation in Help Scout. Luckily, all you have to do is forward the email to your Help Scout mailbox, and we'll take care of the rest. In this article A primer on forwarding A couple of things to remember when forwarding messages: - When you forward an email, we'll attempt to recognize your signature and remove it from the incoming message. - We'll pull out the original sender and set their address as the customer in Help Scout. - A forwarded message becomes a new conversation when it hits your Help Scout mailbox. - If you add any text to an email, that text will be added as a note when the conversation is created. - You can optionally use email commands to perform various conversations actions. - The mailbox auto reply is not sent to customers when messages are manually forwarded. Which address do I forward to? We recommend forwarding messages to your catch-all support address. For example, [email protected]. Since you've got an auto-forwarding rule already created, the message will be sent directly to Help Scout. Optionally, you can forward messages to your Help Scout mailbox alias (e.g., [email protected]). Notes for success The most common forwarding mistake is forwarding a message from an address that is not associated with your Help Scout account. Ideally, you should be forwarding messages from the email address associated with your Help Scout profile. Note: If you're planning to forward from multiple addresses, add those addresses to the Alternate Email field on your profile page. Help Scout will know that you're associated with those accounts, so we'll set the original customer appropriately. Forwarding from Outlook Forwarding emails from Outlook can be problematic because the customer's email address is not included in the email headers. Without an email address, Help Scout can't associate the conversation with a customer. To fix this, use the Forward as Attachment option: Outlook 2007: Select Forward as Attachment from the Actions menu or press Control + Alt + F. Outlook 2003: Just drag the message into a new message. This will force Outlook to forward it as an attachment.
http://docs.helpscout.net/article/59-manual-forwarding
2017-08-16T19:37:05
CC-MAIN-2017-34
1502886102393.60
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/52b3179ce4b0a3b4e5ec6393/file-kPc7JvfiR4.png', None], dtype=object) ]
docs.helpscout.net
To delete a page break --Lorna Scammell December 2010
https://docs.joomla.org/index.php?title=J1.5:Split_an_article:_Joomla!_1.5&oldid=33605
2015-02-27T08:24:57
CC-MAIN-2015-11
1424936460577.67
[]
docs.joomla.org
Tutorial

autocomplete_light.register() shortcut to generate and register Autocomplete classes

Register an Autocomplete for your model in your_app/autocomplete_light_registry.py; it can look like this:

import autocomplete_light
from models import Person

# This will generate a PersonAutocomplete class
autocomplete_light.register(Person,
    # Just like in ModelAdmin.search_fields
    search_fields=['^first_name', 'last_name'],
    attrs={
        # This will set the input placeholder attribute:
        'placeholder': 'Other model name ?',
        # This will set the yourlabs.Autocomplete.minimumCharacters
        # options, the naming conversion is handled by jQuery
        'data-autocomplete-minimum-characters': 1,
    },
    # This will set the data-widget-maximum-values attribute on the
    # widget container element, and will be set to
    # yourlabs.Widget.maximumValues (jQuery handles the naming
    # conversion).
    widget_attrs={
        'data-widget-maximum-values': 4,
        # Enable modern-style widget !
        'class': 'modern-style',
    },
)

AutocompleteView.get() can proxy PersonAutocomplete.autocomplete_html() because PersonAutocomplete is registered. This means that opening /autocomplete/PersonAutocomplete/ will call AutocompleteView.get(), which will in turn call PersonAutocomplete.autocomplete_html().

(Request flow: widget HTML -> widget JavaScript -> AutocompleteView -> autocomplete_html().)

Also AutocompleteView.post() would proxy PersonAutocomplete.post() if it was defined. It could be useful to build your own features like on-the-fly object creation using Javascript method overrides, like the remote autocomplete.

Warning: Note that this would make all Person public. Fine tuning security is explained later in this tutorial, in the section Overriding the queryset of a model autocomplete to secure an Autocomplete.

autocomplete_light.register() generates an Autocomplete class, passing the extra keyword arguments like AutocompleteModel.search_fields to the Python type() function. This means that extra keyword arguments will be used as class attributes of the generated class. An equivalent version of the above code would be:

class PersonAutocomplete(autocomplete_light.AutocompleteModelBase):
    search_fields = ['^first_name', 'last_name']
    model = Person

autocomplete_light.register(PersonAutocomplete)

Note: If you wanted, you could override the default AutocompleteModelBase used by autocomplete_light.register() to generate Autocomplete classes. It could look like this (in your project's urls.py):

autocomplete_light.registry.autocomplete_model_base = YourAutocompleteModelBase
autocomplete_light.autodiscover()

Refer to the Autocomplete classes documentation for details; it is the first chapter of the reference documentation.

autocomplete_light.modelform_factory() shortcut to generate ModelForms in the admin

First, ensure that scripts are installed in the admin base template. Then, enabling autocompletes in the admin is as simple as overriding ModelAdmin.form in your_app/admin.py. You can use the modelform_factory() shortcut as such:

class OrderAdmin(admin.ModelAdmin):
    # This will generate a ModelForm
    form = autocomplete_light.modelform_factory(Order)

admin.site.register(Order)

Refer to the Form, fields and widgets documentation for other ways of making forms; it is the second chapter of the reference documentation.

autocomplete_light.ModelForm to generate Autocomplete fields, the DRY way

First, ensure that scripts are properly installed in your template.
Then, you can use autocomplete_light.ModelForm to replace the automatic Select and SelectMultiple widgets, which render <select> HTML inputs, by autocompletion widgets:

class OrderModelForm(autocomplete_light.ModelForm):
    class Meta:
        model = Order

Note that the first Autocomplete class registered for a model becomes the default Autocomplete for that model. If you have registered several Autocomplete classes for a given model, you probably want to use a different Autocomplete class depending on the form using Meta.autocomplete_names:

class OrderModelForm(autocomplete_light.ModelForm):
    class Meta:
        autocomplete_names = {'company': 'PublicCompanyAutocomplete'}
        model = Order

autocomplete_light.ModelForm respects Meta.fields and Meta.exclude. However, you can enable or disable autocomplete_light.ModelForm's behaviour in the same fashion with Meta.autocomplete_fields and Meta.autocomplete_exclude:

class OrderModelForm(autocomplete_light.ModelForm):
    class Meta:
        model = Order
        # only enable autocompletes on 'person' and 'product' fields
        autocomplete_fields = ('person', 'product')

class PersonModelForm(autocomplete_light.ModelForm):
    class Meta:
        model = Order
        # do not make 'category' an autocomplete field
        autocomplete_exclude = ('category',)

Also, it will automatically enable autocompletes on generic foreign keys and generic many to many relations if you have at least one generic Autocomplete class registered (typically an AutocompleteGenericBase).

For more documentation, continue reading the reference documentation.
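The tutorial assumes the autocomplete URLs are already wired up so that /autocomplete/PersonAutocomplete/ resolves. For the version of django-autocomplete-light this tutorial covers, the project urls.py typically looked roughly like the sketch below; adjust it to your Django version and project layout:

import autocomplete_light
autocomplete_light.autodiscover()   # discover autocomplete_light_registry.py modules

from django.conf.urls import include, url
from django.contrib import admin
admin.autodiscover()

urlpatterns = [
    # serves AutocompleteView at /autocomplete/<AutocompleteName>/
    url(r'^autocomplete/', include('autocomplete_light.urls')),
    url(r'^admin/', include(admin.site.urls)),
]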
http://django-autocomplete-light.readthedocs.org/en/latest/tutorial.html
2015-02-27T07:28:04
CC-MAIN-2015-11
1424936460577.67
[]
django-autocomplete-light.readthedocs.org
https://docs.joomla.org/index.php?title=Special:UserLogin&type=signup&returnto=API17:JButtonCustom
2015-02-27T08:37:42
CC-MAIN-2015-11
1424936460577.67
[]
docs.joomla.org
Notifications

The BlackBerry Hub also collects your notifications about new PIN messages, time zone changes, software updates, third-party apps, and more. Your service provider might send you SIM Toolkit notifications that appear in the BlackBerry Hub. Tapping on these notifications launches the SIM Toolkit app.
http://docs.blackberry.com/en/smartphone_users/deliverables/50635/mwa1334581676005.jsp
2015-02-27T07:32:46
CC-MAIN-2015-11
1424936460577.67
[]
docs.blackberry.com
The following persons deserve credit for Cargo.

Managers:
- Vincent Massol (vmassol): Cargo creator
- Matt Wringe: Cargo Lead Developer and Manager
- S. Ali Tokmen: Cargo Lead Developer and Manager

Committers:
- Nigel Magnay: Several improvements to the Modules API and implementation of several container-specific merger classes.
- sfarquhar: JNDI datasource configuration, Generic test runner extension and many more.
- Jan Bartel: Added support for Jetty 5.x and 6.x.
- Vincent Siveton: Lots of various patches on the Maven2 plugin
- Kohsuke Kawaguchi: Various patches to improve Cargo's API and implementation of the Embedded Tomcat container
- Ate Douma: Several patches for Tomcat.
http://docs.codehaus.org/pages/viewpage.action?pageId=239894826
2015-02-27T07:41:07
CC-MAIN-2015-11
1424936460577.67
[array(['/download/attachments/10226/cargo-logo-vertical.png?version=1&modificationDate=1132673234618&api=v2', None], dtype=object) ]
docs.codehaus.org
- Click the Test Lists tab. Click List in the Add ribbon. Name the Test List. Add the test(s) to the list from left to right. Click OK to save the new Test List. Click Run List in the Execution ribbon. Select multiple Test Lists using the Shift key or Ctrl + Click, then click Run List to execute those lists in their order of selection. The Test Studio Test Runner launches first in a command prompt window. This calls each test in the list. A browser window or WPF app opens and each test executes in sequence. Upon completion, the Results tab opens. To view the test results, double click the test result entry in the Timeline view (MyTestList in this example). Dynamic Test List Click the Project tab. Select a test and refer to the Test Details pane on the right. Below I have set Owner to Telerik and Priority to 1. Click the Test Lists Tab Click Dynamic List in the Add ribbon. Name the Test List. Craft one or more Rules to filter on specific criteria, clicking plus button after each one. The current results are displayed real-time. Click OK to save the new Test List. Each time you click Run List in the Execution ribbon, Test Studio dynamically queries the project and executes the tests that meet the criteria of the Rules. See also - Test Lists Type - Blog post for an in-depth look at Dynamic Test Lists.
http://docs.telerik.com/teststudio/getting-started/test-execution/test-lists-standalone
2015-02-27T07:32:41
CC-MAIN-2015-11
1424936460577.67
[]
docs.telerik.com
The Control Panel can be accessed by logging into Joomla!'s back-end. After logging in, the Control Panel will be displayed. To access the Control Panel from another area in the back-end, simply go to Site > Control Panel. The Control Panel provides access to many default Joomla! functions and features. From the Control Panel, you can create and manage articles and categories. Direct links are also provided to the Media, Menu, User, Module, Extension, and Language Managers as well as the Global Configuration. The icons available in control panel are:
https://docs.joomla.org/index.php?title=Help17:Site_Control_Panel&diff=prev&oldid=84714
2015-02-27T08:35:36
CC-MAIN-2015-11
1424936460577.67
[]
docs.joomla.org
The core abstract programming models of Ruby and Groovy are very similar: everything is an object, there is a MOP in control of all activity, and closures are the core structuring tool after classes. Ruby uses the Ruby library, Groovy uses the Java library with some additions of its own. This is the biggest difference, but it is a huge difference. Syntactically, things like: becomes: which doesn't show that the Groovy closure syntax is: which is slightly different from Ruby, but does show that sometimes Groovy has a different approach to certain things compared to Ruby. So in moving from Ruby to Groovy, there are gotchas.
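The code snippets referenced above did not survive in this copy of the page, so here is one illustrative pairing of my own (not necessarily the snippet the original page used) showing the same iteration written with a Ruby block and with a Groovy closure:

# Ruby: block parameters go between pipes
3.times { |i| puts i }

// Groovy: closure parameters come before ->, or you can use the implicit `it`
3.times { i -> println i }
3.times { println it }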
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=63415
2015-02-27T07:46:03
CC-MAIN-2015-11
1424936460577.67
[]
docs.codehaus.org
When working on a big project, you might want multiple people with different roles working on the project together.

Inviting a user to a project
- Go to Project Settings > Users
- Click on +Invite a User
- Enter the email address of the user you want to invite
- Select the Role you want to assign to the user. See Roles for more information.
- Click on Invite.

An email is sent to the address you add. Once the user signs up they will have access to your project. The functionalities that they have access to depend on the role that they are assigned.

Changing Roles
You can change the role by selecting the new role in the dropdown next to their name on the Users page. Changes are saved automatically.
https://docs.canonic.dev/recipes/access-to-content-editors/
2022-05-16T19:14:44
CC-MAIN-2022-21
1652662512229.26
[]
docs.canonic.dev
Built-in Functions → quarterMonth

quarterMonth() is a conversion function that returns the month number of the quarter, counted from the beginning of the quarter, for the current date. To return the quarter's month number for a given date or timestamp, use the quarterMonth() conversion function with a parameter.

Signature

quarterMonth()

Returns an int representing the current quarter's month number.

Example

Identify the month number of the quarter for the current date, 08/26/2021.

quarterMonth()

The following table illustrates the behavior of the quarterMonth() conversion function: August is the second month of the current quarter (Q3: July, August, and September). Thus, quarterMonth() is 2.

Use the following steps for detailed instructions on how to use the quarterMonth() function to add the formula to the editor.
- Select Validate & Save.
- In the Properties panel, for Column Label, enter quarterMonth().
- Name the insight Revenue Per Category.
- In the Action bar, select Save.
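As noted under Signature, the function can also take a date or timestamp argument. The column reference below is a placeholder; substitute a date column from your own schema:

quarterMonth()
quarterMonth(SALES.SaleDate)

The first form uses the current date; the second evaluates the (hypothetical) SALES.SaleDate column row by row.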
https://docs.incorta.com/5.1/references-built-in-functions-conversion-quartermonth/
2022-05-16T17:52:14
CC-MAIN-2022-21
1652662512229.26
[]
docs.incorta.com
Command Step Class Create an Azure ML Pipeline step that runs a command. - Inheritance - azureml.pipeline.core._python_script_step_base._PythonScriptStepBaseCommandStep Constructor CommandStep(command=None, name=None, compute_target=None, runconfig=None, runconfig_pipeline_params=None, inputs=None, outputs=None, params=None, source_directory=None, allow_reuse=True, version=None) Parameters The command to run or path of the executable/script relative to source_directory. It is required unless it is provided with runconfig. It can be specified with string arguments in a single string or with input/output/PipelineParameter in a list. The name of the step. If unspecified, the first word in the command is used. - compute_target - DsvmCompute or AmlCompute or RemoteCompute or HDInsightCompute or str or tuple The compute target to use. If unspecified, the target from the runconfig is used. This parameter may be specified as a compute target object or the string name of a compute target on the workspace. Optionally if the compute target is not available at pipeline creation time, you may specify a tuple of ('compute target name', 'compute target type') to avoid fetching the compute target object (AmlCompute type is 'AmlCompute' and RemoteCompute type is 'VirtualMachine'). - runconfig - ScriptRunConfig or RunConfiguration The optional configuration object which encapsulates the information necessary to submit a training run in an experiment. - runconfig_pipeline_params - <xref:<xref:{str: PipelineParameter}>> Overrides of runconfig properties at runtime using key-value pairs each with name of the runconfig property and PipelineParameter for that property. Supported values: 'NodeCount', 'MpiProcessCountPerNode', 'TensorflowWorkerCount', 'TensorflowParameterServerCount' - inputs - list[InputPortBinding or DataReference or PortDataReference or PipelineData or <xref:azureml.pipeline.core.pipeline_output_dataset.PipelineOutputDataset> or DatasetConsumptionConfig] A list of input port bindings. - outputs - list[PipelineData or OutputDatasetConfig or PipelineOutputAbstractDataset or OutputPortBinding] A list of output port bindings. A dictionary of name-value pairs registered as environment variables with "AML_PARAMETER_". A folder that contains scripts, conda env, and other resources used in the step. Indicates whether the step should reuse previous results when re-run with the same settings. Reuse is enabled by default. If the step contents (scripts/dependencies) as well as inputs and parameters remain unchanged, the output from the previous run of this step is reused. When reusing the step, instead of submitting the job to compute, the results from the previous run are immediately made available to any subsequent steps. If you use Azure Machine Learning datasets as inputs, reuse is determined by whether the dataset's definition has changed, not by whether the underlying data has changed. An optional version tag to denote a change in functionality for the step. Remarks An CommandStep is a basic, built-in step to run a command on the given compute target. It takes a command as a parameter or from other parameters like runconfig. It also takes other optional parameters like compute target, inputs and outputs. You should use a ScriptRunConfig or RunConfiguration to specify requirements for the CommandStep, such as custom docker image. 
The best practice for working with CommandStep is to use a separate folder for the executable or script to run and any dependent files associated with the step, and specify that folder with the source_directory parameter. Following this best practice has two benefits. First, it helps reduce the size of the snapshot created for the step because only what is needed for the step is snapshotted. Second, the step's output from a previous run can be reused if there are no changes to the source_directory that would trigger a re-upload of the snapshot. For the system-known commands source_directory is not required, but you can still provide it with any dependent files associated with the step.

The following code examples show how to use a CommandStep in a machine learning training scenario.

To list files in Linux:

from azureml.pipeline.steps import CommandStep

trainStep = CommandStep(name='list step',
                        command='ls -lrt',
                        compute_target=compute_target)

To run a Python script:

from azureml.pipeline.steps import CommandStep

trainStep = CommandStep(name='train step',
                        command='python train.py arg1 arg2',
                        source_directory=project_folder,
                        compute_target=compute_target)

To run a Python script via ScriptRunConfig:

from azureml.core import ScriptRunConfig
from azureml.pipeline.steps import CommandStep

train_src = ScriptRunConfig(source_directory=script_folder,
                            command='python train.py arg1 arg2',
                            environment=my_env)
trainStep = CommandStep(name='train step', runconfig=train_src)

See for more details on creating pipelines in general.

Methods

create_node: Create a node for CommandStep.

Parameters: the graph context (_GraphContext).

Returns: The created node.
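To round out the examples above, a step like trainStep is typically added to a pipeline and submitted as an experiment run. This sketch assumes ws is an existing Workspace object and reuses the trainStep variable defined above:

from azureml.core import Experiment
from azureml.pipeline.core import Pipeline

# Assemble the pipeline from the step(s) defined earlier and submit it.
pipeline = Pipeline(workspace=ws, steps=[trainStep])
run = Experiment(ws, 'commandstep-demo').submit(pipeline)
run.wait_for_completion(show_output=True)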
https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.commandstep?view=azure-ml-py
2022-05-16T18:40:08
CC-MAIN-2022-21
1652662512229.26
[]
docs.microsoft.com
About Wormhole About Wormhole is a cross chain token bridge that enables many saber pools. Currently, Wormhole allows bridging between a variety of blockchains including Ethereum, Solana, Binance Smart Chain, Terra, and Polygon. For details on how Wormhole works and its security, check out this excellent article by "The Intern". At a high level, Wormhole deploys a simple cross-chain attestation model that relies on a group of cross-chain oracles called “Guardians” that operate as a set of node operators with venerated validation track records and strong incentive alignment with the long-term interest of the root chain — Solana. How To Send Tokens to Solana This example is for migrating tokens from the Ethereum blockchain, but the same steps can be applied for any source blockchain. First, navigate to Wormhole's website. Select your source chain and Solana as the destination, connect your Ethereum wallet, and select which asset you would like to transfer. In this Case we'll use ETH as an example. Next, connect your Solana wallet. Make sure you have some SOL in your Solana wallet to claim the tokens. Create the associated token account, and send tokens. Make sure to approve the transactions in your Solana and Ethereum wallets. Stay on the page and wait for 15 confirmations on the Ethereum network. Then, make sure to redeem your tokens on Solana, confirming a few transactions. Tokens will arrive in your wallet. Wormhole Token Abbreviations The w indicates that the token is wrapped from Wormhole. The optional second letter in the token indicates token origin. So, for example weDAI indicates Wormhole wrapped DAI bridged from Ethereum, while wpDAI indicates Wormhole wrapped DAI bridged from Polygon. A "V1" suffix indicates that the token is a legacy asset from Wormhole V1 and should be migrated to V2 Token Migration In order to migrate Wormhole tokens to V2, head to Wormhole's website and you should be prompted for a migration pool. If this does not appear, the pool may be out of liquidity. In this case, you will need to send tokens back to their original blockchain, before bridging back to Solana. This will give you V2 tokens. Verifying Mints Wormhole has a useful tool to verify the wormhole token addresses, where you can copy and paste an address to verify its origin.
https://docs.saber.so/assets/wormhole-assets/about
2022-05-16T19:39:01
CC-MAIN-2022-21
1652662512229.26
[]
docs.saber.so
Velocity

Reference - Panel

The initial velocity of particles can be set through different parameters, based on the type of the particle system. If the particle system type is Emitter or Hair, then the following parameters give the particle an initial velocity.

- Normal: The emitter's surface normals (i.e. let the surface normal give the particle a starting speed).
- Tangent: Let the tangent speed give the particle a starting speed.
- Tangent Phase: Rotates the surface tangent.
- Object Align: Give an initial velocity in the X, Y, and Z axes. X, Y, Z
- Object Velocity: The emitter object's movement (i.e. let the object give the particle a starting speed).
- Randomize: Gives the starting speed a random variation. You can use a texture to only change the value, see Controlling Emission, Interaction and Time.
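The same settings are exposed to Python through bpy as properties on ParticleSettings. A minimal sketch; the data-block name is hypothetical and the values are arbitrary examples:

import bpy

# Look up an existing particle settings data-block by name (adjust the name).
settings = bpy.data.particles["ParticleSettings"]

settings.normal_factor = 2.0                    # Normal
settings.tangent_factor = 0.5                   # Tangent
settings.tangent_phase = 0.25                   # Tangent Phase
settings.object_align_factor = (0.0, 0.0, 1.0)  # Object Align X, Y, Z
settings.object_factor = 1.0                    # Object Velocity
settings.factor_random = 0.1                    # Randomize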
https://docs.blender.org/manual/it/dev/physics/particles/emitter/velocity.html
2022-05-16T18:31:42
CC-MAIN-2022-21
1652662512229.26
[]
docs.blender.org
Why does CFEngine install into /var/cfengine instead of following the FHS?

The Unix Filesystem Hierarchy Standard is a specification for standardizing where files and directories get installed on a Unix-like system. When you install CFEngine from source you can choose to build with FHS support, which places all files in their expected locations. In addition, you may choose to follow this standard in locating your master configuration and work areas.

CFEngine was introduced at about the same time as the FHS standard, and since CFEngine 2.x, CFEngine defaults to placing all components under /var/cfengine (similar to /var/cron):

/var/cfengine
/var/cfengine/bin
/var/cfengine/inputs
/var/cfengine/outputs

Installing all components into the same sub-directory of /var is intended to increase the probability that all components are on a local file system. This agrees with the intention of the FHS as described in section 5.1 of FHS-2.3. The location of this workspace is configurable, but the default is determined by backward compatibility. In other words, particular distributions may choose to use a different location, and some do.
https://docs.cfengine.com/docs/3.17/guide-faq-fhs.html
2022-05-16T18:29:39
CC-MAIN-2022-21
1652662512229.26
[]
docs.cfengine.com
Welcome to the EDJX Documentation Site! The EDJX Documentation Site (EdjDocs) presents a series of technical content designed to help you utilize the EDJX platform for all your edge-computing needs. EDJX provides developers with multiple edge-computing solutions with one product, the EDJX Platform. Let’s get started! See Next: Create your C++ and Rust functions using the EDJX Platform. New to the EDJX platform, see the Getting Started Guide. Want to get to know your way around the UI, see the Web Console Reference Guide. Ready to deploy functions, see Managing Serverless Applications. Do you have a domain you’d like EDJX to manage, see Managing Domain Name System. Need object storage, see Managing Object Storage.
https://docs.edjx.io/docs/latest/documentation.html
2022-05-16T18:28:53
CC-MAIN-2022-21
1652662512229.26
[]
docs.edjx.io
QoS policies details Contributors You can view details of QoS policies from the Management tab. ID The system-generated ID for the QoS policy. Name The user-defined name for the QoS policy. Min IOPS The minimum number of IOPS guaranteed for the volume. Max IOPS The maximum number of IOPS allowed for the volume. Burst IOPS The maximum number of IOPS allowed over a short period of time for the volume. Default = 15,000. Volumes Shows the number of volumes using the policy. This number links to a table of volumes that have the policy applied.
https://docs.netapp.com/us-en/element-software/storage/reference_data_manage_volumes_qos_policies_details.html
2022-05-16T17:40:33
CC-MAIN-2022-21
1652662512229.26
[]
docs.netapp.com
There are several options that are available within the Hierarchical LOD (Level of Detail) Outliner that you can use to define how your HLOD meshes are set up. Once enabling the HLOD system, you can access the HLOD Outliner from the window menu option under Level Editor. This page breaks down the available properties within the HLOD Outliner, refer to each section for more information. HLOD Actions - these options enable you to generate your HLOD Proxy Meshes for each of the clusters in your Level, re-generate clusters and build Proxy Meshes for each of the generated clusters in the Level, save all external HLOD data, or switch between LOD viewing options. HLOD Scene Actors - this contains each of the clusters (or Proxy Meshes) that have been generated along with information about each Actor. You can also Generate or Delete Clusters from this panel as well as perform contextual actions by right-clicking on a Scene Actor. LODSystem - this enables you to define how many HLOD Levels you want to include along with the Cluster and Mesh Generation Settings per HLOD Level. You can also override the Material used for Proxy Materials or override the HLODSetup Asset. Property and Interface Reference HLOD Actions Across the top of the HLOD Outliner, you will find the available options: HLOD Scene Actors The HLOD Scene Actors panel enables you to Generate Clusters (but not the Proxy Meshes) for Meshes in the Level or Delete Clusters (which will delete all clusters in the Level). This panel also displays all the LODActors for a given LOD Level along with information about the Actor such as their original triangle count, the reduced number of triangles in an LOD mesh, the percentage of triangle reduction retained by an LOD mesh, and what Level the LOD Mesh resides in. Additional actions can be accessed by right-clicking on an LOD Actor or Static Mesh Actor in the panel: LOD Actor Context Menu Right-click any LOD Actor listed under the Scene Actor Name column to bring up the menu below and available options. Actor Context Menu Expanding an LODActor exposes the scene Actors included in the generated HLOD cluster. Right-click a Scene Actor for the following options: Properties Below, broken out by major section, are the properties that can be found in a Hierarchical LODSetup in the LODSystem panel located in the lower portion of the Hierarchical LOD Outliner interface. Cluster Generation Settings The Cluster Generation settings enable you to control how HLOD clusters will be generated to include Actors from your levels by setting the desired bounds of your clusters, how full the cluster should be, and the minimum number of Actors that must be used to generate a cluster. Mesh Generation Settings The Mesh Generation settings enable you to control specific properties that will be used when HLOD cluster Actors are merged, like generating lightmaps, combining Materials, transition size, and more.
https://docs.unrealengine.com/4.26/en-US/BuildingWorlds/HLOD/Reference/
2022-05-16T20:02:05
CC-MAIN-2022-21
1652662512229.26
[]
docs.unrealengine.com
TR-OT-17296 Fixed the rendering issue if model compare is done before the previously approved model file is loaded. TR-OT-17936 PDFtron - Fixed the issue in rendering doc file that contains hidden formats and logo. TR-OT-19065 Attachments icon is now visible when exporting the document using visible markups. TR-OT-19124 Fixed the time taken for the IFC file to successfully process and update the model. TR-OT-19164 Fixes the issue with filter remaining in place while the rest of the panel shifts to accommodate the hamburger menu in Scheduling and Issue modules. TR-OT-19311 All codes-activity relationships that are associated in P6 are successfully associated automatically in Smart Build Insight after retrieving from P6. TR-OT-19342 An incorrect toast message is displayed while changing the project, when the model upload is in progress.
https://docs.hexagonppm.com/r/en-US/HxGN-Smart-Build-Insight-Release-Bulletin/1239682
2022-05-16T19:07:43
CC-MAIN-2022-21
1652662512229.26
[]
docs.hexagonppm.com
89428 · Issue 550391 Data Flow StartTime uses locale timezone Resolved in Pega Version 8.3.3 The start time of the dataflow was displayed in GMT instead of the operator locale timezone. This has been corrected. SR-D77157 · Issue 544471 DataSet preview will use date instead of datetime Resolved in Pega Version 8.3.3 While using a DataSet preview functionality, the date appeared as reduced by one day. This has been resolved by parsing date as 'date' instead of 'datetime' to avoid issues with timezone interactions. SR-D87709 · Issue 552397 Default context check added for saving adaptive model with locked rulesets Resolved in Pega Version 8.3.3 When updating an adaptive model rule in Prediction Studio, the error message "No unlocked Rulesets/Versions found that are valid for this record. Unlock at least one Ruleset/Version that can contain records of this type." appeared when clicking Save. This occurred when a branch was used in the default context of the Prediction Studio settings. Although there was a workaround to use Dev Studio to Save As the adaptive model rule to the required branch, this has been resolved by adding a check for default context and then saving the model there if it is mentioned. SR-D89012 · Issue 550799 DelegatedRules refresh icon made accessible Resolved in Pega Version 8.3.3 When using Accessibility, the refresh icon in pzDelegatedRules was being read as "Link". This has been corrected by adding text for the refresh icon. SR-D85558 · Issue 548285 Handling added for prolonged Heartbeat Update Queries Resolved in Pega Version 8.3.
https://docs.pega.com/platform/resolved-issues?f%5B0%5D=%3A29991&f%5B1%5D=resolved_capability%3A9041&f%5B2%5D=resolved_capability%3A9076&f%5B3%5D=resolved_capability%3A28506&f%5B4%5D=resolved_version%3A7106&f%5B5%5D=resolved_version%3A32621&f%5B6%5D=resolved_version%3A32691
2022-05-16T17:52:18
CC-MAIN-2022-21
1652662512229.26
[]
docs.pega.com
1) How to Add an SR from PrismERP

Process: Go to --> SR --> Add SR Entry

Process: Select the booking no. (if the customer has a booking) or the Customer code (if the customer has no booking); all information for the customer will then be filled in automatically. Write the SR no. & Carrying Cost (if needed) into the manual SR no. field & Carrying Cost field respectively, and select the Agent name (if needed) & Inventory. Select the Product (potato) name, and write the product quantity & rent amount. Then click the Save or Save & Print button (to get the print option directly). Then print the SR (Dolil).

2) SR List Page View

Process: Go to --> SR --> SR List

Note: This is the full SR list for all customers. Select the desired SR no., then click on the Accept button. Here you can see the SR number, SR Quantity, Booking no. (from which the SR was created), Agent name & Carrying Cost.

3) Adding a particular SR to the Black List

Process: Go to --> SR --> Black List --> Add Black List Entry

Process: Select the SR no.; all the SR information will then be filled in automatically. Select the date, and write the applier name & contact no. into the applier name field & contact no. field respectively. Then click the Save button.

4) List Page View of the Black List

Process: Go to --> SR --> Black List --> Black List

Note: This is the full black list for all customers. Here you can see the SR no., SR quantity, Customer name & Applier name.
https://docs.prismerp.net/en/whm-cold-storage/sr-management/
2022-05-16T18:39:22
CC-MAIN-2022-21
1652662512229.26
[]
docs.prismerp.net
The Housekeeping subsystem takes into account the Hotel reservations, per day, and produces daily lists of rooms that need to be serviced, with either breakfast or cleaning. Furthermore, for the cleaning schedule, Hotelist provides an Android app for your cleaning staff, that lists all the rooms and their various servicing details, per day. Manage daily Housekeeping actions Navigate to Housekeeping -> Settings to manage your housekeeping settings. You may add a new cleaning action by writing its name, the frequency that it should be executed and the room types that it applies to and clicking the "Add action" button. Under the addition form, you will find a list of all the currently created and enabled actions in the system.
http://docs.hotelist.gr/book/export/html/10
2022-05-16T19:03:03
CC-MAIN-2022-21
1652662512229.26
[]
docs.hotelist.gr
Device Identification Overview To update their database with consistent information, Engines must correctly identify the different devices from which they receive Collector data. The Engine is able to distinguish devices from one another thanks precisely to the hardware information and operating system-level data sent by the Collectors. Because device hardware may get upgraded and the device data stored at the operating system-level may change with time, the Engine uses an algorithm to either recognize a device to be the same as a device seen before, despite possible minor changes, or decide that a new device joined the network. Failing to correctly identify a device may result in a single device being split into two or in two different devices being merged into one in the database of the Engine. To prevent Engines from misidentifying special groups of devices, such as those in virtualized environments, replace the default identification algorithm by an algorithm exclusively based on the name of the device, as seen by the operating system. Apply this name-based recognition method to groups of devices selected by name patterns. Applies to platforms | Windows | macOS | Default algorithm to identify a device To identify a device, the default algorithm considers the following pieces of information: The name of the device, as reported by the operating system. A hardware identifier that is derived from: The BIOS serial number. The chassis serial number. The motherboard serial number. The MAC addresses of the network adapters that are enabled on the device. The Machine SID of the device. Considerations about the data that identifies a device Devices that have not joined a domain may share the same name. For devices in a domain, the name of the device is unique at a given time within a given domain. Name uniqueness is ensured by the domain controller, but two different devices may have the same name at different points in time. The list of MAC addresses that are enabled by the operating system change whenever a network adapter is added or removed. The derived hardware identifier is usually unique for branded PCs but it may not be unique for no name or self-assembled PC. In the case of devices being virtual machines, VMWare defines a BIOS serial number that is unique and thus yields a valid hardware id. The Machine SID of a Windows device is the Security Identifier of the Windows operating system. The SID is generated during the Windows installation process and is supposed to be globally unique. However if Windows is installed using a cloned image which has not been carefully crafted using sysprep, the SID may not be unique. Experience shows that SIDs are rarely unique within corporate network and they appear in bunches of 10 to 50 machines. How the device identification algorithm works The exact identification algorithm is quite intricate; therefore, it is not described here in detail, but only sketched out. Basically, when the Collector sends to the Engine all the pieces of information about a device mentioned above, the device identification algorithm compares them with the corresponding data of each device that is already present in the database of the Engine: If the received information precisely matches that of an existing device, the algorithm concludes that the information belongs to the same device that is already in the database. 
If most of the information at least partially matches that of an existing device, in a majority of cases the algorithm still concludes that the information belongs to the same device. The Engine updates therefore the existing device with the received information. For instance, if the received hardware id, MAC addresses and SID all match those of an existing device, but the received name is different from the name of the device as recorded in the database, the algorithm determines that it is the same device and updates its name in the database. If the received information differs significantly from that of any of the existing devices, the Engine adds a new device to its database. Identifying devices solely by their name Starting from the Engine release V5.3.3, it is possible to override the default algorithm to identify devices and instruct the Engine to exclusively identify Windows devices with domain membership by their name. From release V6.8 on, the feature has been extended to support all devices regardless of their platform (Windows or Mac) and membership type. And from release V6.20 on, the option to identify devices exclusively by their name is configurable through the Web Console and applied to all Engines at once. If an Engine was configured to identify devices solely by their name previously to V6.20, the patterns in the configuration file of the Engine take precedence over the configuration specified in the Web Console. To unify the configuration of device identification, manually remove the device identification settings from the configuration file of each Engine ( /var/nexthink/engine/01/etc/nxengine.xml). Note that the default device identification algorithm should be preferred in most cases. Use this alternative method only in setups where the default algorithm fails to reliably identify a specific group of devices. A misconfiguration may lead to devices being artificially merged or split, so use the identification of devices by name carefully. This feature is particularly useful in virtualized environments, where devices are virtual machines (VMs) recreated at every user session. By applying the default algorithm for identifying devices, the Engine regards every new instance of a VM as a new device and ends up with multiple devices that share the same name and that succeed each other over time. By identifying devices on the basis of their name only, the Engine consistently maps a particular VM to a single device time after time, even when its hardware properties change. To apply the identification by name to a set of devices, specify name patterns in the corresponding configuration page of the Web Console. Only those groups of devices whose names match any of the specified patterns will be identified solely by their name. All other devices follow the usual identification process: Log in to the Web Console as administrator. Select the APPLIANCE tab at the top of the Web Console. Click Collector management on the left-hand side menu. Under Collector identification, type in the desired name patterns inside the box Device name patterns and separate each pattern with a new line.For instance, if the name of all your virtual machines begins with vm1-ws or vm2-ws, type in: vm1-ws* vm2-ws* Click SAVE to apply your changes. Valid substitution characters in the name patterns are: The asterisk * to substitute for zero or more characters. The question mark ? to substitute for one single character.
https://docs-v6.nexthink.com/V6/6.30/Device-Identification.330338554.html
2022-05-16T18:05:34
CC-MAIN-2022-21
1652662512229.26
[]
docs-v6.nexthink.com
Use regular expressions in Flux This page documents an earlier version of InfluxDB. InfluxDB v2.2 is the latest stable version. See the equivalent InfluxDB v2.2 documentation: Use regular expressions in Flux. Regular expressions (regexes) are incredibly powerful when matching patterns in large collections of data. With Flux, regular expressions are primarily used for evaluation logic in predicate functions for things such as filtering rows, dropping and keeping columns, state detection, etc. This guide shows how to use regular expressions in your Flux scripts. If you’re just getting started with Flux queries, check out the following: - Get started with Flux for a conceptual overview of Flux and parts of a Flux query. - Execute queries to discover a variety of ways to run your queries. Go regular expression syntax Flux uses Go’s regexp package for regular expression search. The links below provide information about Go’s regular expression syntax. Regular expression operators Flux provides two comparison operators for use with regular expressions. =~ When the expression on the left MATCHES the regular expression on the right, this evaluates to true. !~ When the expression on the left DOES NOT MATCH the regular expression on the right, this evaluates to true. Regular expressions in Flux When using regex matching in your Flux scripts, enclose your regular expressions with /. The following is the basic regex comparison syntax: Basic regex comparison syntax expression =~ /regex/ expression !~ /regex/ Examples Use a regex to filter by tag value The following example filters records by the cpu tag. It only keeps records for which the cpu is either cpu0, cpu1, or cpu2. from(bucket: "db/rp") |> range(start: -15m) |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user" and r.cpu =~ /cpu[0-2]/ ) Use a regex to filter by field key The following example excludes records that do not have _percent in a field key. from(bucket: "db/rp") |> range(start: -15m) |> filter(fn: (r) => r._measurement == "mem" and r._field =~ /_percent/ ) Drop columns matching a regex The following example drops columns whose names do not begin with _ (note that /_.*/ is unanchored, so it matches any column name containing an underscore; use /^_/ to match only a leading underscore). from(bucket: "db/rp") |> range(start: -15m) |> filter(fn: (r) => r._measurement == "mem") |> drop(fn: (column) => column !~ /_.*/) Helpful links Syntax documentation regexp Syntax GoDoc RE2 Syntax Overview Go regex testers Regex Tester - Golang
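Because Flux uses Go's RE2 engine, it can help to prototype a pattern before embedding it in a query. The sketch below is only an illustration: it uses Python's re module, whose syntax matches RE2 for the simple patterns shown in this guide (character classes, literals, anchors), though the two engines are not identical in every feature.

```python
import re

# Patterns from the examples above, written as plain strings.
cpu_pattern = re.compile(r"cpu[0-2]")      # /cpu[0-2]/
percent_pattern = re.compile(r"_percent")  # /_percent/
underscore_prefix = re.compile(r"^_")      # anchored: names that begin with "_"

tags = ["cpu0", "cpu1", "cpu3", "cpu-total"]
print([t for t in tags if cpu_pattern.search(t)])        # ['cpu0', 'cpu1']

fields = ["used_percent", "available", "free"]
print([f for f in fields if percent_pattern.search(f)])  # ['used_percent']

columns = ["_time", "_value", "host", "region"]
# Keep only columns that begin with "_", mirroring an anchored drop() predicate.
print([c for c in columns if underscore_prefix.search(c)])  # ['_time', '_value']
```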
https://docs.influxdata.com/influxdb/v1.7/flux/guides/regular-expressions/
2022-05-16T19:11:04
CC-MAIN-2022-21
1652662512229.26
[]
docs.influxdata.com
The Fastly Documentation Site presents a series of articles and tutorials to help you get the most out of using Fastly for your site, service, or application. Start Here Go from zero to Fastly with this step-by-step guide. Guides & Tutorials An extensive knowledge base to help you perform specific tasks. Need more info about our security safeguards? Check out our security measures guide.
https://docs.fastly.com/index.html
2021-09-16T17:51:48
CC-MAIN-2021-39
1631780053717.37
[]
docs.fastly.com
branches organization of Godot's Git repository. Git source repository¶ The repository on GitHub is a Git code repository together with an embedded issue tracker and PR system. Muista Jos osallistut dokumentaation tekemiseen, sen säilö löytyy tästä osoitteesta. The Git version control system is the tool used to keep track of successive edits to the source code - to contribute efficiently to Godot, learning the basics of the Git command line is highly recommended. There exist some graphical interfaces for Git, but they usually encourage users to take bad habits regarding the Git and PR workflow, and we therefore recommend not to use them. In particular, we advise not to use GitHub's online editor for code contributions (although it's tolerated for small fixes or documentation changes) as it enforces one commit per file and per modification, which quickly leads to PRs with an unreadable Git history (especially after peer review). Katso myösbranch is where the development of the next major version occurs. As a development branch, it can be unstable and is not meant for use in production. This is where PRs should be done in priority. The stable branches are named after their version, e.g. 3.1and 2.1. They are used to backport bugfixes and enhancements from the masterbranch to the currently maintained stable release (e.g. 3.1.2 or 2.1.6). As a rule of thumb, the last stable branch is maintained until the next major). If you haven't already, download Git from its website if you're using Windows or macOS, or install it through your package manager if you're using Linux. Muista If you are on Windows, open Git Bash to type commands. macOS and Linux users can use their respective terminals. To clone your fork from GitHub, use the following command: $ git clone Muista In our examples, the "$" character denotes the command line prompt on typical UNIX shells. It is not part of the command and should not be typed. After a little while, you should have a godot directory in your current working directory. Move into it using the cd command: $ cd godot We will start by setting up a reference to the original repository that we forked: $ ( USERNAME/godot). You only need to do the above steps once, as long as you keep that local godot folder (which you can move around if you want, the relevant metadata is hidden in its .git subfolder). Muista as an example Be sure to always go back to the master branch before creating a new branch, as your current branch will be used as the base for the new one. Alternatively, you can specify a custom base branch after the new branch's name: $ git checkout -b my-new-feature. Muista. Vihje configuration) to let you write a commit log. You can use git commit -m "Cool commit log"to write the log directly. git commit --amendlets you amend the last commit with your currently staged changes (added with git add). This is the best option if you want to fix a mistake in the last commit (bug, typo, style issue, etc.).ize! Don't worry, just check this cheat sheet when you need to make changes, and learn by doing. Here's how the shell history could look like on our example: # It's nice to know where you're starting from $ git log # Do changes to the project manager with the nano text editor $ omitted), so instead of authoring a new commit, considering Muista '' hint: Updates were rejected because the tip of your current branch is behind hint: its remote counterpart. This is a sane behavior,. 
Deleting a Git branch¶ After your pull request gets merged, there's one last thing you should do: delete your Git branch for the PR. There won't be issues if you don't delete your branch, but it's good practice to do so. You'll need to do this twice, once for the local branch and another for the remote branch on GitHub. To delete our better project manager branch locally, use this command: $ git branch -d better-project-manager Alternatively, if the branch hadn't been merged yet and we wanted to delete it anyway, instead of -d you would use -D. Next, to delete the remote branch on GitHub use this command: $ git push origin -d better-project-manager You can also delete the remote branch from the GitHub PR itself, a button should appear once it has been merged or closed.
https://docs.godotengine.org/fi/latest/community/contributing/pr_workflow.html
2021-09-16T18:32:41
CC-MAIN-2021-39
1631780053717.37
[array(['../../_images/github_fork_button.png', '../../_images/github_fork_button.png'], dtype=object) array(['../../_images/github_fork_url.png', '../../_images/github_fork_url.png'], dtype=object) array(['../../_images/github_fork_make_pr.png', '../../_images/github_fork_make_pr.png'], dtype=object)]
docs.godotengine.org
2.1.35 [CSS-Level2-2009] Section 8.5.1, Border width: 'border-top-width', 'border-right-width', 'border-bottom-width', 'border-left-width', and 'border-width' V0043: The specification states: 'border-width' Value: <border-width>{1,4} | inherit Initial: see individual properties Applies to: all elements Inherited: no Percentages: N/A Media: visual Computed value: see individual properties Quirks Mode (All Versions) The border-width property is not applied to the bottom border of elements that are specified with a display type of inline. Quirks Mode and IE7 Mode (All Versions) When a border-width property has an invalid unit identifier, the value is converted to pixels instead of being ignored and is assigned a value of medium. IE7 Mode (All Versions) The border-width property is applied to the bottom border; the bottom border is clipped at the content edge.
https://docs.microsoft.com/en-us/openspecs/ie_standards/ms-css21/467ade22-dc01-4144-b867-a39b7ff43723
2021-09-16T20:28:05
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
Pagination is used to divide objects into pages (page) with one or more items per page (page_size). The idea behind it is to make result sets more manageable. In total there are two parameters that allow you to configure pagination: Completely Optional Please note that pagination is completely optional for all endpoints. //Example Request to GET the first 20 variables curl -X GET '' \
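As a rough Python sketch of walking through the pages with the page and page_size parameters: the base URL, token header name, and response layout below are placeholders (assumptions, not taken from this page) and must be checked against your account and the current API reference.

```python
import requests

# Placeholders -- replace with your real endpoint and token.
BASE_URL = "https://industrial.api.ubidots.com/api/v2.0/variables/"
HEADERS = {"X-Auth-Token": "YOUR-TOKEN-HERE"}

def fetch_all_variables(page_size: int = 20):
    """Iterate over every page until an empty page is returned."""
    page = 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "page_size": page_size},
            timeout=10,
        )
        resp.raise_for_status()
        results = resp.json().get("results", [])  # assumed response key
        if not results:
            break
        yield from results
        page += 1

# Example usage: lazily walk every variable, 20 per request.
# for variable in fetch_all_variables(page_size=20):
#     print(variable)
```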
https://docs.ubidots.com/reference/pagination
2021-09-16T19:21:13
CC-MAIN-2021-39
1631780053717.37
[]
docs.ubidots.com
This is an old revision of the document! Version 3.0 of EasyQuery introduces the most significant changes to the library so far. Here we assume that your project targets .NET 4.0, but you need to do the same for any other edition and version of the .NET Framework as well. Version 3.0 assemblies have different names (see the next step for details). Again, we assume here that your project is for .NET 4.0 and the WebForms edition of EasyQuery; use similar assembly names for other editions and .NET versions. So you need to reference the following assemblies in your project: Since we have separated our core classes into two groups (the ones that work with the database and the ones that do not), and since the Query and DataModel classes now represent basic entities (not tied to a database), you need: Since all SQL generation functionality in version 3.0 was moved into separate classes: Here is the map of the changes you may need to make: That's all. Now your project should work well with EasyQuery 3.0! Discussion The VS2010 projects that I have are built to compile using the .NET 3.5 framework. In the installation program, there are three options for .NET frameworks that will be installed. I left them all checked and completed the installation. When I go into my project(s), I do not see any .NET35 DLLs. A step-by-step guide for users with existing projects would be helpful.
http://docs.korzh.com/easyquery/how-to/upgrade-to-version-3-0?rev=1397483219
2021-09-16T19:38:25
CC-MAIN-2021-39
1631780053717.37
[]
docs.korzh.com
buildPhylipLineage - Infer an Ig lineage using PHYLIP Description¶ buildPhylipLineage reconstructs an Ig lineage via maximum parsimony using the dnapars application, or maximum liklihood using the dnaml application of the PHYLIP package. Usage¶ buildPhylipLineage( clone, phylip_exec, dist_mat = getDNAMatrix(gap = 0), rm_temp = FALSE, verbose = FALSE, temp_path = NULL, onetree = FALSE, branch_length = c("mutations", "distance") ) Arguments¶ - clone - ChangeoClone object containing clone data. - phylip_exec - absolute path to the PHYLIP dnapars executable. - dist_mat - character distance matrix to use for reassigning edge weights. Defaults to a Hamming distance matrix returned by getDNAMatrix with gap=0.. - rm_temp - if TRUEdelete the temporary directory after running dnapars; if FALSEkeep the temporary directory. - verbose - if FALSEsuppress the output of dnapars; if TRUESTDOUT and STDERR of dnapars will be passed to the console. - temp_path - specific path to temp directory if desired. - onetree - if TRUEsave only one tree. - branch_length - specifies how to define branch lengths; one of "mutations"or "distance". If set to "mutations"(default), then branch lengths represent the number of mutations between nodes. If set to "distance", then branch lengths represent the expected number of mutations per site, unaltered from PHYLIP output. Value¶ An igraph graph object defining the Ig lineage tree. Each unique input sequence in clone is a vertex of the tree, with additional vertices being either the germline (root) sequences or inferred intermediates. The graph object has the following attributes. Vertex attributes: name: value in the sequence_idcolumn of the dataslot of the input clonefor observed sequences. The germline (root) vertex is assigned the name “Germline” and inferred intermediates are assigned names with the format “Inferred1”, “Inferred2”, .... sequence: value in the sequencecolumn of the dataslot of the input clonefor observed sequences. The germline (root) vertex is assigned the sequence in the germlineslot of the input clone. The sequence of inferred intermediates are extracted from the dnapars output. label: same as the nameattribute. Additionally, each other column in the data slot of the input clone is added as a vertex attribute with the attribute name set to the source column name. For the germline and inferred intermediate vertices, these additional vertex attributes are all assigned a value of NA. Edge attributes: weight: Hamming distance between the sequenceattributes of the two vertices. label: same as the weightattribute. Graph attributes: clone: clone identifier from the cloneslot of the input ChangeoClone. v_gene: V-segment gene call from the v_geneslot of the input ChangeoClone. j_gene: J-segment gene call from the j_geneslot of the input ChangeoClone. junc_len: junction length (nucleotide count) from the junc_lenslot of the input ChangeoClone. Alternatively, this function will return an phylo object, which is compatible with the ape package. This object will contain reconstructed ancestral sequences in nodes attribute. Details¶ buildPhylipLineage builds the lineage tree of a set of unique Ig sequences via maximum parsimony through an external call to the dnapars application of the PHYLIP package. dnapars is called with default algorithm options, except for the search option, which is set to “Rearrange on one best tree”. The germline sequence of the clone is used for the outgroup. 
Following tree construction using dnapars, the dnapars output is modified to allow input sequences to appear as internal nodes of the tree. Intermediate sequences inferred by dnapars are replaced by children within the tree having a Hamming distance of zero from their parent node. With the default dist_mat, the distance calculation allows IUPAC ambiguous character matches, where an ambiguous character has distance zero to any character in the set of characters it represents. Distance calculation and movement of child nodes up the tree is repeated until all parent-child pairs have a distance greater than zero between them. The germline sequence (outgroup) is moved to the root of the tree and excluded from the node replacement processes, which permits the trunk of the tree to be the only edge with a distance of zero. Edge weights of the resultant tree are assigned as the distance between each sequence. References¶ - Felsenstein J. PHYLIP - Phylogeny Inference Package (Version 3.2). Cladistics. 1989 5:164-166. - Stern JNH, Yaari G, Vander Heiden JA, et al. B cells populating the multiple sclerosis brain mature in the draining cervical lymph nodes. Sci Transl Med. 2014 6(248):248ra107. Examples¶ ### Not run: # Preprocess clone # db <- subset(ExampleDb, clone_id == 3138) # clone <- makeChangeoClone(db, text_fields=c("sample_id", "c_call"), # num_fields="duplicate_count") # # # Run PHYLIP and process output # phylip_exec <- "~/apps/phylip-3.695/bin/dnapars" # graph <- buildPhylipLineage(clone, phylip_exec, rm_temp=TRUE) # # # Plot graph with a tree layout # library(igraph) # plot(graph, layout=layout_as_tree, vertex.label=V(graph)$c_call, # vertex.size=50, edge.arrow.mode=0, vertex.color="grey80") # # # To consider each indel event as a mutation, change the masking character # # and distance matrix # clone <- makeChangeoClone(db, text_fields=c("sample_id", "c_call"), # num_fields="duplicate_count", mask_char="-") # graph <- buildPhylipLineage(clone, phylip_exec, dist_mat=getDNAMatrix(gap=-1), # rm_temp=TRUE) See also¶ Takes as input a ChangeoClone. Temporary directories are created with makeTempDir. Distance is calculated using seqDist. See igraph and igraph.plotting for working with igraph graph objects.
https://alakazam.readthedocs.io/en/latest/topics/buildPhylipLineage/
2021-09-16T18:08:07
CC-MAIN-2021-39
1631780053717.37
[]
alakazam.readthedocs.io
SchedulerControl.ResourceSharing Property Gets a value indicating whether appointments can be shared between multiple resources. Namespace: DevExpress.Xpf.Scheduler Assembly: DevExpress.Xpf.Scheduler.v21.1.dll Declaration [Browsable(false)] public bool ResourceSharing { get; } <Browsable(False)> Public ReadOnly Property ResourceSharing As Boolean Note that the ResourceSharing property is read-only. If you want to specify whether resource sharing should be enabled or disabled, use the AppointmentStorage.ResourceSharing property instead. See Also
https://docs.devexpress.com/WPF/DevExpress.Xpf.Scheduler.SchedulerControl.ResourceSharing
2021-09-16T18:49:49
CC-MAIN-2021-39
1631780053717.37
[]
docs.devexpress.com
Date: Wed, 13 Sep 1995 23:32:06 -0500 (CDT) From: [email protected] To: [email protected] Cc: mikebo (Mike Borowiec) Subject: FBSD v2.0.5: shared libraries - HOW TO...? Message-ID: <[email protected]> Greetings - I've done this many times on SunOS, following a very simple set of instructions. Nothing I can find in the docs, FAQs or the past few months' traffic in the questions list tells how to do this. HELP! Thanks in advance... - Mike -- -------------------------------------------------------------------------- Michael Borowiec Network Operations Tellabs Operations, Inc. [email protected] 1000 Remington Blvd. MS109 708-378-6007 FAX: 708-378-6714 Bolingbrook, IL, USA 60440 -------------------------------------------------------------------------- Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=348669+0+archive/1995/freebsd-questions/19950910.freebsd-questions
2021-09-16T19:41:31
CC-MAIN-2021-39
1631780053717.37
[]
docs.freebsd.org
Connecting Dremio to SingleStore Open the following URL: http://<IP Address of the server running the Dremio service>:9047/ Enter a username and password to log in. Click Sources on the left hand side of the page. Select memSQL in the Select New Source Type pop-up. In the New Source window provide the following information: Name of the connection Host: IP address of the SingleStore server Database: Name of the database to connect with Username and Password: User ID and password of the SingleStore database Click Save. Your new SingleStore connection should be available under Sources.
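Before pointing Dremio at the database, it can help to confirm that the host, database name, and credentials work. SingleStore is wire-compatible with MySQL, so a MySQL client such as pymysql can typically connect; the connection values below are placeholders matching the fields described above.

```python
import pymysql

# Placeholder values -- use the same host, database, user and password
# you plan to enter in the Dremio "New Source" form.
conn = pymysql.connect(
    host="10.0.0.10",        # IP address of the SingleStore server
    port=3306,               # default SingleStore port
    user="dremio_user",
    password="secret",
    database="analytics_db",
)

try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")  # trivial query: succeeds only if credentials are valid
        print("Connection OK:", cur.fetchone())
finally:
    conn.close()
```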
https://docs.singlestore.com/db/v7.5/en/query-data/connect-with-analytics-and-bi-tools/connect-with-dremio/connecting-dremio-to-singlestore.html
2021-09-16T19:45:32
CC-MAIN-2021-39
1631780053717.37
[]
docs.singlestore.com
Managing site license keys Xperience requires a license key for every domain that your sites use. You cannot view or edit sites running on domains that do not have a matching license in the system. If you do not have a valid license for the current administration domain, only global applications are available in the administration interface. To manage your system's licenses, open the License keys application in the Xperience administration interface. When you obtain a license key (full or trial) for a domain: - Click New license. - Copy the full license key into the field. - Click Save. License keys are generated on the Kentico Client Portal, where you register the license you obtained after buying Kentico Xperience. You can then generate new license keys based on the registered license. If you cannot sign in to the Client Portal, please contact the person in your company who has the credentials and the license. How licensing works Site licensing If your live site and Kentico Xperience administration applications run on different domains, you need to have valid licenses for both domains. To learn more, see Licensing for Xperience. If your site uses a Presentation URL with a different domain or domain aliases (alternative domain names for the same website), you need to add extra license keys for these domains. You can generate the domain alias license keys on the Kentico Client Portal. Domain alias licenses are free of charge if you already own a license for the main domain. Web farm count checks Each license key supports a certain number of web farms, ranging from one to unlimited. The system distinguishes between web farms for the administration interface and the live site. Each key can increase the total number of administration or live site web farms, based on the domain for which the license key is generated. This is based on the Administration domain name and Presentation URL properties configurable for each site in the Sites application. The number of allowed web farms is cumulative and not tied to individual sites (license key domains). For example, if you have the following live site license keys registered in the system (keys generated for domains that match a Presentation URL in the Sites application): - livesiteA.com; AllowedWebFarms:2 - livesiteB.com; AllowedWebFarms:3 You can have a total of five live site web farms distributed arbitrarily across livesiteA.com and livesiteB.com. Domain alias license keys License keys generated for the purpose of enabling site domain aliases do not contribute to the total number of web farm servers allowed for either application. The system completely disregards such keys during validation. The following table summarizes common hosting configurations and resulting web farm limitations:
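The web farm limit is cumulative across license keys rather than per domain. Purely as an illustration of that arithmetic (not Kentico code), here is a small Python sketch that sums the allowed web farms from a list of registered keys, using the example values above.

```python
# Illustrative only -- license keys are represented as simple dictionaries.
license_keys = [
    {"domain": "livesiteA.com", "kind": "live", "allowed_web_farms": 2},
    {"domain": "livesiteB.com", "kind": "live", "allowed_web_farms": 3},
    {"domain": "admin.example.com", "kind": "administration", "allowed_web_farms": 1},
]

def total_allowed_web_farms(keys, kind):
    """Sum the web farm allowance across all keys of the given kind,
    mirroring the cumulative behaviour described above."""
    return sum(k["allowed_web_farms"] for k in keys if k["kind"] == kind)

print("Live site web farms allowed:",
      total_allowed_web_farms(license_keys, "live"))            # 5
print("Administration web farms allowed:",
      total_allowed_web_farms(license_keys, "administration"))  # 1
```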
https://docs.xperience.io/configuring-xperience/managing-sites/managing-site-license-keys
2021-09-16T19:25:32
CC-MAIN-2021-39
1631780053717.37
[]
docs.xperience.io
UDP Zero Checksum Plugin UDP uses a 16-bit field to store a checksum for data integrity. The UDP checksum field [RFC768] is calculated using information from the pseudo-IP header, the UDP header, and the data, which is padded at the end if necessary to make a multiple of two octets. The checksum is optional when using IPv4, and if unused a UDP checksum field carrying all zeros indicates that the transmitter did not compute the checksum. The UDPZero plugin for PATHspider aims to detect breakage in the Internet due to the use of a zero checksum. To run PATHspider with the UDPZero plugin, specify udpzero as the plugin to use on the command line: pspdr measure -i eth0 udpzero </usr/share/doc/pathspider/examples/webtest.ndjson >results.ndjson This will run two DNS request connections over UDP for each job input, one with the checksum field unmodified and one with the checksum field set to all zeros. Supported Connection Modes This plugin supports the following connection modes: - dnsudp - Performs a DNS query using UDP Output Conditions The following conditions are generated for the UDPZero plugin: udpzero.connectivity.Y For each connection that was observed by PATHspider, a connectivity condition will be generated to indicate whether or not connectivity was successful using UDP zero-checksum, validated against a connection with the calculated checksum left intact. Y may have the following values: - works - Both connections succeeded - broken - Baseline connection succeeded where experimental connection failed - offline - Both connections failed - transient - Baseline connection failed where experimental connection succeeded (this can be used to give an indication of transient failure rates included in the "broken" set)
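For readers who want to see what "a checksum field carrying all zeros" means on the wire, here is a rough Python sketch of the RFC 768 checksum over an IPv4 pseudo-header plus UDP header and payload. It is a teaching aid only, not part of PATHspider; PATHspider crafts the probe packets for you.

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum used by the UDP checksum (RFC 768)."""
    if len(data) % 2:               # pad to a multiple of two octets
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    length = 8 + len(payload)
    # Pseudo-header: source IP, destination IP, zero byte, protocol 17, UDP length.
    pseudo = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + struct.pack("!BBH", 0, 17, length)
    # UDP header with the checksum field set to zero while computing.
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    checksum = 0xFFFF & ~ones_complement_sum(pseudo + header + payload)
    return checksum or 0xFFFF   # RFC 768: a computed value of 0 is transmitted as 0xFFFF

payload = b"example dns query bytes"
print(hex(udp_checksum("192.0.2.1", "198.51.100.7", 53000, 53, payload)))
# A transmitter that skips the computation instead places 0x0000 in the checksum
# field, which is exactly the case the UDPZero plugin probes for.
```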
https://pathspider.readthedocs.io/en/latest/plugins/udpzero.html
2018-11-13T04:25:13
CC-MAIN-2018-47
1542039741219.9
[]
pathspider.readthedocs.io
Mint Invoice - Create, Send, Pay Invoice with Ease Mint Invoice - Create, Send, Pay Invoice with Ease PHP Framework - Most Popular Laravel 5.5.x Javascript Framework - Lightweight Vue.js Frontend Framework - Most Popular Bootstrap 4.0.0 beta Single Page Application Browser Synchronization Webpack Bundle Analyzer Compressed asset output. To know more about license, please visit The script supports REST Api & uses JSON based authentication token. The script is well documented which is available at This script will be updated regularily with latest version of framework & plugins. Please share your feedback, feature request which will be surely implemented in upcoming versions.Author: ScriptMint Contact: [email protected] Follow: codecanyon.net/user/ScriptMint Skype: ScriptMint This script is built with following: - Laravel 5.5.x - Vue.js 2.5.2 - Bootstrap 4.0.0 beta - jQuery 3.2.0 - Aixos 0.16.2 Here are the composer packages used in this script: - barryvdh/laravel-cors - barryvdh/laravel-dompdf - guzzlehttp/guzzle - intervention/image - laravel/socialite - mews/purifier - nexmo/laravel - spatie/laravel-permission - tymon/jwt-auth - paypal/rest-api-sdk-php Here are the npm packages used in this script: - laravel-mix - browser-sync - browser-sync-webpack-plugin - webpack-bundle-analyzer - compression-webpack-plugin - js-cookie - laravel-vue-pagination - lodash - uuid - v-mask - v-tooltip - vue-multiselect - vue-password-strength-meter - vue-router - vue-sortable - vue-switches - vuejs-datepicker - vuejs-dialog - vuex - vuex-persistedstate - zxcvbn If you only want to use this application, then you can install the application directly and start using. If you are planning to further develop this script then, Author assumes that you have basic knowledge of Laravel & Javascript. If you have no experience with Javascript, then you can take a look at this tutorial. For any kind of support, please email us at [email protected] I am looking forward for your suggestions & feedback for this script & I will surely implement your feature request if feasible. Don't forget to rate my application. Features Components: Pre Requisites Read 5.5, it requires to follow all the pre-requisites of Laravel 5.5. Click here to visit installation guidelines of Laravel. - PHP >= 7.0.0 - OpenSSL PHP Extension - PDO PHP Extension - Mbstring PHP Extension - Tokenizer PHP Extension - XML PHP Extension You need to register a one time to verify your purchase. Please follow below steps if you have purchased the script from codecanyon.net - Get your purchase code of Mint Invoice Manager by visiting - Login into, Visit - Add your Envato Username & Envato Purchase Code to get ScriptMint License. - Download script by clicking on icon of the ScriptMint License. - Verify your purchase during Installation by entering your Email Id & ScriptMint License. Installation If you are only planning to use this script without further customization/development, follow these steps: - Register yourself in scriptmint.com, If you have purchased this script from codecanyon.net then add a purchase in scriptmint.com and get ScriptMint license. If you have purchased directly from scriptmint.com then you should have your ScriptMint license. - Download zip file from codecanyon.net or from and extract into your server. - Navigate to your application as {your_domain}, it will automatically redirect you to install directory. - If everything is ok, then you will get all green signals in pre-requisites page. 
If you get any red signals, then you have to fix your server requirement. - Move to next step, provide database details and then click on next button. - Provide your user detail, enter default value of department, designation & location then click next. - Enter your ScriptMint license and then click on the 'Finish' button to complete the installation. This may take around 1-2 minute time depending upon your server configuration. Once installation is completed, you will be redirected to login page of script. If you are only planning to further customize this script, follow these steps: "atm" in webpack.mix.js file. If you miss any of these step, you might not get the login page. For any query please send us an email at [email protected]. Structure The script follows conventional Laravel+Vue.js folder structure. The folders & files are grouped and placed in desired location. Here is the folder structure of Laravel application. send us an email at [email protected]. Compiling Assets You send us an email at [email protected]. Configuration - First thing First! If you are able to get login page when you access the script from your browser, it means you have successfully installed the application. Now its time to configure the script, to select the features you want in the script. This is very important to use the script. You might be worry that you got login page, but you don't have login credential to login. Actually, you need to created admin account after installation. So just click on the 'Sign Up' link available in the login page, enter required details & complete sign up process. will. - Optional Move to "Logo" tab, select both logo i.e. Main logo & Sidebar Logo. - Required Move to "System" tab, choose your color theme, direction to display, date format, time format, notification position, locale & timezone. Next, choose the features you want to enable by switching on the various feature.. - Required Move to "Designation" tab available in top header, Mint Invoice Manager provide multi-level designation feature. "Top Designation" users can manage "Lower Designation" user. By default, admin designation is "Top Designation" of all other users. You can have n number of level for designation. If you leave "Top Designation" field empty, it means, that designation will be directly under default admin designation. Lets take example of below designations: Here, "System Administrator" is default designation of admin & it has no "Top Designation". "Chief Account Officer", "Chief Development Officer", "Chief HR Officer" has "Top Designation" as "System Administrator". So these designations can be managed (Add/Edit/Delete) by "System Administrator". Similarily "Chief Account Officer" can manage "Account Manager" and so on. API Laravel Vue.js starter kit supports REST API which you can use to integrate into any other application. Here is full documentation of API including parameters & response. localStorage.setItem('auth_token',response.token); axios.defaults.headers.common['Authorization'] = 'Bearer ' + localStorage.getItem('auth_token'); have any query, please visit or email us at [email protected] Social Login The. After addition, you need to login into the script as admin & enable the Social Login feature in configuration -> authentication tab. Now you will get all oAuth providers list with some additional input. Below is the detail: Enabling oAuth provider will generate respective provider login option in the login page. 
If all the input details, including the client ID, secret & redirect URL, are correct, then you should be able to log in with that oAuth provider.
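The script exposes a REST API secured with a JSON Web Token sent as a Bearer header, mirroring the axios snippet shown earlier. The Python sketch below is only a guess at the flow: the base URL, endpoint paths, field names, and response keys are placeholders, so check the API documentation bundled with the script for the real ones.

```python
import requests

BASE_URL = "https://invoice.example.com/api"   # placeholder base URL

# Hypothetical login endpoint returning {"token": "..."} -- adjust to the real API docs.
resp = requests.post(
    f"{BASE_URL}/auth/login",
    json={"email": "admin@example.com", "password": "secret"},
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["token"]

# Subsequent calls carry the token, just like the axios Authorization header.
headers = {"Authorization": f"Bearer {token}"}
invoices = requests.get(f"{BASE_URL}/invoice", headers=headers, timeout=10)
print(invoices.status_code, invoices.json())
```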
http://docs.scriptmint.com/mint-invoice-manager
2018-11-13T04:46:19
CC-MAIN-2018-47
1542039741219.9
[array(['./assets/images/documentation/folder_structure_highlight.png', None], dtype=object) array(['./assets/images/documentation/folder_structure.jpg', None], dtype=object) array(['./assets/images/documentation/detailed_folder_structure.jpg', None], dtype=object) ]
docs.scriptmint.com
Primary SEN Components The secure, virtual IP fabric of the Viptela Secure Extensible Network (SEN) is made up of four fundamental. Of these four components, the vEdge router can be a Viptela hardware device or software that runs as a virtual machine, and the remaining three are software-only components. The software vEdge router, vManage NMS, and vSmart controller software runs on servers, and the vBond orchestrator software runs as a process (daemon) on a vEdge router. The figure below illustrates the components of the Viptela SEN. The sections below describe each component in detail. vManage NMS The vManage NMS is a centralized network management system. The vManage NMS dashboard provides a visual window into the network, and it allows you to configure and manage Viptela network devices. The vManage NMS software runs on a server in the network. This server is typically situated in a centralized location, such as a data center. It is possible for the vManage NMS software to run on the same physical server as vSmart controller software. You can use vManage NMS to store certificate credentials, and to create and store configurations for all Viptela network components. As these components come online in the network, they request their certificates and configurations from the vManage NMS. When the vManage NMS receives these requests, it pushes the certificates and configurations to the Viptela network devices. For vEdge Cloud routers, vManage NMS can also sign certificates and generate bootstrap configurations, and it can decommission the devices. vSmart Controller The vSmart controller oversees the control plane of the Viptela overlay network, establishing, adjusting, and maintaining the connections that form the Viptela fabric. The major components of the vSmart controller are: - Control plane connections—Each vSmart controller establishes and maintains a control plane connection with each vEdge router in the overlay network. (In a network with multiple vSmart controllers, a single vSmart controller may have connections only to a subset of the vEdge routers, for load-balancing purposes.). - OMP (Overlay Management Protocol)—The OMP protocol is a routing protocol similar to BGP that manages the Viptela. The vSmart controller maintains a centralized route table that stores the route information, called OMP routes, that it learns from the vEdge routers and from any other vSmart controllers in the Viptela overlay network. Based on the configured policy, the vSmart controller shares this route information with the Viptela network devices in the network so that they can communicate with each other. The vSmart controller is software that runs as a virtual machine on a server configured with ESXi or VMware hypervisor software. The vSmart software image is a signed image that is downloadable from the Viptela website. A single Viptela root-of-trust public certificate is embedded into all vSmart software images. During the initial startup of a vSmart controller, you enter minimal configuration information, such as the IP addresses of the controller and the vBond orchestrator. With this information and the root-of-trust public certificate, the vSmart controller authenticates itself on the network, establishes a DTLS control connection with the vBond orchestrator, and receives and activates its full configuration from the vManage NMS if one is present in the domain. 
(Otherwise, you can manually download a configuration file or create a configuration directly on the vSmart controller through a console connection.) The vSmart controller is now also ready to accept connections from the vEdge routers in its domain. To provide redundancy and high availability, a typical overlay network includes multiple vSmart controllers in each domain. A domain can have up to 20 vSmart controllers. To ensure that the OMP network routes remain synchronized, all the vSmart controllers must have the same configuration for policy and OMP. However, the configuration for device-specific information, such as interface locations and addresses, system IDs, and host names, can be different. In a network with redundant vSmart controllers, the vBond orchestrator tells the vSmart controllers about each other and tells each vSmart controller which vEdge routers in the domain it should accept control connections from. (Different vEdge routers in the same domain connect to different vSmart controllers, to provide load balancing.) If one vSmart controller becomes unavailable, the other controllers automatically and immediately sustain the functioning of the overlay network. vBond Orchestrator The vBond orchestrator automatically coordinates the initial bringup of vSmart controllers and vEdge routers, and it facilities connectivity between vSmart controllers and vEdge routers. During the bringup processes, the vBond orchestrator authenticates and validates the devices wishing to join the overlay network. This automatic orchestration process prevents tedious and error-prone manual bringup. The vBond orchestrator is the only Viptela device that is located in a public address space. This design allows the vBond orchestrator to communicate with vSmart controllers and vEdge routers that are located behind NAT devices, and it allows the vBond orchestrator to solve any NAT-traversal issues of these Viptela devices. The major components of the vBond orchestrator are: - Control plane connection—Each vBond orchestrator has a persistent control plane connection in the form of a DTLS tunnel with each vSmart controller in its domain. In addition, the vBond orchestrator uses DTLS connections to communicate with vEdge routers when they come online, to authenticate the router, and to facilitate the router's ability to join the network. Basic authentication of a vEdge router is done using certificates and RSA cryptography. - NAT traversal—The vBond orchestrator facilitates the initial orchestration between vEdge routers and vSmart controllers when one or both of them are behind NAT devices. Standard peer-to-peer techniques are used to facilitate this orchestration. - Load balancing—In a domain with multiple vSmart controllers, the vBond orchestrator automatically performs load balancing of vEdge routers across the vSmart controllers when routers come online..) The vBond orchestrator orchestrates the initial control connection between vSmart controllers and vEdge routers. It creates DTLS tunnels to the vSmart controllers and vEdge routers to authenticate each node that is requesting control plane connectivity. This authentication behavior assures that only valid customer nodes can participate in the Viptela overlay network. The DTLS connections with vSmart controllers are permanent so that the vBond controller can inform the vSmart controllers as vEdge routers join the network. 
The DTLS connections with vEdge routers are temporary; once the vBond orchestrator has matched a vEdge router with a vSmart controller, there is no need for the vBond orchestrator and the vEdge router to communicate with each other. The vBond orchestrator shares only the information that is required for control plane connectivity, and it instructs the proper vEdge routers and vSmart controllers to initiate secure connectivity with each other. The vBond orchestrator maintains no state. To provide redundancy for the vBond orchestrator, you can create multiple vBond entities in the network and point all vEdge routers to those vBond orchestrators. Each vBond orchestrator maintains a permanent DTLS connection with each vSmart controller in the network. If one vBond orchestrator becomes unavailable, the others are automatically and immediately able to sustain the functioning of the overlay network. In a domain with multiple vSmart controllers, the vBond orchestrator pairs a vEdge router with one of the vSmart controllers to provide load balancing. vEdge Routers: - DTLS control plane connection—Each vEdge router has one permanent DTLS connection to each vSmart controller it talks to. This permanent connection is established after device authentication succeeds, and it carries encrypted payload between the vEdge router and the vSmart controller. This payload consists of route information necessary for the vSmart controller to determine the network topology, and then to calculate the best routes to network destinations and distribute this route information to the vEdge routers. - OMP (Overlay Management Protocol)—As described for the vSmart controller, OMP runs inside the DTLS connection and carries the routes, next hops, keys, and policy information needed to establish and maintain the overlay network. OMP runs between the vEdge router and the vSmart controller and carries only control information. - Protocols—The vEdge router supports standard protocols, including OSPF, BGP, VRRP, and BFD. - RIB (Routing Information Base)—Each vEdge router has multiple route tables that are populated automatically with direct interface routes, static routes, and dynamic routes learned via BGP and OSPF. Route policies can affect which routes are stored in the RIB. - FIB (Forwarding Information Base)—This is a distilled version of the RIB that the CPU on the vEdge router uses to forward packets. - Netconf and CLI—Netconf is a standards-based protocol used by the vManage NMS to provision a vEdge router. In addition, each vEdge router provides local CLI access and AAA. - Key management—vEdge routers generate symmetric keys that are used for secure communication with other vEdge routers, using the standard IPsec protocol. - Data plane—The vEdge router provides a rich set of data plane functions, including IP forwarding, IPsec, BFD, QoS, ACLs, mirroring, and policy-based forwarding. The vEdge router has local intelligence to make site-local decisions regarding routing, high availability (HA), interfaces, ARP management, ACLs, and so forth. The OMP session with the vBond orchestrator. With this information and the information on the Trusted Board ID chip, the vEdge router authenticates itself on the network, establishes a DTLS connection with the vSmart controller in its domain, and receives and activates its full configuration from the vManage NMS if one is present in the domain. 
Otherwise, you can manually download a configuration file or create a configuration directly on the vEdge router through a console connection. Additional Information: Software Services, Viptela Terminology
https://sdwan-docs.cisco.com/Product_Documentation/Getting_Started/System_Overview/Components_of_the_Viptela_SEN/01Components_of_the_Viptela_Solution
2018-11-13T04:48:04
CC-MAIN-2018-47
1542039741219.9
[array(['https://sdwan-docs.cisco.com/@api/deki/files/177/s00012.png?revision=9', 's00012.png'], dtype=object) ]
sdwan-docs.cisco.com
Edit Media File You can view and manage all the media files - images, videos, PDFs, anything you need to upload or customers need to download - from this one easy location. If you think this looks a lot like the product images tab, you are right! This is just ALL product images in one location.
http://docs.virtuemart.net/manual/shop-menu/edit-media-file.html
2018-11-13T04:53:01
CC-MAIN-2018-47
1542039741219.9
[]
docs.virtuemart.net
The Core DC/OS components DC/OS components are the services which work together to bring the DC/OS ecosystem alive. While the core component is of course Apache Mesos, DC/OS is actually made of of many more services than just this. If you log into any host in the DC/OS cluster, you can view the currently running services by inspecting /etc/systemd/system/dcos.target.wants/. ip-10-0-6-126 system # ls dcos.target.wants/ dcos-adminrouter-reload.service dcos-exhibitor.service dcos-marathon.service dcos-adminrouter-reload.timer dcos-gen-resolvconf.service dcos-mesos-dns.service dcos-adminrouter.service dcos-gen-resolvconf.timer dcos-mesos-master.service dcos-cluster-id.service dcos-history-service.service dcos-minuteman.service dcos-cosmos.service dcos-signal.service dcos-ddt.service dcos-logrotate.service dcos-signal.timer dcos-epmd.service dcos-logrotate.timer dcos-spartan.service Admin Router ServiceAdmin Router Service Admin Router is our core internal load balancer. Admin Router is a customized NGINX which allows us to proxy all the internal services on :80. Without Admin Router being up, you could not access the DC/OS UI. Admin Router is a core component of the DC/OS ecosystem. Cluster ID ServiceCluster ID Service The cluster-id service allows us to generate a UUID for each cluster. We use this ID to track cluster health remotely (if enabled). This remote tracking allows our support team to better assist our customers. The cluster-id service runs an internal tool called zk-value-consensus which uses our internal ZooKeeper to generate a UUID that all the masters agree on. Once an agreement is reached, the ID is written to disk at /var/lib/dcos/cluster-id. We write it to /var/lib/dcos so the ID is ensured to persist cluster upgrades without changing. Cosmos ServiceCosmos Service The Cosmos service is our internal packaging API service. You access this service everytime you run dcos package install... from the CLI. This API allows us to deploy DC/OS packages from the DC/OS universe to your DC/OS cluster. Diagnostics (DDT) ServiceDiagnostics (DDT) Service The diagnostics service (also known as 3DT or dcos-ddt.service, no relationship to the pesticide!) is our diagnostics utility for DC/OS systemd components. This service runs on every host, tracking the internal state of the systemd unit. The service runs in two modes, with or without the -pull argument. If running on a master host, it executes /opt/mesosphere/bin/3dt -pull which queries Mesos-DNS for a list of known masters in the cluster, then queries a master (usually itself) :5050/statesummary and gets a list of agents. From this complete list of cluster hosts, it queries all 3DT health endpoints ( :1050/system/health/v1/health). This endpoint returns health state for the DC/OS systemd units on that host. The master 3DT processes, along with doing this aggregation also expose /system/health/v1/ endpoints to feed this data by unit or node IP to the DC/OS user interface. Distributed DNS ProxyDistributed DNS Proxy Distributed DNS Proxy is our internal DNS dispatcher. It conforms to RFC5625 as a DNS forwarder for DC/OS cluster services. Downloads ServiceDownloads Service This component ( dcos-download.service) downloads the DC/OS installation tarball on first boot. Erlang Port Mapper (EPMD) ServiceErlang Port Mapper (EPMD) Service The erlang port mapper is designed to support our internal layer 4 load balancer we call minuteman. 
Exhibitor ServiceExhibitor Service Exhibitor is a project originally from Netflix that allows us to manage and automate the deployment of ZooKeeper. Generate resolv.conf (gen-resolvconf) ServiceGenerate resolv.conf (gen-resolvconf) Service The gen-resolvconf service allows us to dynamically provision /etc/resolv.conf for your cluster hosts. History ServiceHistory Service The history service provides a simple service for storing stateful information about your DC/OS cluster. This data is stored on disk for 24 hours. Along with storing this data, the history service also exposes a HTTP API for the DC/OS user interface to query. All DC/OS cluster stats which involve memory, CPU and disk usage are driven by this service (including the donuts!). Logrotate ServiceLogrotate Service This service does what you think it does: ensures DC/OS services don’t blow up cluster hosts with too much log data on disk. Marathon ServiceMarathon Service Marathon shouldn’t need any introduction, it’s the distributed init system for the DC/OS cluster. We run an internal Marathon for packages and other DC/OS services. Mesos-DNS ServiceMesos-DNS Service Mesos-DNS is the internal DNS service for the DC/OS cluster. Mesos-DNS provides the namespace $service.mesos to all cluster hosts. For example, you can login to your leading Mesos master with ssh leader.mesos. Minuteman ServiceMinuteman Service This is our internal layer 4 load balancer. Signal ServiceSignal Service The DC/OS signal service queries the diagnostics service /system/health/v1/report endpoint on the leading master and sends this data to SegmentIO for use in tracking metrics and customer support.
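The health endpoint mentioned for the diagnostics (3DT) service can be polled directly. As a rough sketch (the node addresses are placeholders, and the exact JSON layout of the response is not documented here), this Python snippet queries each agent's :1050/system/health/v1/health endpoint and prints whatever comes back.

```python
import requests

# Placeholder agent IPs -- substitute the private addresses of your cluster nodes.
agents = ["10.0.6.126", "10.0.6.127"]

for ip in agents:
    url = f"http://{ip}:1050/system/health/v1/health"  # endpoint named in the component overview
    try:
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
        # Response schema is not shown on this page; inspect it on your own cluster.
        print(ip, "->", resp.json())
    except requests.RequestException as exc:
        print(ip, "-> unreachable:", exc)
```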
https://docs.mesosphere.com/1.7/overview/components/
2018-11-13T05:43:19
CC-MAIN-2018-47
1542039741219.9
[]
docs.mesosphere.com
Most of the non-English language files are currently machine translated. You can change the interface language under Settings -> Application Settings -> Locale. We are currently looking for translators to fix any errors in the translations. If you are a native speaker of the language and want to help out, open a support ticket through our client area. Make sure when running your first scan that automatic quarantine is disabled so that you can work out any false positives. Go to the Scan tab, then select all servers from the domain dropdown. View the results under the Reports tab. Click on malware hits to view what was detected. Further actions are available under the Actions tab when clicking on the scan report. The Web Risk API is free for up to 100,000 API calls per month. Create a new project, unless you already have one created. Add a project name and click on the Create button (wait a few moments after you click the Create button to load your project, otherwise you can manually select it). Open API Manager. Find the Web Risk API, access it and click on the Enable button. Select Credentials from the left panel. Open the Create credentials drop-down, then choose API key. Copy the key into the Sentinel Anti-malware -> Settings -> Domain Monitoring -> Web Risk API key field. Go to Sentinel Anti-malware -> Domains and select all your domains, then press the "check" button to check if any of your domains are blacklisted. More information can be found at: You can choose who gets domain monitoring alerts under Tools & Settings -> Notifications -> Sentinel Anti-malware domain monitoring alert
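Sentinel performs the blacklist lookups for you once the key is saved, but you can sanity-check the key with a direct call. The sketch below is assumption-heavy: it targets the Web Risk v1 uris.search method as I understand it, so confirm the endpoint, parameter names, and quota details against Google's current Web Risk documentation before relying on it.

```python
import requests

API_KEY = "YOUR-WEB-RISK-API-KEY"   # the key created in the Credentials step above
# Assumed endpoint; verify in Google's Web Risk documentation.
URL = "https://webrisk.googleapis.com/v1/uris:search"

params = {
    "key": API_KEY,
    # Google's Safe Browsing test page, commonly used to verify lookups.
    "uri": "http://testsafebrowsing.appspot.com/s/malware.html",
    "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
}

resp = requests.get(URL, params=params, timeout=10)
resp.raise_for_status()
data = resp.json()
# An empty object generally means no threat was found; a "threat" entry means the URI is flagged.
print(data if data else "No threats reported for this URI.")
```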
https://docs.danami.com/sentinel/basics/getting-started
2020-01-17T18:45:09
CC-MAIN-2020-05
1579250590107.3
[]
docs.danami.com
The name of this filter should be "Radial". It creates a blur in all directions. You can adjust the center and the blurring factor interactively: press the Alt key to vary the center only, and press the Ctrl key to vary the blurring factor only.
https://docs.gimp.org/2.10/nl/gimp-filter-motion-blur-zoom.html
2020-01-17T19:10:14
CC-MAIN-2020-05
1579250590107.3
[]
docs.gimp.org
On this page we have developed a series of “How to” video tutorials which will help to guide you through using the NBN Atlas. We will be adding to these over the coming months, so if there is something that you think would be useful for us to include, please do let us know. Please also see our Step by Step downloadable guides. Introducing the NBN and NBN Atlas Setting Up & Using Your NBN Atlas Account NBN Atlas – Finding Basic Species Information NBN Atlas – Viewing Species Information on a Map NBN Atlas – Species Records NBN Atlas – Searching by Location NBN Atlas – Using the Advanced Search NBN Atlas – How to Download Data NBN Atlas – How to Upload Data NBN Atlas – Licences NBN Atlas – Species Lists NBN Atlas – Record Cleaner NBN Atlas Spatial Portal – Basic Functions NBN Atlas Spatial Portal – Exporting Data NBN Atlas Spatial Portal – In-Out Reports NBN Atlas Spatial Portal – Point Comparisons NBN Atlas Spatial Portal – Area of Occupancy (AOO) & Extent of Occurence (EOO) NBN Atlas – Finding Species Literature using the Biodiversity Heritage Library NBN Atlas – Finding Species Sequence Data using NCBI Genbank Using the QGIS NBN Atlas Tool to create WMS distribution maps An introduction to the NBN and NBN Atlas – presentation made to BirdFair 2017 This video is a good introduction to the functions of the NBN Atlas. As it was created as a presentation, there is no audio, but captions are inserted to explain the actions being carried out. Introduction of the NBN Atlas as a replacement for the NBN Gateway This is a webinar, which was created for CIEEM, but which will be of interest to a wide audience. We recommend that you start viewing the webinar at 6 minutes 10 seconds, which is when the NBN section commences.
https://docs.nbnatlas.org/how-to-video-tutorials/
2020-01-17T19:42:44
CC-MAIN-2020-05
1579250590107.3
[]
docs.nbnatlas.org
When the barking of their beloved dog leads to an eviction notice from their tiny Los Angeles apartment, John and Molly Chester make a choice. The film is an inspiring ode to uncompromising idealism and a beautiful homage to the natural world. Cinema Screenings Throughout the Fall and Winter, cinema screenings will be announced. Be sure you are on the Docs For Schools email list for updates. Fee: All screenings are free. Grade levels: Programs are available to Grade 7-12 classes. We allow teachers to determine what content is appropriate to share with their students based on their own teaching methods, their community, and the maturity of their students. Films with mature content will be labeled as such, noting that they are recommended for high school only. Please view any trailers and links posted below each film summary for additional information. Contact: Lesley Sparks at [email protected] for questions or to be added to the DFS email list. Hot Docs Ted Rogers Cinema Location: 506 Bloor Street West | Bathurst Subway Past Events Wed, Oct 9, 10 AM // THE BIGGEST LITTLE FARM - USA - 2018 - 91 min Director: John Chester Wed, Oct 16, 10 AM // MAIDEN - USA - 2018 - 97 min Director: Wed, Nov 13, 10 AM // SPECIAL EVENT - FULLY BOOKED - USA - 2017 - 32 min Director: Linda Booker Keeping in line with Docs For Schools' Fall environmental content, we are pleased to offer this special event. A screening of the short doc Straws, along with segments from the CBC show Marketplace, will showcase the prevalence of plastics in our daily lives and the realities of what is recycled in Toronto and where our waste goes. This will be followed by a panel discussion hosted by David Common, including an opportunity for Q&A with students. - David Common: CBC Senior Correspondent and Host of Marketplace - Jo-Anne St. Godard: Executive Director, Recycling Council of Ontario - Matt Keliher, General Manager: Solid Waste Management, City of Toronto Lead Partner Founding Partner Exclusive Education Partner Supported By Additional support is provided by The S. M. Blair Family Foundation, Patrick and Barbara Keenan Foundation, Pitblado Family Foundation, Flavelle Family Foundation, The McLean Foundation and through contributions by individual donors.
https://www.hotdocs.ca/p/docs-for-schools-monthly
2020-01-17T19:22:06
CC-MAIN-2020-05
1579250590107.3
[]
www.hotdocs.ca
Declares members implemented by the UI elements that can be made invisible or visible by a conditional appearance rule. Namespace: DevExpress.ExpressApp.Editors Assembly: DevExpress.ExpressApp.v19.2.dll public interface IAppearanceVisibility : IAppearanceBase Public Interface IAppearanceVisibility Inherits IAppearanceBase The Conditional Appearance module allows you to make different UI elements visible/invisible when they are displayed in certain conditions. To allow the AppearanceController to make UI elements visible/invisible, these elements should implement the IAppearanceVisibility interface. This interface exposes the IAppearanceVisibility.Visibility property, to get or set the visibility state, and the IAppearanceVisibility.ResetVisibility method, to reset the visibility state to the required initial value. This interface is already implemented by base classes representing built-in XAF Detail View Items, Layout Items, Action Appearance Items and auxiliary adapters that provide access to List Editor cells. Implement the IAppearanceVisibility interface in a custom class representing a UI element so that this element can also be made visible/invisible by the AppearanceController.
https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Editors.IAppearanceVisibility
2020-01-17T18:24:43
CC-MAIN-2020-05
1579250590107.3
[]
docs.devexpress.com
You can use System Manager to maintain interface groups: to remove ports from an interface group, and to modify the usage mode and load distribution pattern of the ports in an interface group. Modifying the MTU size of a VLAN: If you want to modify the MTU size of a VLAN interface that is not part of a broadcast domain, you can use System Manager to change the size. Deleting VLANs: You can use System Manager to delete VLANs that are no longer needed. Ports and adapters: Ports are either physical ports or virtualized ports, such as interface groups and VLANs. Parent topic: Managing the network Related information Network and LIF management ONTAP concepts Part number: 215-13784_2019-08_en-us August 2019 Updated for ONTAP 9.5
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-950/GUID-1C60805E-D475-4FB5-B4A6-7CFB19EF5DD4.html
2020-01-17T19:18:48
CC-MAIN-2020-05
1579250590107.3
[]
docs.netapp.com
According to Zeit: "Now is a global deployment network built on top of all existing cloud providers. It makes teams productive by removing servers and configuration, makes serverless application deployment easy." We strongly recommend this service as it is serverless, cheap, includes a CDN, and is really easy to set up. It also supports the cache technique stale-while-revalidate (they name it Serverless Pre-Rendering), a powerful way to improve your website speed. First of all, you have to develop your project following the steps in the Quick start guide. This deployment should be done once you have finished development and want to release it to production. These are the instructions to deploy Frontity on Now, once you are ready to deploy your project: Create this now.json file with your preferred text editor, change the alias URL and save it in your Frontity project. {"alias": "","version": 2,"builds": [{"src": "package.json","use": "@frontity/now"}]} Log in from the terminal: > npx now login Deploy Frontity using this command: > npx now Now will assign you a domain (something like your-site.now.sh) that works exactly like your real domain and allows you to check and make sure that everything is OK. You need to add a CNAME record pointing to alias.zeit.co in your domain DNS settings. If you don't know how to do this, contact your domain provider (GoDaddy, CloudFlare, etc.). Then, deploy Frontity using this command: > npx now --target production This will create a deployment and assign it to your real site URL. Still have questions? Ask the community! We are here to help 😊
https://docs.frontity.org/installation-and-deploy/deploy-on-now
2020-01-17T18:26:04
CC-MAIN-2020-05
1579250590107.3
[]
docs.frontity.org
You can set up a network by enabling an IP address range. The IP address range enables you to enter IP addresses that are in the same netmask range or in different netmask ranges. Enter the AutoSupport message details and event notifications on the Support page to continue with the cluster setup.
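As an aside (this is not part of ONTAP or System Manager), one quick way to sanity-check whether the addresses you plan to enter share a netmask range is Python's standard ipaddress module; the subnet and addresses below are made-up example values.

import ipaddress

# Hypothetical management subnet; replace with your own values.
subnet = ipaddress.ip_network("192.168.10.0/24")
candidates = ["192.168.10.21", "192.168.10.22", "192.168.11.5"]

for addr in candidates:
    in_range = ipaddress.ip_address(addr) in subnet
    print(f"{addr}: {'same' if in_range else 'different'} netmask range as {subnet}")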
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-wfg-900/GUID-79782907-2255-492D-9963-663913104E60.html
2020-01-17T18:43:54
CC-MAIN-2020-05
1579250590107.3
[]
docs.netapp.com
Upgrading Plesk Using Administrator GUI
This is the quickest and easiest way to upgrade Plesk. Follow the steps below, and you will be running the latest Plesk version in no time:
Click Tools & Settings, then click Updates and Upgrades. You can see the currently installed Plesk version, as well as the latest available Plesk version, under “Products summary”.
(Optional) Change the upgrade settings if needed. Note that you cannot upgrade to Plesk Onyx if the OS on the server is 32-bit. See the list of all supported OSes here.
Selecting a different release tier
By default, this method only allows you to upgrade to the latest stable Plesk release (for an explanation about the different Plesk release tiers, click here). If you want to upgrade to an early adopter release, you need to change the desired release tier as explained below. This method cannot be used to upgrade to a preview (or, as it is called, testing) release. To do so, read Upgrading via the Command Line or Upgrading via the Web GUI instead.
To change the release tier, click Tools & Settings, then click Update and Upgrade Settings. Scroll down to the “Plesk release tiers” section and select the “Early adopter release” option to be able to upgrade to the early adopter release via the Administrator GUI.
https://docs.plesk.com/en-US/onyx/deployment-guide/plesk-installation-and-upgrade-on-single-server/upgrading-plesk-using-administrator-gui.76756/
2020-01-17T18:44:16
CC-MAIN-2020-05
1579250590107.3
[array(['/en-US/onyx/deployment-guide/images/77240.png', 'image-77240.png'], dtype=object) ]
docs.plesk.com
Custom. Custom code should be copied into your child theme’s functions.php file. How are checkout fields loaded to WooCommerce? ↑ Back to top The billing and shipping fields for checkout pull from the countries class ( class-wc-countries.php) and the get_address_fields function. This allows WooCommerce to enable/disable fields based on the user’s location. Before returning these fields, WooCommerce puts the fields through a filter. This allows them to be edited by third-party plugins, themes and your own custom code. Billing: $address_fields = apply_filters('woocommerce_billing_fields', $address_fields); Shipping: $address_fields = apply_filters('woocommerce_shipping_fields', $address_fields); The checkout class adds the loaded fields to its ‘checkout_fields’ array, as well as adding a few other fields like “order notes”. ') ) ); This array is also passed through a filter: $this->checkout_fields = apply_filters('woocommerce_checkout_fields', $this->checkout_fields); That means you have full control over checkout fields – you only need to know how to access them. Overriding core fields ↑ Back to top Hooking into the woocommerce_checkout_fields filter lets you override any field. As an example, let’s change the placeholder on the order_comments fields. Currently, it’s set to: _x('Notes about your order, e.g. special notes for delivery.', 'placeholder', 'woocommerce') We can change this by adding a function to our theme functions.php file: // Hook in add_filter( 'woocommerce_checkout_fields' , 'custom_override_checkout_fields' ); // Our hooked in function - $fields is passed via the filter! function custom_override_checkout_fields( $fields ) { $fields['order']['order_comments']['placeholder'] = 'My new placeholder'; return $fields; } You can override other parts, such as labels: // Hook in add_filter( 'woocommerce_checkout_fields' , 'custom_override_checkout_fields' ); // Our hooked in function - $fields is passed via the filter! function custom_override_checkout_fields( $fields ) { $fields['order']['order_comments']['placeholder'] = 'My new placeholder'; $fields['order']['order_comments']['label'] = 'My new label'; return $fields; } Or remove fields: // Hook in add_filter( 'woocommerce_checkout_fields' , 'custom_override_checkout_fields' ); // Our hooked in function - $fields is passed via the filter! function custom_override_checkout_fields( $fields ) { unset($fields['order']['order_comments']); return $fields; } Here’s a full Each field contains an array of properties: type– type of field (text, textarea, password, select) label– label for the input field placeholder– placeholder for the input class– class for the input required– true or false, whether or not the field is require clear– true or false, applies a clear fix to the field/label label_class– class for the label element options– for select boxes, array of options (key => value pairs) In specific cases you need to use the woocommerce_default_address_fields filter. This filter is applied to all billing and shipping default fields: country company address_1 address_2 city state postcode For example, to make the address_1 field optional: // Hook in add_filter( 'woocommerce_default_address_fields' , 'custom_override_default_address_fields' ); // Our hooked in function - $address_fields is passed via the filter! 
function custom_override_default_address_fields( $address_fields ) { $address_fields['address_1']['required'] = false; return $address_fields; } Defining select options ↑ Back to top If you are adding a field with type ‘select’, as stated above you would define key/value pairs. For example: $fields['billing']['your_field']['options'] = array( 'option_1' => 'Option 1 text', 'option_2' => 'Option 2 text' ); Priority ↑ Back to top Priority in regards to PHP code helps establish when a bit of code — called a function — runs in relation to a page load. It is set inside of each function and is useful when overriding existing code for custom display. Code with a higher number set as the priority will run after code with a lower number, meaning code with a priority of 20 will run after code with 10 priority. The priority argument is set during the add_action function, after you establish which hook you’re connecting to and what the name of your custom function will be. In the example below, blue text is the name of the hook we’re modifying, green text is the name of our custom function, and red is the priority we set. Examples ↑ Back to top In this example, the code is set to redirect the “Return to Shop” button found in the cart to a category that lists products for sale at. There, we can see the priority is set to 10. This is the typical default for WooCommerce functions and scripts, so that may not be sufficient to override that button’s functionality. Instead we can change the priority to any number greater than 10. While 11 would work, best practice dictates we use increments of ten, so 20, 30, and so on. With priority, we can have two functions that are acting on the same hook. Normally this would cause a variety of problems, but since we’ve established one has a higher priority than the other, our site will only load the appropriate function, and we will be taken to the Specials page as intended with the code below. Adding custom shipping and billing fields ↑ Back to top Adding fields is done in a similar way to overriding fields. For example, let’s add a new field to shipping fields – shipping_phone: What do we do with the new field? Nothing. Because we defined the field in the checkout_fields array, the field is automatically processed and saved to the order post meta (in this case, _shipping_phone). If you want to add validation rules, see the checkout class where there are additional hooks you can use. Adding a custom special field ↑ Back to top To add a custom field is similar. Let’s add a new field to checkout, after the order notes, by hooking into the following: This gives us: Next we need to validate the field when the checkout form is posted. For this example the field is required and not optional: A checkout error is displayed if the field is blank: Finally, let’s save the new field to order custom fields using the following code: The field is now saved to the order. If you wish to display the custom field value on the admin order edition page, you can add this code: This is the result: Example: Make phone number not required ↑ Back to top Adding custom fields to emails ↑ Back to top To add a custom field value to WooCommerce emails — a completed order email, for example — use the following snippet:
https://docs.woocommerce.com/document/tutorial-customising-checkout-fields-using-actions-and-filters/
2020-01-17T18:34:40
CC-MAIN-2020-05
1579250590107.3
[array(['https://docs.woocommerce.com/wp-content/uploads/2012/04/priority-markup.png', None], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2012/04/WooCommerce-Codex-Shipping-Field-Hook.png', "It's alive!"], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2012/04/WooCommerce-Codex-Checkout-Field-Hook.png', 'WooCommerce Codex - Checkout Field Hook'], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2012/04/WooCommerce-Codex-Checkout-Field-Notice.png', 'WooCommerce Codex - Checkout Field Notice'], dtype=object) ]
docs.woocommerce.com
JWST ETC NIRISS Target Acquisition The JWST Exposure Time Calculator (ETC) has a target acquisition (TA) mode for the Near infrared Imager and Slitless Spectrograph (NIRISS) which allows the user to estimate the exposure time required to obtain sufficient signal to noise for the TA source to achieve the desired centroiding accuracy. On this page The JWST Near Infrared Imaging and Slitless Spectroscopy (NIRISS) instrument uses Target Acquisition (TA) for two of its observing modes, Single Object Slitless Spectroscopy (SOSS) and Aperture Masking Interferometry (AMI). The NIRISS SOSS mode enables slitless spectroscopy of a bright target and is a key mode for exoplanet transit spectroscopy, while AMI enables high contrast imaging to identify faint companions close to bright targets. The NIRISS ETC TA mode can be used to estimate the exposure times to obtain the required signal-to-noise ratio (SNR) for the NIRISS TA by using options available under instrument and detector setups. The recommended SNR for the NIRISS Target Acquisition is an integrated SNR = 30 or higher to obtain a centroid accuracy of 0.15 pixels for the TA source. The centroiding accuracy improves to about ≤0.10 pixel at SNR = 50 and to about ≤0.05 pixel at SNR = 100. How to create a TA calculation The steps involved in creating a TA calculation are: (1) define the TA scene with a source having appropriate spectral type and magnitude on the Scenes and Sources page, and (2) Specify the instrument and detector setup on the Calculations page. Defining the TA Scene and Source The scene definition and source definition for the TA calculations are defined along the same lines as for the other observing modes. The default ETC TA scene has a single TA source located in the center of the scene. The spectral type for the source can be selected from the various options available for the continuum, and be normalized as required. Creating a TA calculation The Target Acquisition is one of the mode options available for each instrument. To initialize a NIRISS TA calculation select Target Acquisition from the NIRISS instrument drop-down menu. This default calculation uses the default scene with a single point source with flat continuum. If the user wishes to change the default to a pre-defined TA source, use the Scene tab on the Configuration pane to select the scene that contains the pre-defined TA source. What's supported The ETC supports TA for the following NIRISS observing modes: Single Object Slitless Spectroscopy (SOSS) and Aperture Masking Interferometry (AMI). Instrument Setup The Instrument Setup has options which are common to SOSS and AMI because the TA for both these modes are operationally similar except that the 64 × 64 pixels subarrays are located in different regions on the detector. The SOSS or AMI Faint option performs a normal imaging calculation using the imager or CLEARP aperture, while the SOSS or AMI Bright will use the NRM aperture to reduce the flux from very bright targets. See the NIRISS Target Acquisition article for information on how the "bright" and "faint" are defined for NIRISS TA. The only filter choice available is F480M which is the filter used for SOSS and AMI TA. Detector Setup The detector subarray setup for NIRISS TA uses SOSS or AMI TA sub-array which is 64 × 64 pixels. Both SOSS and AMI TA observations use the same size for the subarray. While the two TA subarrays are on different locations on the NIRISS detector, the ETC does not take detector location into account. 
The TA region for SOSS is located at 1923 ≤ X ≤ 1986 and 1167 ≤ Y ≤ 1230 on the detector (in the science coordinate frame), while it is at 1054 ≤ X ≤ 1117 and 81 ≤ Y ≤ 144 for AMI. Readout Pattern The readout pattern used for NIRISS TA in the ETC is NISRAPID when using the SOSS or AMI Bright mode. Both the NISRAPID and NIS readout patterns are available when using the SOSS or AMI Faint mode. The subarray (SOSS or AMI TA) is a fixed value and no choices are available to the user. The number of groups available are from 3 to 19 and allow only odd numbers to account for the weighting scheme used by the TA observing program scripts. The minimum number of groups is 3. The TA mode allows only one exposure with one integration and cannot be changed by the user. Strategy The NIRISS TA mode only offers Target Acquisition as the option for strategy. The signal to noise is computed within a region of size 5 × 5 pixels. There is no background subtraction that is currently implemented for the TA strategy. It is assumed that the SNR is dominated by the photon noise from the bright target and the contribution from the sky is negligible. If the scene has multiple sources, the user should select the TA source from the Aperture centered on source drop-down menu in the Strategy tab. Outputs The exposure specification for the TA should be chosen to obtain at least the minimum required SNR = 30 to achieve a centroiding accuracy of ≤0.15 pixel for the TA source. The ETC will issue a "TA may fail" warning if the SNR is below the required value. However, increasing the exposure time for TA (by increasing the value of Groups) will infringe only slightly on the time needed for the TA procedure, and this should be considered while planning observations for which accurate centroiding is deemed crucial. For example, the centroiding accuracy improves to about ≤0.10 pixel at SNR = 50 and to about ≤0.05 pixel at SNR = 100. References Goudfrooij, P. 2017, JWST-STScI-005934 NIRISS Target Acquisition: the sensitivity of centroid accuracy to the presence of saturated pixels
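As a planning aid, the SNR-to-centroid-accuracy guidance quoted above (SNR ≥ 30 → ≤0.15 pixel, SNR ≥ 50 → ≤0.10 pixel, SNR ≥ 100 → ≤0.05 pixel) can be captured in a small helper when scripting exposure checks. The sketch below is purely illustrative and is not part of the ETC or its underlying software; the function name is made up.

def niriss_ta_centroid_accuracy(snr):
    """Rough expected centroid accuracy (pixels) for a given integrated SNR,
    based on the thresholds quoted in this article (illustrative only)."""
    if snr >= 100:
        return 0.05
    if snr >= 50:
        return 0.10
    if snr >= 30:
        return 0.15
    return None  # below the recommended minimum; TA may fail

for snr in (25, 30, 60, 120):
    acc = niriss_ta_centroid_accuracy(snr)
    status = f"~{acc} pixel centroid accuracy" if acc else "TA may fail (SNR < 30)"
    print(f"SNR = {snr}: {status}")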
https://jwst-docs.stsci.edu/jwst-exposure-time-calculator-overview/jwst-etc-calculations-page-overview/jwst-etc-target-acquisition/jwst-etc-niriss-target-acquisition
2020-01-17T20:03:51
CC-MAIN-2020-05
1579250590107.3
[]
jwst-docs.stsci.edu
If you have an ASM database instance in an NFS environment, mounting of ASM log backups as part of a recovery operation might fail if the appropriate ASM disk path is not defined in the asm_diskstring parameter. The failure is reported with errors similar to the following:
ASM-00015: Mounting of ASM Disk Group <ASM_DISKGROUP_NAME> failed: ORACLE-10003: Error executing SQL "ALTER DISKGROUP <ASM_DISKGROUP_NAME> MOUNT RESTRICTED" against Oracle database +ASM:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "<ASM_DISKGROUP_NAME>" cannot be mounted
ORA-15040: diskgroup is incomplete
You should add the ASM disk path /var/opt/snapcenter/scu/clones/*/* to the existing path defined in the asm_diskstring parameter.
http://docs.netapp.com/ocsc-41/topic/com.netapp.doc.ocsc-dpg-oracle/GUID-CAB78371-767A-4A62-87A3-1E850CC1DA82.html
2020-01-17T18:54:01
CC-MAIN-2020-05
1579250590107.3
[]
docs.netapp.com
Occurs after the List Editor's control customizations have been saved to the Application Model.
Namespace: DevExpress.ExpressApp.Editors
Assembly: DevExpress.ExpressApp.v19.2.dll
C#: public event EventHandler<EventArgs> ModelSaved
VB: Public Event ModelSaved As EventHandler(Of EventArgs)
The ModelSaved event handler receives an argument of the EventArgs type. Handle this event to save custom information on the List Editor control to the appropriate Application Model node.
https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Editors.ListEditor.ModelSaved
2020-01-17T18:58:27
CC-MAIN-2020-05
1579250590107.3
[]
docs.devexpress.com
Want to increase the likelihood that a result comes back? Change the search radius settings to allow all areas to be searched. See Search radius for more information. Want a location to appear based on the person's location? See Location sensor, part of the Power add-on.
https://docs.storelocatorplus.com/no-results-found-label/
2020-01-17T19:22:58
CC-MAIN-2020-05
1579250590107.3
[array(['https://i1.wp.com/docs.storelocatorplus.com/wp-content/uploads/2016/10/2018-front-end-no-results-text.jpg?resize=300%2C198&ssl=1', None], dtype=object) array(['https://i0.wp.com/docs.storelocatorplus.com/wp-content/uploads/2016/10/2018-04no-results-label-admin-setting.jpg?resize=300%2C170&ssl=1', None], dtype=object) ]
docs.storelocatorplus.com
Data Discovery
The MAST Discovery Portal is the primary web interface for discovering, visualizing, assessing, and retrieving calibrated data products from the JWST calibration pipeline. The portal also serves the associated calibration, guide-star, and engineering data products.
On this page
Accessing JWST data
There are a variety of ways for researchers to access JWST data of interest, including the MAST Discovery Portal (hereafter, Portal), which is the primary web interface for discovering, visualizing, assessing, and retrieving archived data. The portal also provides access to ancillary and engineering data related to the observations. Users may evaluate contemporaneous calibration reference data that were used to remove the instrumental signature from the science data products. They may also discover data through a programmatic (software-based) interface such as the MAST Applications Programming Interface (API), and through various community tools. The Portal also provides a subscription service that allows users to be notified as data from new JWST observations become available through MAST.
Data product types
The JWST Data Management System (DMS) produces many products for each JWST observation, including the science files produced by the data reduction pipeline. The exact type and number of products depend upon the instrument, its configuration, and operating mode. Consult the Data Products article for a detailed description of each science product and the concomitant data. Most of the science data files are images or tables in FITS format (Pence et al. 2010), while others are in structured or unstructured ASCII. Table 1 contains a short summary of the data product types that may be included with each data set along with the semantic content of the various data products, including some that are produced outside the science data reduction pipeline.
Table 1. Summary of data product types that may be included with each JWST observation
Minimum recommended data products
Of the many different data products produced by the calibration pipeline, a subset has been identified as essential for extracting the intended science from the data. These are termed the "minimum recommended products" (MRP). The selection of data products that are included in this set depends upon the instrument used to obtain the data, and its configuration and operating mode. Generally, products in the MRP include the lowest-level calibrated science product (i.e., those for which the instrumental signature has been removed) along with the associated data products, but exclude calibration reference files, preview images, and guide-star data products.
MRP Checkbox: The MRP checkbox in the Download Manager must be de-selected in order to retrieve raw or intermediate-level data products, and ancillary products.
MAST Discovery Portal
MAST implements various protocols of the Virtual Observatory (VO), including those for image, table, and spectral data access. As part of these protocols, MAST core services operate using the Common Archive Observation Model (CAOM). As a result, MAST data can be searched and retrieved by VO-aware applications.
Archived data
The MAST Portal offers great flexibility in customizing queries to identify datasets from several hosted NASA missions to explore and retrieve. See the Data Exploration article for a detailed example of a search for JWST science data using the Portal.
The following tutorials may be helpful for new Portal users: Engineering data Data associated with the thousands of engineering telemetry points on JWST are stored in the Engineering Database. The data take the form of timeseries, and they may be searched with the AUI by means of an identifier, or mnemonic. Most of them can be queried, visualized, and downloaded by general users through a MAST Portal interface. Users may access the engineering database via a direct link to the query interface or, after querying the Portal for science data, through a link on each row of a search results table. See the JWST Engineering Database Browser for details. Engineering data are obtained contemporaneously, but are not packaged with, the science data. Rather, EDB data are searched by mnemonic and a time range which spans the interval of the science exposure(s) of interest. The data files are provided in CSV format, which can be read by a wide variety of software applications, including MS Excel. The following tutorial may help first-time users of the EDB interface: Anticipated data At any given time some observations in a program may have been executed, archived, and become available to the community; some archived observations may temporarily be restricted to those with exclusive access; while still other observations may remain to be obtained. Investigating teams and the broader community each have an interest in data availability. In order to encourage the greatest possible use of JWST data in MAST, a subscription service will notify registered and subscribed users when one of the following observation-related events occur: - new observations are archived, - archived data have been reprocessed, or - restricted-access data become available to the public. Users may tune the notifications by mission, program ID, event type, and science product. Users may establish or cancel their subscription through the MAST Discovery Portal, change the media and frequency of notifications, and change the selection criteria for notifications. See the following tutorial: Planned Observations MAST provides the capability to compare the position of one or more user targets against extant and planned observations to identify potentially duplicate observations. Duplicated observations are, in general, not allowed so it is in the proposer's best interest to perform this check prior to submitting a proposal for review. Read the article Identifying Potential Duplicate Observations to see how to query planned observations. Virtual observatory tools Many community software applications are capable of accessing remote data using protocols developed for the Virtual Observatory. You may have used them before without being fully aware of how such data were obtained. Table 2 provides an incomplete list of community, VO-aware applications for visualizing and exploring archived astronomical images, spectra, and catalogs: Table 2. An incomplete list of community, VO-aware applications for visualizing and exploring archived astronomical data References Pence, W. D., et al. 2010, A&A, 524, A42 Definition of the Flexible Image Transport System (FITS), version 3.0
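For readers who prefer the programmatic route mentioned above, the following Python sketch illustrates one way to query and download JWST products through the community astroquery.mast interface. It is only an illustration, not the official MAST API documentation; the proposal ID is a placeholder, and the mrp_only flag is used here as the scripted counterpart of the MRP checkbox described above.

from astroquery.mast import Observations

# Placeholder program ID used purely for illustration.
obs = Observations.query_criteria(obs_collection="JWST", proposal_id="1073")
products = Observations.get_product_list(obs)

# mrp_only=True restricts the download to the minimum recommended products.
manifest = Observations.download_products(products, mrp_only=True)
print(manifest)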
https://jwst-docs.stsci.edu/obtaining-data/data-discovery
2020-01-17T18:27:04
CC-MAIN-2020-05
1579250590107.3
[]
jwst-docs.stsci.edu
Flutter Application¶ Flutter is Google’s UI toolkit for building natively compiled applications for mobile, web, and desktop from a single codebase. In page you will learn how to pack a Flutter desktop project for Linux using the AppImage format. For the purpose we will be using a simple hello world application which is available at: Preparing your system¶ Building the flutter app¶ We will use the linux desktop target of Flutter to generate our application binaries. This target is currently only available in the beta channel therefore we need to enable it. Once it’s enable we can generate the binaries. # enable desktop builds flutter channel beta flutter upgrade flutter config --enable-linux-desktop # build desktop release flutter build linux Our application binaries should be somewhere inside the build dir, usually build/linux/x64/release/bundle. We will copy this this folder to our work dir as AppDir: cp build/linux/x64/release/bundle $PWD/AppDir Generating the recipe¶ We will use the –generate method to draft an initial recipe for our project. In the process you’ll be prompted with a set of questions that will help the tool to process your project. Notice that the application must run in order to properly analyse it’s runtime dependencies. appimage-builder --generate Basic Information : ? ID [Eg: com.example.app] : com.example.flutter_hello ? Application Name : Flutter Hello ? Icon : utilities-terminal ? Version : latest ? Executable path relative to AppDir [usr/bin/app] : hello_flutter ? Arguments [Default: $@] : $@ ? Update Information [Default: guess] : guess ? Architecture : amd64 Generating the AppImage¶ At this point we should have a working recipe that can be used to generate an AppImage from our flutter project. To do so execute appimage-builder and the packaging process will start. After deploying the runtime dependencies to the AppDir and configuring then the tool will proceed to test the application according to test cases defined in the recipe. This will give us the certainty that our app runs in the target system. It’s up to you to manually verify that all features work as expected. Once the tools completes its execution you should find an AppImage file in your current work dir. appimage-builder --recipe AppImageBuilder.yml Polishing the recipe¶ Hooray! You should have now an AppImage that can be shipped to any GLibC based GNU/Linux distribution. But there is some extra work to do. The recipe we have is made to freeze the current runtime which include certain parts of your system (such as theme modules) that may not be required in the final bundle. Therefore we will proceed to remove them from the recipe. Grab your favourite text editor and open the AppImageBuilder.yml file. Deploy binaries from the script section¶ Every time we run appimage-builder we need to first copy the application binaries into the AppDir. This step can be made part of the recipe as using the script section as follows: script: - rm -rf AppDir | true - cp -r build/linux/x64/release/bundle $APPDIR Notice the usage of the APPDIR environment variable, this is exported by appimage-builder at runtime. Refine the packages include list¶ In the apt > include section you may find a list of packages. Those packages that are tightly related to your desktop environment (in my case KDE) or to some external system service can be removed in order to save some space but you will have to always validate the resulting bundle using the tests cases. 
You can even try to boil down your list to only libgtk-3-0 and manually add the missing libs (if any).
Final recipe¶
After following the tutorial you should end with a recipe similar to this one. It could be used as a starting point if you don’t want to use the --generate method.
# appimage-builder recipe; see the documentation for details
version: 1
script:
  - rm -rf AppDir || true
  - cp -r build/linux/x64/release/bundle AppDir
  - mkdir -p AppDir/usr/share/icons/hicolor/64x64/apps/
  - cp flutter-mark-square-64.png AppDir/usr/share/icons/hicolor/64x64/apps/
AppDir:
  path: ./AppDir
  app_info:
    id: org.appimagecrafters.hello-flutter
    name: Hello Flutter
    icon: flutter-mark-square-64
    version: latest
    exec: hello_flutter
    exec_args: $@
  apt:
    arch: amd64
    allow_unauthenticated: true
    sources:
      - sourceline: deb bionic main restricted universe multiverse
      - sourceline: deb bionic-updates main restricted universe multiverse
      - sourceline: deb bionic-backports main restricted universe multiverse
      - sourceline: deb bionic-security main restricted universe multiverse
    include:
      - libgtk-3-0
    exclude:
      - humanity-icon-theme
      - hicolor-icon-theme
      - adwaita-icon-theme
      - ubuntu-mono
AppImage:
  arch: x86_64
  update-information: guess
  sign-key: None
https://appimage-builder.readthedocs.io/en/latest/examples/flutter.html
2022-08-08T07:21:03
CC-MAIN-2022-33
1659882570767.11
[]
appimage-builder.readthedocs.io
Prioritize groups for Advanced Analysis
You can specify device groups for Advanced Analysis based on their importance to your network. Groups are ranked in an ordered list. Here are some important considerations about Advanced Analysis:
- Devices on the watchlist are guaranteed Advanced Analysis and are prioritized over device groups.
- Devices within a device group that are inactive do not affect Advanced Analysis capacity.
- Custom metrics are only available for devices in Advanced Analysis. If you want to see custom metrics for a specific device, prioritize a group that contains the device or add the device to the watchlist.
- You must have full write privileges to edit analysis priorities.
Prioritize groups by completing the following steps:
- In the For Advanced Analysis section, click the add-group icon to add the initial group, or click Add Group to add additional groups.
- In the Group drop-down list, type the name of a device group and then click the group name from the search results. For example, type HTTP servers and select the HTTP Servers device group.
- (Optional) In the Note field, type information about the group.
- In the Automatically Fill section, make sure On is selected.
- At the top of the page, click Save.
Next steps
Here are some additional ways to manage and refine groups that receive Advanced Analysis.
https://docs.extrahop.com/8.8/analysis-priorities-advanced/
2022-08-08T07:50:36
CC-MAIN-2022-33
1659882570767.11
[]
docs.extrahop.com
Menus Menu Item Menu Item Alias
From Joomla! Documentation
Description
Used to create a link from one Menu Item to another Menu Item. This link can point to a Menu Item in another Menu or within the same Menu. See Quick Tips for use.
How to Access
Add a new menu item 'Menu Item Alias'
- Select Menus → [name of the menu] → Add New Menu Item from the dropdown menu of the Administrator Panel.
- Select System Links → Menu Item Alias in the modal popup window.
Edit an existing menu item 'Menu Item Alias'
- Link. The system-generated link for this menu item. This field cannot be changed and is for information only.
- Use Redirection. (Yes/No) If set to Yes then visitors will be redirected to the linked menu item.
https://docs.joomla.org/Help310:Menus_Menu_Item_Menu_Item_Alias
2022-08-08T07:46:01
CC-MAIN-2022-33
1659882570767.11
[]
docs.joomla.org
You can share plans with one or more users to work together on the same plan. You can share a plan through the Plans page.
NOTE: Sharing must be enabled in your environment. For more information, see Dataprep Project Settings Page.
Limitations
- When a plan is shared, the underlying flow tasks are not shared directly.
- The plan can be executed only if the user has access to all the underlying flow tasks.
- Plan schedules cannot be shared with users.
Permissions
When a user is provided access to a plan, the user becomes a collaborator on the plan and is assigned a subset of the permissions assigned to the owner of the plan. If the user's overall plan permissions are more limited, the access granted as a collaborator is downgraded accordingly.
NOTE: A collaborator on a plan cannot delete the plan.
NOTE: In addition to the shared plan, you must have collaborator access to all underlying flows to execute a plan. For more information, see Overview of Sharing.
Steps
- From the context menu of the Plans page, select Share. In the Share dialog box, add users as collaborators for the plan; start typing the name of a user or enter the email address of the user with whom you want to share the plan.
- Specify the privilege level of the user to whom you are sharing. For more information on sharing privileges, see Overview of Sharing. Repeat the process to add multiple users.
- Click Save.
https://docs.trifacta.com/display/DP/Share+a+Plan
2022-08-08T07:39:46
CC-MAIN-2022-33
1659882570767.11
[]
docs.trifacta.com
Overview of Boardable Understanding roles and learning how to get started in the Boardable platform. Boardable Basics: Boardable best practices to set you up for success with quick-start resources to run better board meetings. Learn about Boardable's Mobile (take it with you for updates on the go!) and Desktop apps. Integrate your Boardable calendar into your personal/work calendar: Google, Outlook, or Apple Stay connected and share ideas with discussions Document Center hosts all the files your board needs in one place. Simplicity of organizing documents without digging up email attachments. Goals feature to help boards see how they’re performing on a variety of values so that they can achieve everything they want. Learn more about Boardable Groups Everything you need to know to run a more productive and efficient meeting. The essential meeting platform for boards and teams Boardable gives each user the ability to update their own profile and accessibility settings! Learn more about how Notifications work in the Boardable platform. Inviting people and managing users in the People directory. Polling helps you develop active, contributing board members. It highlights important decision-making items between and leading up to meetings. Track and measure the success of your organization’s goals and board initiatives Organization Settings and Broadcast Announcements Essentials, Professional, and Enterprise Plans Keep track of tasks and due dates, increasing every board member’s productivity and accountability. FAQs and Miscellaneous Boardable product tips and answers Here you’ll find the latest scheduled maintenance and outage updates for the Boardable web app, mobile app, and desktop app.
https://docs.boardable.com/en/
2022-08-08T07:18:59
CC-MAIN-2022-33
1659882570767.11
[]
docs.boardable.com
Course administration¶ As a course administrator, you can simply access its management page by clicking on “Course administration” in the main course page. Students submissions¶ Statistics over students submissions are largely available in INGinious, and all the files related to them are stored and can be downloaded. General overview¶ The administration page gives you several global list views : All the tasks of a course, with the number of students who viewed the task at least one time, who tried and the number of tries, as well as the number of students who succeded the task. This view is the first displayed when you click on “Manage” to enter the administration. All the students/groups of a course, with the number of tasks tried and done, as well as its global progression for students. This view can be accessed by switching to “Students”/”Groups” in the main administration page. All the students/groups who tried a given task, if they succeded it, and the number of submissions they did. You can show these information by clicking “View results” on the main administration page or by clicking “Statistics” on the task page. All the tasks tried by a given student/group, if (s)he/they succeded it and the number of submissions (s)he/they did. These information can be displayed by clicking “View” in the student/group list of a course. All the submissions made by a student/group for a given tasks, with date of submission and the global result. Submissions can be displayed by clicking “View submissions” in tasks lists. All the tables can be downloaded in CSV format to make some further treatment in your favourite spreadsheet editor. More information about groups possibilities can be found below. Downloading submissions¶ Student submissions can be downloaded from the Download submissions and statistics pages or the submission inspection page. You are able to only download the set of evaluation submissions (according to the task parameters) or all the submissions. Submissions are downloadable as gzip tarball (.tgz) files. You may need some third-party software if your operating system does not support this format natively. The files contain, for each submissions, a test file with extension test containing the all the submission metadata and the associated archive folder containing all the files that have been exported (via the /archive folder inside the container) (See Run file). Replaying submissions¶ Student submissions can be replayed from the submission inspection page. You can either replay a specific submission or replay all the submissions queried (with the replay button in the table’s header). Different replay scheme are available: As replacement of the current student submission result. This is the default scheme for the Replay submissions page. When replayed, submission input are put back in the grading queue. When the job is completed, the newly computed result will replace the old one. This is useful if you want to change the grading scripts during or after the assignment period and want all students to be graded the same way. You can replay only the evaluation submission or all submissions. However, please note that if replayed, the best submission can be replaced by an older best submission. As a personal copy. This mode is only available from the submission inspection page and copy the student input to generate a new personal copy. This is useful for debugging if a problem occur with a specific student submission. Submission copy is also available with SSH debug mode. 
Warning This feature is currently under testing. As the same job queue is used for standard submissions and submission replay, it is recommended not to launch huge replay jobs on a very active INGInious instance. Task edition¶ All tasks can be edited from the webapp. To access the task editor, just click on Tasks from the main administration page. Then click on Edit task for the concerned task. You can also add new tasks from this Tasks page by clicking Add tasks for a tasks section and entering a new task id. When editing a task, you can enter basic informations and parameters in the Basic settings tab. Based on the type of problem you want to put for the task, you can select one of the two available grading environment in the Environment tab: Select Multiple Choice Question solver if you only want to add mcq or match types of problems. Note, the math problem type from the problems-math plugin also uses this grading environment. Select Docker container if you want to add some more complex problems which requires to write a grading script to access the students inputs. Adding/removing problems¶ Adding and removing problems in a task is very easy with the task editor. Go to the Subproblems tab and add a new problem-id (alphanumerical characters) and a problem type. You can configure the problem context from this page. There are two ways to grade a problem: - Using check_answer which is only implemented for mcq and match problems - Using a specific grading script which is required for more complex problems mcq and match problems can be entirely configured from the subproblem page with the option to set up answers. When editing a multiple choice problem, you’re asked if the student is shown a multiple-answers- or single-answer-problem and which of the possible choices is (are) good answer(s). check_answer is only available for mcq and match problems and is automatically used when using the Multiple Choice Question Solver environment. So if you are adding more complex problems such as asking students for code implementation, you will have to write your own grading script. If you are creating this kind of problems, remember to select Docker container as grading environment in the Environment tab. Note only a few types of problems are initially shipped with INGInious but many others are available via plugins. A list is available here Task files¶ Task files can be created, uploaded and modified from the task edition page with the Tasks files tab. Only text-base files can be edited from the webapp. Binary files can however be uploaded. The behaviour of the Move action is Unix-like : it can be used for renaming files. Audiences¶ Audiences are useful to administratively separate students following the same course. They offer separate statistics to help the teacher identify problems students may encounter in this particular context. Creation and edition¶ Audiences are created and edited from the web app in the course administration. In the audiences list view, specify an audience description, and click on “Create new audience”. The newly created audience will appear in the list. To edit an audience, click on the quick link “Edit audience” located on the right side of the table. You’ll be able to change the audience description, the associated teaching staff, and to specify the students. Assigning tutors will help them to retrieve their audience statistics. The student list is entirely managed by drag-and-drop. 
Course structure upload¶ You can generate the course audience structure with an external tool and then upload it on INGInious. This is done with a YAML file, which structure is described below. The course structure can be uploaded on the audience list view in the course administration. Audiences YAML structure¶ - description: Audience 1 tutors: - tutor1 - tutor2 students: - user1 - user2 - description: Audience 2 tutors: - tutor1 - tutor2 students: - user3 - user4 description is a string and corresponds to your audience description tutors is a list of strings representing the usernames of the assigned audience tutors. students is a list of strings representing the usernames of the audience students. Groups¶ Collaborative work is possible in INGInious. Groups define a set of users that will submit together. Their submissions will contain as authors all the students that were members of the group at submission time. Creation and edition¶ Groups are created and edited from the web app in the course administration. To create a new group, simply press on the “New group” button in the group list view. You’ll then be able to specify the group description, its maximum size, assigned tutors and students, as well as the required audiences to enter the group. The student list is entirely managed by drag-and-drop. Students can be moved from one group to another by simply moving his name to the new group. Group attribution¶ If you do not really matter the way students work together, you can set empty groups with maximum size and allowed audiences and let the students choose their groups or groups themselves. Just check the option in the course settings to allow them to gather. When submissions will be retrieved, the group members will be displayed as the authors as with staff-defined groups or groups. Course structure upload¶ You can generate the course group structure with an external tool and then upload it on INGInious. This is done with a YAML file, which structure for groups are similar and described below. The course structure can be uploaded on the group list view in the course administration. Group YAML structure¶ - description: Group 1 tutors: - tutor1 - tutor2 students: - user1 - user2 audiences: - 5daffce21d064a2fb1f67527 - 5daf00d61d064a6c25ed7be1 - description: Group 2 tutors: - tutor1 - tutor2 students: - user3 - user4 description is a string and corresponds to your group description tutors is a list of strings representing the usernames of the assigned group tutors. students is a list of strings representing the usernames of the group students. audiences is a list of authorized audiences identifiers. Backup course structure¶ Course structures (audiences and groups) can be exported for backup or manual edition via the audience/group list page in the course administration pages. Simply click on the “Download structure” button. The download file will have the same format as described above.
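Because the audience and group structures described above are plain YAML, they are easy to produce from an external tool. The following Python sketch is one possible way to generate a file in the audience format shown above; it assumes the PyYAML package is installed, and the descriptions and usernames are placeholders.

import yaml  # PyYAML (assumed installed)

audiences = [
    {
        "description": "Audience 1",
        "tutors": ["tutor1", "tutor2"],
        "students": ["user1", "user2"],
    },
    {
        "description": "Audience 2",
        "tutors": ["tutor1"],
        "students": ["user3", "user4"],
    },
]

# Write the structure in the same layout as the examples above.
with open("audiences.yaml", "w") as f:
    yaml.safe_dump(audiences, f, default_flow_style=False, sort_keys=False)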
https://docs.inginious.org/en/latest/teacher_doc/course_admin.html
2022-08-08T07:15:42
CC-MAIN-2022-33
1659882570767.11
[]
docs.inginious.org
The NSX Migration for VMware Cloud Director tool version 1.3.2 supports several new features: Step: [vcdOperations]:[resetTargetExternalNetwork]:3911 [INFO] [VDC-demo]| Rollback:Reset the target external network Exception: Failed to reset the target external network 'external-network-name' to its initial state: [ xx-xx-xx-xx-xx] The provided list 'ipRanges.values' should have at least one item in it. Reason: During rollback, the migration tool removes the IP address/s used by the target edge gateway from the target external network. If the target external network has no spare IP in its static IP Pool apart from the ones used by target edge gateway/s, then the migration tool will not be able to remove the IPs as a minimum of one IP should be present in every subnet of an external network. Workaround: Add additional IP(s) to the static IP pool of the target external network and run the rollback. Step: [vcdNSXMigratorCleanup]:[run]:3542 [INFO] [VDC-demo]| Updating the source External network. Exception: Failed to update source external network ‘external-network-name' : [ xx-xx-xx-xx-xx] The provided list 'ipRanges.values' should have at least one item in it. Reason: During cleanup, the migration tool removes the IP address/s used by the source edge gateway from the source external network. If the source external network has no spare IP in its static IP Pool apart from the ones used by source edge gateway/s, then the migration tool will not be able to remove the IPs as a minimum of one IP should be present in every subnet of an external network. Workaround: IP/s need to be cleaned manually from the static IP Pool of the source external network in case of failure. After rollback is completed, VMs may lose N-S connectivity. VM loses N-S traffic following vMotion to an NSX for vSphere host after NSX-v to NSX-T Edge migration cutover was done. Fixed in NSX-T 3.1.3.3 (for more details, see NSX-T Release Notes). After the migration is completed, the NAT rules created at target are not editable using VMware Cloud Director UI. A lock symbol can be seen while selecting NAT rules. Workaround: Use VMware Cloud Director API to edit the NAT rules. Issue fixed in VMware Cloud Director 10.3.2. VMs connected to distributed Org VDC networks lose network connectivity after N-S network switchover and bridging does not work. Workaround: Ensure that the MAC Address of the NSX-T Virtual Distributed Router is using a different MAC address than the NSX-V distributed logical router. For more details, see NSX-T documentation. When non-distributed routing is enabled on Org VDC networks with NSX-T data center and DNS IP same as the default gateway IP on that network, then the migration tool will create two DNAT rules to handle the DNS traffic. These 2 DNAT rules will not get created if the Org VDC network is part of the Data center group. Will be fixed in the future VMware Cloud Director version. Workaround: Create DNAT rules manually for DNS traffic after migration. Migration of encrypted running VM fails with "A powered-on encrypted VM is not allowed to change its profile." even though the underlying VC policy is not changing. Will be fixed in the future VMware Cloud Director version. Workaround: Power Off the VM before migration. Migration of VM fails if a network is assigned to the VM NIC, but it's in disconnected state (by unchecking the "Connected" box in VMware Cloud Director Tenant Portal). Workaround: Set the Network value for the VM to "None". The operation failed because no suitable resource was found. 
Out of 1 candidate hubs: 1 hubs eliminated because: Only contains rejected VM Groups(s): [[VM+Group1], [VM+group1]] Rejected hubs: resgroup-4416 PlacementException NO_FEASIBLE_PLACEMENT_SOLUTION Workaround: make sure that VM groups backing the source and target placement policy are identically named. Even though the migration is successful, the Route re-distribution rules are not set on Tier-0/VRF gateway by VMware Cloud Director. The Tier-0/VRF gateway will not advertise Tier-1 gateway services upstream. Hence external connectivity will not work. Workaround: Create Route Re-distribution set named SYSTEM-VCD-EDGE-SERVICES-REDISTRIBUTION in NSX-T, if not already present on the Tier-0/VRF gateway to which the Org VDC gateway Tier-1 is connected, then set the following Tier-1 networking services for route re-distribution if not already set. Additionally, enable the following services if static or dynamic routing is enabled on T1.
https://docs.vmware.com/en/VMware-NSX-Migration-for-VMware-Cloud-Director/1.3.2/rn/vmware-nsx-migration-for-vmware-cloud-director-132-release-notes/index.html
2022-08-08T07:32:33
CC-MAIN-2022-33
1659882570767.11
[]
docs.vmware.com
All ports that are imported are unmanaged. The IP Manager sets a port to managed if the port has a connection. The port may be a trunk port or an access port. The IP Manager imports only those ports that are managed. Furthermore, the IP Manager imports only managed objects and performs correlation and root-cause analysis on managed objects only. Managed and unmanaged objects are described in the IP Manager User Guide.
https://docs.vmware.com/en/VMware-Telco-Cloud-Service-Assurance/2.0.0/ip-manager-concepts-guide/GUID-757495D0-BA8F-4697-A631-03239D2877F4.html
2022-08-08T08:20:18
CC-MAIN-2022-33
1659882570767.11
[]
docs.vmware.com
5.4. Open MPI Java bindings

Open MPI head of development provides support for Java-based MPI applications.

Warning
The Open MPI Java bindings are provided on a "provisional" basis – i.e., they are not part of the current or proposed MPI standards. Thus, inclusion of Java support is not required by the standard. Continued inclusion of the Java bindings is contingent upon active user interest and continued developer support.

The rest of this document provides step-by-step instructions on building OMPI with Java bindings, and compiling and running Java-based MPI applications. Also, part of the functionality is explained with examples. Further details about the design, implementation and usage of Java bindings in Open MPI can be found in its canonical reference paper 1.

The bindings follow a JNI approach; that is, we do not provide a pure Java implementation of MPI primitives, but a thin layer on top of the C implementation. This is the same approach as in mpiJava 2; in fact, mpiJava was taken as a starting point for the Open MPI Java bindings, but they were later totally rewritten.

5.4.1. Building the Java bindings

Java support requires that Open MPI be built at least with shared libraries (i.e., --enable-shared). Note that this is the default for Open MPI, so you don't have to explicitly add the option.

The Java bindings will build only if --enable-mpi-java is specified, and a JDK is found in a typical system default location. If the JDK is not in a place where we automatically find it, you can specify the location. For example, this is required on the Mac platform, as the JDK headers are located in a non-typical location. Two options are available for this purpose:

- --with-jdk-bindir=<foo>: the location of javac and javah
- --with-jdk-headers=<bar>: the directory containing jni.h

Some example configurations are provided in Open MPI configuration platform files under contrib/platform/hadoop. These examples can be used when the JDK is in a "standard" place that configure can automatically find.

5.4.2. Building Java MPI applications

The mpijavac wrapper compiler is available for compiling Java-based MPI applications. It ensures that all required Open MPI libraries and classpaths are defined. For example:

    $ mpijavac Hello.java

You can use the --showme option to see the full command line of the Java compiler that is invoked:

    $ mpijavac Hello.java --showme
    /usr/bin/javac -cp /opt/openmpi/lib/mpi.jar Hello.java

Note that if you are specifying a -cp argument on the command line to pass your application-specific classpaths, Open MPI will extend that argument to include the mpi.jar:

    $ mpijavac -cp /path/to/my/app.jar Hello.java --showme
    /usr/bin/javac -cp /path/to/my/app.jar:/opt/openmpi/lib/mpi.jar Hello.java

Similarly, if you have a CLASSPATH environment variable defined, mpijavac will convert that into a -cp argument and extend it to include the mpi.jar:

    $ export CLASSPATH=/path/to/my/app.jar
    $ mpijavac Hello.java --showme
    /usr/bin/javac -cp /path/to/my/app.jar:/opt/openmpi/lib/mpi.jar Hello.java

5.4.3. Running Java MPI applications

Once your application has been compiled, you can run it with the standard mpirun command line:

    $ mpirun <options> java <your-java-options> <my-app>

mpirun will detect the java token and ensure that the required MPI libraries and class paths are defined to support execution. You therefore do not need to specify the Java library path to the MPI installation, nor the MPI classpath.

Any classpath definitions required for your application should be specified either on the command line or via the CLASSPATH environment variable. Note that the local directory will be added to the classpath if nothing is specified.

Note
The java executable, all required libraries, and your application classes must be available on all nodes.

5.4.4. Basic usage of the Java bindings

There is an MPI package that contains all classes of the MPI Java bindings: Comm, Datatype, Request, etc. These classes have a direct correspondence with handle types defined by the MPI standard. MPI primitives are just methods included in these classes. The convention used for naming Java methods and classes is the usual camel-case convention; e.g., the equivalent of MPI_File_set_info(fh, info) is fh.setInfo(info), where fh is an object of the class File.

Apart from classes, the MPI package contains predefined public attributes under a convenience class MPI. Examples are the predefined communicator MPI.COMM_WORLD and predefined datatypes such as MPI.DOUBLE. Also, MPI initialization and finalization are methods of the MPI class and must be invoked by all MPI Java applications. The following example illustrates these concepts:

    import mpi.*;

    class ComputePi {

        public static void main(String args[]) throws MPIException {
            MPI.Init(args);

            int rank = MPI.COMM_WORLD.getRank(),
                size = MPI.COMM_WORLD.getSize(),
                nint = 100; // Intervals.
            double h = 1.0 / (double)nint, sum = 0.0;

            for (int i = rank + 1; i <= nint; i += size) {
                double x = h * ((double)i - 0.5);
                sum += (4.0 / (1.0 + x * x));
            }

            double sBuf[] = { h * sum },
                   rBuf[] = new double[1];

            MPI.COMM_WORLD.reduce(sBuf, rBuf, 1, MPI.DOUBLE, MPI.SUM, 0);

            if (rank == 0) {
                System.out.println("PI: " + rBuf[0]);
            }
            MPI.Finalize();
        }
    }

5.4.5. Exception handling

The Java bindings in Open MPI support exception handling. By default, errors are fatal, but this behavior can be changed. The Java API will throw exceptions if the MPI.ERRORS_RETURN error handler is set:

    MPI.COMM_WORLD.setErrhandler(MPI.ERRORS_RETURN);

If you add this statement to your program, it will show the line where it breaks, instead of just crashing in case of an error. Error-handling code can be separated from main application code by means of try-catch blocks, for instance:

    try {
        File file = new File(MPI.COMM_SELF, "filename", MPI.MODE_RDONLY);
    } catch (MPIException ex) {
        System.err.println("Error Message: " + ex.getMessage());
        System.err.println("  Error Class: " + ex.getErrorClass());
        ex.printStackTrace();
        System.exit(-1);
    }

5.4.6. How to specify buffers

In MPI primitives that require a buffer (either send or receive), the Java API admits a Java array. Since Java arrays can be relocated by the Java runtime environment, the MPI Java bindings need to make a copy of the contents of the array to a temporary buffer, then pass the pointer to this buffer to the underlying C implementation. From the practical point of view, this implies an overhead associated with all buffers that are represented by Java arrays. The overhead is small for small buffers but increases for large arrays.

There is a pool of temporary buffers with a default capacity of 64K. If a temporary buffer of 64K or less is needed, then the buffer will be obtained from the pool. But if the buffer is larger, then it will be necessary to allocate the buffer and free it later.

The default capacity of pool buffers can be modified with an Open MPI MCA parameter:

    $ mpirun --mca ompi_mpi_java_eager SIZE ...

The value of SIZE can be:

- N: an integer number of bytes
- Nk: an integer number (suffixed with k) of kilobytes
- Nm: an integer number (suffixed with m) of megabytes

An alternative is to use "direct buffers" provided by standard classes available in the Java SDK, such as ByteBuffer. For convenience, Open MPI provides a few static methods new[Type]Buffer in the MPI class to create direct buffers for a number of basic datatypes. Elements of the direct buffer can be accessed with the methods put() and get(), and the number of elements in the buffer can be obtained with the method capacity(). This example illustrates its use:

    int myself = MPI.COMM_WORLD.getRank();
    int tasks  = MPI.COMM_WORLD.getSize();

    IntBuffer in  = MPI.newIntBuffer(MAXLEN * tasks),
              out = MPI.newIntBuffer(MAXLEN);

    for (int i = 0; i < MAXLEN; i++)
        out.put(i, myself); // fill the buffer with the rank

    Request request = MPI.COMM_WORLD.iAllGather(
                          out, MAXLEN, MPI.INT, in, MAXLEN, MPI.INT);
    request.waitFor();
    request.free();

    for (int i = 0; i < tasks; i++) {
        for (int k = 0; k < MAXLEN; k++) {
            if (in.get(k + i * MAXLEN) != i)
                throw new AssertionError("Unexpected value");
        }
    }

Direct buffers are available for: BYTE, CHAR, SHORT, INT, LONG, FLOAT, and DOUBLE.

Note
There is no direct buffer for booleans.

Direct buffers are not a replacement for arrays, because they have higher allocation and deallocation costs than arrays. In some cases arrays will be a better choice. You can easily convert a buffer into an array and vice versa.

Important
All non-blocking methods must use direct buffers. Only blocking methods can choose between arrays and direct buffers.

The above example also illustrates that it is necessary to call the free() method on objects whose class implements the Freeable interface. Otherwise, a memory leak will occur.

5.4.7. Specifying offsets in buffers

In a C program, it is common to specify an offset in an array with &array[i] or array + i to send data starting from a given position in the array. The equivalent form in the Java bindings is to slice() the buffer to start at an offset. Making a slice() on a buffer is only necessary when the offset is not zero. Slices work for both arrays and direct buffers.

    import static mpi.MPI.slice;
    // ...
    int numbers[] = new int[SIZE];
    // ...
    MPI.COMM_WORLD.send(slice(numbers, offset), count, MPI.INT, 1, 0);

5.4.8. Supported APIs

Complete MPI-3.1 coverage is provided in the Open MPI Java bindings, with a few exceptions: the bindings for the MPI_Neighbor_alltoallw and MPI_Ineighbor_alltoallw functions are not implemented. Also excluded are functions that incorporate the concepts of explicit virtual memory addressing, such as MPI_Win_shared_query.

5.4.9. Known issues

There exist issues with the Omnipath (PSM2) interconnect involving Java. The problems definitely exist in PSM2 v10.2; we have not tested previous versions. As of November 2016, there is not yet a PSM2 release that completely fixes the issue. The following mpirun command options will disable PSM2:

    shell$ mpirun ... --mca mtl ^psm2 java ...your-java-options... your-app-class

5.4.10. Questions? Problems?

The Java API documentation is generated at build time in $prefix/share/doc/openmpi/javadoc. Additionally, this Cisco blog post has quite a bit of information about the Open MPI Java bindings.

If you have any problems, or find any bugs, please feel free to report them to the Open MPI user's mailing list.

Footnotes
https://docs.open-mpi.org/en/main/features/java.html
2022-08-08T07:57:28
CC-MAIN-2022-33
1659882570767.11
[]
docs.open-mpi.org
pyswarms.utils.decorators package

The pyswarms.decorators module implements a decorator that can be used to simplify the task of writing the cost function for an optimization run. The decorator can be directly called by using @pyswarms.cost.

pyswarms.utils.decorators.cost(cost_func)

A decorator for the cost function.

This decorator allows the creation of much simpler cost functions. Instead of writing a cost function that returns an array of shape (n_particles, ), it enables the usage of shorter and simpler cost functions that directly return the cost. A simple example is sketched below, after the parameter notes.

The decorator expects your cost function to use a d-dimensional array (where d is the number of dimensions for the optimization) as an argument.

Note
Some numpy functions return a np.ndarray with single values in it. Be aware of the fact that without unpacking the value, the optimizer will raise an exception.
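The simple example referenced above did not survive extraction, so the following is a minimal sketch of how such a decorated cost function might look. Only the @pyswarms.cost decorator and the per-particle calling convention come from the documentation above; the function name and the sphere-style cost are illustrative assumptions.

    import numpy as np
    import pyswarms as ps

    # The decorated function receives a single particle's d-dimensional
    # position and returns one scalar cost; the decorator applies it across
    # the whole swarm and assembles the per-particle cost array expected by
    # the optimizer.
    @ps.cost
    def sphere_cost(x):
        return np.sum(x ** 2)

    # Illustrative check: a swarm of 10 particles in 3 dimensions.
    swarm_positions = np.random.uniform(size=(10, 3))
    costs = sphere_cost(swarm_positions)
    print(costs.shape)  # expected: (10,)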
https://pyswarms.readthedocs.io/en/latest/api/pyswarms.utils.decorators.html
2022-08-08T07:17:24
CC-MAIN-2022-33
1659882570767.11
[]
pyswarms.readthedocs.io
ReportDesignerModel Class

A model that contains information about a report. The Web Report Designer is bound to this model.

Namespace: DevExpress.XtraReports.Web.ReportDesigner

Assembly: DevExpress.XtraReports.v22.1.Web.dll

Declaration

Remarks

To generate the ReportDesignerModel object and assign values to the object properties, use the ReportDesignerExtension.GetModel method. After that, the MVCxReportDesigner can be bound to a ReportDesignerModel via the ReportDesignerExtension.Bind method in the web application's View.

The following code demonstrates how to generate this model and bind the MVCxReportDesigner to it in an ASP.NET MVC web application.

Controller code:

    using DevExpress.Web.Mvc;
    using DevExpress.XtraReports.Native;
    using DevExpress.XtraReports.UI;
    //...
    public class DesignerController : Controller
    {
        //...
        public ActionResult Designer()
        {
            var reportDesignerModel = ReportDesignerExtension.GetModel(new MyReport());
            return View(reportDesignerModel);
        }
    }

View code:

    @Html.DevExpress().ReportDesigner(settings => {
        settings.Name = "designer";
    }).Bind(Model)
https://docs.devexpress.com/XtraReports/DevExpress.XtraReports.Web.ReportDesigner.ReportDesignerModel
2022-08-08T06:51:32
CC-MAIN-2022-33
1659882570767.11
[]
docs.devexpress.com
Installation guidance for SQL Server on Linux

Applies to: SQL Server (all supported versions) - Linux

This article provides guidance for installing, updating, and uninstalling SQL Server 2017 (14.x) and SQL Server 2019 (15.x) on Linux.

Tip
For installing SQL Server 2022 (16.x) Preview CTP 2.1 on Linux, see Installation guidance for SQL Server 2022 (16.x) Preview on Linux.

For other deployment scenarios, see the related deployment articles.

To upgrade SQL Server, first change your configured repository to the desired version of SQL Server. Then use the same update command to upgrade your version of SQL Server. This is only possible when moving to a newer major version, such as from SQL Server 2017 to SQL Server 2019.

In addition to the Database Engine, optional components are also available on Linux, including:

- Machine Learning Services (R, Python)
- SQL Server Integration Services

Tip
For answers to frequently asked questions, see the SQL Server on Linux FAQ.
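Once the instance is installed and running, a quick connectivity check confirms the installation. The snippet below is an illustrative sketch rather than part of the original article: it assumes the Microsoft ODBC driver and the pyodbc package are installed, and the server address, login, and password are placeholders you would replace with your own.

    import pyodbc

    # Placeholder connection details -- replace with your own server and credentials.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost,1433;"
        "UID=sa;"
        "PWD=<YourStrong!Passw0rd>",
        timeout=5,
    )

    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION;")
    row = cursor.fetchone()
    print(row[0])  # should report the installed SQL Server build running on Linux
    conn.close()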
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup?view=sql-server-ver16
2022-08-08T08:36:53
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
This document describes how to migrate your existing usage of the v3 endpoints to their v4 equivalents. As of Release 6.4, the v3 endpoints have reached End of Life (EOL) and are no longer available in the product. You must migrate your API endpoint usage to v4 immediately.

This section contains a mapping of documentation between the publicly available v3 endpoints and their v4 equivalents.

NOTE: Except as noted, these v3 behaviors should be reflected in the v4 endpoints. Please be sure to review the notes.

Legend:

- Connections
- Datasets and Recipes
- Flows
- Flow import and export
- Jobgroups and Jobs
- Deployments and Releases
- Users
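As a concrete illustration of what a migrated call can look like, the sketch below issues a request against a v4 resource using Python's requests library. The host, token, endpoint path, and response shape are assumptions for illustration only; consult the v4 endpoint documentation mapped above for the exact resources and payloads in your release.

    import requests

    # Hypothetical values -- substitute your own Trifacta host and API access token.
    BASE_URL = "https://trifacta.example.com"
    TOKEN = "<your-api-access-token>"

    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    }

    # A v3 call such as GET /v3/jobGroups would be re-pointed at the v4 resource.
    response = requests.get(f"{BASE_URL}/v4/jobGroups", headers=headers, timeout=30)
    response.raise_for_status()

    for job_group in response.json().get("data", []):
        print(job_group.get("id"), job_group.get("status"))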
https://docs.trifacta.com/display/r064/API+Migration+to+v4
2022-08-08T07:47:05
CC-MAIN-2022-33
1659882570767.11
[]
docs.trifacta.com
If per-user access to S3 has been enabled in your Trifacta® deployment, you can apply your personal S3 access credentials through the AWS Storage page. You can use the following properties to define the S3 buckets to use for uploads, job results, and temporary files.

Steps:

- In the menu bar, click the Settings menu.
- Select Storage.
- Click Edit for AWS Credentials and Storage Settings, where you can review and modify your S3 access credentials.

Credential Provider

IAM Role

NOTE: This role must be created through AWS for you. For more information, please contact your AWS administrator.

Tip: This method is recommended for accessing AWS resources.

Figure: Apply your IAM role and credentials

AWS Key and Secret

Per-user access must be enabled by your Trifacta administrator. See Enable S3 Access.

Figure: AWS Storage page

The following settings apply to S3 access.

NOTE: The values that you should use for these settings should be provided by your S3 administrator. If they have already been specified, do not modify them unless you have been provided instructions to do so.
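Before entering an AWS key and secret on the AWS Storage page, it can help to confirm that the credentials actually grant access to the intended bucket. The following optional check using boto3 is an illustrative sketch, not part of the product workflow above; the key, secret, and bucket name are placeholders supplied by your S3 administrator.

    import boto3
    from botocore.exceptions import ClientError

    # Placeholder values -- use the key, secret, and bucket you were given.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="<AWS_KEY>",
        aws_secret_access_key="<AWS_SECRET>",
    )

    try:
        s3.list_objects_v2(Bucket="<upload-bucket-name>", MaxKeys=1)
        print("Credentials can read the bucket.")
    except ClientError as err:
        print("Access check failed:", err.response["Error"]["Code"])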
https://docs.trifacta.com/display/r068/Configure+Your+Access+to+S3
2022-08-08T07:49:29
CC-MAIN-2022-33
1659882570767.11
[]
docs.trifacta.com
Recollateralizers receive a bonus rate of .75% to quickly incentivize arbitragers to close the gap and recollateralize the protocol to the target ratio. The bonus rate can be adjusted, or changed to a dynamic PID-controller-adjusted variable, through governance.

In this example, there is $250,000 worth of collateral needed to reach the target ratio. Anyone can call the recollateralize function and place up to $250,000 of collateral into pools to receive an equal value of XUS plus a bonus rate of .75%. The process of recollateralizing the protocol with 250,000 DAI at a price of $1.00/DAI and a market price of $3.80/XUS is as follows (the arithmetic is sketched below).

When the protocol holds excess collateral, users can sell up to $1,000,000 worth of XUS to receive that excess collateral. The process of buying back 238,095.238 XUS at a price of $4.20/XUS to receive DAI at a price of $.99/DAI is as follows (also sketched below).
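The original page shows these two calculations as formulas that did not survive extraction. The sketch below reproduces the arithmetic implied by the numbers above, assuming the bonus is applied to the collateral value before converting to XUS and that the DAI received equals the XUS value divided by the DAI price; treat these formulas as assumptions rather than the protocol's canonical definitions.

    # Recollateralization example: 250,000 DAI at $1.00/DAI, XUS at $3.80, 0.75% bonus.
    collateral_value = 250_000 * 1.00
    bonus_rate = 0.0075
    xus_price = 3.80
    xus_received = collateral_value * (1 + bonus_rate) / xus_price
    print(f"XUS received: {xus_received:,.3f}")   # ~66,282.895 XUS

    # Buyback example: 238,095.238 XUS at $4.20/XUS, DAI priced at $0.99.
    xus_sold = 238_095.238
    xus_value = xus_sold * 4.20                   # ~$1,000,000 of excess collateral
    dai_price = 0.99
    dai_received = xus_value / dai_price
    print(f"DAI received: {dai_received:,.2f}")   # ~1,010,101.01 DAI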
https://docs.xusd.money/buybacks-and-recollateralization
2022-08-08T07:23:51
CC-MAIN-2022-33
1659882570767.11
[]
docs.xusd.money
Scitime Quick Start

Scitime currently supports:

- RandomForestClassifier
- SVC
- KMeans
- RandomForestRegressor

Usage

Example of getting a runtime estimation for KMeans:

    from sklearn.cluster import KMeans
    import numpy as np
    import time
    from scitime import RuntimeEstimator

    # example for kmeans clustering
    estimator = RuntimeEstimator(meta_algo='RF', verbose=3)
    km = KMeans()

    # generating inputs for this example
    X = np.random.rand(100000, 10)

    # run the estimation
    estimation, lower_bound, upper_bound = estimator.time(km, X)

How Scitime works

Scitime predicts the time needed to fit a model by using pre-trained runtime estimators, which we call meta algorithms (meta_algo), whose weights are stored in a dedicated pickle file in the package metadata. For each Scikit-Learn model (if supported), you will find a corresponding meta algo pickle file in Scitime's code base.

You can also train your own meta algorithm using your own generated data (see the section "Use _data.py to generate data" below). More information about how the models are pre-trained can be found here.

Use _data.py to generate data

(for contributors)

    $ python _data.py --help
    usage: _data.py [-h] [--drop_rate DROP_RATE] [--meta_algo {RF,NN}]
                    [--verbose VERBOSE]
                    [--algo {RandomForestRegressor,RandomForestClassifier,SVC,KMeans}]
                    [--generate_data] [--fit FIT] [--save]

    Gather & Persist Data of model training runtimes

    optional arguments:
      -h, --help            show this help message and exit
      --drop_rate DROP_RATE
                            drop rate of number of data generated (from all param
                            combinations taken from _config.json). Default is 0.999
      --meta_algo {RF,NN}   meta algo used to fit the meta model (NN or RF) -
                            default is RF
      --verbose VERBOSE     verbose mode (0, 1, 2 or 3)
      --algo {RandomForestRegressor,RandomForestClassifier,SVC,KMeans}
                            algo to train data on
      --generate_data       do you want to generate & write data in a dedicated csv?
      --fit FIT             do you want to fit the model? If so indicate the csv name
      --save                (only used for model fit) do you want to save /
                            overwrite the meta model from this fit?

Contribute

The preferred way to contribute to scitime is to fork the main repository on GitHub, then submit a "pull request" (PR), as is done for scikit-learn contributions:

- Create an account on GitHub if you do not already have one.
- Fork the project repository: click on the 'Fork' button near the top of the page. This creates a copy of the code under your GitHub user account.
- Clone your fork of the scitime repo from your GitHub account to your local disk:

      git clone [email protected]:YourLogin/scitime.git
      cd scitime

      # Install library in editable mode:
      pip install --editable .

- Create a branch to hold your development changes:

      git checkout -b my-feature

  and start making changes. Always use a feature branch. It's good practice to never work on the master branch!
- Develop the feature on your feature branch on your computer, using Git to do the version control. When you're done editing, add the changed files using git add and then git commit:

      git add modified_files
      git commit

  to record your changes in Git, then push the changes to your GitHub account with:

      git push -u origin my-feature

- Follow GitHub instructions to create a pull request from your fork.

Some quick additional notes:

- We use appveyor and travis.ci for our tests
- We try to follow the PEP8 guidelines (using flake8, ignoring codes E501 and F401)
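The quick start only shows the unsupervised KMeans case. For the supervised estimators listed above, the label vector is presumably passed as an additional argument to estimator.time; the sketch below extends the same pattern to RandomForestRegressor. Treat the extra y argument as an assumption inferred from the API shown above rather than verbatim documentation.

    from sklearn.ensemble import RandomForestRegressor
    import numpy as np
    from scitime import RuntimeEstimator

    estimator = RuntimeEstimator(meta_algo='RF', verbose=0)
    rf = RandomForestRegressor()

    # generating inputs and labels for this example
    X = np.random.rand(100000, 10)
    y = np.random.rand(100000)

    # run the estimation, passing the labels as well
    estimation, lower_bound, upper_bound = estimator.time(rf, X, y)
    print(estimation, lower_bound, upper_bound)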
https://scitime.readthedocs.io/en/latest/quickstart.html
2022-08-08T08:02:29
CC-MAIN-2022-33
1659882570767.11
[]
scitime.readthedocs.io
BigchainDB Networks

A BigchainDB network is a set of connected BigchainDB nodes, managed by a BigchainDB consortium (i.e. an organization). Those terms are defined in the BigchainDB Terminology page.

Consortium Structure & Governance

The consortium might be a company, a foundation, a cooperative, or some other form of organization. It must make many decisions, e.g. How will new members be added? Who can read the stored data? What kind of data will be stored? A governance process is required to make those decisions, and therefore one of the first steps for any new consortium is to specify its governance process (if one doesn't already exist). This documentation doesn't explain how to create a consortium, nor does it outline the possible governance processes.

It's worth noting that the decentralization of a BigchainDB network depends, to some extent, on the decentralization of the associated consortium. See the pages about decentralization and node diversity.

DNS Records and SSL Certificates

We now describe how we set up the external (public-facing) DNS records for a BigchainDB network. Your consortium may opt to do it differently. There were several goals:

- Allow external users/clients to connect directly to any BigchainDB node in the network (over the internet), if they want.
- Each BigchainDB node operator should get an SSL certificate for their BigchainDB node, so that their BigchainDB node can serve the BigchainDB HTTP API via HTTPS. (The same certificate might also be used to serve the WebSocket API.)
- There should be no sharing of SSL certificates among BigchainDB node operators.
- Optional: Allow clients to connect to a "random" BigchainDB node in the network at one particular domain (or subdomain).

Node Operator Responsibilities

- Register a domain (or use one that you already have) for your BigchainDB node. You can use a subdomain if you like. For example, you might opt to use abc-org73.net, api.dynabob8.io or figmentdb3.ninja.
- Get an SSL certificate for your domain or subdomain, and properly install it in your node (e.g. in your NGINX instance).
- Create a DNS A Record mapping your domain or subdomain to the public IP address of your node (i.e. the one that serves the BigchainDB HTTP API).

Consortium Responsibilities

Optional: The consortium managing the BigchainDB network could register a domain name and set up CNAME records mapping that domain name (or one of its subdomains) to each of the nodes in the network. For example, if the consortium registered bdbnetwork.io, they could set up CNAME records like the following:

- CNAME record mapping api.bdbnetwork.io to abc-org73.net
- CNAME record mapping api.bdbnetwork.io to api.dynabob8.io
- CNAME record mapping api.bdbnetwork.io to figmentdb3.ninja
https://docs.bigchaindb.com/en/latest/installation/network-setup/networks.html
2022-08-08T06:24:53
CC-MAIN-2022-33
1659882570767.11
[]
docs.bigchaindb.com
What is Test Base for Microsoft 365?

Test Base is an Azure service that enables data-driven application testing while providing user access to intelligent testing from anywhere in the world.

The following entities are encouraged to onboard their applications, binaries, and test scripts onto the Test Base for Microsoft 365 service: Independent Software Vendors (ISVs) and System Integrators (SIs) who want to validate their applications, and IT professionals who want to validate their line-of-business (LOB) applications through integration with Microsoft Intune.

Why test your application with Test Base?

The Test Base for Microsoft 365 service can accommodate the expansion of your testing matrix as necessary, so you will have confidence in the integrity, compatibility, and usability of your applications. Test Base helps ensure your application continues working as expected even as platform dependencies vary and new updates are applied by the Windows update service.

With Test Base, you can avoid the aggravation, protracted time commitments, and expense of setting up and maintaining a complex lab environment for testing your applications. In addition, you can automatically test compatibility against security and feature updates for Windows by using secure virtual machines (VMs), while also obtaining access to world-class intelligence for testing your applications. You can also get your apps tested for compatibility against pre-release Windows security updates by submitting a request for access.

How does Test Base work?

To sign up for the Test Base service, see Create a new Test Base account.

After a customer has enrolled in the Test Base service, it is a simple matter to begin uploading application packages for testing. Following a successful upload, packages are tested against Windows pre-release updates. After initial tests are successfully completed, the customer can do a deep dive with insights on performance and regression analysis to detect whether pre-release content updates have degraded application performance in any way. However, if the package failed any test, then the customer can also leverage insights from memory or CPU regressions to remediate the failure and then update the package as necessary.

With Test Base, the customer can use a single location to manage all packages being tested, which can also facilitate uploading and updating packages to generate new application versions as needed.

Note
So that customers can take advantage of pre-release update content, they must specifically request access to it. Once your request for access to pre-release updates is approved, your uploaded packages will automatically get scheduled to be tested against the pre-release Windows updates for the OS versions selected during onboarding. Then, as new Windows pre-release updates become available, application packages are automatically tested with new pre-release content. Thereafter, an additional round of insights may be required.

If customers do not specifically request access, then application packages will be tested against only the current released version of Windows.

After packages are successfully tested, customers can deliver them to their software customers and end users with confidence and the assurance that Test Base did its job.

Next steps

Follow the link to get started.
https://docs.microsoft.com/en-us/microsoft-365/test-base/overview?view=o365-worldwide
2022-08-08T07:33:46
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com