| content | url | timestamp | dump | segment | image_urls | netloc |
|---|---|---|---|---|---|---|
| stringlengths 0-557k | stringlengths 16-1.78k | timestamp[ms] | stringlengths 9-15 | stringlengths 13-17 | stringlengths 2-55.5k | stringlengths 7-77 |
Translate individual field labels and values

When translating just a few field labels or values, such as when you add customizations to a translated instance, use the procedure that applies to the type of text being translated. Three types of ServiceNow fields store translated strings:

- Translated_field: Stores field labels, related list names, and certain field values. The value of the translated_field replaces the label, list name, or field value when the user selects the matching language. Translated_field values have a one-to-many relationship with their associated keys. As a result, multiple records can reference one translated_field value.
- Translated_text: Stores long text values in plain text. The value of the translated_text replaces the plain text when the user selects the matching language. Translated_text values have a one-to-one relationship with their associated keys. As a result, only one record can reference a translated_text value.
- Translated_html: Stores long text values in HTML. The value of the translated_html replaces the HTML when the user selects the matching language. Translated_html values have a one-to-one relationship with their associated keys. As a result, only one record can reference a translated_html value.

All three translated field types support list sorting. To determine the field type, right-click the field on the form, select Configure Dictionary, and check the Type field. ServiceNow stores the translated values as separate records and displays the proper value according to the end user's language. You can translate an entire instance by exporting the translation tables and then importing the translated strings as described under Translate the Interface. Note: In addition to translated field types, currency fields display the same price in different currencies based on the user's language. | https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/localization/concept/c_TranslateIndFieldLabelsAndValues.html | 2019-10-14T01:29:23 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.servicenow.com |
Campaign management
Campaign management includes creating, searching and downloading data. Please read the below to understand how to manage campaigns properly in Tenjin.
Campaign Search
You can search your campaigns by app, platform, or channel as shown below. You can also use any string match for your campaign name in the "Refine by" field. In the example below, you search for any Word Search campaigns that have "US" in the campaign name.
Campaign batch upload
Batch uploading campaigns from a CSV will save you time if you have a lot of campaigns to create. Just click "Batch Upload", select your channel, and upload the CSV.
The CSV should contain the following headers.
- campaign_name: Name of the campaign.
- campaign_id: ID of the campaign (optional).
- bundle_id: Bundle ID of the app being promoted.
- store_id: App Store ID of the app being promoted.
- platform: Platform of the app being promoted (ios, android).
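For illustration, a batch-upload CSV with those headers might look like the following; the names, bundle IDs, and store IDs are made-up placeholders:

```
campaign_name,campaign_id,bundle_id,store_id,platform
US Word Search Launch,,com.example.wordsearch,123456789,ios
US Word Search Launch,,com.example.wordsearch,com.example.wordsearch,android
```

Since campaign_id is optional, that column can be left empty as shown above.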
After your batch upload succeeds, you will receive an email notification.
Campaign CSV download
You can also download the list of campaigns by clicking "Download Results as CSV". The csv contains all campaigns that you selected in the search filter.
Campaign Tags
You can add Targeting Tags for each campaign, so you can analyze campaign performance grouped by those tags. Go to each campaign page, and enter the tag with the following format.
Tag name: Tag value
There are pre-configured tags such as Gender, Age. You can add as many tags as possible if you want. | https://docs.tenjin.com/en/tracking/campaigns.html | 2019-10-14T02:07:54 | CC-MAIN-2019-43 | 1570986648481.7 | [array(['../images/campaign_search.png', None], dtype=object)
array(['../images/batch_upload.png', None], dtype=object)
array(['../images/campaign_download.png', None], dtype=object)
array(['../images/targeting.png', None], dtype=object)] | docs.tenjin.com |
Contains one row for each restored filegroup. This table is stored in the msdb database.
Remarks
To reduce the number of rows in this table and in other backup and history tables, execute the sp_delete_backuphistory stored procedure.
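For example, a scheduled cleanup job can call the procedure from any SQL Server client. The sketch below uses Python with pyodbc purely for illustration; the connection string and the 90-day retention window are placeholders to adapt to your environment:

```python
import datetime

import pyodbc  # any T-SQL client works; pyodbc is used here only as an example

# Placeholder connection string; adjust server and authentication for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes"
)

# Delete backup and restore history older than 90 days (placeholder retention window).
cutoff = datetime.datetime.now() - datetime.timedelta(days=90)

cursor = conn.cursor()
cursor.execute("EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = ?", cutoff)
conn.commit()
conn.close()
```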
See Also
Backup and Restore Tables (Transact-SQL)
restorefile (Transact-SQL)
restorehistory (Transact-SQL)
System Tables (Transact-SQL) | https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/restorefilegroup-transact-sql | 2017-07-20T20:17:10 | CC-MAIN-2017-30 | 1500549423320.19 | [array(['../../includes/media/yes.png', 'yes'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)] | docs.microsoft.com |
You can create a resource action to allow the consumers of the XaaS create a user blueprint to change the password of the user after they provision the user.
Before you begin
Log in to the vRealize Automation console as an XaaS architect.
Verify that you create a custom resource action that supports provisioning Active Directory users. See Create a Test User as a Custom Resource.
Procedure
- Click the New icon.
- Navigate to the vRealize Orchestrator workflow library and select the Change a user password workflow.
- Click Next.
- Select Test User from the Resource type drop-down menu.
This selection associates the action with the Test User custom resource. Click Finish.
- On the Resource Actions page, select the Change the password of the Test User row and click Publish.
Results
You created a resource action for changing the password of a user, and you made it available to add to an entitlement.
What to do next
Add the Create a test user blueprint to a service. See Create a Service and Add Creating a Test User Blueprint to the Service. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vra.config.doc/GUID-0BB27603-246F-4946-8CB6-6B8AA1D08CAB.html | 2017-07-20T18:46:28 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.vmware.com |
Custom properties are vRealize Automation-supplied properties. You can also define your own properties. Properties are name-value pairs used to specify attributes of a machine or to override default specifications.
You can use custom properties to control different provisioning methods, types of machines, and machine options such as disk drives, as in the following examples:
Customize the guest OS for a machine, for instance, by including specified users in selected local groups.
Specify network and security settings.
When you add a property to a blueprint, reservation, or other form you can specify if the property is to be encrypted and also if the user must be prompted to specify a value when provisioning. These options cannot be overridden when provisioning.
A property specified in a blueprint overrides the same property specified in a property group. This enables a blueprint to use most of the properties in a property group while differing from the property group in some limited way. For example, a blueprint that incorporates a standard developer workstation property group might override the US English settings in the group with UK English settings.
You can apply properties in reservations and business groups to many machines. Their use is typically limited to purposes related to their sources, such as resource management. Specifying the characteristics of the machine to be provisioned is generally done by adding properties to blueprints and property groups. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vra.custom.props.doc/GUID-3649E826-6532-4C24-8C7F-09CF0D03A073.html | 2017-07-20T18:46:20 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.vmware.com |
Lxc clustered services on OVH¶
Introduction¶
OVH provides powerful internet-connected servers at an affordable price and a scriptable IPv4 takeover. This is a great combination for clustered services driven by opensvc. This cookbook explains the steps involved in integrating such a cluster with LXC services on a local disk to gain a decent partitioning between services without compromise on performance and memory usage.
Preparing a node¶
Before moving on to the next step, you should have a couple of servers delivered by OVH, setup with Debian Squeeze, which has initscripts and kernel adapted for LXC. You should also have an 'IP failover' available. Finally, the OpenSVC agent should be installed on both nodes (doc)
Additional packages¶
Install:
apt-get install lxc bridge-utils python2.6 python2.5 debootstrap rsync lvm2 ntp python-soappy
And opensvc from
Ethernet bridge¶
Create a backend bridge connected to a dummy interface. In /etc/network/interfaces add the following block and activate the bridge using ifup br0:

auto br0
iface br0 inet static
    bridge_ports dummy0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 5
    address 192.168.0.1
    netmask 255.255.255.0
    pre-up /sbin/modprobe dummy
Kernel parameters¶
In /etc/sysctl.conf set the following parameters and reload the configuration using sysctl -p:

# lxc routing
net.ipv4.ip_forward=1
net.ipv4.conf.br0.proxy_arp=1
Preparing the service¶
Disk setup¶
OVH servers come with a 4 GB root filesystem, a ~4 GB swap partition and the rest of the disk is allocated to /home. The /home filesystem can be replaced by a single physical volume. Create a volume group over this pv and one or a set of logical volumes for each container. Format the logical volumes using the filesystem that suits you. Mount the logical volume set of the first container to create:
umount /home
vi /etc/fstab    # remove the /home entry
pvcreate /dev/your_home_dev
vgcreate vg0 /dev/your_home_dev
lvcreate -n service_name -L 20G vg0
mkfs.ext4 /dev/vg0/service_name
mkdir /opt/opensvc_name
mount /dev/vg0/service_name /opt/opensvc_name
Container creation¶
Prepare the lxc container creation wrapper:
gzip -dc /usr/share/doc/lxc/examples/lxc-debian.gz >/tmp/lxc-debian
Create the container rootfs:
/tmp/lxc-debian -p /opt/opensvc_name
Basic container setup
- network
- locale
- tz
- hosts
- rc.sysinit (remove swaps and udev actions)
Create the container¶
create a lxc config file as /tmp/lxc.conf containing:

lxc.utsname = service_name
lxc.tty = 4
lxc.pts = 1024
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.rootfs = /opt/opensvc_name/rootfs
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rwm
and create the container with:
lxc-create -f /tmp/lxc.conf -n service_name
Start the container:
lxc-start -n service_name
Opensvc service creation¶
Trust the node root account to ssh-login into the container:
mkdir /opt/opensvc_name/rootfs/root/.ssh
cat /root/.ssh/id_dsa.pub >>/opt/opensvc_name/rootfs/root/.ssh/authorized_keys
Create the service configuration file:
[default]
app = MYAPP
vm_name = service_name
mode = lxc
service_type = PRD
nodes = node1.mydomain node2.mydomain
autostart_node = node1.mydomain
drpnode =

[fs#1]
dev = /dev/mapper/vg0-service_name
mnt = /opt/opensvc_name
mnt_opt = defaults
type = ext4
always_on = nodes

[ip#1]
ipdev = br0
ipname = service_name
post_start = /etc/opensvc/opensvc_name.d/ovh_routes start service_name 1.2.3.4
pre_stop = /etc/opensvc/opensvc_name.d/ovh_routes stop service_name 1.2.3.4

[sync#0]
src = /opt/opensvc_name/
dst = /opt/opensvc_name
dstfs = /opt/opensvc_name
target = nodes
snap = true
OVH routing and ipfailover¶
create the trigger scripts store, which is synchronized across nodes:
mkdir -p /etc/opensvc/opensvc_name.dir
cd /etc/opensvc/
ln -s opensvc_name.dir opensvc_name.d
create and adapt the trigger scripts as /etc/opensvc/opensvc_name.dir/ovh_routes:

#!/bin/bash
svc=$2
vip=$3
# adapt the route definition to your setup (the br0 bridge is assumed here)
route="$vip dev br0"

has_route() {
    ip route list | grep -w "$vip" >/dev/null 2>&1
}

case $1 in
start)
    has_route || ip route add $route
    /etc/opensvc/etc/$svc.d/ipfailover
    # make sure proxy_arp and ip_forwarding settings are set
    sysctl -p >/dev/null 2>&1
    # containers are not able to load kernel modules.
    # trigger loading of common ones from here
    iptables -L -n >/dev/null 2>&1
    ;;
stop)
    has_route && ip route del $route
    ;;
esac
and /etc/opensvc/opensvc_name.dir/ipfailover:

#!/usr/bin/python2.5
vip = '1.2.3.4'
nodes_ip = {
  'n2': dict(otheracc='ksXXXXX.kimsufi.com', thisip='a.b.c.d'),
  'n1': dict(otheracc='ksYYYYY.kimsufi.com', thisip='d.c.b.a'),
}

# login information
nic = 'xxxx-ovh'
password = 'xxxx'

#
# don't change below
#
from SOAPpy import WSDL
import sys

# set the OVH SOAP API WSDL URL here
soap = WSDL.Proxy('')

try:
    session = soap.login(nic, password)
except:
    print >>sys.stderr, "Error login"

from os import uname
x, nodename, x, x, x = uname()

# dedicatedFailoverUpdate
try:
    result = soap.dedicatedFailoverUpdate(session, nodes_ip[nodename]['otheracc'], vip, nodes_ip[nodename]['thisip'])
    print "dedicated Failover Update successful"
except:
    print >>sys.stderr, "Error dedicated Failover Update"

# logout
try:
    result = soap.logout(session)
except:
    print >>sys.stderr, "Error logout"
Make sure this last script is owned by
root and has
700 permissions, as it contains important credentials. | http://docs.opensvc.com/howto.lxc.html | 2017-07-20T18:28:11 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.opensvc.com |
The Template Toolkit OPAC will now load all Added Content by the Record ID, not just jacket images. This will allow added content providers that support it to load additional content by other identifiers.
The OpenILS::WWW::AddedContent::ContentCafe provider has been updated to use the newer Content Cafe 2 API in full. With this update the ability to load content based on ISBN or UPC is now enabled.
With the updated code the option for displaying a "No Image" image or a 1x1 pixel image is no longer available. Instead the Apache-level "blank image" rules will trigger when no image is available. The configuration option controlling this behavior can thus be removed from opensrf.xml entirely.
By default the module will prefer ISBNs over UPCs, but will request information for both. If you wish for UPCs to be preferred, or wish one of the two identifier types to not be considered at all, you can change the "identifier_order" option in opensrf.xml. When the option is present only the identifier(s) listed will be sent.
The OPAC now displays RDA bib tag 264 information for Producer, Distributor, Manufacturer, and Copyright within a full bib record’s summary. This is in addition to the RDA bib tag 264 publisher information, indicator 2 equal to 1, that was already being displayed in previous versions of Evergreen. The OPAC full bib view also now contains the Schema.org copyrightYear value.
Additionally, this information is now available in search results as well when viewing more details.
A: | http://docs.evergreen-ils.org/2.7/_opac.html | 2017-07-20T18:28:50 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.evergreen-ils.org |
R1 2017 SP2
This article explains the manual changes required when upgrading to Telerik Reporting R1 2017 SP2 (11.0.17.406).
Changes
Built-in functions
The Format(format, value) text function is now evaluated using the current item's culture instead of the culture of the current thread.
HTML5 Report Viewer
The report viewer's default template file telerikReportViewerTemplate.html has been modified:
A "trv-report-pager" class has been added to the toolbar page number input list item, so it can be easily accessible and customizable via CSS rules.
A non-breaking space has been added between the Export button icons for readability.
It is recommended to update the template manually with the above changes when using a custom template file.
Dependencies
WPF Report Viewer Dependencies
The viewer is built with Telerik UI Controls for WPF 2017.1.222.40. If you are using a newer version, consider adding binding redirects. For more information see: How to: Add report viewer to a WPF application
If you connect to a REST service or Report Server instance, you have to install the Microsoft ASP.NET Web API Client v.4.0.30506 NuGet package. Installing a newer version would require upgrading the project's target framework.
Silverlight Report Viewer Dependencies
The viewer is built with Telerik UI Controls for Silverlight 2017.1.222.1050.
Standalone Report Designer
TRDX and TRDP files created by the Standalone Report Designer use schema version
HTML5 Report Viewer Dependencies
The HTML5 Report Viewer depends on the following libraries:
Telerik Kendo UI (2015.3.930 or later)
jQuery (1.9.1 or later). | http://docs.telerik.com/reporting/upgrade-path-2017-r1-sp2 | 2017-07-20T18:33:47 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.telerik.com |
Turnkey Plugins \ Social Login for OpenCart 2.x
Social Login for OpenCart 2.x
Social Login allows your users to connect with one click to your OpenCart 2.x shop by using their social network accounts. Gather rich demographic information (age, gender, phone numbers ...) about your users without requiring them to fill out any forms. Obtain pre-validated email addresses and increase your data quality.
Social Login seamlessly integrates into your OpenCart shop and is also available from the official repository.
The download links are on the bottom of the page that opens when you click on the button below.
Download Social Login for OpenCart
b. Extract the /root/ folder
Extract all files and folders included in the downloaded .ZIP file to the root directory of your OpenCart installation. Existing files have to be overwritten. The following files need to be extracted:
Enable the social networks that you would like to use by ticking the corresponding checkboxes.
I do not see Social Login in my OpenCart shop
First open the Social Login settings in your OpenCart administration area and make sure that the Social Login Status is set to Enabled. Then click on the Layout Positions tab on top of the page and add Social Login to the positions where it should be displayed.
You can also visit the OpenCart GitHub repository to contribute to the development of this extension.
'OpenCart Module Files'], dtype=object)
array(['http://public.oneallcdn.com/img/docs/screenshots/opencart/opencart-2-install-module.png',
'OpenCart Install Social Login'], dtype=object)
array(['http://public.oneallcdn.com/img/docs/screenshots/opencart/opencart-2-setup-module.png',
'OpenCart Setup Social Login'], dtype=object) ] | docs.oneall.com |
Archiving old Talks¶
The old Oxford Talks system is about to be closed down. Talks entered on that system before February 2015 will be deleted.
If you would like to get a copy of your old talks:
- Go to
- Find your list and note down its ID number
- Type the following into the address bar of your browser substituting your list ID number for [ID number][ID number]?start_time=0
or, for a table (but not quite so much information)[ID number]?layout=empty&start_time=0
- You can copy and paste the information on the resulting web page into Notepad or a similar plain text app. | http://talksox.readthedocs.io/en/latest/user/general/old-talks.html | 2017-07-20T18:27:17 | CC-MAIN-2017-30 | 1500549423320.19 | [] | talksox.readthedocs.io |
Using VMware Workstation Player for Windows is updated with each release of the product or when necessary. This table provides the update history of Using VMware Workstation Player for Windows. Revision Description EN-001871-02 Updated Virtual Machine Processor Support to reflect the supported functionality. Corrected the procedure in Import a Windows XP Mode Virtual Machine to reflect the supported functionality. Corrected Add a New Virtual Hard Disk to a Virtual Machine to remove functionality not supported in Workstation Player. Updated Run an Unattended Workstation Player Installation on a Windows Host to reflect the supported functionality. Updated Installation Properties to remove parameters no longer supported. Removed "REMOVE Property Values". Updated Connecting USB Devices to Virtual Machines to add a statement for how to manually connect a USB device to a virtual machine. Updated Add a Host Printer to a Virtual Machine to add a prerequisite that the virtual machine must be powered on or off before adding a printer. Updated the global configuration file location in Disable Smart Card Sharing. Added a note in Map or Mount a Virtual Disk to a Drive on the Host System that this functionality is not supported in the standalone version of Workstation Player. A note was also added that Workstation Player does not support taking or deleting snapshots. Corrected Compact a Virtual Hard Disk to remove functionality not supported in Workstation Player. Updated Limitations of Moving a Virtual Machine to a Different Host to reflect the supported functionality. Updated Expand a Virtual Hard Disk to provide information on how to determine whether a virtual machine is a linked clone or the parent of a linked clone. A note was also added that Workstation Player does not support taking or deleting snapshots. note to Removing Hardware from a Virtual Machine stating that you cannot remove hardware from a virtual machine while it is in suspended state. EN-001871-01 Removed references to deprecated guest operating systems in the document. Removed procedures for Linux hosts in the document. Removed the following sections because the functionality was removed in a previous release: "Stream a Virtual Machine from a Web Server " "Make a Virtual Machine Available for Streaming " Removed requirement in Processor Requirements for Host Systems for "LAHF/SAHF support in long mode". This requirement applies only to older 64-bit CPUs produced before 2006. Updated Prepare the Host System to Use 3D Accelerated Graphics to add a statement clarifying OpenGL3.3 support. Updated Guest Operating Systems That Support Shared Folders for supported guest operating systems. Added Changing Automatic Bridging Settings. EN-001871-00 Initial release. | https://docs.vmware.com/en/VMware-Workstation-Player/12.0/com.vmware.player.win.using.doc/GUID-92479B8D-B4EF-4327-9C6B-D2DEE22B3C01.html | 2017-07-20T18:49:00 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.vmware.com |
When you run a PowerCLI cmdlet that assigns an image profile that is not Auto Deploy ready, a warning message appears.
Problem
When you write or modify rules to assign an image profile to one or more hosts, the following error results:
Warning: Image Profile <name-here> contains one or more software packages that are not stateless-ready. You may experience problems when using this profile with Auto Deploy.
Each VIB in an image profile has a stateless-ready flag that indicates that the VIB is meant for use with Auto Deploy. You get the error if you attempt to write an Auto Deploy rule that uses an image profile in which one or more VIBs have that flag set to FALSE.
You can use hosts provisioned with Auto Deploy that include VIBs that are not stateless ready without problems. However booting with an image profile that includes VIBs that are not stateless ready is treated like a fresh install. Each time you boot the host, you lose any configuration data that would otherwise be available across reboots for hosts provisioned with Auto Deploy.
Procedure
- Use Image Builder PowerCLI cmdlets to view the VIBs in the image profile.
- Remove any VIBs that are not stateless-ready.
- Rerun the Auto Deploy PowerCLI cmdlet. | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.install.doc/GUID-FD61DB9B-7AF4-4025-9700-A76EB9D62936.html | 2017-07-20T18:49:20 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.vmware.com |
HTTP Secondary Indexes
Secondary Indexes allows an application to tag a Riak object with one or more field/value pairs. The object is indexed under these field/value pairs, and the application can later query the index to retrieve a list of matching keys.
Request
Exact Match
GET /buckets/mybucket/index/myindex_bin/value
Range Query
GET /buckets/mybucket/index/myindex_bin/start/end
Range query with terms
To see the index values matched by the range, use return_terms=true.
GET /buckets/mybucket/index/myindex_bin/start/end?return_terms=true
Pagination
Add the parameter max_results for pagination. This will limit the results and provide for the next request a continuation value.

GET /buckets/mybucket/index/myindex_bin/start/end?return_terms=true&max_results=500
GET /buckets/mybucket/index/myindex_bin/start/end?return_terms=true&max_results=500&continuation=g2gCbQAAAAdyaXBqYWtlbQAAABIzNDkyMjA2ODcwNTcxMjk0NzM=
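The same paginated query can be driven from any HTTP client. Below is a minimal sketch in Python using the requests library; the host, bucket, index, and range are the illustrative values from the examples above, and the loop keeps requesting pages until no continuation value is returned:

```python
import requests

# Illustrative values matching the examples above.
url = "http://localhost:8098/buckets/mybucket/index/myindex_bin/start/end"
params = {"return_terms": "true", "max_results": 500}

while True:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    page = resp.json()

    # With return_terms=true each entry is a {term: key} pair.
    for entry in page.get("results", []):
        print(entry)

    continuation = page.get("continuation")
    if not continuation:
        break  # last page reached
    params["continuation"] = continuation
```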
Streaming
GET /buckets/mybucket/index/myindex_bin/start/end?stream=true
Response
Normal status codes:
200 OK
Typical error codes:
400 Bad Request - if the index name or index value is invalid.
500 Internal Server Error - if there was an error in processing a map or reduce function, or if indexing is not supported by the system.
503 Service Unavailable - if the job timed out before it could complete
Example
$ curl -v * About to connect() to localhost port 8098 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8098 (#0) > GET /buckets/mybucket/index/field1_bin/val1 HTTP/1.1 > User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3 > Host: localhost:8098 > Accept: */* > < HTTP/1.1 200 OK < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Date: Fri, 30 Sep 2011 15:24:35 GMT < Content-Type: application/json < Content-Length: 19 < * Connection #0 to host localhost left intact * Closing connection #0 {"keys":["mykey1"]}% | http://docs.basho.com/riak/kv/2.2.0/developing/api/http/secondary-indexes/ | 2017-07-20T18:39:55 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.basho.com |
Sitefinity CMS comes with a built-in data provider that stores non-system libraries data in the database, and a set of blob storage providers that manage the storing of the binary content. However, you can also create your own providers and use them to manage your binary content.
This tutorial guides you through the steps required to create a custom provider. As an example, you will create a Dropbox libraries data provider.
Back To Top | http://docs.sitefinity.com/tutorial-create-a-dropbox-libraries-data-provider | 2017-07-20T18:36:25 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.sitefinity.com |
eventsource basics
Streaming updates with server-sent events
By Eric Bidelman
Originally published Nov. 30, 2010, updated: June 16, 2011
Summary
An introduction to Server-Sent Events. Techniques that applications use to receive updates from a server include:
Polling is a traditional technique used by the vast majority of AJAX applications. The basic idea is that the application repeatedly polls a server for data. If you're familiar with the HTTP protocol, you know that fetching data revolves around a request/response format. The client makes a request and waits for the server to respond with data. If none is available, an empty response is returned. So what's the big deal with polling? Extra polling creates greater HTTP overhead.
Long polling (Hanging GET / COMET) is a slight variation on polling. In long polling, if the server does not have data available, the server holds the request open until new data is made available. Hence, this technique is often referred to as a "hanging GET". When information becomes available, the server responds, closes the connection, and the process is repeated. The effect is that the server is constantly responding with new data as it becomes available. The shortcoming is that the implementation of such a procedure typically involves hacks such as appending script tags to an "infinite" iframe. We can do better than hacks!
Server-Sent Events, on the other hand, have been designed from the ground up to be efficient. When communicating using SSEs, a server can push data to your app whenever it wants, without the need to make an initial request. In other words, updates can be streamed from server to client as they happen. SSEs open a single unidirectional channel between server and client.
The main difference between Server-Sent Events and long polling is that SSEs are handled directly by the browser and the user simply has to listen for messages.
Server-Sent Events vs. WebSockets
Why would you choose Server-Sent Events over WebSockets? Good question.
One reason SSEs have been kept in the shadow is because later APIs like WebSockets provide a richer protocol to perform bi-directional, full-duplex communication. Having a two-way channel is more attractive for things like games, messaging apps, and for cases where you need near real-time updates in both directions. However, in some scenarios data doesn't need to be sent from the client. You simply need updates from some server action. A few examples would be friends' status updates, stock tickers, news feeds, or other automated data push mechanisms (e.g., updating a client-side Web SQL Database or IndexedDB object store). If you'll need to send data to a server, XMLHttpRequest is always a friend.
SSEs are sent over traditional HTTP. That means they do not require a special protocol or server implementation to get working. WebSockets, on the other hand, require full-duplex connections and new Web Socket servers to handle the protocol. In addition, Server-Sent Events have a variety of features that WebSockets lack by design, such as automatic reconnection, event IDs, and the ability to send arbitrary events.
JavaScript API
To subscribe to an event stream, create an EventSource object and pass it the URL of your stream:

if (!!window.EventSource) {
  var source = new EventSource('stream.php');
} else {
  // Result to xhr polling :(
}
Note: If the URL passed to the EventSource constructor is an absolute URL, its origin (scheme, domain, port) must match that of the calling page.
Next, set up a handler for the message event. You can optionally listen for open and error:

source.addEventListener('message', function(e) {
  console.log(e.data);
}, false);

source.addEventListener('open', function(e) {
  // Connection was opened.
}, false);

source.addEventListener('error', function(e) {
  if (e.readyState == EventSource.CLOSED) {
    // Connection was closed.
  }
}, false);

When updates are pushed from the server, the onmessage handler fires and new data is available in its e.data property. The magical part is that whenever the connection is closed, the browser will automatically reconnect to the source after ~3 seconds. Your server implementation can even have control over this reconnection timeout. See "Controlling the reconnection timeout" in the next section.
That's it. Your client is now ready to process events from stream.php.
Event Stream Format
Sending an event stream from the source is a matter of constructing a plaintext response, served with a text/event-stream Content-Type, that follows the SSE format. In its basic form, the response should contain a "data:" line, followed by your message, followed by two "\n" characters to end the stream:
data: My message\n\n
Multiline Data
If your message is longer, you can break it up by using multiple "data:" lines. Two or more consecutive lines beginning with "data:" will be treated as a single piece of data, meaning only one message event will be fired. Each line should end in a single "\n" (except for the last, which should end with two). The result passed to your message handler is a single string concatenated by newline characters. For example:

data: first line\n
data: second line\n\n

will produce "first line\nsecond line" in e.data. One could then use e.data.split('\n').join() to reconstruct the message sans "\n" characters.
Send JSON Data
Using multiple lines makes it easy to send JSON without breaking syntax:
data: {\n
data: "msg": "hello world",\n
data: "id": 12345\n
data: }\n\n
and possible client-side code to handle that stream:
source.addEventListener('message', function(e) {
  var data = JSON.parse(e.data);
  console.log(data.id, data.msg);
}, false);
Associating an ID with an Event
You can send a unique id with a stream event by including a line starting with "id:":

id: 12345\n
data: GOOG\n
data: 556\n\n

Setting an ID lets the browser keep track of the last event fired so that if the connection to the server is dropped, a special HTTP header (Last-Event-ID) is set with the new request. This lets the browser determine which event is appropriate to fire. The message event contains an e.lastEventId property.
Controlling the reconnection timeout
The browser attempts to reconnect to the source roughly 3 seconds after each connection is closed. You can change that timeout by including a line beginning with "retry:", followed by the number of milliseconds to wait before trying to reconnect.
The following example attempts a reconnect after 10 seconds:
retry: 10000\n
data: hello world\n\n
Specifying an event name
A single event source can generate different types of events by including an event name. If a line beginning with "event:" is present, followed by a unique name for the event, the event is associated with that name. On the client, an event listener can be setup to listen to that particular event.
For example, the following server output sends three types of events, a generic 'message' event, 'userlogon', and 'update' event:
data: {"msg": "First message"}\n\n
event: userlogon\n
data: {"username": "John123"}\n\n
event: update\n
data: {"username": "John123", "emotion": "happy"}\n\n
With event listeners setup on the client:
source.addEventListener('message', function(e) {
  var data = JSON.parse(e.data);
  console.log(data.msg);
}, false);

source.addEventListener('userlogon', function(e) {
  var data = JSON.parse(e.data);
  console.log('User login:' + data.username);
}, false);

source.addEventListener('update', function(e) {
  var data = JSON.parse(e.data);
  console.log(data.username + ' is now ' + data.emotion);
}, false);
Server Examples
A simple server implementation in PHP:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache'); // recommended to prevent caching of event data.

/**
 * Constructs the SSE data format and flushes that data to the client.
 *
 * @param string $id Timestamp/id of this connection.
 * @param string $msg Line of text that should be transmitted.
 */
function sendMsg($id, $msg) {
  echo "id: $id" . PHP_EOL;
  echo "data: $msg" . PHP_EOL;
  echo PHP_EOL;
  ob_flush();
  flush();
}

$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
Here's a similar implementation using Node JS:
var http = require('http');
var sys = require('sys');
var fs = require('fs');

http.createServer(function(req, res) {
  //debugHeaders(req);

  if (req.headers.accept && req.headers.accept == 'text/event-stream') {
    if (req.url == '/events') {
      sendSSE(req, res);
    } else {
      res.writeHead(404);
      res.end();
    }
  } else {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write(fs.readFileSync(__dirname + '/sse-node.html'));
    res.end();
  }
}).listen(8000);

function sendSSE(req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });

  var id = (new Date()).toLocaleTimeString();

  // Sends a SSE every 5 seconds on a single connection.
  setInterval(function() {
    constructSSE(res, id, (new Date()).toLocaleTimeString());
  }, 5000);

  constructSSE(res, id, (new Date()).toLocaleTimeString());
}

function constructSSE(res, id, data) {
  res.write('id: ' + id + '\n');
  res.write("data: " + data + '\n\n');
}

function debugHeaders(req) {
  sys.puts('URL: ' + req.url);

  for (var key in req.headers) {
    sys.puts(key + ': ' + req.headers[key]);
  }

  sys.puts('\n\n');
}
sse-node.html:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
  </head>
  <body>
    <script>
      var source = new EventSource('/events');
      source.onmessage = function(e) {
        document.body.innerHTML += e.data + '<br>';
      };
    </script>
  </body>
</html>
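For comparison, here is a similar minimal sketch using only the Python standard library (Python 3.7+). Like the Node example it serves the event stream on /events; it is intended for experimentation rather than production use:

```python
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != '/events':
            self.send_response(404)
            self.end_headers()
            return

        self.send_response(200)
        self.send_header('Content-Type', 'text/event-stream')
        self.send_header('Cache-Control', 'no-cache')
        self.end_headers()

        try:
            # Send an SSE every 5 seconds until the client disconnects.
            while True:
                payload = 'data: server time: %s\n\n' % time.strftime('%H:%M:%S')
                self.wfile.write(payload.encode('utf-8'))
                self.wfile.flush()
                time.sleep(5)
        except (BrokenPipeError, ConnectionResetError):
            pass  # client closed the connection


if __name__ == '__main__':
    ThreadingHTTPServer(('', 8000), SSEHandler).serve_forever()
```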
Cancel an event stream
Normally, the browser auto-reconnects to the event source when the connection is closed, but that behavior can be canceled by either the client or server.
To cancel a stream from the client, simply call source.close();. To cancel a stream from the server, respond with a non "text/event-stream" Content-Type or return an HTTP status other than 200 OK (e.g., 404 Not Found).
Both methods will prevent the browser from re-establishing the connection.
A Word on Security
From the WHATWG's section on Cross-document messaging security:
Authors should check the origin attribute to ensure that messages are only accepted from domains that they expect to receive messages from. Otherwise, bugs in the author's message handling code could be exploited by hostile sites.
So as an extra level of precaution, be sure to verify that e.origin in your message handler matches your app's origin:

source.addEventListener('message', function(e) {
  if (e.origin != '') {
    alert('Origin was not');
    return;
  }
  ...
}, false);
Another good idea is to check the integrity of the data you receive:
Furthermore, even after checking the origin attribute, authors should also check that the data in question is of the expected format...
Demo
A demo app written in PHP is available on googlecodesamples.com along with its source. | https://docs.webplatform.org/wiki/tutorials/eventsource_basics | 2015-02-27T07:31:31 | CC-MAIN-2015-11 | 1424936460577.67 | [] | docs.webplatform.org |
User Guide
Local Navigation
Downgrade or return to the previous version of the BlackBerry Device Software over the wireless network
Depending on the options that your wireless service provider or administrator sets, you might not be able to perform this task.
- On the Home screen or in a folder, click the Options icon.
- Click Device > Software Updates.
- If you recently updated your BlackBerry Device Software, to return to the previous software version, press the
key > View Result > Downgrade.
- To downgrade to an earlier software version, scroll to a software version that the
icon appears beside. Click Perform Downgrade. Follow the instructions on the screen.
Previous topic: Update the BlackBerry Device Software
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/37644/Downgrade_return_to_previous_device_sw_OTA_61_1553996_11.jsp | 2015-02-27T07:57:14 | CC-MAIN-2015-11 | 1424936460577.67 | [] | docs.blackberry.com |
-
1 Comment
Sebastien Brunot
Would it be possible to document mvn command line options too ? I can't manage to find them on the site
| http://docs.codehaus.org/display/MAVENUSER/Proposed+Documentation?showChildren=false | 2015-02-27T07:36:14 | CC-MAIN-2015-11 | 1424936460577.67 | [array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/emoticons/sad.png',
'(sad)'], dtype=object) ] | docs.codehaus.org |
Key enhancements to the previous release
- Support for Consul
- Ability to add storage to a head-only node
- Ability to import data from an external storage source
- Ability to bootstrap and deploy PX through external automated procedures. PX takes command line parameters and does not need a config.json.
- Support for Rancher
Key bugs addressed since the previous release
- Fix for occasional PX node restart. Occasionally during heavy load, a PX node would get marked down by gossip and asked to restart. While this did not cause any noticeable delay in IO, it would flood the logs with gossip error messages. This bug has been fixed.
- Minor CLI enhancements around displaying IP addresses instead of node IDs (where possible). | https://docs.portworx.com/release-notes-1-0-6.html | 2017-08-16T23:48:51 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.portworx.com |
- Step 1: Install and license Docker UCP
- Step 2: Update your docker.service file
- Step 3: Launch a container
You can use Portworx to implement storage for Docker Universal Control Plane (UCP). This section is qualified using Docker 1.11 and Universal Control Plane 1.1.2.
Step 1: Install and license Docker UCP
Follow the instructions for Installing Docker UCP.
Note:
You must run Docker Commercially Supported (CS) Engine.
After installing Docker UCP, you must license your installation.
Step 2: Update your docker.service file
Not all nodes within a UCP cluster will necessarily be running Portworx. For UCP to properly identify Portworx nodes, 3: Launch a container. | https://docs.portworx.com/scheduler/docker/ucp.html | 2017-08-16T23:47:42 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['/images/constraints.png', 'UCP GUI constraints'], dtype=object)] | docs.portworx.com |
Automatic Invoice On Order
From PhpCOIN Documentation
phpCOIN can automatically create an invoice when a customer places an order, or it can leave invoice creation up to you.
To have phpCOIN automatically create an invoice for every order, set [Admin] -> [Parameters] -> [ordering] -> [invoices] -> [Order Invoice: Auto-Create From Order] to YES.
Within [Admin] -> [Parameters] -> [all] -> [invoices] are other parameters that will control the creation of the invoice, such as the billing cycle, delivery method, status and delivered flag. The default parameters will work for most installations, but you may wish to change the default behavior.
The "Default Billing Cycle" is the billing cycle that will be used for all auto-generated invoices. There can be only one "default billing cycle", so we recommend that you set it to your most commonly used billing cycle and manually change the invoice for those orders that use a different billing cycle. | http://docs.phpcoin.com/index.php?title=Automatic_Invoice_On_Order | 2017-08-16T23:46:08 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.phpcoin.com |
Getting Started
You can add RadRadioButton either at design time or at run time:
Design Time
- To add a RadRadioButton to your form, drag a RadRadioButton from the toolbox onto the surface of the form designer.
- Like a standard button, you can control the displayed text by setting the Text property.
- Double click the RadRadioButton at design time to generate the ToggleStateChanged event.
Run Time
To programmatically add a RadRadioButton to a form, create a new instance of a RadRadioButton, and add it to the form Controls collection.
Adding RadRadioButton at run time
RadRadioButton radioButton = new RadRadioButton(); radioButton.Text = "Medium size"; radioButton.ToggleState = Telerik.WinControls.Enumerations.ToggleState.On; this.Controls.Add(radioButton);
Dim radioButton As New RadRadioButton() radioButton.Text = "Medium size" radioButton.ToggleState = Telerik.WinControls.Enumerations.ToggleState.[On] Me.Controls.Add(radioButton)
The following tutorial demonstrates creating two groups of radio buttons that act independently of one another. Choices are reflected in a label as they are selected.
1. Drop two RadGroupBoxes on the form.
2. Drop three RadRadioButtons on the first groupbox. Set their Text properties to "Small", "Medium" and "Large".
3. Drop three RadRadioButtons on the second groupbox. Set their Text properties to "Latte", "Mocha", and "Hot Chocolate".
4. Drop a RadLabel on the form. Set the name of the RadLabel to "lblStatus".
5. Hold down the Shift key and select all six RadRadioButtons with the mouse.
6. Click the Events tab of the Properties Window.
7. Double click the ToggleStateChanged event to create an event handler. Replace the code with the following:
Handling the ToggleStateChanged Event
void radRadioButton1_ToggleStateChanged(object sender, StateChangedEventArgs args) { lblStatus.Text = (sender as RadRadioButton).Text + " is selected"; }
Private Sub radRadioButton1_ToggleStateChanged(ByVal sender As Object, ByVal args As StateChangedEventArgs) lblStatus.Text = (TryCast(sender, RadRadioButton)).Text + " is selected" End Sub
8. Press F5 to run the application. Notice that selections made on radio buttons in the panel are independent of the radio button choices on the form. RadRadioButton determines the radio groups by the control parent. All RadRadioButtons sharing the same parent e.g. RadGroupBox, RadPanel or a Form will be part of one group.
RadRadioButtons are grouped according to their parent. You can place a set of RadRadioButtons on a panel so that the choices made will be mutually exclusive, i.e. when one radio button is chosen, the others are deselected. By including multiple parents with their own RadRadioButtons you can have multiple groups of radio buttons acting independently. | http://docs.telerik.com/devtools/winforms/buttons/radiobutton/getting-started | 2017-08-16T23:48:42 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['images/buttons-radiobutton-getting-started001.png',
'buttons-radiobutton-getting-started 001'], dtype=object)] | docs.telerik.com |
How to track share analytics with Revive Old Post
Revive Old Post lets you track how much traffic to your website your shares are generating. Firstly you would need to set up Google Analytics on your website.
If you have not done this before, a good guide can be found here: How to install Google Analytics in WordPress
If you already have Google Analytics installed then the next steps are simple!
Firstly go to Revive Old Post->General Settings Tab->Scroll down and enable "Google Analytics Campaign Tracking"
That's it! You're done. To see these statistics in Google Analytics, first log in, then go to Acquisition->All Traffic->Source/Medium. From there you will be able to see all traffic generated by Revive Old Post using Google Analytics Campaign Tracking.
| http://docs.themeisle.com/article/494-how-to-track-share-analytics-with-revive-old-post | 2017-08-16T23:58:28 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/582a7c7ec697916f5d05008e/file-8HR7UYxtLx.png',
None], dtype=object) ] | docs.themeisle.com |
Database Replication and Clustering
The trepctl backup command performs a backup of the corresponding database for the selected service.
trepctl backup [ -backup agent ] [ -limit s ] [ -storage agent ]
Where:
Table 8.39. trepctl backup Command Options
Without specifying any options, the backup uses the default configured backup and storage system, and will wait indefinitely until the backup process has been completed:
shell> trepctl backup
Backup completed successfully; URI=storage://file-system/store-0000000002.properties
The return information gives the URI of the backup properties file. This information can be used when performing a restore operation as the source of the backup. See Section 8.21.3.15, "trepctl restore Command". Different backup solutions may require that the replicator be placed into the OFFLINE state before the backup is performed.
A log of the backup operation will be stored in the replicator log directory, in a file corresponding to the backup tool used (e.g. mysqldump.log).
If multiple backup agents have been configured, the backup agent can be selected on the command-line:
shell>
trepctl backup -backup mysqldump
If multiple storage agents have been configured, the storage agent can be selected using the -storage option:
shell>
trepctl backup -storage file
A backup will always be attempted, but the timeout to wait for the backup to be started during the command-line session can be specified using the -limit option. The default is to wait indefinitely. However, in a scripted environment you may want to request the backup and continue performing other operations. The -limit option specifies how long trepctl should wait before returning.
For example, to wait five seconds before returning:
shell> trepctl -service alpha backup -limit 5
Backup is pending; check log for status
The backup request has been received, but not completed within the allocated time limit. The command will return. Checking the logs shows the timeout:
... management.OpenReplicatorManager Backup request timed out: seconds=5
Followed by the successful completion of the backup, indicated by the URI provided in the log showing where the backup file has been stored.
... backup.BackupTask Storing backup result... ... backup.FileSystemStorageAgent Allocated backup location: » uri =storage://file-system/store-0000000003.properties ... backup.FileSystemStorageAgent Stored backup storage file: » file=/opt/continuent/backups/store-0000000003-mysqldump_2013-07-15_18-14_11.sql.gz length=0 ... backup.FileSystemStorageAgent Stored backup storage properties: » file=/opt/continuent/backups/store-0000000003.properties length=314 ... backup.BackupTask Backup completed normally: » uri=storage://file-system/store-0000000003.propertiess
The URI can be used during a restore. | https://docs.continuent.com/tungsten-clustering-5.1/cmdline-tools-trepctl-command-backup.html | 2017-08-16T23:43:50 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.continuent.com |
Notification Strategies allow you to implement specific behaviors for when Notifications are created in your application.
You can generate specific algorithms to determine when the BeaconNotification.createNotification method is called.
For example, you can decide to display a beacon notification only for specific bus stops or specific bus lines.
You can also decide to only send a maximum of 2 notifications every 30 min.
Connecthings' SDK provides you with a sample set of Notifications Strategies, which you can freely use. You can also code your own, and implement as many as you want in your application.
You can configure the strategies you want to use directly on the App Management section of the Adtag Platform, including your own strategies.
Your strategies will then be automatically activated or deactivated in your application depending on the schedulingInterval value you set in the App Management section.
Note:
Notification Strategies are simple, unitary rules to facilitate beacon notification management. Complex rules can be achieved by cumulating various Notification Strategies.
Using Notification Strategies provided by Connecthings
We provide three built-in Notification Strategies.
Two are directly available through the Adtag Application Management section, under the Spam Management section in the Notifications tab.
The last strategy is the Spam Region Filter Strategy, which allows you to launch new beacon notifications only when an entry into a new "region" is detected.
A region is an Adtag dynamic field, which enables you to dynamically organize beacons into regions at any time.
This strategy can be integrated using the ATBeaconNotificationStrategySpamRegionFilter class on iOs or the BeaconNotificationStrategySpamRegionFilter class on Android.
Testing The Region Filter Strategy
Register strategy in Adtag
Connect to the Adtag Platform
Click on your name at top right of your screen
Select Application Management in the menu
A page with the list of applications configured on your account opens
Select your application
Click on the Notifications tab
Activate the custom strategies
Add the region filter strategy under the name spamRegionFilter to the list of custom strategies
Test the strategy
- Clone the beacon-tutorial repository
git clone
Open the project/folder :
- Android: android > beacon> 7-Default-Notification-Strategy > Complete with your Android Studio
- iOs: ios > beacon > 7-Default-Notification-Strategies > Complete and launch cocoapod.
Configure the SDK as described in the quickstart tutorial
Start testing
The Notification Strategy interface/delegate
A Notification Strategy must implement the following methods:
getName: this method returns the string key associated to your notification strategy that was created. You must add this key to the appropriate custom notification field on the Adtag platform.
updateParameters: for now, this method is for Connecthings' SDK private use.
deleteCurrentNotification: query to know if the SDK has to delete the currently displayed notification. If no beacon notification is displayed, the method is not called. By default, the method must return true.
createNewNotification: query to know if the SDK has to create a new notification. If the method deleteCurrentNotification returns "false", this method is not called. By default, the method must return true.
onNotificationCreated: method to notify the Strategy that a notification has been created.
onNotificationDeleted: method to notify the Strategy that a notification has been deleted.
onBackground: method to notify the Strategy that the application is going into the background.
onForeground: method to notify the Strategy that the application is activated / in the foreground.
onStartMonitoringRegion: method to notify the Strategy that the SDK has started to monitor a new region based on the UUID, major, and minor of the nearest and last detected beacon.
save: allows to save data needed by the strategy when the application restarts.
load: allows to retrieve the data previously stored by the strategy.
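To make this contract concrete, here is a sketch of the "at most 2 notifications every 30 minutes" rule mentioned earlier. It is written in Python purely for readability; real strategies are classes written against the Android or iOS SDK interfaces, and the method names below only mirror the contract described above:

```python
import time


class MaxTwoPerHalfHourStrategy:
    """Illustrative throttling strategy: at most 2 notifications per 30 minutes."""

    WINDOW_SECONDS = 30 * 60
    MAX_NOTIFICATIONS = 2

    def __init__(self):
        self.creation_times = []  # timestamps of recently created notifications

    def get_name(self):
        # Key to register in the Adtag custom strategies list (hypothetical name).
        return "maxTwoPerHalfHour"

    def delete_current_notification(self, beacon_content):
        # Default behaviour: always allow the displayed notification to be replaced.
        return True

    def create_new_notification(self, beacon_content):
        # Allow a new notification only if fewer than MAX_NOTIFICATIONS were
        # created inside the sliding 30-minute window.
        now = time.time()
        self.creation_times = [t for t in self.creation_times
                               if now - t < self.WINDOW_SECONDS]
        return len(self.creation_times) < self.MAX_NOTIFICATIONS

    def on_notification_created(self, beacon_content):
        self.creation_times.append(time.time())

    def save(self):
        # Persist the timestamps so the rule survives an application restart.
        return {"creation_times": self.creation_times}

    def load(self, state):
        self.creation_times = state.get("creation_times", [])
```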
Notification Lifecycle with Notification Strategies
The following diagram gives an overview of the notification lifecycle, and includes the various queries to the Notification Strategy methods.
| https://docs.connecthings.com/2.7/ios/discover-beacon-notification-strategies.html | 2017-08-16T23:29:36 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['img/strategynotification.png', None], dtype=object)] | docs.connecthings.com |
- Install and configure Docker
- Specify storage
- Run PX
- Access the pxctl CLI
- Adding Nodes
- Application Examples
To install and configure PX via the Docker CLI, use the command-line steps in this section.
Important:
PX stores configuration metadata in a KVDB (key/value store), such as Etcd or Consul. If you have an existing KVDB, you may use that. If you want to set one up, see the etcd example for PX
Install and configure Docker
PX requires a minimum of Docker version 1.10 to be installed. Follow the Docker install guide to install and start the Docker Service.
Important:
If you are running a version prior to Docker 1.12 or running Docker on Ubuntu 14.04 LTS, additional configuration of the Docker service may be required before PX can export mounts; refer to the Portworx installation notes for your platform.
Specify storage
To list the storage devices available on a node, run lsblk:
# lsblk
Note that partitions are shown under the TYPE column as part.
Identify the storage devices you will be allocating to PX. PX can run in a heterogeneous environment, so you can mix and match drives of different types. Different servers in the cluster can also have different drive configurations.
Run PX
You can now run PX via the Docker CLI as follows:

# docker run -d --net=host --privileged --shm-size=384M \
    -v /run/docker/plugins:/run/docker/plugins \
    -v /dev:/dev \
    -v /etc/pwx/config.json:/etc/pwx/config.json \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /var/lib/osd:/var/lib/osd:shared \
    -v /opt/pwx/bin:/export_bin \
    portworx/px-dev -daemon -k etcd://myetc.company.com:2379 -c MY_CLUSTER_ID -s /dev/sdb -s /dev/sdc
Where the following arguments are provided to the PX daemon:
-daemon > Instructs PX to start in daemon mode. Other modes are for service users only.
-k > Points to your key value database, such as an etcd cluster or a consul cluster.
-userpwd > username and password for ETCD authentication in the form <user_name>:<passwd>
-ca > location of CA file for ETCD authentication
-cert > location of certificate for ETCD authentication
-key > location of certificate key for ETCD authentication
-acltoken > ACL token value used for Consul authentication
-c > Specifies the cluster ID that this PX instance is to join. You can create any unique name for a cluster ID.
-s > Specifies the various drives that PX should use for storing the data.
-a > Instructs PX to use any available, unused and unmounted drive. PX will never use a drive that is mounted.
-A > Instructs PX to use any available, unused and unmounted drives or partitions. PX will never use a drive or partition that is mounted.
-f > Optional. Instructs PX to use an unmounted drive even if it has a filesystem on it.
-z > Optional. Instructs PX to run in zero storage mode. In this mode, PX can still provide virtual storage to your containers, but the data will come over the network from other PX nodes.
-d > Optional. Specifies the data interface.
-m > Optional. Specifies the management interface.
The following Docker runtime command options are explained:
--privileged > Sets PX to be a privileged container. Required to export block device and for other functions.
--net=host > Sets communication to be on the host IP address over ports 9001-9003. Future versions will support separate IP addressing for PX.
--shm-size=384M > PX advertises support for asynchronous I/O. It uses shared memory to sync across process restarts.
-v /run/docker/plugins > Specifies that the volume driver interface is enabled.
-v /dev > Specifies which host drives PX can see. Note that PX only uses drives specified in config.json. This volume flag is an alternate to --device=\[\].
-v /etc/pwx/config.json:/etc/pwx/config.json > the configuration file location.
-v /var/run/docker.sock > Used by Docker to export volume container mappings.
-v /var/lib/osd:/var/lib/osd:shared > Location of the exported container mounts. This must be a shared mount.
-v /opt/pwx/bin:/export_bin > Exports the PX command line (**pxctl**) tool from the container to the host.
Optional - running with-dev
At this point, Portworx should be running on your system. To verify, run
docker ps.
Authenticated
etcd and
consul",
Access the pxctl CLI
After Portworx is running, you can create and delete storage volumes through the Docker volume commands or the pxctl command line tool, which is exported to /opt/pwx/bin/pxctl. With pxctl, you can also inspect volumes, the volume relationships with containers, and nodes.
To view all pxctl options, run:
# /opt/pwx/bin/pxctl help
To view global storage capacity
For more on using pxctl, see the CLI Reference..
Adding Nodes
To add nodes to increase capacity and enable high availability, simply repeat these steps on other servers. As long as PX is started with the same cluster ID, they will form a cluster.
Application Examples
After you complete this installation, continue with the set up to run stateful containers with Docker volumes: | https://docs.portworx.com/install/docker.html | 2017-08-16T23:48:31 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.portworx.com |
Deleting Your AWS CloudFormation Live Smooth Streaming Stack
When your live event is over, delete the stack that you created for Live Smooth Streaming. This deletes the AWS resources that were created for your live event, and stops the AWS charges for those resources.
To delete an AWS CloudFormation stack for live streaming
Sign in to the AWS Management Console and open the AWS CloudFormation console at.
Check the checkbox for the stack, and click Delete Stack.
Click Yes, Delete to confirm.
To track the progress of the stack deletion, check the checkbox for the stack, and click the Events tab in the bottom frame. | http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/IIS4.1DeletingStack.html | 2017-08-16T23:58:25 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.aws.amazon.com |
Configuring audit logging to a logback log file
Configuring audit logging in DataStax Enterprise.
If you've enabled audit logging and set the logger to output to the SLF4JAuditWriter
as described in Configuring and using data auditing,.
Configuring data auditing
You can configure which categories of audit events to log, and whether to omit operations against specific keyspaces from audit logging.
Procedure
- Open the logback.xml file in a text editor.
- Accept the default settings or change the properties in the logback.xml file to configure data auditing:
<!->5< beyond overhead that is caused by regular processing.
- Restart the node to see changes in the log. | https://docs.datastax.com/en/datastax_enterprise/4.7/datastax_enterprise/sec/secAuditingLogback.html | 2017-08-16T23:39:18 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.datastax.com |
Upgrading inSync client installed on a user's laptop
Why upgrade inSync client?
With every new release of inSync, a new version of inSync Client too is released. The new client contains fixes for issues identified in the previous releases and support for the new functionalities in the latest release. Therefore, to take advantage of the new features and the fixes, you must upgrade the inSync Client installed on user laptops.
You can trigger inSync Client upgrade on user laptops from the inSync Admin Console. Alternatively, you can provide the latest inSync Client installer to the users and ask them to upgrade inSync client on their laptops.
Procedure.
- Click Upgrade. | https://docs.druva.com/004_inSync_Professional/5.3.1/030_Profile%2C_User%2C_and_Device_Management/030_Managing_user_devices_and_inSync_client_installations/080_Upgrading_inSync_client_installed_on_a_user's_laptop | 2017-08-16T23:48:18 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.druva.com |
VMware backup fails if auto-enabled CBT is selected for a VM with incompatible hardware
Problem description
Backup from virtual machine fails with VMOMI error. The issue occurs when you have an old hardware version of a virtual machine and it fails to back up when Auto-Enable CBT is selected in VM Backup Policy. The following entry is logged in the detailed backup job logs.
You can get the VM hardware version of the VM (vmx-04) from the Phoenix log file.
Resolution
Upgrade the hardware version of the virtual machine to 7 or later. Virtual machine hardware version 7, introduced with vSphere 4.0, is the first version released that supports CBT.
If VM hardware upgrade is not possible, move the virtual machine to a different VM server group with a VM backup policy that has disabled the Auto enable CBT setting. | https://docs.druva.com/Knowledge_Base/Phoenix/Troubleshooting/VMware_backup_fails_if_auto-enabled_CBT_is_selected_for_a_VM_with_incompatible_hardware | 2017-08-16T23:48:07 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['https://docs.druva.com/@api/deki/files/30507/PhoenixLogFile.png?revision=1&size=bestfit&width=507&height=161',
None], dtype=object) ] | docs.druva.com |
How to Change Height of Slider in LawyeriaX
For some of our users, the height of the slider in LawyeriaX might be too big, or even too small. In case this happens, we created a document which will help you adjust the height of the slider just the way you need it.
First of all, please install the plugin Advanced CSS Editor. After that, go to Appearance -> Customize -> Advanced CSS Editor and add the following code:
#main-slider .item { max-height: 500px !important; } .item-inner { padding: 10px }
You can adjust the height with the values you need. | http://docs.themeisle.com/article/372-how-to-change-height-of-slider-in-lawyeriax | 2017-08-16T23:57:03 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/57768b38903360258a10dcee/file-TEXBYyKm1n.png',
None], dtype=object) ] | docs.themeisle.com |
We removed our free Sandbox April 25th.
You can read more on our blog.
Django and MongoDB¶
This tutorial will show how to build a minimal Django project from scratch on DotCloud, storing data in a MongoDB database.
All the code presented here is also available on GitHub, at; moreover, you can read the whole.
We will need a python service to run our Django code. This service allows us to expose a WSGI-compliant web application. Django can do WSGI out of the box, so that’s perfect for us.
Since our goal is to show how Django and MongoDB work together, we will also add a mongodb service.
www: type: python db: type: mongodb
The role and syntax of the DotCloud Build File is explained in further detail in the documentation, at.
Specifying Requirements¶
A lot of Python projects use a requirements.txt file to list their dependencies. DotCloud detects this file, and if it exists, pip will be used to install the dependencies.
We need (at least) four things here:
- pymongo, the MongoDB client for Python;
- django_mongodb_engine, which contains the real interface between Django and MongoDB;
- django-nonrel, a fork of Django which includes minor tweaks to allow operation on NoSQL databases;
- djangotoolbox, which is not strictly mandatory for Django itself, but is required for the admin site to work.
To learn more about the specific differences between “regular” Django and the NoSQL version, read django-nonrel on All Buttons Pressed.
pymongo git+ hg+ hg+
pip is able to install code from PyPI (just like easy_install); but it can also install code from repositories like Git or Mercurial, as long as they contain a setup.py file. This is very convenient to install new versions of packages automatically without having to publish them on PyPI at each release – like in the present case.
See for details about pip and the format of requirements.txt.
Django Basic Files¶
Let’s pretend that our Django project is called hellodjango. We will add the essential Django files to our project. Actually, those files did not come out of nowhere: we just ran django-admin.py startproject hellodjango to generate them!
Note
The rest of the tutorial assumes that your project is in the hellodjango directory. If you’re following those instructions to run your existing Django project on DotCloud, just replace hellodjango with the real name of your project directory, of course.
The files:
wsgi.py¶
The wsgi.py file will bridge between the python service and our Django app.
We need two things here:
- inject the DJANGO_SETTINGS_MODULE variable into the environment, pointing to our project settings module;
- setup the application callable, since that is what the DotCloud service will be looking for.
import os os.environ['DJANGO_SETTINGS_MODULE'] = 'hellodjango.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler()
We can now push our application, by running dotcloud create djangomongo and dotcloud push djangomongo (you can of course use any application name you like). A Python service and a MongoDB service will be created, the code will be deployed, and the URL of the service will be shown at the end of the build. If you go to this URL, you will see the plain and boring Django page, typical of the “just started” project.
Add Database Credentials to settings.py¶
Now, we need to edit settings.py to specify how to connect to our database. When you deploy your application, these parameters are stored in the DotCloud Environment File. This allows you to repeat the deployment of your application (e.g. for staging purposes) without having to manually copy-paste the parameters into your settings each time.
If you don’t want to use the Environment File, you can retrieve the same information with dotcloud info djangomongo.db.
The Environment File is a JSON file holding a lot of information about our stack. It contains (among other things) our database connection parameters. We will load this file, and use those parameters in Django’s settings.
See for more details about the Environment File.
hellodjango/settings.py step 2:
# Django settings for hellodjango project. import json import os with open(os.path.expanduser('~/environment.json')) as f: env = json.load(f) DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( # ('Your Name', '[email protected]'), ) MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django_mongodb_engine', 'NAME': 'admin', 'HOST': env['DOTCLOUD_DB_MONGODB_URL'], 'SUPPORTS_TRANSACTIONS': False, } } # Local time zone for this installation. Choices can be found here: # …
Note
We decided to use the admin database here. This was made to simplify the configuration process. While you can actually use any database name you like (MongoDB will create it automatically), MongoDB admin accounts have to authenticate against the admin database, as explained in MongoDB Security and Authentication docs. If you want to use another database, you will have to create a separate user manually, or add some extra commands to the postinstall script shown in next sections.
Note
You might wonder why we put the MongoDB connection URL in the HOST parameter! Couln’t we just put the hostname, and then also set USER, PASSWORD, and PORT? Well, we could. However, when we will want to switch to replica sets, we will have to specify multiple host/port combinations. And one convenient way to do that is to use the Standard Connection String Format.
Quite conveniently, django_mongodb_engine will just pass the database parameters to pymongo‘s Connection constructor. By using the mongodb:// URL as our HOST field, we’re actually handing it to pymongo, which will do The Right Thing.
Disable “sites” and Enable “djangotoolbox”¶
By default, the application django.contrib.sites won’t behave well with the Django MongoDB engine. Under the hood, it boils down to differences in primary keys, which are strings with MongoDB, and integers elsewhere. It would of course be more elegant to fix sites in the first place, but for the sake of simplicity, we will just disable it since we don’t need it for simple apps.
Also, we need djangotoolbox to make user editing in the admin site work correctly. Long story short, djangotoolbox allows us to do some JOINs on non-relational databases.
hellodjango/settings.py step', ) # …
Django Admin Site¶
We will now activate the Django administration application. Nothing is specific to DotCloud here: we just uncomment the relevant lines of code in settings.py and urls.py.
hellodjango/settings.py step', ) # …
hellodjango/urls.py (updated):
from django.conf.urls.defaults import patterns, include, url # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Examples: # url(r'^$', 'hellodjango.views.home', name='home'), # url(r'^hellodjango/', include('hellodjango.foo.urls')), # Uncomment the admin/doc line below to enable admin documentation: # url(r'^admin/doc/', include('django.contrib.admindocs.urls')), # Uncomment the next line to enable the admin: url(r'^admin/', include(admin.site.urls)), )
If we push our application now, we can go to the /admin URL, but since we didn’t call syncdb yet, the database structure doesn’t exist, and Django will refuse to do anything useful for us.
Automatically Call syncdb¶
To make sure that the database structure is properly created, we want to call manage.py syncdb automatically each time we push our code. On the first push, this will create the Django tables; later, it will create new tables that might be required by new models you will define.
To make that happen, we create a postinstall script. It is called automatically at the end of each push operation.
#!/bin/sh python hellodjango/manage.py syncdb --noinput
A few remarks:
- this is a shell script (hence the #!/bin/sh shebang at the beginning), but you can also use a Python script if you like;
- the script should be made executable, by running chmod +x postinstall before the push;
- by default, syncdb will interactively prompt you to create a Django superuser in the database, but we cannot interact with the terminal during the push process, so we disable this thanks to --noinput.
If you push the code at this point, hitting the /admin URL will display the login form, but we don’t have a valid user yet, and the login form won’t have the usual Django CSS since we didn’t take care about the static assets yet.
Create Django Superuser¶
Since the syncdb command was run non-interactively, it did not prompt us to create a superuser, and therefore, we don’t have a user to login.
To create an admin user automatically, we will write a simple Python script that will use Django’s environment, load the authentication models, create a User object, set a password, and give him superuser privileges.
The user login will be admin, and its password will be password. Note that if the user already exists, it won’t be touched. However, if it does not exist, it will be re-created. If you don’t like this admin user, you should not delete it (it would be re-added each time you push your code) but just remove its privileges and reset its password, for instance.
#!/usr/bin/env python from wsgi import * from django.contrib.auth.models import User u, created = User.objects.get_or_create(username='admin') if created: u.set_password('password') u.is_superuser = True u.is_staff = True u.save()
#!/bin/sh python hellodjango/manage.py syncdb --noinput python mkadmin.py
At this point, if we push the code, we will be able to login, but we still lack the CSS that will make the admin site look nicer.
Handle Static and Media Assets¶
We still lack the CSS required to make our admin interface look nice. We need to do three things here.
First, we will edit settings.py to specify STATIC_ROOT, STATIC_URL, MEDIA_ROOT, and MEDIA_URL.
MEDIA_ROOT will point to /home/dotcloud/data. By convention, the data directory will persist across pushes. This is important: You don’t want to store media (user uploaded files...) in current or code, because those directories are wiped out at each push.
We decided to point STATIC_ROOT to /home/dotcloud/volatile, since the static files are “generated” at each push. We could have put them in current but to avoid conflicts and confusions we chose a separate directory.
The next step is to instruct Nginx to map /static and /media to those directories in /home/dotcloud/data and /home/dotcloud/volatile. This is done through a Nginx configuration snippet. You can do many interesting things with custom Nginx configuration files; gives some details about that.
The last step is to add the collectstatic management command to our postinstall script. Before calling it, we create the required directories, just in case.
hellodjango/settings.py step 5:
# … # Absolute filesystem path to the directory that will hold user-uploaded files. # Example: "/home/media/media.lawrence.com/media/" MEDIA_ROOT = '/home/dotcloud/data/media/' # URL that handles the media served from MEDIA_ROOT. Make sure to use a # trailing slash. # Examples: "", "" MEDIA_URL = '/media/' # Absolute path to the directory static files should be collected to. # Don't put anything in this directory yourself; store your static files # in apps' "static/" subdirectories and in STATICFILES_DIRS. # Example: "/home/media/media.lawrence.com/static/" STATIC_ROOT = '/home/dotcloud/volatile/static/' # URL prefix for static files. # Example: "" STATIC_URL = '/static/' # URL prefix for admin static files -- CSS, JavaScript and images. # Make sure to use a trailing slash. # Examples: "", "/static/admin/". ADMIN_MEDIA_PREFIX = '/static/admin/' # …
location /media/ { root /home/dotcloud/data ; } location /static/ { root /home/dotcloud/volatile ; }
postinstall (updated again):
#!/bin/sh python hellodjango/manage.py syncdb --noinput python mkadmin.py mkdir -p /home/dotcloud/data/media /home/dotcloud/volatile/static python hellodjango/manage.py collectstatic --noinput
After pushing this last round of modifications, the CSS for the admin site (and other static assets) will be found correctly, and we have a very basic (but functional) Django project to build on!
Wait for MongoDB to Start¶
At this point, if you try to push the app with a different name (e.g. dotcloud push djmongo), you will see database connection errors. Let’s see why, and how to avoid that!
On DotCloud, you get your own MongoDB instance. This is not just a database inside an existing MongoDB server: it is your own MongoDB server. This eliminates access contention and side effects caused by other users. However, it also means that when you deploy a MongoDB service, you will have to wait a little bit while MongoDB pre-allocates some disk storage for you. This takes about one minute.
If you did the tutorial step by step, you probably did not notice that, since there was probably more than one minute between your first push, and your first attempt to use the database. But if you try to push all the code again, it will try to connect to the database straight away, and fail.
To avoid connection errors (which could happen if we try to connect to the server before it’s done with space pre-allocation), we add a small helper script, waitfordb.py, which will just try to connect every 10 seconds. It exists as soon as the connection is successful. If the connection fails after 10 minutes, it aborts (as a failsafe feature).
postinstall (final touches):
#!/bin/sh python waitfordb.py python hellodjango/manage.py syncdb --noinput python mkadmin.py mkdir -p /home/dotcloud/data/media /home/dotcloud/data/static python hellodjango/manage.py collectstatic --noinput
#!/usr/bin/env python from wsgi import * from django.contrib.auth.models import User from pymongo.errors import AutoReconnect import time deadline = time.time() + 600 while time.time() < deadline: try: User.objects.count() print 'Successfully connected to database.' exit(0) except AutoReconnect: print 'Could not connect to database. Waiting a little bit.' time.sleep(10) except ConfigurationError: print 'Could not connect to database. Waiting a little bit.' time.sleep(10) print 'Could not connect to database after 10 minutes. Something is wrong.' exit(1)
With this last step, our Django deployment can be reproduced as many times as required (for staging, development, production, etc.) without requiring special manual timing! | http://docs.dotcloud.com/0.4/tutorials/python/django-mongodb/ | 2014-12-18T00:41:53 | CC-MAIN-2014-52 | 1418802765093.40 | [] | docs.dotcloud.com |
UI Guidelines
Local Navigation
Text
On the UI,® PlayBook™ tablet. If a complex concept must be explained, consider adding it to a Help screen.
- Do not use the product name "PlayBook" when referring to the tablet. Use the more general term "tablet" instead.
- Avoid using trademarks or other symbols on the UI. Add these to an About screen instead.
- Place labels to the left of UI components. The only exceptions are check boxes and radio buttons, which should have the labels to the right.
- Left-align all UI component labels. Labels are the text that is adjacent to (not within) the UI component.
- Center-align all text within UI components, such as text in buttons, drop-down lists, toggle buttons, and so on.
Fonts
Myriad is the default and preferred font for the BlackBerry® PlayBook™ tablet, because it is designed for easy reading for most users. The font is available in standard width and semi-condensed width. It can also be bolded and italicized. Your application can use other fonts, but you should use coordinated typefaces and be consistent.
Best Practices
- Use a font size of 21 pixels for normal text and 36 pixels for titles. You can use other font sizes, but avoid using any font smaller than 15 pixels.
- Use the standard font width for general purposes. The semi-condensed font width should be reserved for places where there is limited space.
- Use italic for emphasis. For example, you can use italic to emphasize a word, a short phrase, words in a foreign language that readers might not be familiar with, or titles of things or events.
- Avoid underlining text, except when you are creating a hyperlink within a longer string of text.
- Use a paragraph spacing of 2.5 times the font size. For example, with a font size of 18 pixels, use 45 pixels (18 x 2.5) paragraph spacing, or with a font size of 21 pixels, use 53 pixels (21 x 2.5) paragraph spacing.
Capitalization and punctuation
Consistency is the most important aspect of text capitalization and punctuation. You should pay close attention to the usage of particular terms, phrases, and abbreviations, and make sure they always appear identical.
Best Practices
- Do not use colons to terminate labels. Colons don't add any value when a label is adjacent to its UI component. Instead, vertically align multiple labels to the left, leaving space between the labels and the related UI components.
- Avoid unnecessary end punctuation. Whereas complete sentences must end with a period, short phrases and lists typically do not.
- Avoid using all uppercase characters. Uppercase text makes users feel like you are shouting at them.
- Use quotation marks (" ") when you refer to an alias or user-defined object name. For example, when a user deletes a bookmark, the name of the bookmark should appear in quotation marks, as in the message "Are you sure you want to delete "World News"?".
- Avoid using ellipses (…) except to indicate truncated text.
- Use title case capitalization for all UI component labels except check boxes, radio buttons, or labels that might read like a sentence. Capitalize the first word, the last word, and all other words except articles or prepositions with fewer than four letters.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/27299/Text_tablet_1526156_11.jsp | 2014-12-18T00:53:58 | CC-MAIN-2014-52 | 1418802765093.40 | [] | docs.blackberry.com |
This book assumes you are already familiar with the VoltDB feature set. The choice of features, especially those related to availability and durability, will impact sizing. Therefore, if you are new to VoltDB, we encourage you to visit the VoltDB web site to familiarize yourself with the product options and features.
You may also find it useful to review the following books to better understand the process of designing, developing, and managing VoltDB applications:
VoltDB Tutorial, a quick introduction to the product and is recommended for new users
Using VoltDB, a complete reference to the features and functions of the VoltDB product
VoltDB Administrator's Guide, information on managing VoltDB clusters
These books and more resources are available on the web from. | http://docs.voltdb.com/PlanningGuide/PrefResources.php | 2014-12-18T00:44:42 | CC-MAIN-2014-52 | 1418802765093.40 | [] | docs.voltdb.com |
Set Up AppStream 2.0 Stacks and Fleets
To stream your applications, Amazon AppStream 2.0 requires an environment consisting of a stack, an associated fleet and at least one application image. This topic walks through the steps needed to understand how to set up a stack and a fleet, and how to give users access to the stack. If you haven't already done so, we recommend that you go through the procedures in Getting Started with Amazon AppStream 2.0 before using this topic.
Set Up a Fleet
Set up and create a fleet from which user applications are executed and streamed.
To set up and create a fleet
Open the AppStream 2.0 console at.
You may see the welcome screen showing two choices: Try it now and Get started. Choose Get started, Skip. If you do not see a welcome screen, move on to the next step.
In the left navigation pane, choose Fleets.
Choose Create Fleet and provide a fleet name, optional display name, and optional description. Choose Next.
Choose an image with the applications to stream and choose Next. If you don't have an image to use, see Tutorial: Using an AppStream 2.0 Image Builder.
Provide details for your fleet by providing inputs for the following fields:
- Instance Type
Choose an instance type that matches the performance requirements of your applications. All streaming instances in your fleet launch with the instance type that you select.
- Network Access
Select a VPC and two subnets that have access to the network resources with which your applications need to interact. If you don’t have any subnets, create them using the help link provided and then refresh the subnets list. You can choose existing network settings or create new settings for this fleet. For Internet access on the fleet using your default VPC or with a VPC with a public subnet, choose Default Internet Access. For VPC, select your default VPC or VPC with a public subnet. For Subnet, select one or two public subnets. If you are controlling Internet access using a NAT gateway, leave Default Internet Access unselected and use the VPC with the NAT gateway. For more information, see Network Settings for Fleet and Image Builder Instances.
- Disconnect Timeout
Select the time that a streaming instance should remain active after users disconnect. If users try to reconnect to the streaming session after a disconnection or network interruption within this time interval, they are connected to the session instance they were disconnected from. If users try to connect after this timeout interval, a session launches with a new instance.
- Minimum Capacity
Choose a minimum capacity for your fleet based on the minimum number of users that are expected to be connected at the same time. Capacity is defined in terms of number of instances within a fleet, and every unique user session is served by an instance. For example, to have your stack support 100 concurrent users during low demand, enter the minimum capacity as 100. This ensures that 100 instances are running even if there are fewer than 100 users. If you are unsure about minimum capacity, accept the default value.
- Maximum Capacity
Choose a maximum capacity for your fleet based on the maximum number of users that are expected to be connected at the same time. Capacity is defined in terms of number of instances within a fleet, and every unique user session is served by an instance. For example, to have your stack support 500 concurrent users during high demand, enter the maximum capacity as 500. This ensures that up to 500 instances can be created on demand. If you are unsure about maximum capacity, accept the default value.
- Scaling Details (Advanced)
This section contains default scaling policies that can increase and decrease the capacity of your fleet under specific conditions. Expand this section to change the default scaling policy values. Regardless of scaling policy, your fleet size is always in the range of values specified by Minimum Capacity and Maximum Capacity. We recommend that you accept the default values and choose Review. You can change these values after fleet creation. For more information, see Fleet Auto Scaling for Amazon AppStream 2.0.
Review the details for the fleet, choose Edit for any section to change, and choose Create.
Upon completion of the previous steps, the initial status of your new fleet is listed as Starting in the Fleets dashboard. The fleet needs to be in Running status to be associated with a stack and used for streaming sessions. Over the next few minutes, the service sets up some resources and the fleet moves to Running status. Wait for the fleet to be in Running status before attempting to use it for streaming sessions.
Set Up a Stack
Set up and create a stack to control access to your fleet.
To set up and create a stack
On the left navigation pane, choose Stacks, Create Stack.
Provide a stack name, optional display name and description. For Fleet, select the fleet to associate with your stack. Choose Next.
To enable or disable persistent storage for the stack users, select or clear the Enable Home Folders check box For more information, see Persistent Storage with AppStream 2.0 Home Folders.
Choose Review.
Review the details for the stack, choose Edit for any section to change, and choose Create.
Upon completion of the previous steps, the status of your new stack is listed as Active in the Stacks dashboard. This signifies that the stack is available to work with from the console, but it cannot be used for streaming sessions until the associated fleet is in Running status.
Provide Access to Users
After you create a stack with an associated fleet, each user needs an active URL to access it. This procedure automatically creates a streaming URL that you can share with a user for access to apps.
To provide access to users
On the left navigation pane, choose Stacks, select a stack with a running fleet, and choose Actions, Create streaming URL.
For UserID, specify the user ID. Select an expiration time, which determines how long the generated URL is valid.
Choose Get URL. This displays a window with the URL. To copy the link to your clipboard, choose Copy Link.
When you are finished viewing and copying the generated URL, choose Exit.
Clean Up Resources
You can stop your running fleet and delete your active stack to free up resources and to avoid unintended charges to your account. We recommend stopping any unused, running fleets. For more information, see AppStream 2.0 Pricing.
To clean up your resources
In the navigation pane, choose Stacks and select the active stack.
Choose Actions, Disassociate Fleet.
From Stack Details, open the Associated Fleet link.
The associated fleet is automatically selected in the new window. Choose Actions, Stop. It usually takes about 5 minutes for a fleet to stop completely. Use the refresh button to update the status.
When the fleet has a Stopped status, choose Actions, Delete.
In the navigation pane, choose Stacks and select the active stack that you chose above.
Choose Actions, Delete.
Next Steps
For more information, see the following topics:
Learn how to use the AppStream 2.0 image builder to add your own apps and create new images that you can stream. For more information, see Tutorial: Using an AppStream 2.0 Image Builder.
Manage your AppStream 2.0 Home Folders. For more information, see Persistent Storage with AppStream 2.0 Home Folders.
Manage your AppStream 2.0 resources to optimize your streaming performance, automatic scaling, and cost structure. For more information, see Managing Amazon AppStream 2.0 Resources.
Control who has access to your AppStream 2.0 streaming instances. For more information, see Controlling Access to Amazon AppStream 2.0.
Monitor your AppStream 2.0 resources using Amazon CloudWatch. For more information, see Monitoring Amazon AppStream 2.0 with Amazon CloudWatch.
Troubleshoot your AppStream 2.0 streaming experience. For more information, see Troubleshooting. | http://docs.aws.amazon.com/appstream2/latest/developerguide/set-up-stacks-fleets.html | 2017-05-22T19:27:29 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.aws.amazon.com |
Why?¶
- It looks nice.
- It’s easy to style with CSS (no
_theme/directory required).
- The navigation is laid out better.
- It works well on small screens and mobile devices.
If you find sphinx-better-theme lacking in any of these areas, please open a Github issue.
It looks nice¶
By default, the only colors are the background color, the body text color, and the link color. Content is separated by layout and whitespace, not background color changes.
The font defaults are a little more modern. There is less variation in font styles. Content is wrapped to about 100 characters by default.
Some docs may look better with more liberal use of color. This theme supports that visual style via CSS rules.
It’s easy to style with CSS¶
Unlike every other Sphinx theme I’m aware of, sphinx-better-theme lets you
customize it with CSS without needing a
_theme/ directory, or anything
beyond a CSS file. And you don’t even need that; you can declare inline CSS in
your Sphinx config file.
html_theme_options = { 'inlinecss': 'color: green;', 'cssfiles': ['_static/my_style.css'], }
One of this project’s major goals is to make visual customization easier so that projects can brand their docs better.
It works well on small screens and mobile devices¶
The built-in themes do not work well on small screens. A few other third party themes get this right, but it’s not widespread.
Deficiencies¶
The markup isn’t easy enough to fully customize with CSS. One of the long-term goals of this project is to make the markup more semantic.
The placement of the logo image isn’t good. | http://sphinx-better-theme.readthedocs.io/en/latest/why.html | 2017-05-22T19:07:27 | CC-MAIN-2017-22 | 1495463607046.17 | [] | sphinx-better-theme.readthedocs.io |
After you have finished running the DaRT Recovery Image Wizard and created the recovery image, you can extract the boot.wim file from the ISO image file and deploy it as a recovery partition in a Windows 7 image.
To deploy DaRT in the recovery partition of a Windows 7 image
Create a target partition in your Windows 7 image that is equal to or greater than the size of the ISO image file that you created by using the DaRT Recovery Image Wizard.
The minimum size required for a DaRT partition is approximately 300MB. However, we recommend 450MB to accommodate for the remote connection functionality in DaRT.. 7 image with the recovery partition.
After your Windows 7 image is ready, distribute the image to computers in your enterprise by using your company’s standard image deployment process. For more information about how to create a Windows 7 image, see Building a Standard Image of Windows 7: Step-by-Step Guide.
For more information about how to deploy a recovery solution to reinstall the factory image in the event of a system failure, see Deploy a System Recovery Image.
Related topics
Deploying the DaRT 7.0 Recovery Image | https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/dart-v7/how-to-deploy-the-dart-recovery-image-as-part-of-a-recovery-partition-dart-7 | 2017-05-22T19:52:29 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.microsoft.com |
No Cassandra processing but high CPU usage
Extremely high CPU usage but no Apache Cassandra processing on Linux platforms.
Extremely high CPU usage but no Apache Cassandra™ processing on Linux platforms.
Check the CPU usage for the process
khugepaged. It may run as high as
100%, blocking other processes.
Cause:
Many modern Linux distributions ship with Transparent Hugepages enabled by default. When Linux uses Transparent Hugepages, the kernel tries to allocate memory in large chunks (usually 2 MB), rather than 4K. This can improve performance by reducing the number of pages the CPU must track. However, some applications still allocate memory based on 4K pages. This can cause noticeable performance problems when Linux tries to defrag 2 MB pages. For more information, see Cassandra Java Huge Pages and this RedHat bug report.
- A temporary fix: drop caches by entering:
sync && echo 3 > /proc/sys/vm/drop_caches
- A better solution: disable defrag for hugepages by entering:
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
- Another alternative: add
-XX:+AlwaysPreTouchto the jvm.options file. This change should be tested carefully before being put into production. For details, see Tuning Java resources and blog post. | http://docs.datastax.com/en/landing_page/doc/landing_page/cstarTroubleshooting/highCPU.html | 2017-05-22T19:27:21 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.datastax.com |
- PNP4Nagios 0.6.x
- PNP4Nagios 0.4.x. | http://docs.pnp4nagios.org/faq/10 | 2020-07-02T09:56:01 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.pnp4nagios.org |
Apigee Edge lets you make management API calls that are authenticated with OAuth2 tokens. Support for OAuth2 is enabled by default on Edge for the Cloud accounts. If you are using Edge for the Private Cloud, you cannot use OAuth2 without first setting up SAML.
How OAuth2 works (with the Apigee management API)
Calls to the Apigee management management API for the first time:
As Figure 1 shows, when you make your initial request to the management API:
- You request an access token. You can do this with the management API,
acurl, or
get_token. For example:
get_token Enter username:
[email protected] the password for user '[email protected]'
mypassw0rdEnter management API to get the tokens, you need to save them for later use yourself.
- You send a request to the management API with the access token.
acurlattaches the token automatically; for example:
acurl
If you use another HTTP client, you be sure to add the access token; for example:
curl -v \ -H "Authorization: Bearer ACCESS_TOKEN"
- The management API executes your request and typically returns a response with data.
OAuth2 flow: Subsequent requests
On subsequent requests, you will not need to exchange your credentials for a token. Instead, you can just include the access token you already have, as long as it hasn't expired yet:
As Figure 2 shows, when you already have an access token:
- You send a request to the management API with the access token.
acurlattaches the token automatically. If you use other tools, you need to add the token manually.
- The management API executes your request and typically returns a response with data.
OAuth2 flow: When your access token expires
When an access token expires (after 30 minutes), you can use the refresh token to get a new access token:
As Figure 3 shows, when your access token has expired:
- You send a request to the management API, but your access token has expired.
- The management API rejects your request as unauthorized.
- You send a refresh token to the Edge OAuth2 service. If you are using
acurl, this is done automatically for you.
- The Edge OAuth2 service responds with a new access token.
- You send a request to the management API with the new access token.
- The management API executes your request and typically returns a response with data.
Get the tokens
To get an access token that you can send to the management API, you can use the following
Apigee utilities, in addition to a utility such as
curl:
- get_token utility: Exchanges your Apigee credentials for access and refresh tokens.
- acurl utility: A
curlwrapper that exchanges your Apigee credentials for the tokens, passes the access token in requests, and automatically refreshes the access token when it expires.
- Token endpoints in the management API: Exchange your Apigee credentials for the access and refresh tokens via a call to the management API.
All of these utilities exchange your Apigee account credentials (email address and password) for an access token. These tokens are good for 30 minutes.
These utilities also send you a refresh token, which you can use to exchange for a new access token when your access token expires. A refresh token is good for 24 hours. So, after 24.5 hours, you will need to submit your credentials again for new tokens.
Access the management API with OAuth2
To access the management management API with
acurl and with
curl are described in
the sections that follow.
Use acurl
To access the management.
Notice that
acurl automatically passes the access token on the second request (you
do not need to pass your user credentials once
acurl stores the OAuth2 tokens). It
gets the token from
~/.sso-cli.
For more information, see Using acurl to access the management API.
Use curl
You can use
curl to access the management API. To do this, you must first get the
access and refresh tokens. You can get these using a utility such as
get_token or the
management API..
After you have successfully saved your access token, you pass it in the
Authorization header of your calls to the management API, as the following example
shows:
curl -v \ -H "Authorization: Bearer ACCESS_TOKEN"
Token expiration
Tokens have the following durations:
- Access tokens expire in 1799 seconds (approximately 30 minutes)
- Refresh tokens expire in 84600 seconds (approximately 24 hours).
- Management API: Send a request that includes:
- Refresh token
grant_typeform parameter set to "refresh_token" | https://docs.apigee.com/api-platform/system-administration/using-oauth2 | 2020-07-02T08:33:25 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['/api-platform/images/management-api/oauth-first-request.png',
'OAuth flow: First request'], dtype=object)
array(['/api-platform/images/management-api/oauth-subsequent-requests.png',
'OAuth flow: Subsequent requests'], dtype=object)
array(['/api-platform/images/management-api/oauth-refresh-token.png',
'OAuth flow: Getting a new access token'], dtype=object) ] | docs.apigee.com |
NAME()
The
NAME() global function is a generic way to implement plugins which call things differently in different applications.
For example, universities have lots of different names for their top level division. They may have names like ‘Faculties’, ‘Departments’ or ‘Schools’, depending on the institution’s history.
It’s also available as:
- a template function (which is the recommended way of using it)
- automatic substitution in form labels and other text
- using internationalisation features like the i() template function and
Locale
text()lookup
- as a standard text interpolation in the standard workflow plugin text system
There is a two argument version so longer phrases can be customised using the same system.
function NAME(name)
Returns a translated version of
name.
The
"std:NAME" service is called to look up the name, if it is implemented by any plugins.
Your
"std:NAME" service function may be called during plugin load, as an exception to the usual service rules. This means that plugins can use
NAME() to set up data structures, but it does mean you have to be very careful with the load order of your plugins.
If nothing translates the name,
name is returned unaltered.
The result is cached.
function NAME(code, defaultName)
Looks up
code using the same mechanism as the single argument version of
NAME(), but if it isn’t translated, the value of
defaultName will be returned.
Use this to allow longer messages to be replaced with customised defaults. The use of a code as a key, rather than the entire phrase, allows the default to be edited without updating all the customisations.
function O.interpolateNAMEinString(string)
Returns a new string with uses of
NAME() replaced by translated sub-strings, as described in String interpolation below.
String interpolation
Forms and standard plugins provide an alternative use of
NAME() through string interpolation.
When strings are interpolated, the pattern
NAME(.+?) is replaced by translated text. Note that the text does not use quotes.
The two argument form uses a
| character to separate the code and default name.
For example:
"The NAME(Head of Department) should action this request." "Please review this NAME(std:workflow:notes-private-label|Private note)"
Use
O.interpolateNAMEinString() to use string interpolation in your plugins.
Usage
Ordinary plugins just use the
NAME() function in their code or templates. As a matter of style, try to restrict use of
NAME() to templates:
<div> "Destination: " NAME("Faculty") </div> <div> NAME("example:routing:committee" "This application will be routed to the Faculty committee.") </div>
and only when necessary as the JavaScript function:
var view = { destination: NAME("Faculty") };
A plugin may translate names with a service such as:
P.implementService("std:NAME", function(name) { if(name === "ping") { return "pong"; } });
Returning nothing or
undefined will allow another service implementation to try translating a string. | https://docs.haplo.org/plugin/misc/name | 2020-07-02T09:53:54 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.haplo.org |
Message-ID: <1088675840.30447.1593684187083.JavaMail.confluence@ip-172-30-0-133> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_30446_1255440998.1593684187075" ------=_Part_30446_1255440998.1593684187075 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
The Mobile Module adds the ability to launch Vision module projects= on modern smartphones and tablets. This lets you keep track of your contro= l system while moving around your facility. The Mobile Module can be c= ombined with a remote-access networking architecture to allow global on-the= -go access to your control system.
Normally, you can't launch Vision projects on mobile devices. This is be= cause of the technical limitation of Java SE (Standard Edition) which = does not run on mobile devices. The Mobile Module gets around this lim= itation by launching the Client on the Gateway in a special headless (invis= ible) mode, and then using HTML5 and AJAX to show the Client's screen = on the mobile device's browser.
Typically, the mobile device connects to the Ignition Gateway via the fa= cility's wireless LAN (802.11) infrastructure. To launch a mobile clie= nt, the mobile device simply connects to the Ignition Gateway by point= ing its web browser to the Gateway's LAN address. It is important to unders= tand that normally, the traffic is not going over the device's cellula= r connection because the cellular connection connects to the internet,= and without explicit setup, an Ignition Gateway is not accessible fro= m the outside internet.
Remote mobile access (as in, beyond the reach of wifi) can be enabled th= rough the same networking strategies that enable remote access for sta= ndard Vision clients. For this, the mobile device must be able to acce= ss the Ignition Gateway via its cellular connection. One strategy woul= d be to set up a VPN router and configure the mobile device as a VPN client= . This way, the mobile device could directly access the LAN address of= the Gateway as if it were on-site. Another technique would be to put = the Ignition Gateway in a DMZ so that at least one NIC had a public IP addr= ess. Or, an edge router could be configured to port-forward the HTTP a= nd HTTPS ports to the Gateway. Coordination with your IT is advised wh= en attempting to set up remote access.
About Mobile=20 =20
Watch the Video= p>=20
It is possible to bypass the mobile project selection page by adding the= project name to the URL: jectName
Using the Mobile Option=20 =20
The Gateway will automatically redirect mobile devices to the Mobile Pro= ject Selection screen. In some cases, it may be preferable to view the norm= al Gateway Web Pages on the mobile device, such as the Gateway Logs, or the= Performance page. To bypass the mobile project selection screen, simply ty= pe the address of a specific page on your Gateway into your mobile device's= browser. For example, viewing the Overview page in the Status section on a= mobile device would look like the following: ipAddress:port/main/web/status/
You can add mobile project launch links to the home screen of a mobile = device. Links can point to either the project listing page or directly to a= single project. You can even type the launch link into the browser o= f your mobile device.
In the project properties<= /a> in the Designer, there is a page specific to Mobile clients if you have= the mobile module installed. Here you can enable the project for Mobile, s= et fit, auto-login, and more.
Mobile Clients can be set to specific sizes or told to fit themselves to= the device. By default, they are set to Fit to Device to allow for the man= y varied mobile phone and tablet screen sizes, and to allow for automatic a= djustments when rotating the mobile device. However, if the project needs t= o be set to a specific size, that can be done as well.
In the Mobile section of the Project Properties, There is a Viewport set= ting. You can see that the Fit to Device is sele= cted by default, which will resize itself when the device is rotated.&= nbsp;You can also choose Custom and specify any = size you wish.
Setting Mobile Size and Fit to Device=20 =20
Using the Mobile Module adds additional (client scoped) Tags to the Syst= em Tags folder under System > Mobile. This adds Remote = Address and User Agent values. If location is enabled in the mobile browser= , the Location Tags will show Latitude, Longitude, etc. | https://docs.inductiveautomation.com/exportword?pageId=26023646 | 2020-07-02T10:03:07 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.inductiveautomation.com |
ReMix UK 08 Tickets - Akin to Dromedary Pretzels
ie get them while they're hot. Following on from the massive success of last year's Mix 07, the ReMix UK 08 website has just this minute gone live. Here's a Key Facts document:
- ReMix UK 08 is all about bringing together the very best web design and development talent for a 48 hour conversation
- ReMix UK 08 takes place on 18/19th September at the Brighton Centre in, er, Brighton
- Brighton is a beautiful seaside town on the south coast of the UK made famous by hosting the 1974 Eurovision song contest when Abba won with "Waterloo"
- The Brighton Centre is a big, old, drab, grey conference centre but we're going to transform it into something very special
- Why Thu/Fri? So you don't have to get up early the morning after!
- Why not make a holiday of it? Michael Bolton's playing the Brighton Centre not long after ReMix UK (no he's not playing ReMix UK, I mean he's playing there after we play there)
- Early bird price is £239 for the first 300 places then it goes up to £349 so save £110 with EB
We are going to have some fantastic sessions, some amazing speakers (I should know, I own the dev track so you can blame me if it doesn't live up to your expectations!) and a whole lot of fun. It's rumoured that even TechEd Europe top sessions speaker Mr Mike Taulty might do a session (but only if I ask him very nicely - he can be quite tetchy).
What are you waiting for ? Head over to and register for ReMix UK 08 today!
Hope to see you there...
Technorati Tags: remixuk08,silverlight,wpf,expression,web,asp.net,designer,developer,remix,mix | https://docs.microsoft.com/en-us/archive/blogs/mikeormond/remix-uk-08-tickets-akin-to-dromedary-pretzels | 2020-07-02T10:29:21 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/BlogFileStorage/blogs_msdn/mikeormond/WindowsLiveWriter/ReMixUK08TicketsAkintoDromedaryPretzels_9BA0/clip_image001_bb440e63-5b27-40bd-b5a1-dbde6e4a09a6.gif',
'clip_image001'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/BlogFileStorage/blogs_msdn/mikeormond/WindowsLiveWriter/ReMixUK08TicketsAkintoDromedaryPretzels_9BA0/clip_image001%5B6%5D_12597181-3b02-4bb5-8871-cf03d311bd62.gif',
'clip_image001[6]'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/BlogFileStorage/blogs_msdn/mikeormond/WindowsLiveWriter/ReMixUK08TicketsAkintoDromedaryPretzels_9BA0/clip_image001%5B8%5D_ea394d53-dbb3-4276-a918-3b77211b2972.gif',
'clip_image001[8]'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/BlogFileStorage/blogs_msdn/mikeormond/WindowsLiveWriter/ReMixUK08TicketsAkintoDromedaryPretzels_9BA0/clip_image001%5B10%5D_0ebbad4f-2178-4287-b166-a2c6d9a05864.gif',
'clip_image001[10]'], dtype=object) ] | docs.microsoft.com |
New QFE for Visual Studio 2010 testing tools
A QFE for Visual Studio 2010 testing tools, which fixes some important issues faced by customers, is now available.The full list of issues fixed by this QFE can be found here and you can download this patch here.: '<name>' <control type> as it may have virtualized children. If the control being searched is descendant of '<name>' '<directory>\data.coverage' because it is being used by another process.
Publish failed or canceled. | https://docs.microsoft.com/en-us/archive/blogs/vstsqualitytools/new-qfe-for-visual-studio-2010-testing-tools | 2020-07-02T09:27:51 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
. We also offer commercial licenses - email for more information. | https://docs.rs/crate/recrypt/0.9.2 | 2020-07-02T09:41:14 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.rs |
We already get quite a lot of feedback on getsatisfaction and on this blog. But we want to become even better! If you would be so very kind as to fill out this questionnaire, we would be super grateful. Thanks in advance
Does Nemo documents work with Word Perfect documents? .wpd files?
By default no, but you can add in the settings under file types.
I’ll add it so that it gets indexed by default in future versions. Thanks 🙂 | https://www.nemo-docs.com/blog/nemo-documents-questionnaire/ | 2020-07-02T09:41:55 | CC-MAIN-2020-29 | 1593655878639.9 | [] | www.nemo-docs.com |
IGWN Conda Distribution¶
The IGWN Conda Distribution is a programme to manage and distribute software and environments used by the International Gravitational-Wave Observatory Network (IGWN) using the conda package manager, the conda-forge community, and the CernVM File System (CVMFS).
The distribution consists of a curated list of packages that are rendered into highly-specified software environment files that can then be downloaded and installed on most machines. For Linux, the environments are created and updated automatically and distributed globally using CVMFS and OASIS.
What is Conda?¶
Conda is an open source package management system. It enables users of Windows, macOS, or Linux, to create, save, load, and switch between software environments on your computer.
What environments are available?¶
For full details of what environments are included in the distribution, and their contents, see Environments.
How do I use the IGWN Conda Distribution?¶
For instructions on how to use Conda and the IGWN Conda Distribution, see Usage.
More helpful hints
See Tips and tricks for more useful hints to help you best utilise the IGWN Conda Distribution.
Contributing¶
If you would like to improve the IGWN Conda distributions, please consider one of the following actions: | https://computing.docs.ligo.org/conda/ | 2021-10-16T03:35:34 | CC-MAIN-2021-43 | 1634323583408.93 | [] | computing.docs.ligo.org |
Databricks SQL concepts
Preview
This feature is in Public Preview.
This article introduces the set of fundamental concepts you need to understand in order to use Databricks SQL effectively.
Interface
This section describes the interfaces that Databricks supports for accessing your Databricks SQL assets: UI and API.
UI: A graphical interface to dashboards and queries, SQL endpoints, query history, and alerts.
REST API An interface that allows you to automate tasks on Databricks SQL objects.
Data management
Visualization: A graphical presentation of the result of running a query.
Dashboard: A presentation of query visualizations and commentary.
Alert: A notification that a field returned by a query has reached a threshold.
Computation management
This section describes concepts that you need to know to run SQL queries in Databricks SQL.
Query: A valid SQL statement.
SQL endpoint: A computation resource on which you execute SQL queries.
Query history: A list of executed queries and their performance characteristics. | https://docs.databricks.com/sql/get-started/concepts.html | 2021-10-16T02:33:43 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['../../_images/landing-dbsql.png', 'Landing page'], dtype=object)] | docs.databricks.com |
Date: Fri, 8 May 2015 15:20:03 +0530 From: Avinash Sonawane <[email protected]> To: Pratik Singhal <[email protected]> Cc: [email protected] Subject: Re: Compile freebsd kernel on linux Message-ID: <CAJ9BSW_SZFbgspo66RC=L13bb=_Lc1kQzA_Hk_c_DimUmOfgZA@mail.gmail.com> In-Reply-To: <CAGf2gkP3MvJNYuitOFjncHL1U-pCFNrKMjzTSE6VfNzB7ggQHQ@mail.gmail.com> References: <CAGf2gkP3MvJNYuitOFjncHL1U-pCFNrKMjzTSE6VfNzB7ggQHQ@mail.gmail.com>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Fri, May 8, 2015 at 1:25 PM, Pratik Singhal <[email protected]> wrote: > ? > > Regards, > Pratik Singhal > _______________________________________________ > [email protected] mailing list > > To unsubscribe, send any mail to "[email protected]" May be more suited for [email protected] ? -- Avinash Sonawane (RootKea) PICT, Pune
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=426395+0+/usr/local/www/mailindex/archive/2015/freebsd-questions/20150510.freebsd-questions | 2021-10-16T02:57:55 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.freebsd.org |
package com.me; public class TaxCalculator { private Double percentBaseTax = 7.0; public Double calculateTax(Double price, Integer percentAdditionalTax) { return price * (this.percentBaseTax + percentAdditionalTax) / 100; } public Boolean isTaxFree(Double price) { if (price < 10) { return true; } return false; } }
Invoke Methods - Mule 4
Java methods (either instance or static) can be called through the
invoke and
invoke static operations in the Java module. Their return value is placed in the
payload of the output message or can be placed in a
target variable.
To get a detailed description of the configurable parameters for the Java Invoke operation, review the Java module reference operations section.
Invoke Instance Methods
In the following Java class
TaxCalculator, belongs to the
com.me package:
To invoke instance methods:
Use the new operation from the Java module to create an object on which a method is later invoked. This provides an instanced object that can then call one of its methods.
Set the
invokeoperation to call one of the object’s methods. Just like the
newoperation,
invoketakes a map for input parameters (if the method has them) and supports target parameters.
In the next example, an instance of the
TaxCalculator class is created and placed
into the
taxCalculator target variable. Then the
calculateTax(Double, Integer) method is
called, which takes
price and
percentAdditionalTax as arguments, and its return value is
placed in the
totalTax variable.
<java:new <java:invoke <java:args>#[{ price: 25.5, percentAdditionalTax: 2 }]</java:args> </java:invoke>
For the method parameters, the full package name can be specified, for example
constructor="Person(java.lang.Double, java.lang.Integer)". This is not needed, but it can
be useful to add more clarity in the code or in the case there are clashing class names in the Java code.
In Anypoint Studio, the Java module supports DataSense for the
invoke operation, which provides metadata
for both the input arguments and the output value.
In the example, DataSense discovers that
calculateTax returns a
Number, so
the output metadata for the
invoke operation looks like:
For a complete example of using
invoke in Studio, see New and Invoke Operations in Studio.
When configuring the constructor arguments in the
args parameter,
the keys of the map determine how the parameters are passed to the constructor.
To reference the parameters by name (
price,
percentAdditionalTax, etc.),
the Java class containing the method or constructor has to be compiled
using the
-parameters
compiler flag.
If the class was not compiled with this flag, the same parameters
must be referenced in the declared order and with the canonical names
(
arg0,
arg1, etc.).
In this case:
<java:args>#[{ arg0: 25.5, arg1: 2 }]</java:args>
If the Java classes are defined in a Studio project, the Maven compiler plugin must be
configured in the
pom.xml to compile Java classes with the
-parameters flag:
<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.8.1</version> <configuration> <compilerArgs> <arg>-parameters</arg> </compilerArgs> </configuration> </plugin>
Invoke Methods with DataWeave
The Java module also has a DataWeave function (
Java::invoke) to provide the same functionality
as the
invoke operation but inside a DataWeave expression. This practice is especially helpful for
methods that return
boolean values. The function takes as arguments the full class name
of an object, the instance method to execute, the instanced object, and the method arguments as a map.
In the Java class in Invoke Instance Methods, this examples creates
a new instance of
TaxCalculator and, rather than use the Java module
invoke operation to call
the
isTaxFree(Double) method, it embeds the DataWeave function in a
Choice component:
<java:new <choice> <when expression="#[Java::invoke('com.foo.TaxCalculator', 'isTaxFree(Double)', vars.taxCalculator, {price: vars.price})]"> <flow-ref </when> </choice>
Invoke Static Methods
An example on how to invoke static Java methods:
<java:invoke-static <java:args>#[{ arg0: 180 }]</java:args> </java:invoke-static>
New and Invoke Operations in Studio
In Anypoint Studio, you can write or load a Java package into a project, configure Java operations within one or more a flows, and run the Mule app in which the operations reside.
This example task sets up two flows in a Mule app:
Write a small Java package:
com.examples.
Write two simple Java classes that contain instance methods:
Hello.javaand
Add.java.
Use the Apache Maven compiler plug-in to compile the classes in a way that allows for the use of named parameters in your configurations, instead of
arg0,
arg1, and so on.
Use the New operation in the Java module to instantiate the
Helloand
Addobjects and to access named parameters in the instance methods.
Use the Invoke operation in the Java module to invoke the
hello()and
add(3,4)instance methods.
Run the Mule app to execute the New and Invoke operations in Studio.
Assume that you want to invoke methods in two simple Java classes:
package com.examples; public class Hello { //Constructor public Hello() { } //Instance method: hello() //Returns the string "helloWorld". public String hello() { return "helloWorld"; } }; } }
To create a Mule app that invokes
hello() and
add():
In Studio, select File > New > Project, provide a project name (
javaexamples), and click Finish.
Create a Java package for your classes by right-clicking your Mule project’s
src/main/javadirectory in Package Explorer.
Select New > Package.
Provide the package name
com.examplesin the Name field.
Click Finish.
Make sure that the package
com.examplesappears under the
src/main/javadirectory.
Add the Java code for the
Helloand
Addclasses to your new
com.examplespackage by right-clicking your new
com.examplespackage in Studio, and select New > Class.
In the Name field, type
Hello.javafor the
Helloclass, and click Finish.
Copy and paste the
Helloclass content into the
Hello.javafile from the listing that follows.
The entire file looks like this, including
package com.examplesat the top:
package com.examples; public class Hello { //Constructor public Hello() { } //Instance method: hello() //Returns the string "helloWorld". public String hello() { return "helloWorld"; } }
Right-click your new
com.examplespackage in Studio, and select New > Class.
In the Name field, typ the
Add.javavalue.
Click Finish.
Copy and paste the class content into the
Add.javafile.
The entire file looks like this, including
package com.examplesat the top:
package com.examples;; } }
Click the javaexamples tab in Studio to return to your Mule app, and set up a flow for the
Helloclass and the
hello()method:
In javaexamples, provide a trigger for the flow by dragging a Scheduler component into the Studio canvas.
Optional: You can also set the frequency for the Scheduler component to a value other than the default: for example, set Frequency to
10and Time unit to
SECONDS.
If the Java module is not already available in your Mule palette, click Add Module and drag the Java module into the left column of the palette.
Click the Java module, place its New operation to the right of the Scheduler in the flow, and then double-click and configure the operation:
Class:
com.examples.Hello
Constructor:
Hello()
Do not click fx for the Constructor setting.
Place the Invoke operation to the right of the New operation in the flow, and double-click and configure the operation:
In the Instance field, click fx, and set the value to
payload.
Class field:
com.examples.Hello
Method field:
hello()
Do not click fx for the Method setting.
Find and drag a Logger component to the right of the Invoke operation in the flow, and in its Message field, click fx and type
payload.
Find and drag a Flow Reference component to the right of the Logger component, and then double-click the component and set the Flow Name field to
javaexamplesFlow1, the name of a new flow that you create in the next step.
Set up a new flow by dragging a new Flow component below the existing flow, making sure that its name is
javaexamplesFlow1(so that your Flow Reference setting in the other flow matches the name of this new flow).
Click the Java module, then drag a New operation to the Process section of your new flow,
javaexamplesFlow1, and provide the following configuration for the operation:
Args:
{ "numA" : 5, "numB" : 6}for the arguments.
Class:
com.examples.Add
Constructor:
Add(int,int)
Do not click fx for the Constructor setting.
Place an Invoke operation to the right of the New operation in your new flow, and provide the following configuration:
Click fx, and set the Instance field:
payload.
Args:
{ "x" : 3, "y" : 4}for the arguments to process during the invocation
Class:
com.examples.Add
Method:
add(int,int)
Do not click fx for the Method setting.
Continue with the New operation, and click the Advanced configuration link, and set the Output value to a target variable that stores the payload of the Invoke operation:
Target Variable:
mySum
Target Value:
payload
This step shows how to pass the payload to a target variable if you ever need to do so.
Place a Logger component to the right of the Invoke operation in the new flow, click fx, and set the Message field to
vars.mySum.
This Logger setting is for displaying the payload stored in the target variable in the Studio console.
To make any named parameters readable, add XML for the Mule compiler plugin to the
pom.xmlfile for your Mule project:
In the Package Explorer, double-click
pom.xml, located at the bottom the javaexamples project.
Add the Mule compiler plugin XML between the
<build><plugins></plugins></build>elements in the
pom.xmlfile, retaining any plugins that are already defined there.
<build> <plugins> <!-- any other plugins -->
Paste this XML into your POM file:
<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.8.1</version> <configuration> <parameters>true</parameters> <source>1.8</source> <target>1.8</target> </configuration> </plugin>
</plugins> </build>
If you try to use named parameters in the New operation without adding the Maven compiler plugin XML, the New operation fails with a message similar to:
Failed to instantiate Class [com.examples.Add] with arguments [Integer numA, Integer numB]. Expected arguments are [int arg0, int arg1]
Return to your Mule app by clicking the javaexamples tab in Studio.
Run the Mule app by selecting Run > Run from the top set of menus.
Once the project deploys successfully, check the Console for the expected output.
You should see something like this in the console (shortened for readability):
INFO 2019-09-22 09:21:32 ... [event: 4c31f...] ... LoggerMessageProcessor: helloWorld INFO 2019-09-22 09:21:32 ... [event: 4c31f...] ... LoggerMessageProcessor: 7
helloWorldis the output value for javaexamplesFlow.
7is the output value for javaexamplesFlow1. | https://docs.mulesoft.com/java-module/1.2/java-invoke-method | 2021-10-16T02:07:03 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['_images/invoke-output-metadata.png', 'invoke output metadata'],
dtype=object)
array(['_images/java-module-ex.png', 'Java module example'], dtype=object)] | docs.mulesoft.com |
Sauce Labs Insights
Insights is the reporting and analytics hub of your Sauce test suite. In it, you will find an interactive data center that can help you interpret your test results over time, identify failure patterns across different platforms, understand how parallel tests can improve your build efficiency, and home in on areas of your application that might benefit from deeper testing.
Learn more about how to use Insights to help you get the most out of your test results.
Customize the Insights Scope
Learn about the different ways in which you can filter and expand the test views to isolate the most relevant data.
Evaluate a Test Over Time
Look at a test's result history to pinpoint when failures were introduced and in what configurations.
Compare Statistical Trends
See how different tests perform on the same platforms or devices to spot pervasive issues.
Analyze Failure Patterns
Let Sauce machine learning evaluate your test suite to detect underlying weaknesses in your tests or app. | https://docs.saucelabs.com/insights/index.html | 2021-10-16T02:54:16 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.saucelabs.com |
Deep Discovery Director (Consolidated Mode) enables you to import objects to the User-Defined Suspicious Objects list using the Structured Threat Information eXpression (STIX) format.
The following table shows information about STIX files.
Column
Description
File Name
Name of the STIX file.
Description of the STIX file.
Imported
Date and time the STIX file was imported.
Imported By
The account that imported the STIX file. | https://docs.trendmicro.com/en-us/enterprise/deep-discovery-director-(consolidated-mode)-35-online-help/threat-intelligence/custom-intelligence/stix.aspx | 2021-10-16T03:32:07 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.trendmicro.com |
User feedback is a critical metric for voice and video apps, since it gives a direct link between how the enduser felt about the call and network performance. Direct feedback from the enduser is the quickest indicator of call quality.
If user feedback is consistently negative, it is a sign something is very wrong - whether that be with agent conversations in a call centre, call quality, or some other factor. It is crucial to monitor this metric and use it to guide your contact centre operations. Without engaging with and making improvements based on this metric, you may find that customers are wary to interact with your contact centre. This can result in customer frustration, missed business opportunities, brand degradation, and potential customer churn.
callstats.sendUserFeedback()
Send the feedback on conference performance indicated by the user.
JSON for
feedback
feedback
Updated 8 months ago | https://docs.callstats.io/docs/submit-userfeedback-via-callstats-api | 2021-10-16T02:22:43 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.callstats.io |
What is a CDB file?
The CDB files are used in mission-critical applications like email. The CDB stands for “constant database”, a fast, reliable and simple package for creating or reading constant databases. Database replacement is safe against system crashes. Users don’t have to pause during a rewrite. CDB performs as an associative array (on-disk), mapping keys to values, and enables multiple values to be stored in a single key.
CDB File Format
The CDB file format stores numbers, offsets, lengths, and hash values in little endian format as unsigned 32-bit integers. Keys and data are thought to be opaque byte strings with no special treatment. At the beginning of the database, the fixed-size header represents 256 hash tables by listing their position within the file and their length in slots. Usually the data is stored as a sequence of records, each record stores key length, data length, key, and data. There are no sorting or alignment rules. The records are followed by a set of 256 hash tables of varying lengths. Since zero is a valid length, there may be fewer than 256 hash tables physically stored in the database, but there are nothing considered to be 256 tables. Hash tables consists of a series of slots, each of which contains a hash value and a record offset. “Empty slots” have an offset of zero.
Structure
CDB database consists of an entire dataset in a single computer file. It contains three parts:
- A fixed-size header
- Data
- A set of hash tables.
Lookups are available for exact keys only. Lookups act using the following algorithm:
- Hash the key.
- Determine at which hash table and slot this record should be located.
- Test the indicated slot in the hash table.
For lookups of keys with more than one values, additional values may be found by simply resuming the search at the next slot.
Features
CDB database structure provides several features:
Fast lookups
A successful lookup in a huge database normally takes just two disk accesses and an unsuccessful lookup takes only one.
Low overhead
A database uses 2048 bytes, 24 bytes per record and the space for keys and data.
No random limits
CDB can manage any database up to 4 gigabytes. Since there are no other restrictions, the records don’t even have to fit into memory. Databases are stored in a machine independent format.
Fast atomic database replacement
The command cdbmake can re-write an entire database into two orders of magnitude, faster than other hashing packages.
Fast database dumps
cdbdump can prints the contents of a database in cdbmake-compatible format. | https://docs.fileformat.com/database/cdb/ | 2021-10-16T02:47:18 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.fileformat.com |
Date: Mon, 10 Feb 2020 07:19:18 +0000 From: Erik Lauritsen <[email protected]> To: [email protected] Subject: Bad ZFS performance on the desktop Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
I normally only use ZFS for a storage server, but decided to give it a try on the desktop. I have installed FreeBSD 12.1 on ZFS root in a mirror with 2 x 1TB drives for desktop usage. The performance is terrible every time I use a browser. I suspected the browser cache to be the problem, so I made a tmpfs and put the cache there, but the problem is that browsers write to disk all the time and not only when using the cache. As soon as I shut down the browser, and I have tried with Firefox and others, then the harddrives stops working (I can hear the noise). The browsers are the worst, but other applications that write some stuff to disk is also not so good. I understand that ZFS has to write every single bit to disk twice because I run a mirror, but I am surprised at the performance penalty and how much these drives keep working. I have monitored ZFS using 'top' and can see that it never eats more than half of my memory, so it's not because I'm out of memory. I'm thinking about getting a couple of SSDs, but then again I use backup meticulously and perhaps ZFS on a single drive or just UFS is better for the desktop? Any advice would be greatly appreciated. Kind regards, Erik
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=89617+0+/usr/local/www/mailindex/archive/2020/freebsd-questions/20200216.freebsd-questions | 2021-10-16T02:50:51 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.freebsd.org |
4. Documents Page¶
Image 4.1 Documents Page¶
The documents shown on the Documents page are region specific and will depend on your subscription type. In Australia this page provides access to the AIP, the front matter of the ERSA (conversions, special procedures etc.) and a variety of other Airservices Australia documents that you may require.
4.1. AIP¶
The Aeronautical Information Publication is accessed by tapping ‘AIP’ on the Documents page.
If you have subscriptions for multiple countries, use the region selector in the top right corner to select the desired country. The effective date is displayed underneath the title of each section. | https://docs.ozrunways.com/rwy/4rwy_documents-page.html | 2021-10-16T02:36:59 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['_images/fig_rwy_documents.jpeg', '_images/fig_rwy_documents.jpeg'],
dtype=object)
array(['_images/fig_rwy_documents_aip.jpeg',
'_images/fig_rwy_documents_aip.jpeg'], dtype=object)] | docs.ozrunways.com |
How-To article guidelines#
This section includes guidelines specifically for How-To articles that are
published at. The section duplicates the
information in the Style guidelines for contributing content in
the
rackspace-how-to repository on GitHub. Use these guidelines, in
addition to the others in this style guide, when you write or review any
How-To articles for this site.
Follow these guidelines when writing content:
Use sentence-style capitalization for titles and headings
-
-
Write to the user by using second person and imperative mood
Write clear and consistent step text
Use consistent text formatting
Clarify pronouns such as it, this, there, and that
Clarify gerunds and participles
Write clear and consistent code examples
Use consistent terminology
When and when not to suggest contacting Support
For comprehensive writing and style guidelines, see other sections in this style guide.
Use sentence-style capitalization for titles and headings#
Use sentence-style capitalization for all titles and headings..
Following are some examples:
Install or upgrade PHP 5.3 for CentOS 5.x
Ubuntu 8.04 LTS (Hardy Heron): Using mod_python to serve your application
PHP configuration limits for Cloud Sites
Troubleshoot a Vyatta site-to-site VPN connection
Differences between IMAP and POP
For more information about titles and headings, see the Titles and headings topic in this style guide.
Use active voice#.
Following are examples of active voice:
After you install the software, start the computer.
Click OK to save the configuration.
Create a server.
Rackspace products and services solve your business problems.
For more information about voice, see the Use active voice section in this style guide.). To easily find and remove instances of future tense, search for will.
Following are examples of present tense:
After you log in, your account begins the verification process.
Any user with a Cloud account can provision multiple ServiceNet database instances.
The product prompts you to verify the deletion.
To back up Cloud Sites to Cloud Files by using this example, you create two cron jobs. One job backs up the cloud site and database, and the second job uploads the backup to Cloud Files.
For more information about present tense, see the Use present tense section in the complete style guide.
Write to the user by using second person and imperative mood#
Users are more engaged with documentation when you use second person (that is, you address the user as you). Rackspace rather than the user, so before you use them, consider whether second person or imperative mood is more “user friendly.” However, use we recommend rather than it is recommended or Rackspace recommends. Also, you can use we in the place of Rackspace if necessary.
The first-person singular pronoun I is acceptable in the question part of FAQs.
Avoid switching person (point of view) in the same document.
Note
This guidelines document is written in second person, and the headings and task examples use imperative mood.
For more information about this topic, see the Write to the user by using second person and imperative mood section in this style guide.
Write clear and consistent step text#
When you are providing instructions to users, you should generally number the steps (unless you have just one step). For the steps, use the following guidelines. The guidelines are followed by an example. For more extensive examples, see the Steps section of this style guide.
Write each step as a complete imperative sentence (that is, a sentence that starts with an imperative verb) and use ending punctuation. In steps, the focus is on the user, and the voice is active.
Usually, include only a single action in each step. If two actions are closely related, such as opening a menu and selecting a command from the menu, you can include both actions in one step.
If a step specifies where to perform an action, state where to perform the action before describing the action.
If a step specifies a situation or a condition, state the situation or condition before describing the action.
Do not include explanatory or reference information in the action part of a step. If needed, follow the step with one or more paragraphs that provide such information.
Do not document system actions, responses, or results as steps. Put necessary statements in paragraphs following the steps to which they apply.
Use screenshots sparingly. Screenshots can help to orient the user, but a screenshot of every field or dialog box is usually not necessary.
To indicate that a step is optional, include (Optional), in italics, as a qualifier at the beginning of the step.
If more than one method exists for completing an action, document only one method, usually the most efficient or preferred method.
Use consistent text formatting#
Certain text should be formatted differently from the surrounding text to designate a special meaning or to make the text stand out to the user. Usually this formatting is accomplished by applying a different font treatment (such as bold, italics, or monospace).
The following table covers the most common items that should be formatted. For more detailed formatting information, see the Text formatting section of the style guide.
Clarify pronouns such as it, this, there, and that#
Pronouns are useful, but you must ensure that their antecedents (the words that they are used in place of) are clear, and that they (the pronouns) don’t contribute to vagueness and ambiguity.
It: Ensure that the antecedent of it is clear. If multiple singular nouns precede it, any of them could be the antecedent. Also, avoid using it: Avoid using that as a demonstrative pronoun (which stands in for or points to a noun). Instead, use it as an adjective and follow it with a noun.
For more examples, see the Use pronouns carefully section of this style guide.
Clarify gerunds and participles#
Participles are verbs that end in -ed or -ing and act as modifiers. Gerunds are verbs that end in -ing and act as nouns. Both types of words are useful and acceptable, but confusion can arise if they are not placed precisely in a sentence. For example, the word meeting can be a gerund or a modifier (or even a noun) depending on its placement in a sentence. Clarify gerunds and participles as necessary.
For more information and examples, see the Clarify gerunds and participles section of this style guide.
Write clear and consistent code examples#
Observe the following guidelines when creating blocks of code as input or output examples:
Do not use screenshots to show code examples. Format them as blocks of code, in monospace, by using the appropriate markup in your authoring tool.
When showing input, include a command prompt (such as $).
As often as necessary, show input and output in separate blocks and provide explanations for each. For example, if the input contains arguments or parameters, explain those. If the user should expect something specific in the output, or you want to show only part of lengthy output, provide an explanation. Provide your users the information that they need, and separate the input and output when it makes sense.
When the command is simple, and there is nothing specific to say about the output, you can show the input and output in the same code block, as the user would actually see the code in their own terminal. The inclusion of the command prompt will differentiate the input from the output.
Ensure that any placeholder text in code is obvious. If the authoring tool allows it, apply italics to placeholders; if not, enclose them in angle brackets. Show all placeholder text in camelCase.
Follow the conventions of the programming language used and preserve the capitalization that the author of the code used.
For readability, you can break up long lines of input into blocks by ending each line with a backslash.
If the input includes a list of arguments or parameters, show the important or relevant ones first, and group related ones. If no other order makes sense, use alphabetical order. If you explain the arguments or parameters in text, show them in the same order that they appear in the code block.
The following example illustrates many of these guidelines. For more examples, see the Code examples section of this style guide.
Example: Create a VM running a Docker host
Show all of the available virtual machines (VMs) that are running Docker by running the following command:
$ docker-machine ls
If you have not created any VMs yet, your output should look as follows:
NAME ACTIVE DRIVER STATE URL
Create a VM that is running Docker by running the following command:
$ docker-machine create --driver virtualbox test
The
--driverflag indicates what type of driver the machine will run on. In this case,
virtualboxindicates that the driver is Oracle VirtualBox. The final argument in the command gives the VM a name, in this case,
test.
The output should look as follows:
Creating VirtualBox VM... Creating SSH key... Starting VirtualBox VM... Starting VM... To see how to connect Docker to this machine, run: docker-machine env test
Run
docker-machine lsagain to see the VM that you created by running the following command:
$ docker-machine ls
The output should look as follows:
NAME ACTIVE DRIVER STATE URL SWARM test virtualbox Running tcp://192.168.99.101:237
Use is not creative writing, and you should not be concerned that you will bore customers customers and translators, so avoid it when possible.
Avoid fabricated words. Examples of fabricated words are marketecture or edutainment. Most such words are specific to a single business culture and are not understood in other cultures.
Standardize words and spelling across a documentation set.
Don’t use terms with different meanings interchangeably. Some terms have similar but distinct meanings and should not be used interchangeably. For example:
environment, platform
version, release
panel, screen
window, dialog box
Additionally, replace the following nonpreferred terms with the preferred terms.
For more guidelines about terminology, see the following sections in the style guide:
When and when not to suggest contacting Support#
A customer who has sought out documentation has inherently communicated that documentation is their preferred channel of support at that moment. Suggesting that they contact the Support team directly undermines the purpose of the documentation and diminishes the user’s confidence in the instructions.
Don’t suggest contacting Support directly.
Don’t include Support phone numbers.
Don’t recommend creating a ticket unless it is for gaining access to a Rackspace feature.
You should not recommend contacting the Support team in an article unless doing so is a required step. A required step is a task that the customer cannot complete without contacting Support by phone or by opening a ticket. | https://docs.rackspace.com/docs/style-guide/how-to-article-guidelines | 2021-10-16T02:07:51 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.rackspace.com |
Wagtail API v2 Configuration Guide¶
This section of the docs will show you how to set up a public API for your Wagtail site.
Even though the API is built on Django REST Framework, you do not need to install this manually as it is already a dependency of Wagtail.
Basic configuration¶
Enable the app¶
Firstly, you need to enable Wagtail’s API app so Django can see it.
Add
wagtail.api.v2 to
INSTALLED_APPS in your Django project settings:
# settings.py INSTALLED_APPS = [ ... 'wagtail.api.v2', ... ]
Optionally, you may also want to add
rest_framework to
INSTALLED_APPS.
This would make the API browsable when viewed from a web browser but is not
required for basic JSON-formatted output.
Configure endpoints¶
Next, it’s time to configure which content will be exposed on the API. Each content type (such as pages, images and documents) has its own endpoint. Endpoints are combined by a router, which provides the url configuration you can hook into the rest of your project.
Wagtail provides three endpoint classes you can use:
- Pages
wagtail.api.v2.views.PagesAPIViewSet
- Images
wagtail.images.api.v2.views.ImagesAPIViewSet
- Documents
wagtail.documents.api.v2.views.DocumentsAPIViewSet
You can subclass any of these endpoint classes to customise their functionality.
Additionally, there is a base endpoint class you can use for adding different
content types to the API:
wagtail.api.v2.views.BaseAPIViewSet
For this example, we will create an API that includes all three builtin content types in their default configuration:
# api.py # Create the router. "wagtailapi" is the URL namespace api_router = WagtailAPIRouter('wagtailapi') # Add the three endpoints using the "register_endpoint" method. # The first parameter is the name of the endpoint (eg. pages, images). This # is used in the URL of the endpoint # The second parameter is the endpoint class that handles the requests api_router.register_endpoint('pages', PagesAPIViewSet) api_router.register_endpoint('images', ImagesAPIViewSet) api_router.register_endpoint('documents', DocumentsAPIViewSet)
Next, register the URLs so Django can route requests into the API:
# urls.py from .api import api_router urlpatterns = [ ... path('api/v2/', api_router.urls), ... # Ensure that the api_router line appears above the default Wagtail page serving route re_path(r'^', include(wagtail_urls)), ]
With this configuration, pages will be available at
/api/v2/pages/, images
at
/api/v2/images/ and documents at
/api/v2/documents/
Adding custom page fields¶
It’s likely that you would need to export some custom fields over the API. This
can be done by adding a list of fields to be exported into the
api_fields
attribute for each page model.
For example:
# blog/models.py from wagtail.api import APIField class BlogPageAuthor(Orderable): page = models.ForeignKey('blog.BlogPage', on_delete=models.CASCADE, related_name='authors') name = models.CharField(max_length=255) api_fields = [ APIField('name'), ] class BlogPage(Page): published_date = models.DateTimeField() body = RichTextField() feed_image = models.ForeignKey('wagtailimages.Image', on_delete=models.SET_NULL, null=True, ...) private_field = models.CharField(max_length=255) # Export fields over the API api_fields = [ APIField('published_date'), APIField('body'), APIField('feed_image'), APIField('authors'), # This will nest the relevant BlogPageAuthor objects in the API response ]
This will make
published_date,
body,
feed_image and a list of
authors with the
name field available in the API. But to access these
fields, you must select the
blog.BlogPage type using the
?type
parameter in the API itself.
Custom serialisers¶
Serialisers are used to convert the database representation of a model into
JSON format. You can override the serialiser for any field using the
serializer keyword argument:
from rest_framework.fields import DateField class BlogPage(Page): ... api_fields = [ # Change the format of the published_date field to "Thursday 06 April 2017" APIField('published_date', serializer=DateField(format='%A %d %B %Y')), ... ]
Django REST framework’s serializers can all take a source argument allowing you to add API fields that have a different field name or no underlying field at all:
from rest_framework.fields import DateField class BlogPage(Page): ... api_fields = [ # Date in ISO8601 format (the default) APIField('published_date'), # A separate published_date_display field with a different format APIField('published_date_display', serializer=DateField(format='%A $d %B %Y', source='published_date')), ... ]
This adds two fields to the API (other fields omitted for brevity):
{ "published_date": "2017-04-06", "published_date_display": "Thursday 06 April 2017" }
Images in the API¶
The
ImageRenditionField serialiser
allows you to add renditions of images into your API. It requires an image
filter string specifying the resize operations to perform on the image. It can
also take the
source keyword argument described above.
For example:
from wagtail.images.api.fields import ImageRenditionField class BlogPage(Page): ... api_fields = [ # Adds information about the source image (eg, title) into the API APIField('feed_image'), # Adds a URL to a rendered thumbnail of the image to the API APIField('feed_image_thumbnail', serializer=ImageRenditionField('fill-100x100', source='feed_image')), ... ]
This would add the following to the JSON:
{ "feed_image": { "id": 45529, "meta": { "type": "wagtailimages.Image", "detail_url": "", "download_url": "/media/images/a_test_image.jpg", "tags": [] }, "title": "A test image", "width": 2000, "height": 1125 }, "feed_image_thumbnail": { "url": "/media/images/a_test_image.fill-100x100.jpg", "width": 100, "height": 100, "alt": "image alt text" } }
Note:
download_url is the original uploaded file path, whereas
feed_image_thumbnail['url'] is the url of the rendered image.
When you are using another storage backend, such as S3,
download_url will return
a URL to the image if your media files are properly configured.
Additional settings¶
WAGTAILAPI_BASE_URL¶
(required when using frontend cache invalidation)
This is used in two places, when generating absolute URLs to document files and invalidating the cache.
Generating URLs to documents will fall back the the current request’s hostname
if this is not set. Cache invalidation cannot do this, however, so this setting
must be set when using this module alongside the
wagtailfrontendcache module.
WAGTAILAPI_SEARCH_ENABLED¶
(default: True)
Setting this to false will disable full text search. This applies to all endpoints. | https://docs.wagtail.io/en/latest/advanced_topics/api/v2/configuration.html | 2021-10-16T03:10:05 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.wagtail.io |
Tutorial: Running Octave in Galileo¶
Getting Started¶
Get started with Octave in Galileo by cloning and running our Octave_Example Mission with a few simple steps:
Login to Galileo using FireFox or Chrome (log into your account)
Navigate to the Missions tab in the sidebar and the Explore Missions sub-tab
Click on the Filter button on the right, search for “Octave,” and click Apply
Navigate to the Octave_Example Mission, click the button with three dots, and select Clone Mission
Select your storage option of choice and choose Create Mission
Click on the Octave. | https://galileo-tutorial-pages.readthedocs.io/en/latest/docs/octave-batch-public.html | 2021-10-16T02:58:14 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['../_images/octave.gif', 'Galileo'], dtype=object)] | galileo-tutorial-pages.readthedocs.io |
with driving, you can similarly have the alternative to investigate well all through the city and its ecological variables without experiencing such issues Car rent Dubai... should consider picking that rent car organizations that are found to where you might be living in to guarantee that in case you need any vehicle for the rental, you can have the choice to get it inside the shortest time Rent car Dubai. | https://docs-prints.com/2020/09/19/what-you-should-know-about-this-year-2/ | 2021-10-16T02:25:52 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs-prints.com |
public interface LicenseManagerFactory
LicenseManagerinstances.
@NonNull LicenseManager createLicenseManager(@NonNull Video video, @NonNull Source source)
LicenseManager, which can be used to acquire, renew or releases (Offline) playback DRM license for a specific video source.
video- reference to an offline playback enabled video.
source- reference to the source in the video that requires an offline playback license operation.
LicenseManager
java.lang.IllegalStateException- If the DRM scheme is unsupported or if a new license manager cannot be created. | https://docs.brightcove.com/android-sdk/javadoc/com/brightcove/player/drm/LicenseManagerFactory.html | 2021-10-16T03:29:01 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.brightcove.com |
First Look: SQL Search with InterSystems Products
This First Look introduces you to InterSystems IRIS® data platform support for SQL text search, which provides semantic context searching of unstructured text data in a variety of languages. It covers the following topics:
Why SQL Search Is Important
How InterSystems IRIS Implements SQL Search
Trying SQL Search for Yourself
For More Information about SQL Search
This First Look presents an introduction to SQL context-aware text searching and walks through some initial tasks associated with indexing text data for searching and performing SQL Search. Once you’ve completed this exploration, you will have indexed text in an SQL column for text searching and performed several types of searches. These activities are designed to use only the default settings and features, so that you can acquaint yourself with the fundamentals of the feature. For the full documentation on SQL Search, see the SQL Search Guide.
A related, but separate, tool for handling unstructured texts is Natural Language Processing (NLP). SQL Search presupposes that you know what you are looking for. NLP text analysis allows you to analyze the contents of texts with no prior knowledge of the text contents.
To browse all of the First Looks, including those that can be performed on a free evaluation instance of InterSystems IRIS
, see InterSystems First Looks
.
Why SQL Search Is Important
The ability to rapidly search unstructured text data is fundamental to accessing the content of the huge volume of text commonly stored by many companies and institutions. Any search facility for such data must have the following functionality:
Fast search: InterSystems IRIS SQL Search can rapidly search large quantities of data because it is searching a generated optimized index to the data, rather than sequentially searching the data itself.
Word-aware search: SQL Search is not a string search, it is a search based on semantic structures in the text. The most basic semantic structure for SQL Search is the word. This reduces the number of false positives that result when a string search finds a string embedded in another word, or when a string bridges two words.
Entity-aware search: SQL Search takes into account multiple words that are grouped by semantic relationship to form entities. It can thus search for multiple words in a specified order (a positional phrase), words appearing within a specific proximity to each other (regardless of sequence), and words found at the beginning or at the end of an entity. This enables you to narrow a search to a word (or phrase) found in a specified context of other words.
Language-aware search: identifying semantic relationships between words is language-specific. SQL Search contains semantic rules (language models) for ten natural languages. It also provides support for other languages. It does not require the creation or association of dictionaries or ontologies.
Pattern matching: SQL Search provides both wildcard matching and regular expression (RegEx) matching to match character patterns.
Fuzzy matching: SQL Search provides fuzzy search for near-matches that take into account a calculated degree of variation from the search string. This enables matching of spelling errors, among other things.
Derived matching: SQL Search can use decompounding to match root words and component words. SQL Search can use synonym tables to match synonym words and phrases.
How InterSystems IRIS Implements SQL Search
SQL Search can search text data found in a column in an SQL table. In order to do this, you must create an SQL Search index for the column containing the text data. InterSystems implements a table column as a property in a persistent class.
There are three levels of index available, each supporting additional features as well as all of the features of the lower levels: Basic, Semantic, and Analytic:
Basic supports word search and positional phrase search, including the use of wildcards, ranges between words in a phrase, regular expression (RegEx) matching, and co-occurrence search.
Semantic supports all of the Basic functionality, and also supports InterSystems IRIS Natural Language Processing (NLP) entities. It can search for entities, and words or phrases that begin an entity or end an entity. It recognizes NLP attributes, such as negation.
Analytic supports all of the Semantic functionality, and also supports NLP paths. It can also search based on NLP dominance and proximity scores.
Populating the index. Like all SQL indices, you can either build the index directly after the table has been populated with data, or have SQL automatically build the index entries as you insert records into an empty table. In either case, SQL automatically updates this index as part of subsequent insert, update, or delete operations.
You perform an SQL search you write a SELECT query in which the WITH clause contains %ID %FIND search_index() syntax. The search_index() function parameters include the name of the SQL Search index and a search string. This search string can include wildcard, positional phrase, and entity syntax characters. The search string can also include AND, OR, and NOT logical operators.
Trying SQL Search for Yourself
It’s easy to use InterSystems IRIS SQL Search. This simple procedure walks you through the basic steps of searching text data stored as a string in an SQL table column. use an IDE such as VS Code or Studio to create ObjectScript code in your instance. For instructions for setting up one of these IDEs and connecting it to your instance, see Visual Studio Code or Studio in InterSystems IRIS Basics: Connecting an IDE.
Before You Begin need to obtain the Aviation.Event table and associated files from the GitHub repo
.
Downloading and Setting up the Sample Files
The Samples-Aviation sources must be accessible by the instance. The procedure for downloading the files depends on the type of instance you are using, as follows:
If you are using an ICM-deployed instance:
Use the icm ssh
for the instance. For a configuration deployed on Azure, for example, the default mount point
for the data volume is /dev/sdd, so you would use commands like the following:
$ git clone /dev/sdd/FirstLook-SQLBasics OR $ wget -qO- | tar xvz -C /dev/sdd
The files are now available to InterSystems IRIS in /irissys/data/FirstLook-SQLBasics on the container’s file system.
If you are using a containerized instance (licensed or Community Edition) that you deployed by other means:
Open a Linux command line on the host. (If you are using Community Edition on a cloud node, connect to the node using SSH
,-SQLBasics-SQLBasics installed:
Go to
in a web browser on the host.
Select Clone or download and then choose Open in Desktop.
The files are available to InterSystems IRIS in your GitHub directory, for example in C:\Users\User1\Documents\GitHub\FirstLook-SQLBasics.
If the host is a Linux system, simply use the git clone command or the wget command on the Linux command line to clone the repo to the location of your choice.
Once you have the sample files, follow the steps provided in the Samples-Aviation README.md file under “Setup instructions”:
Create a namespace
called SAMPLES as follows:
Open the Management Portal for your instance in your browser, using the URL described for your instance. Call the new namespace SAMPLES.
Select Save near the top of the page and then select Close at the end of the resulting log.
To enable the SAMPLES web application for use with InterSystems IRIS Analytics:
a. In the Management Portal, click System Administration > Security > Applications > Web Applications.
b. Click the /csp/samples link in the leftmost column (assuming that the namespace you created is called SAMPLES).
c. In the Enable section, select Analytics.
d. Click Save.
Open the InterSystems Terminal using the procedure described for your instance
in InterSystems IRIS Basics: Connecting an IDE and enter the following command to change to the namespace where the sample will be loaded:
SET $NAMESPACE="SAMPLES"
Enter the following command, replacing .path with the full path of the directory that contains the README.md and LICENSE files of the repo you cloned or downloaded:
DO $system.OBJ.Load("<path>\buildsample\Build.AviationSample.cls","ck")
Enter the following command:
DO ##class(Build.AviationSample).Build()
When prompted, enter the full path of the directory that contains the README.md and LICENSE files. The method then loads and compiles the code and performs other needed setup steps.
Creating and Testing a Basic SQL Search Index
Once the code is compiled, which may take a minute or two, continue with the following steps:
Using the IDE of your choice, create a Basic SQL Search index by defining the following class in the SAMPLES namespace:
Class Aviation.TestSQLSrch Extends %Persistent [DdlAllowed,Owner={UnknownUser},SqlRowIdPrivate, SqlTableName=TestSQLSrch ] { Property UniqueNum As %Integer; Property Narrative As %String(MAXLEN=100000) [ SqlColumnNumber=3 ]; Index NarrBasicIdx On (Narrative) As %iFind.Index.Basic(INDEXOPTION=0, LANGUAGE="en",LOWER=1); Index UniqueNumIdx On UniqueNum [ Type=index,Unique ]; }
This example creates a persistent class (table) that contains a Narrative property (column), and defines a Basic SQL Search index for this property. Because this is a new class, you must populate the table with text data.
Populate the table with text data and build the SQL Search index. An SQL Search index is built and maintained like any other SQL index.
and enter the following commands to populate the new table with text data from the Aviation.Event table you downloaded. In this example, the SQL Search index is automatically built as each record is added:
set $namespace = "SAMPLES" set in1="INSERT OR UPDATE INTO Aviation.TestSQLSrch (UniqueNum,Narrative) " set in2="SELECT %ID,NarrativeFull FROM Aviation.Event WHERE %ID < 100" set myinsert=in1_in2 set tStatement=##class(%SQL.Statement).%New() set qStatus=tStatement.%Prepare(myinsert) if qStatus'=1 {write "%Prepare failed:" DO $System.Status.DisplayError(qStatus) quit} set rset=tStatement.%Execute() write !,"Total rows inserted=",rset.%ROWCOUNT
For performance reasons, you may wish to use the %NOINDEX option to defer building indices until the table is fully populated, and then build the SQL Search index (and any other defined indices) using the %Build() method.
Alternatively, you could add an SQL Search index to an existing persistent class that already contains text data, and then populate the SQL Search index using the %Build() method.
Open a SQL Shell in the Terminal, as described in the first few steps of Creating and Populating a Table With a SQL Script File in First Look: InterSystems SQL, and use SQL Search as a WHERE clause condition of a SELECT query. The WHERE clause can contain other conditions associated by AND logic. Run the following SQL Query in the SAMPLES namespace:
SELECT %iFind.Highlight(Narrative,'"visibility [1-4] mile*" AND "temp* ? degrees"') FROM Aviation.TestSQLSrch WHERE %ID %FIND search_index(NarrBasicIdx,'"visibility [1-4] mile*" "temp* ? degrees"',0,'en')
The search_index() function specifies a search_index parameter. This is a defined SQL Search index for the property (column) to be search. It can be a Basic, Semantic, or Analytic index.
The search_index() function specifies a search_item parameter.
This example defined the search_item as "visibility [1-4] mile*" "temperature ? degree*". This returns all records that contain both positional phrases, in any order:
"visibility [1-4] mile*" returns phrases with from 1 to 4 words between the words “visibility” and “mile”. Because mile* specifies a wildcard, it could match either mile or miles. For example, “visibility less than 1 mile”, “visibility 10 miles”, “visibility approximately 20 statute miles”, “visibility for many miles”.
"temp* ? degrees" returns phrases with a word beginning with “temp” and ending in 0 or more non-space wildcard characters, a single missing word, and then the word “degrees.” Thus it would return records with the phrase “temperature 20 degrees”, “temp. 20 degrees”, “temperature in degrees”, and also the (probably unintended) “temporarily without degrees”.
The search_index() function can optionally specify a search_option parameter.
This option can apply an optional transformation to the search, as follows: 1=stemmed search applies a stemmer to match words or phrases based on their stem form. 2=decompounding search applies decompounding to compound words. 3=fuzzy search applies a specified degree of fuzziness (number of character differences) to the search. 4=regular expression search allows searching using RegEx matching. This example specifies the default, 0, meaning no search transformation.
The search_index() function can optionally specify a search_language parameter. You can specify a language, or specify '*' to invoke automatic language identification, supporting searching texts containing multiple languages. This example specifies the default, 'en' (English).
This example also highlights the returned text by applying the same search_item to the returned records. This highlights every instance of either of these phrases by delimiting them with <b> and </b> tags.
This example is provided to give you some initial experience with InterSystems IRIS SQL Search. You should not use this example as the basis for developing a real application. To use SQL Search in a real situation you should fully research the available choices provided by the software, then develop your application to create robust and efficient code.
Learn More About SQL Search
InterSystems has other resources to help you learn more about SQL Search, including: | https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_SQLSRCH | 2021-10-16T02:08:03 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.intersystems.com |
Tasks#
A task is an action that users perform to achieve a goal, such as creating a server. A task topic, article, or section provides the action steps and the necessary context and reference information that the user needs to complete the task.
This topic provides guidelines for developing tasks.
Task titles#
The title of a task topic, article, or section begins with the imperative form of the task action, and it uniquely, precisely, and clearly describes the task. Use a plural object unless the singular makes more sense or is necessary for clarity.
Examples
Create users in SQL Server
Configure SQL Server Management Studio to connect to SQL Server on Windows
Add new ServiceNet routes to a server
For guidelines about capitalizing titles, see Titles and headings.
Task introductions#
Before providing steps, set the context for the task as necessary. For example, you could state the reason for completing the task, the method to be used, and the expected result. You might also state the intended audience and suggest the amount of time that the task might take, especially if it will take a long time.
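Example

The following introduction, written for a hypothetical article about scheduling server image backups, states the purpose, the method, and the approximate time required:

This article shows you how to schedule daily image backups for a cloud server by using the Cloud Control Panel. Scheduling backups takes only a few minutes and ensures that you can restore a recent copy of the server if data is lost or corrupted.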
Notes:
- If the article or section title provides sufficient context, you can omit an introduction.
- Avoid providing extensive overview or conceptual text in the introduction to a task. Provide that information in a separate informational topic or in a topic that introduces the task as part of a larger process or user goal.
Prerequisites#
If the task has requirements that the user must meet before taking action, describe them in a “Prerequisites” section that precedes the steps. You could include the following information:
- A hyperlink to a preceding task, if that task must be performed before this task
- Software that must already be installed, accessible, or running
- Access rights that are required for users to perform the task
- Hyperlinks to other topics that contain requirements or prerequisite tasks that the user must perform
Note
Avoid including detailed procedures in a prerequisites section. Provide prerequisite tasks in other articles or sections, which you can reference in this section.
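Example

A "Prerequisites" section for a hypothetical article about configuring a database server might list items such as the following:

Prerequisites

- Completion of the preceding task, Create a cloud server (provided as a hyperlink)
- MySQL 8.0 or later installed and running on the server
- Root or sudo access to the server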
Procedures#
A task contains one or more procedures, or set of sequential action steps. Consider the following guidelines when creating a procedure:
- If the procedure has more than one step, use a numbered list for the steps. Don't use bullets, except to list choices within a step.
- If the procedure has only one step, show that step in a regular paragraph. That is, don't number it.
- If you have lengthy introductory or prerequisite information, or if you have more than one procedure, provide a heading for the procedure or procedures. Use the imperative form of the action and a singular form of the object. Don't repeat the title of the task article.
- Try to limit procedures to 10 steps. If you have more than 10 steps, consider whether you can divide the steps into two or more procedures. Creating several short, simple, and sequential procedures instead of one long, complex procedure, especially one with many substeps and choice steps, helps users know where they are in the process, judge their progress, and complete the task successfully.
Steps#
When writing steps, use the following guidelines.
-
-
Provide context before the action
Provide conditions before actions
Follow the step with explanatory information
Show only actions as steps
Use screenshots sparingly
-
-
Show multiple possibilities in a list
-
Use imperative sentences#
Write each step as a complete and correctly punctuated imperative sentence (that is, a sentence that starts with an imperative verb). In steps, the focus is on the user, and the voice is active.
Examples
Log in to the Cloud Control Panel.
Use the following command to start
vsftpd:
sudo service vsftpd start
Show one action per step#
Usually, include only a single action in each step. If two actions are closely related, such as opening a menu and selecting a command from the menu, you can include both actions in one step.
Examples
Under Export, select your database (for example, 388488_drupal).
Scroll down to the bottom of the window and select the Save as file check box, which will save your database output to a file.
Click Go.
If you’re prompted to save your file, save it to your computer.
Provide context before the action#
If a step specifies where to perform an action, state where to perform the action before describing the action.
Examples
In the navigation pane, click Inbound Rules.
On the Binding and SSL Settings page, perform the following steps:
Provide conditions before actions#
If a step specifies a situation or a condition, state the situation or condition before describing the action.
Examples
If a new version is available, click Install.
To find out the encryption type of your Windows computer (32-bit or 64-bit), navigate to the server’s Control Panel and click System.
Follow the step with explanatory information#
Don’t include explanatory or reference information in the action part of a step. If needed, follow the step with one or more paragraphs that provide supplemental information.
Examples
In the Body Match text box, enter a word or phrase that will appear on the page when it loads successfully.
For example, you can perform a body match on the copyright date to verify whether the website is running.
Show only actions as steps#
Don’t show system actions, responses, or results as steps. Put necessary statements in unnumbered paragraphs following the steps to which they apply. See the first example in the “Examples” section.
When the result of a step is the appearance of a dialog box, window, or page in which the action of the next steps occurs, you can usually eliminate a result statement and orient the user at the beginning of the next step. See the second example in the “Examples” section.
Examples
Use:
On Linux, enter the following command:
sudo rackspace-monitoring-agent --setup
The list of setup settings is displayed.
Use:
Under Other Options in the Rackspace Email box, select Mobile Sync.
On the Activate Mobile Sync page, select individual users to activate, or select the Add Mobile Sync to all mailboxes on this domain option.
Use screenshots sparingly#
Screenshots can help to orient the user, but a screenshot of every field or dialog box usually isn’t necessary.
If you include screenshots, place each one directly under the step that it illustrates. Don’t rely on the screenshot to show information or values that the user must enter; always provide that information in the text of the steps. However, ensure that the screenshot accurately reflects the directions and values in the step text.
For more information about when to use screenshots, see Screenshot guidelines and process.
Label optional steps#
To indicate that a step is optional, include (Optional), in italics, as a qualifier at the beginning of the step.
Example
(Optional) Click Advanced Options.
Omit extraneous words#
Omit extraneous words (such as pop-up menu or command button) unless they’re needed for clarity.
Examples
Use:
In the Disks window, right-click the volume and select Take Offline.
Avoid:
In the Disks window, right-click the volume and select Take Offline from the pop-up menu.
Use:
Click Add, enter a name for the profile, and then click OK.
Avoid:
Click the Add button, enter a name for the profile in the text box, and then click the OK button.
Show multiple possibilities in a list#
If a step directs the user to choose from multiple possibilities, use an unordered list to present the possibilities.
Example
Select a volume type:
Standard: A standard SATA drive for users who need additional storage on their server
High Performance: An SSD drive, which offers a higher performance option for databases and high performance applications
Results, verification, examples, and troubleshooting#
Following the procedure or procedures, include the following information if it’s necessary or helpful to the user. If the information is brief, you can include it directly following the last step in the procedure. If it’s lengthy or you need to provide more than one type of information, use sections with headings.
The result of performing the task.
Information about verifying successful completion of the task, such as the location of logs. If verification is a separate task in a different article or section, provide a hyperlink to it under a “Where to go from here” heading.
An example that illustrates or supports the task.
Information about what to do if the procedure doesn’t work. This information might be a hyperlink to a separate troubleshooting topic.
Direction to the next action#
If your task is part of a larger set of tasks, you can help the user by including a “Where to go from here” section. You might include the following information:
A brief explanation of the next task and why the user needs to perform it, accompanied by a hyperlink to the next task.
Hyperlinks to other tasks that could be done next, if multiple options are available. Describe the multiple options so that users know which task to choose. | https://docs.rackspace.com/docs/style-guide/style/tasks | 2021-10-16T03:12:34 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.rackspace.com |
.
- Select multiple users by checking the checkbox on the left of each row, then use the bulk actions bar at the bottom to perform an action on all selected users.
Clicking on a user’s name will open their profile details. From here you can then edit that users details.
Note
It is possible to change user’s passwords in this interface, but it is worth encouraging your users to use the ‘Forgotten password’ link on the login screen instead. This should save you some time!
Click the ‘Roles’ tab to edit the level of access your users have. By default there are three roles: | https://docs.wagtail.io/en/latest/editor_manual/administrator_tasks/managing_users.html | 2021-10-16T02:57:21 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['../../_images/screen35_users_menu_item.png',
'../../_images/screen35_users_menu_item.png'], dtype=object)
array(['../../_images/screen36_users_interface.png',
'../../_images/screen36_users_interface.png'], dtype=object)
array(['../../_images/screen36.5_users_bulk_actions.png',
'../../_images/screen36.5_users_bulk_actions.png'], dtype=object)] | docs.wagtail.io |
Table of Contents
Product Index
* Required Products: Michael 4 Base. | http://docs.daz3d.com/doku.php/public/read_me/index/9300/start | 2021-10-16T04:25:16 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.daz3d.com |
Guidelines for using the storage delete command
Contributors
Download PDF of this page
The
snapdrive storage delete command has some restrictions in SnapDrive for UNIX.
When you delete a file system, SnapDrive for UNIX always removes the file system’s mount point.
Linux hosts allow you to attach multiple file systems to a single mountpoint. However, SnapDrive for UNIX requires a unique mountpoint for each file system. The
snapdrive storage deletecommand fails if you use it to delete file systems that are attached to a single. | https://docs.netapp.com/us-en/snapdrive-unix/linux-administration/concept_guidelines_for_usingthe_storage_deletecommand.html | 2021-10-16T04:01:57 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netapp.com |
NetScalerGroupHealthMonitor This monitor is designed for Citrix® NetScaler® load-balancing checks. It checks if more than x percent of the servers assigned to a specific group on a load-balanced service are active. The required data is gathered via SNMP from the NetScaler. The status of the servers is determined by the NetScaler. The provided service itself is not part of the check. A valid SNMP configuration in Horizon for the NetScaler is required. A NetScaler can manage several groups of servers per application. This monitor just covers one group at a time. To check multiple groups, define one monitor per group. This monitor does not check the status of the load-balanced service itself. Monitor facts Class Name org.opennms.netmgt.poller.monitors.NetScalerGroupHealthMonitor Configuration and use Table 1. Monitor-specific parameters for the NetScalerGroupHealthMonitor Parameter Description Default Required group-name The name of the server group to check. n/a Optional group-health The percentage of active servers versus total servers of the group, as an integer. 60 This monitor implements the Common Configuration Parameters. Examples The following example checks a server group called central_webfront_http. If at least 70% of the servers are active, the service is up. If less then 70% of the servers are active, the service is down. Use a configuration like the following. the total amount of servers. MemcachedMonitor NrpeMonitor | https://docs.opennms.com/horizon/28.1.0/operation/service-assurance/monitors/NetScalerGroupHealthMonitor.html | 2021-10-16T02:02:24 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.opennms.com |
pixel color at coordinates (x, y).
If the pixel coordinates are out of bounds (larger than width/height or small than 0),
they will be clamped or repeated based on the texture's wrap mode.
Texture coordinates start at lower left corner.
If you are reading a large block of pixels from the texture, it may be faster to use GetPixels32 or GetPixels which returns a whole block of pixel colors.
The texture must have the read/write enabled flag set in the texture import settings, otherwise this function will fail. GetPixel is not available on Textures using Crunch texture compression.
See Also: GetPixels32, GetPixels, SetPixel, GetPixelBilinear.
// Sets the y coordinate of the transform to follow the heightmap using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { public Texture2D heightmap; public Vector3 size = new Vector3(100, 10, 100);
void Update() { int x = Mathf.FloorToInt(transform.position.x / size.x * heightmap.width); int z = Mathf.FloorToInt(transform.position.z / size.z * heightmap.height); Vector3 pos = transform.position; pos.y = heightmap.GetPixel(x, z).grayscale * size.y; transform.position = pos; } } | https://docs.unity3d.com/2020.2/Documentation/ScriptReference/Texture2D.GetPixel.html | 2021-10-16T02:38:46 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.unity3d.com |
stable
- Data Migration-MTK
- Overview
- Usage
- Environment
- Configuration File
- Example File
- Command
- mtk
- check-config
- gen
- gen completion
- gen-config
- license
- mig-tab-pre
- mig-tab-data
- mig-tab-post
- mig-tab-other
- show-schema
- show-db-info
- show-type
- show-table
- show-table-split
- show-support-db
- show-table-data-estimate
- sync-schema
- sync-object-type
- sync-domain
- sync-custom-type
- sync-sequence
- sync-queue
- sync-table
- sync-table-data
- sync-table-data-estimate
- sync-index
- sync-constraint
- sync-view
- sync-trigger
- sync-procedure
- sync-function
- sync-package
- sync-synonym
- sync-db-link
- sync-rule
- sync-table-data-com
- sync-alter-sequence
- sync-coll-statistics
- Data Source Name
- DB2 to openGauss/MogDB
- MySQL to openGauss/MogDB
- Oracle to openGauss/MogDB
- DB2 to MySQL
- Release Notes
Release Notes
v0.0.38 - 2021-10-08
- mtk_0.0.38_darwin_amd64.tar.gz
- mtk_0.0.38_darwin_amd64_db2.tar.gz
- mtk_0.0.38_darwin_arm64.tar.gz
- mtk_0.0.38_linux_amd64.tar.gz
- mtk_0.0.38_linux_amd64_db2.tar.gz
- mtk_0.0.38_linux_arm64.tar.gz
- mtk_0.0.38_windows_amd64.tar.gz
- mtk_0.0.38_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- Data is exported into a file and the file number is added.
- Data is exported into a file and MySQL statements are generated
LOAD DATA INFILE XX INTO TABLE.
- DB2 storage procedures can be migrated to openGauss (beta version).
- DB2 functions can be migrated to openGauss (beta version).
- The authentication license mechanism is updated.
- openGauss virtual columns are supported.
- PostgreSQL/openGauss
rulecan be migrated to openGauss/PostgreSQL.
- PostgreSQL/openGauss
base typecan be migrated to openGauss/PostgreSQL.
Fix
- The problem during migration of
interval expressionfrom DB2 to openGauss is resolved.
- The problem during migration of
values expressionfrom DB2 to openGauss is resolved. ignore insert into xx values xx
- The problem that the sequence is increased by 1 when auto-increment columns are migrated from MySQL to openGauss is resolved.
- The problem for migrating openGauss constraints is resolved.
- Part problems for migration from openGauss to openGauss are resolved.
- The problem that migrating the openGauss query sequences is slow is resolved.
- The CSV format problem during data export is resolved.
- The problem that
varchar/
timestamprange partitions are migrated from DB2 to MySQL is resolved.
v0.0.37 - 2021-09-24
- mtk_0.0.37_darwin_amd64.tar.gz
- mtk_0.0.37_darwin_amd64_db2.tar.gz
- mtk_0.0.37_darwin_arm64.tar.gz
- mtk_0.0.37_linux_amd64.tar.gz
- mtk_0.0.37_linux_amd64_db2.tar.gz
- mtk_0.0.37_linux_arm64.tar.gz
- mtk_0.0.37_windows_amd64.tar.gz
- mtk_0.0.37_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- Materialized views can be migrated from PostgreSQL, DB2, and openGauss to openGauss.
- Stored procedures, functions and stored syntax can be viewed in DB2.
- Migration of MySQL range partitions containing
to_days/
year/
unix_timestampto openGauss is supported.
- Migration of PostgreSQL/openGauss
typeto openGauss is supported.
- Migration of PostgreSQL/openGauss
domainto openGauss is supported.
Fix
- The problem that the database compatibility mode B is not judged during migration to openGauus is resolved.
- The problem that the automatic virtual column of DB2 is always 0 is resolved. It is rewritten as
default 0.
- The problem that the minimum and maximum values of a sequence are equal is resolved.
- The problems related to partitions during migration from DB2 to openGauss are resolved.
- The problem of uppercase and lowercase letters in MySQL query statements is resolved.
- The problems related to the migration from PostgreSQL to openGauss are resolved.
Perf
- That the query of
ADMINTABINFOin DB2 is quite slow is optimized.
v0.0.36 - 2021-09-17
- mtk_0.0.36_darwin_amd64.tar.gz
- mtk_0.0.36_darwin_amd64_db2.tar.gz
- mtk_0.0.36_darwin_arm64.tar.gz
- mtk_0.0.36_linux_amd64.tar.gz
- mtk_0.0.36_linux_amd64_db2.tar.gz
- mtk_0.0.36_linux_arm64.tar.gz
- mtk_0.0.36_windows_amd64.tar.gz
- mtk_0.0.36_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- Multilingual help instructions are supported.
- Migration of openGauss functions, views, stored procedures and triggers is supported.
- Error data logging is supported.
- Parameter
autoAddMaxvaluePartis added to support adding
maxvaluepartitions automatically when partition tables are migrated to
openGauss.
- A parameter is added to the command line for exporting data to a file.
- Migration to PostgreSQL is supported.
Fix
- MySQL 8.0.23 query view issues are resolved.
- The problem of determining the parallel completion of a single table is resolved.
- The problem that constraints are lost during migration from DB2 to openGauss is resolved.
- The problem that default values of columns are lost during migration from DB2 to openGauss is resolved.
v0.0.35 - 2021-09-13
- mtk_0.0.35_darwin_amd64.tar.gz
- mtk_0.0.35_darwin_amd64_db2.tar.gz
- mtk_0.0.35_darwin_arm64.tar.gz
- mtk_0.0.35_linux_amd64.tar.gz
- mtk_0.0.35_linux_amd64_db2.tar.gz
- mtk_0.0.35_linux_arm64.tar.gz
- mtk_0.0.35_windows_amd64.tar.gz
- mtk_0.0.35_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Fix
- Parameter
ignoreTabPartitionis added to support migration to the target database and ignoring of partition syntax. Currently, this parameter supports only migration to MySQL.
- The column type
timestampof DB2 and Oracle is changed to
datetime(6)after migration to MySQL.
v0.0.34 - 2021-09-03
- mtk_0.0.34_darwin_amd64.tar.gz
- mtk_0.0.34_darwin_amd64_db2.tar.gz
- mtk_0.0.34_darwin_arm64.tar.gz
- mtk_0.0.34_linux_amd64.tar.gz
- mtk_0.0.34_linux_amd64_db2.tar.gz
- mtk_0.0.34_linux_arm64.tar.gz
- mtk_0.0.34_windows_amd64.tar.gz
- mtk_0.0.34_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- The problem that the name of an index or constraint exceeds 64 characters is resolved.
- The migration report is optimized.
- MariaDB 5.5.62 can be matched.
Fix
- Some syntax problems involved in migration of views from Oracle to openGauss are resolved.
- The problem that
remapSchemabecomes invalid is resolved.
v0.0.33 - 2021-08-30
- mtk_0.0.33_darwin_amd64.tar.gz
- mtk_0.0.33_darwin_amd64_db2.tar.gz
- mtk_0.0.33_darwin_arm64.tar.gz
- mtk_0.0.33_linux_amd64.tar.gz
- mtk_0.0.33_linux_amd64_db2.tar.gz
- mtk_0.0.33_linux_arm64.tar.gz
- mtk_0.0.33_windows_amd64.tar.gz
- mtk_0.0.33_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- During table data migration, a check item is added and used for checking whether a table exists.
- The parameter
excludeSysTableis added and can ignore a system table. The user can customize the parameter and does not configure using system configuration by default.
- Indexes and constraints can be created once data synchronization is complete.
- Collecting table statistics information is added.
- The column type
set
enumfor MySQL can be migrated to openGauss.
- The parameters
EnableSyncTabTbsProis added.
RemapTbsSpacesupports conversion of tablespace names.
- DB2 table creation syntax has
compressionand
organize bysyntax added.
- DB2 connection string has
ClientApplNameand
ProgramNameadded.
- Case is ignored when a database is queried in MySQL.
Fix
- The problem of matching special characters in a table name and a column name is resolved.
- After migrated to openGauss,
dualis replaced with
sys_dummy.
- The Chinese character gash problem is resolved.
- The problem that the table creation statement in Oracle does not have double quotation marks is resolved.
- After migrated to openGauss, when a view, function, procedure, or trigger is created, that the
search_pathis incorrect is resolved.
- The problem of consistency for the table, constraint, and index name is resolved.
- The problem that no warning is reported indicating that a table exists is resolved.
- The
auto-incrementcolumn of the
information_schema.TABLESview in MySQL 8.0 is inaccurate, which needs to be associated with the
information_schema.INNODB_TABLESTATSview for query.
v0.0.32 - 2021-08-25
Fix
- The ESLint safety problem is resolved.
v0.0.31 - 2021-08-25
- mtk_0.0.31_darwin_amd64.tar.gz
- mtk_0.0.31_darwin_amd64_db2.tar.gz
- mtk_0.0.31_darwin_arm64.tar.gz
- mtk_0.0.31_linux_amd64.tar.gz
- mtk_0.0.31_linux_amd64_db2.tar.gz
- mtk_0.0.31_linux_arm64.tar.gz
- mtk_0.0.31_windows_amd64.tar.gz
- mtk_0.0.31_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- The DB2 virtual autoincrement column can be migrated to the MySQL autoincrement column.
- Oracle supports automatic query of a character set and configuration of environment variable NLS_LANG.
- For log files, parameter logfile is preferably used. If there is no parameter logfile, use parameter reportFile.
Fix
- The time consumed for migrating table data can be estimated.
- The VarGraphic column type problem in DB2 is resolved.
- axios is upgrade.
- The memory leak problem is resolved.
- The problem of the HTML report style is resolved.
- The format problem of migrating DB2 files to openGauss is resolved.
- The safety problem is resolved.
v0.0.30 - 2021-08-18
- mtk_0.0.30_darwin_amd64.tar.gz
- mtk_0.0.30_darwin_amd64_db2.tar.gz
- mtk_0.0.30_darwin_arm64.tar.gz
- mtk_0.0.30_linux_amd64.tar.gz
- mtk_0.0.30_linux_amd64_db2.tar.gz
- mtk_0.0.30_linux_arm64.tar.gz
- mtk_0.0.30_windows_amd64.tar.gz
- mtk_0.0.30_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Fix
- The problem of constraints with the same name is resolved.
- The format problem existing in the timestamp column of DB2 is resolved.
2019-01-10-10.52.08.554035
- Some problems existing in migrating data from Oracle to MySQL are resolved.
- The sorting problem involving index during data migration from Oracle to MySQL is resolved.
v0.0.29 - 2021-08-13
- mtk_0.0.29_darwin_amd64.tar.gz
- mtk_0.0.29_darwin_amd64_db2.tar.gz
- mtk_0.0.29_darwin_arm64.tar.gz
- mtk_0.0.29_linux_amd64.tar.gz
- mtk_0.0.29_linux_amd64_db2.tar.gz
- mtk_0.0.29_linux_arm64.tar.gz
- mtk_0.0.29_windows_amd64.tar.gz
- mtk_0.0.29_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- Oracle packages can be migrated to openGauss. (alpha beta version)
- The mig-tab-pre/data/post/other subcommands are supported.
- Oracle triggers can be migrated to openGauss.
Fix
- Oracle trim is converted to openGauss trim (both xx).
- Oracle dbms_lock.sleep is converted to openGauss pg_sleep.
- The insert syntax problem in data migration from DB2 to MySQL is resolved.
- The problem of migrating partition tables from DB2 to MySQL is resolved.
- Constraint creation in MySQL does not allow usage of indexes, which is rectified to a normal syntax.
- The status problem of migrating sequences from DB2 to MySQL is resolved.
- The problem of generating reports slowly is resolved.
- Some problems of converting the storage procedure are resolved.
v0.0.28 - 2021-08-09
- mtk_0.0.28_darwin_amd64.tar.gz
- mtk_0.0.28_darwin_amd64_db2.tar.gz
- mtk_0.0.28_darwin_arm64.tar.gz
- mtk_0.0.28_linux_amd64.tar.gz
- mtk_0.0.28_linux_amd64_db2.tar.gz
- mtk_0.0.28_linux_arm64.tar.gz
- mtk_0.0.28_windows_amd64.tar.gz
- mtk_0.0.28_windows_amd64_db2.tar.gz
- mtk_checksums.txt
- mtkd_0.0.28_linux_amd64.tar.gz
- mtkd_0.0.28_linux_amd64_db2.tar.gz
- mtkd_0.0.28_linux_arm64.tar.gz
Feat
- Part of time functions in DB2 can be converted in openGauss.
- The gen auto complete subcommand is added.
- The connect by can be rewritten as the CTE syntax. (alpha beta version)
- Data can be migrated from DB2 to MySQL.
- batchSize is supported in the COPY command.
Fix
- Split table add abs function because mod function will appear negative.
- The problem that the overloading parameter package is lost is resolved during function and storage procedure migration to openGauss.
- The problem that the schema of the sequence is not remapped after the autoincrement column of MySQL is migrated to the openGauss sequence is resolved.
- After the autoincrement column of MySQL is migrated to the openGauss sequence, the cache is changed to 1.
- When the MySQL setting parameter is put in the timestamp column, it automatically generate the on update syntax.
- The problem of the time zone in the MySQL connection string is resolved. The default time zone is that of the local host.
v0.0.27 - 2021-08-02
- mtk_0.0.27_darwin_amd64.tar.gz
- mtk_0.0.27_darwin_amd64_db2.tar.gz
- mtk_0.0.27_darwin_arm64.tar.gz
- mtk_0.0.27_linux_amd64.tar.gz
- mtk_0.0.27_linux_amd64_db2.tar.gz
- mtk_0.0.27_linux_arm64.tar.gz
- mtk_0.0.27_windows_amd64.tar.gz
- mtk_0.0.27_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- The Oracle synonym and the DB2 alias to openGauss synonym are supported.
- The DB2 nickname view is automatically skipped.
- The MySQL cursor syntax
xxx cursor forcan be rewritten into openGauss
cursor xxx is.
- The MySQL
joinsyntax can be rewritten.
- Oracle
NLSSORTcan be rewritten into openGauss
collate.
- The Oracle
months_betweenfunction can be rewritten.
- Oracle or MySQL functions can be migrated to openGauss.
- Oracle
rownumcan be rewritten into openGauss
limit.
- Oracle outer join can be rewritten into a normal syntax.
- Oracle storage procedure can be migrated to openGauss. (alpha beta version)
Fix
- The problem of rewritting the Oracle
add_yearsfunction is resolved.
- The view can be created by skipping the DB2 function index.
- The
sql_modeparameter is added in MySQL.
- The golang panic catching is added.
- The UTF8 string definition is added.
- Some problems of migrating Oracle functions to openGauss are resolved.
- The problem of querying the MySQL 8.0 views is resolved.
- The default value
000-00-00 00:00:00of the MySQL column is changed to
1970-01-01after migration.
- The prefix index syntax of the MySQL column is ignored. (custom_condition(100)
- MySQL JOIN with WHERE clause instead of ON is rewritten.
- The problem of inconsistency between the unique constraint, constraint, and index names of the primary key in DB2 is resolved.
v0.0.26 - 2021-07-23
- mtk_0.0.26_darwin_amd64.tar.gz
- mtk_0.0.26_darwin_amd64_db2.tar.gz
- mtk_0.0.26_darwin_arm64.tar.gz
- mtk_0.0.26_linux_amd64.tar.gz
- mtk_0.0.26_linux_amd64_db2.tar.gz
- mtk_0.0.26_linux_arm64.tar.gz
- mtk_0.0.26_windows_amd64.tar.gz
- mtk_0.0.26_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- Oracle functions can be migrated from Oracle to openGauss. (alpha beta version)
- Oracle types can be migrated from Oracle to openGauss.
- The default value
"SYSIBM"."BLOB"('')of the DB2 column is removed.
- The MySQL
bitcolumn type is supported.
Fix
- The problem that the configuration file is incorrect but no warning is reported is resolved.
- The problem of the MySQL constraints with the same name is resolved.
- The problem that error "pq: invalid byte sequence for encoding "UTF8": 0x00" is reported when the MySQL text field is migrated is resolved.
- The problem that the virtualColConv parameter is case-insensitive is resolved.
- The MySQL bigint unsigned auto incr issue is resolved.
- The problem of querying the storage procedure of a function in MySQL 8.0 is resolved.
- The problem of a table name with space is resolved.
- The constraint problem of migrating data from DB2 to openGauss is resolved. A unique index is created first and a unique constraint is created second.
- The problem of the MySQL index column
custom_condition(1000)to openGauss
substring(custom_condition,0,1000)is resolved.
- The DB2 function index problem is resolved.
- The MySQL
int unsignedto bigint problem is resolved.
- The virtual column problem in MySQL 8.0 is resolved.
v0.0.25 - 2021-07-19
- mtk_0.0.25_darwin_amd64.tar.gz
- mtk_0.0.25_darwin_amd64_db2.tar.gz
- mtk_0.0.25_darwin_arm64.tar.gz
- mtk_0.0.25_linux_amd64.tar.gz
- mtk_0.0.25_linux_amd64_db2.tar.gz
- mtk_0.0.25_linux_arm64.tar.gz
- mtk_0.0.25_windows_amd64.tar.gz
- mtk_0.0.25_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- Removing and restoring comment code is added.
- Interval type regular matching is added.
- The
virtualColConvparameter is added, which allows the user to customize expression conversion of the virtual columns.
- The sequence modification function is added, and the last value of a sequence is synchronized to the target database.
- The check constraint of the virtual column in DB2 is automatically skipped.
- The
gen-configsubcommand is added.
- The DB2 nickname table index is automatically skipped.
- The sorting of the start time and end time in a report is added.
Fix
- The code smell problem is resolved.
- The problem that the copy syntax does not include DB2 autoincrement ID virtual column is resolved.
- The Oracle 11g deferred_segment_creation is resolved. Change the show table size to left join.
v0.0.24 - 2021-07-16
- mtk_0.0.24_darwin_amd64.tar.gz
- mtk_0.0.24_darwin_amd64_db2.tar.gz
- mtk_0.0.24_darwin_arm64.tar.gz
- mtk_0.0.24_linux_amd64.tar.gz
- mtk_0.0.24_linux_amd64_db2.tar.gz
- mtk_0.0.24_linux_arm64.tar.gz
- mtk_0.0.24_windows_amd64.tar.gz
- mtk_0.0.24_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- The MySQL/DB2 table split function is supported.
- The openGauss virtual column syntax is supported.
- sqlServer views can be migrated to openGauss/MogDB.
- sqlServer indexes can be migrated to openGauss/MogDB.
- sqlServer constraints can be migrated to openGauss/MogDB.
- sqlServer schema table data can be migrated to openGauss/MogDB.
- The syntax of creating secondary partitions in PostgreSQL is supported.
- The Oracle select hint parallel syntax is supported.
- The
show-schemacommand can display the size of
schema.
- The
check-configcommand is added to check whether a configuration file is correct.
- A command is added to allow the user to control the migration type.
- The
showSchema,
showTopTableSize, and
showTopTableSplitcommands are added.
Fix
- The problem that multiple columns are to be partitioned after migrated to PostgreSQL is resolved.
- The problem that a column name starts with numbers is resolved.
v0.0.23 - 2021-07-07
- mtk_0.0.23_darwin_amd64.tar.gz
- mtk_0.0.23_darwin_amd64_db2.tar.gz
- mtk_0.0.23_darwin_arm64.tar.gz
- mtk_0.0.23_linux_amd64.tar.gz
- mtk_0.0.23_linux_amd64_db2.tar.gz
- mtk_0.0.23_linux_arm64.tar.gz
- mtk_0.0.23_windows_amd64.tar.gz
- mtk_0.0.23_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- The sqlserver identity column can be migrated.
- The sqlserver table query can be migrated.
- sqlserver sequences can be migrated to openGauss/MogDB.
Fix
- The problem of querying the partition sorting in DB2 is resolved.
v0.0.22 - 2021-07-02
- mtk_0.0.22_darwin_amd64.tar.gz
- mtk_0.0.22_darwin_amd64_db2.tar.gz
- mtk_0.0.22_darwin_arm64.tar.gz
- mtk_0.0.22_linux_amd64.tar.gz
- mtk_0.0.22_linux_amd64_db2.tar.gz
- mtk_0.0.22_linux_arm64.tar.gz
- mtk_0.0.22_windows_amd64.tar.gz
- mtk_0.0.22_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Fix
- The length problem of migrating columns from MySQL to openGauss is resolved.
- The problem that the procedure does not exit normally is resolved.
- The problem that the column length does not increase automatically when a table is exported to a file is resolved.
- The problem that a report cannot generate on the Windows platform is resolved.
- The problem that the select syntax does not have double quotation marks added is resolved.
v0.0.21 - 2021-06-30
- mtk_0.0.21_darwin_amd64.tar.gz
- mtk_0.0.21_darwin_amd64_db2.tar.gz
- mtk_0.0.21_darwin_arm64.tar.gz
- mtk_0.0.21_linux_amd64.tar.gz
- mtk_0.0.21_linux_amd64_db2.tar.gz
- mtk_0.0.21_linux_arm64.tar.gz
- mtk_0.0.21_windows_amd64.tar.gz
- mtk_0.0.21_windows_amd64_db2.tar.gz
- mtk_checksums.txt
Feat
- The trigger, function, storage procedure, and package creation syntax is added in Oracle.
Fix
- The view filter condition is removed from Oracle.
Perf
- Concurrent lock is optimized.
- The performance of table synchronization data is optimized.
v0.0.20 - 2021-06-22
- mtk_0.0.20_darwin_amd64.tar.gz
- mtk_0.0.20_darwin_amd64_db2.tar.gz
- mtk_0.0.20_darwin_arm64.tar.gz
- mtk_0.0.20_linux_amd64.tar.gz
- mtk_0.0.20_linux_amd64_db2.tar.gz
- mtk_0.0.20_linux_arm64.tar.gz
- mtk_0.0.20_windows_amd64.tar.gz
- mtk_0.0.20_windows_amd64_db2.tar.gz
- mtk_checksums.txt
- mtkd_0.0.20_linux_amd64.tar.gz
- mtkd_0.0.20_linux_amd64_db2.tar.gz
- mtkd_0.0.20_linux_arm64.tar.gz
Fix
- The problem that the DB2
xmlcolumn type includes the XML version is resolved.
XMLDeclaration=1is added to a connection string.
- The problem that the DB2
dbClobcolumn type is changed to the openGauss text column type after migration is resolved.
v0.0.19 - 2021-06-18
- mtk_0.0.19_darwin_amd64.tar.gz
- mtk_0.0.19_darwin_amd64_db2.tar.gz
- mtk_0.0.19_darwin_arm64.tar.gz
- mtk_0.0.19_linux_amd64.tar.gz
- mtk_0.0.19_linux_amd64_db2.tar.gz
- mtk_0.0.19_linux_arm64.tar.gz
- mtk_0.0.19_windows_amd64.tar.gz
- mtk_0.0.19_windows_amd64_db2.tar.gz
- mtkd_0.0.19_linux_amd64.tar.gz
- mtkd_0.0.19_linux_amd64_db2.tar.gz
- mtkd_0.0.19_linux_arm64.tar.gz
Fix
- The problem that the Oracle range partition includes multiple columns is resolved.
- The empty string and null problems in Oracle, DB2, and MySQL are resolved.
v0.0.18 - 2021-06-16
- mtk_0.0.18_darwin_amd64.tar.gz
- mtk_0.0.18_darwin_amd64_db2.tar.gz
- mtk_0.0.18_darwin_arm64.tar.gz
- mtk_0.0.18_linux_amd64.tar.gz
- mtk_0.0.18_linux_amd64_db2.tar.gz
- mtk_0.0.18_linux_arm64.tar.gz
- mtk_0.0.18_windows_amd64.tar.gz
- mtk_0.0.18_windows_amd64_db2.tar.gz
- mtkd_0.0.18_linux_amd64.tar.gz
- mtkd_0.0.18_linux_amd64_db2.tar.gz
- mtkd_0.0.18_linux_arm64.tar.gz
Feat
- The license function is added.
- That the column length automatically increases after a non-UTF8 character set is changed to a UTF8 character set during migration to openGauss/MogDB is supported.
- The time consumed for migrating data from Oracle or MySQL to openGauss can be estimated.
- Data can be migrated from a Dameng database to openGauss.
- The
colKeyWordsand
objKeyWordsconfiguration parameters are added.
- The DB2 trigger, function, and procedure query statistics are added.
- The Oracle procedure, function, trigger, and type query statistics are added.
- The Oracle procedure, function, package, synonym, dblink, and queue type query statistics are added.
- Migration to openGauss 2.0.0 is supported.
- The database compatibility query function is added to openGauss.
Fix
- DB2 GBK CHARSET Database char include Chinese error slice bounds out of range
- Oracle '' = null to migrate openGauss is not null issue
- MySQL time issue. MySQL year to openGauss int
- Oracle/MySQL 0x00 issue
- DB2/MySQL blob/clob issue
- Oracle clob blob data issue
- Modify query Oracle dblink view. replace all_db_links to dba_db_links
- add Oracle long raw column type
- openGauss add Col Key Word stream. convert to "STREAM"
- missing column length from query openGauss/PostgreSQL database.
- migrate Oracle missing table
- table ddl comp CHAR = CHARACTER/VARCHAR = CHARACTER VARYING | https://docs.mogdb.io/en/mtk/v2.0/release-notes/ | 2021-10-16T02:44:21 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.mogdb.io |
Configuring Containers
Adding a Container to an Application¶
With the Platformer Console you can add multiple Containers to your Application (Deployment / Job / StatefulSet / CronJob Pod).
Each container can be configured to a different Image Collection so that you can use independent CI pipelines to build the different container images, even if they are part of the same application Pod.
Applications > Select Application > Containers tab.
Click CREATE. This will open the Create Container panel.
Note
Container Name - Name of this container. You can name it after the application or default to anything you prefer. If this is a multi-container Application/Pod use a recognizable and clear name (for your own benefit later).
Container Type - In most cases you will be using the default
Primarytype but Init Containers are also supported for advanced configurations. (Init Containers run to completion before the Primary containers are started in a Pod, read more about them here)
Image Collection - (Read more about Image Collections). You can choose an existing or new Image Collection for this container.
Using Public container images (Docker Hub, GCR, etc.)
To use a public image, select + CREATE NEW in Image Collections section.
Give it a collection name (if it’s from public docker hub, you can name it as
dockerhubso you can re-use it later if you need other public images from docker hub) and select Setup Later from the Select Credentials drop down.
Copy the public image you want to use in this Container to the Default Image field.
Click CONTINUE once you have specified your container details. This will take you to the Resource Utilization section.
You can adjust any Resource Requests and Limits in this section. These values can be configured later when you understand how much resources your application really needs.
Click CONTINUE to proceed to the Network section. You can expose the required ports on the container in this section. (Services and Ingresses will be set up to use these ports). !!! hint Leave the
Service Portto the default value unless you want to change the mapping between the container (“Port”) and the “Service Port”. Eg. 8080 -> 80.
Click CONTINUE to proceed to the Advanced Configuration. In this section you can configure any
commandsor
argumentsto run on your container if required. (Read more about commands and arguments in Kubernetes containers). If there is no such requirement, click FINISH.
Your container will now be added to the Application Pods. You can edit the container details in the Containers tab or switch to the Overview tab to see your deployments update with the new details. | https://docs.platformer.com/user-guides/applications/02-containers/ | 2021-10-16T03:46:38 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.platformer.com |
Parameter: Call Control
From TBwiki
The Call Control parameter is used to set a call control method for ISUP CIC groups. Values for this parameter are selected from a drop-down list. The Call Control parameter has the following values:
- Incoming: Circuits are always controlled by the remote end. Calls will not be sent to these circuits.
- Outgoing: Circuits are always controlled by TMG. Inbound calls from these circuits will be rejected.
- Bothway: Circuits work for both inbound and outbound calls.
- Controlled: Same as Incoming
- Controlling: Same as Outgoing | https://docs.telcobridges.com/tbwiki/Parameter:_Call_Control | 2021-10-16T03:02:41 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.telcobridges.com |
After you install VMware Dynamic Environment Manager on Windows desktops or Terminal Services, you must configure FlexEngine and the Management Console.
To have VMware Dynamic Environment Manager running correctly, you must configure FlexEngine.
- Create and configure an Active Directory GPO for VMware Dynamic Environment Manager. You must configure Group Policies to enable FlexEngine to run when the users log on to their Windows machines, and set up the locations of the configuration and profile archives shares. The rest of the VMware Dynamic Environment Manager Group Policies are optional.
- Configure a logoff script to enable FlexEngine to run at Windows logoff process.
If you do not use a GPO to configure FlexEngine, you can configure it by using command-line arguments. See FlexEngine Command-Line Arguments. Or, you can use the NoAD mode. See Installing and Configuring FlexEngine in NoAD Mode. | https://docs.vmware.com/en/VMware-Dynamic-Environment-Manager/2106/com.vmware.dynamic.environment.manager-install-config/GUID-7D5642ED-736A-48BE-8B80-BACDA81E2929.html | 2021-10-16T04:00:27 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.vmware.com |
Which payment types to you accept? Novice Novice tutorials require no prior knowledge of any specific web programming language.
We have 3 payment types that you can choose from to purchase one of our themes or subscribe to our club membership: PayPal, any major Credit Card and Amazon Pay.
After you click the CHECKOUT button in the shopping cart
You'll be redirected to a secure payment page (from a 3rd party payment processor) where you'll be able to select the type of payment you want to use.
| http://docs.themefuse.com/faq/pre-purchase/which-payment-types-to-you-accept | 2021-01-15T17:47:18 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['http://docs.themefuse.com/media/32/pre-purchase-cart.jpg', None],
dtype=object)
array(['http://docs.themefuse.com/media/32/pre-purchase-payment.jpg',
None], dtype=object) ] | docs.themefuse.com |
What kind of game is Screeps
Screeps is a massive multiplayer online real-time strategy game. Each player can create their own colony in a single persistent world shared by all the players. Such a colony can mine resources, build units, conquer territory..
Game world
The game world consists of interconnected rooms. A room is a closed space 50x50 cells in size. It may have from 1 to 4 exits to other rooms. The world is separated into shards which are connected by intershard portals. You can consider shards a Z-axis of the world. building. | https://docs-ptr.screeps.com/introduction.html | 2021-01-15T18:04:06 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['img/shards.png', None], dtype=object)
array(['img/colony-center.png', None], dtype=object)] | docs-ptr.screeps.com |
{"title":"Release note 10.21.19","slug":"release-note-102119","body":"## Added support for Amazon EC2 P3 GPU Instances\n\nWe have added support for Amazon P3 GPU instance family to Cavatica.\nNVIDIA drivers come preinstalled and optimized according to the Amazon best practice for the specific instance family and are accessible from the Docker container.\n\nThe following instances have been added:\n * p3.2xlarge\n * p3.8xlarge\n * p3.16xlarge","_id":"5dadbaf760b52b003361ef1f","project":"5773dcfc255e820e00e1cd4d","user":{"name":"Marko Marinkovic","username":"","_id":"5767bc73bb15f40e00a28777"},"createdAt":"2019-10-21T14:04:39.830Z","changelog":[],"__v":0} | https://docs.cavatica.org/v1.0/blog/release-note-102119 | 2021-01-15T17:03:11 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.cavatica.org |
Mass Back Promotion
You can back promote your user stories in bulk from the Pipeline page.
From the Pipeline Page
By clicking on Mass Back Promote you can choose the source org and the project/release. Copado will display the candidate environments and user stories and allow you to back promote them in bulk:
In order to mass back promote user stories to your lower environments, follow these steps:
- On the Pipeline page, choose the Source Environment. This is the environment that holds the user stories that are going to be back promoted.
- Choose the Project or Release.
- Select the user stories you want to back promote and the Destination Environment.
- Click on Back Promote or on Back Promote & Deploy:
Multiple-Source Mass Back Promotion
If you need to create a mass back promotion from different source environments, follow the steps below:
- Follow the regular mass back promotion steps and click on Back Promote.
- From the Created Back Promotions screen, click on Prepare another promotion and repeat the process:
- Once you are done, click on Deploy Promotions:
Once you deploy the back promotions, a Deployment Status will be displayed with the deployment status information. | https://docs.copado.com/article/obyletegx4-mass-back-promotion | 2021-01-15T18:05:06 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['https://files.helpdocs.io/U8pXPShac2/articles/obyletegx4/1582192218443/mass-back-promote.png',
None], dtype=object)
array(['https://files.helpdocs.io/U8pXPShac2/articles/obyletegx4/1575472401667/captura-de-pantalla-2019-12-04-a-las-16-12-58.png',
None], dtype=object)
array(['https://files.helpdocs.io/U8pXPShac2/articles/obyletegx4/1575472813798/prepare-another-promotio.png',
None], dtype=object)
array(['https://files.helpdocs.io/U8pXPShac2/articles/obyletegx4/1575473137591/deploy-promotions.png',
None], dtype=object) ] | docs.copado.com |
The GameBench Unity Package
Latest Version: 0.7.5
Status: This version should be considered Alpha quality and should not be included into release or release candidate games.
Table of contents
- What is the GameBench Unity Package?
- Installation
- Configuration
- Metrics
- The API
- Overhead
- Uninstallation
1. What is the GameBench Unity Package?
The GameBench Unity Package enables the capture and remote analysis of performance data for games developed with the Unity game engine. Performance data is uploaded to GameBench servers and may be browsed in the GameBench Web Dashboard or accessed programmatically via the GameBench Session Data API.
Supported platforms
- iOS 10.0 or above
- Android 6.0 or above (armeabi-v7a and arm64-v8a)
- We’ve tested this package with many devices, see our device list.
Supported Unity versions
- Unity 2018, 2019, 2020. (NB: 2017 is unsupported but should be usable with special install instructions)
- Unity Scripting Backend: IL2CPP and Mono
- Unity Scripting Runtime Version: All stable .NET versions
2. Installation
IMPORTANT
If upgrading from version 0.6.1 or older, you should first manually remove all GameBench files from your project assets. Versions before 0.7 were in the older ‘asset package’ format where GameBench code and plugins got imported/copied into your project. Build 0.7 onwards uses the newer Package Manager package form.
Once you’ve obtained the package file (
GameBench.tgz) perform the following steps:
- Open your game project in Unity.
- Open the ‘Package Manager’ (which is found under ‘Window’ in Unity’s menu).
- In Package Manager, click the ‘+’ button in the top left corner and select ‘Add package from tarball…’ :
- In the file browser select your
GameBench.tgzfrom wherever you saved it. Package installation will proceed automatically.
- When the package installation completes you will be prompted to configure it with your GameBench account details.
Unity 2018 users: Tarball packages aren’t supported in Unity 2018 so you will need to unpack the .tgz first and use “Add package from disk…” and browse to the
package.json in the package root.
3. Configuration
3.1 Configuring with the Editor UI
GameBench must be configured with some account details before use. This is easily done with the Editor UI which is accessed from “GameBench -> Configure” in the Unity menu:
Account
- Upload URL - Base URL of the server where performance data will be stored.
- GameBench Cloud users should use
- Private server users should use their own URL.
- Email - We recommend using a dedicated email account to store GameBench sessions.
- Token - API token (a hex string). See the API token documentation for more details.
Other Settings
- Enable GameBench - Controls whether GameBench will be included in your project builds.
- Disable automatic capture - Session captures are automatic by default, i.e. a session starts when the app launches and stops when the app is backgrounded. Use this option to disable automatic capture and capture sessions through the C# API instead.
3.2 SDKConfig.txt
GameBench configuration is currently stored as JSON in the file
Assets/Resources/GameBenchSDK/SDKConfig.txt.
{ "serverendpoint":"", "emailaddress":"[email protected]", "sdktoken":"0123456789ABCDEF0123456789ABCDEF", "sdkEnable":true, "sdkAPIControlEnable":false, "markSceneChanges":true, "verboseSDK":true, "tags": "foo=bar,baz=bat" }
As well as the settings that are visible in the Editor UI the following may be directly set in this file:
markSceneChanges
When set, scene changes will be automatically recorded as marker events. Default is ‘true’.
verboseSDK
This boolean flag is useful for troubleshooting, set it to ‘true’ for GameBench to log what it’s doing.
Tags are an unordered set of name/value string pairs that are useful for identifying sessions or groups of sessions in the web dashboard.
This is a comma-delimited list of
key=value pairs that may be used to identify sessions or sets of related sessions in the Analysis area of the Web Dashboard.
4. Metrics
GameBench captures the following measurements:
5. The API
GameBench can be used without writing any code at all. By default, performance data will be automatically captured from app launch until the app is backgrounded, at which point the captured data will be automatically uploaded. But if you need finer control over data capture - to limit it to particular scenes or events in your game, say - then GameBench offers full control through an API.
NB: Before using the API it is strongly recommended that you disable automatic capture via the option in the configuration UI (pictured above).
All APIs are static methods of the
Gamebench class in the
GamebenchLib.Runtime namespace. You do not need to instantiate anything.
5.1 Sessions
5.2 Metrics selection
By default all metrics are captured at preset intervals. These APIs allow control over which metrics are captured as well as the interval between captures.
5.3 Markers
To isolate specific areas of gameplay such as levels or battles, GameBench provides markers functionality.. For example, if you want to isolate performance data during a particular game level, you can set “in” marker when the level begins and “out” marker when the level is completed.
NB: Markers for scene changes are recorded automatically by default.
5.4 Configuration
All of the configuration values documented in section 3 may also be set from code. Note that these values must be set before calling Start(), also that values set with these APIs are transient and do not persist beyond process exit.
6. Overhead
6.1 Executable size
GameBench will add around 0.9MB to your Android .apk and 1.5MB to your iOS .ipa.
6.2 RAM usage
GameBench performs very little dynamic allocation and it’s impact on RAM usage is reasonably constant at around ~200KB on both iOS and Android.
6.3 CPU and Power usage
To measure the impact of GameBench on CPU and Power usage we profiled game demos on common devices to measure the difference once GameBench was enabled. Each game demo required no user input and therefore performed the same path each time, each ran a minimum of 3 times and for the same duration each time.
CPU
Power
Storage and upload size overhead - 10Kb for 10 mins session
7. Uninstallation
To uninstall:
- Open the Package Manager window in Unity.
- Set the ‘Packages’ filter at the top of the window to ‘In Project’.
- Select GameBench from the package list.
- Click the “Remove” button in the bottom right corner. Uninstall will proceed automatically. | https://docs.gamebench.net/unity/user-guide/ | 2021-01-15T18:20:40 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['/images/sdk_config.png', 'sdk_config'], dtype=object)] | docs.gamebench.net |
BrokenConnection method of the Win32_TSSessionSetting class
Sets the BrokenConnectionAction property, which is the action the server takes if the session time-limit is reached or if the connection is broken.
Syntax
uint32 BrokenConnection( [in] uint32 BrokenConnectionAction );
Parameters
BrokenConnectionAction [in]
The action to take.
Disconnect (0)
The user is disconnected from the session.
Terminate (1)
The session is permanently deleted from the server.
Return value
Returns 0 on success, otherwise returns a WMI error code. Refer to Remote Desktop Services WMI Provider Error Codes for a list of these values. The method returns an error if the setting is under Group Policy control.
Remarks). | https://docs.microsoft.com/en-us/windows/win32/termserv/win32-tssessionsetting-brokenconnection | 2021-01-15T18:58:38 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.microsoft.com |
Introduction¶
To be clear, our concern throughout this chapter is with commercial services which rent computational resources over the Internet at short notice and charge in small increments (by the minute or the hour). Currently, the condor_annex tool supports only AWS. AWS can start booting a new virtual machine as quickly as a few seconds after the request; barring hardware failure, you will be able to continue renting that VM until you stop paying the hourly charge. The other cloud services are broadly similar.
If you already have access to the Grid, you may wonder why you would want to begin cloud computing. The cloud services offer two major advantages over the Grid: first, cloud resources are typically available more quickly and in greater quantity than from the Grid; and second, because cloud resources are virtual machines, they are considerably more customizable than Grid resources. The major disadvantages are, of course, cost and complexity (although we hope that condor_annex reduces the latter).
We illustrate these advantages with what we anticipate will be the most common uses for condor_annex.
Use Case: Deadlines¶
With the ability to acquire computational resources in seconds or minutes and retain them for days or weeks, it becomes possible to rapidly adjust the size - and cost - of an HTCondor pool. Giving this ability to the end-user avoids the problems of deciding who will pay for expanding the pool and when to do so. We anticipate that the usual cause for doing so will be deadlines; the end-user has the best knowledge of their own deadlines and how much, in monetary terms, it’s worth to complete their work by that deadline.
Use Case: Capabilities¶
Cloud services may offer (virtual) hardware in configurations unavailable in the local pool, or in quantities that it would be prohibitively expensive to provide on an on-going basis. Examples (from 2017) may include GPU-based computation, or computations requiring a terabyte of main memory. A cloud service may also offer fast and cloud-local storage for shared data, which may have substantial performance benefits for some workflows. Some cloud providers (for example, AWS) have pre-populated this storage with common public datasets, to further ease adoption.
By using cloud resources, an HTCondor pool administrator may also experiment with or temporarily offer different software and configurations. For example, a pool may be configured with a maximum job runtime, perhaps to reduce the latency of fair-share adjustments or to protect against hung jobs. Adding cloud resources which permit longer-running jobs may be the least-disruptive way to accomodate a user whose jobs need more time.
Use Case: Capacities¶
It may be possible for an HTCondor administrator to lower the cost of their pool by increasing utilization and meeting peak demand with cloud computing.
Use Case: Experimental Convenience¶
Although you can experiment with many different HTCondor configurations using condor_annex and HTCondor running as a normal user, some configurations may require elevated privileges. In other situations, you may not be to create an unprivileged HTCondor pool on a machine because that would violate the acceptable-use policies, or because you can’t change the firewall, or because you’d use too much bandwidth. In those cases, you can instead “seed” the cloud with a single-node HTCondor installation and expand it using condor_annex. See HTCondor in the Cloud for instructions. | https://htcondor.readthedocs.io/en/v8_9_9/cloud-computing/introduction-cloud-computing.html | 2021-01-15T18:50:33 | CC-MAIN-2021-04 | 1610703495936.3 | [] | htcondor.readthedocs.io |
Contributing your module to Ansible¶
If you want to contribute a module to Ansible, you must meet our objective and subjective requirements. Please read the details below, and also review our tips for module development.
Modules accepted into the main project repo ship with every Ansible installation. However, contributing to the main project isn’t the only way to distribute a module - you can embed modules in roles on Galaxy or simply share copies of your module code for local use.
Contributing to Ansible: objective requirements¶
To contribute a module to Ansible, you must:
- write your module in either Python or Powershell for Windows
- use the
AnsibleModulecommon code
- support Python 2.7 and Python 3.5 - if your module cannot support Python 2.7, explain the required minimum Python version and rationale in the requirements section in
DOCUMENTATION
- use proper Python 3 syntax
- follow PEP 8 Python style conventions - see PEP 8 for more information
- license your module under the GPL license (GPLv3 or later)
- understand the license agreement, which applies to all contributions
- conform to Ansible’s formatting and documentation standards
- include comprehensive tests for your module
- minimize module dependencies
- support check_mode if possible
- ensure your code is readable
- if a module is named
<something>_facts, it should be because its main purpose is returning
ansible_facts. Do not name modules that do not do this with
_facts. Only use
ansible_factsfor information that is specific to the host machine, for example network interfaces and their configuration, which operating system and which programs are installed.
- Modules that query/return general information (and not
ansible_facts) should be named
_info. General information is non-host specific information, for example information on online/cloud services (you can access different accounts for the same online service from the same host), or information on VMs and containers accessible from the machine.
Please make sure your module meets these requirements before you submit your PR/proposal. If you have questions, reach out via Ansible’s IRC chat channel or the Ansible development mailing list.
Contributing to Ansible: subjective requirements¶
If your module meets our objective requirements, we’ll review your code to see if we think it’s clear, concise, secure, and maintainable. We’ll consider whether your module provides a good user experience, helpful error messages, reasonable defaults, and more. This process is subjective, and we can’t list exact standards for acceptance. For the best chance of getting your module accepted into the Ansible repo, follow our tips for module development. | https://docs.ansible.com/ansible/2.8/dev_guide/developing_modules_checklist.html | 2021-01-15T16:52:13 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.ansible.com |
Data Editing
When using "Mobile" rendering you can take advantage of all edit modes available in RadTreeList (EditForms, InPlace and PopUp).
Although the "Mobile" rendering of RadTreeList is optimized for mobile devices and renders different HTML and layout,there are only few differences in the way that the editing is used and handled. The main difference in the editing in "Mobile" RenderMode is the PopUp edit mode, which renders an entirely new mobile menu for editing. Following is a screenshot with the new PopUp mobile edit menu:
Column editors with mobile rendering
When you set the RenderMode to "Mobile", by default, RadTreeList will render native controls. Native controls are the HTML5 equivalents of our controls - for example, RadNumericTextBox will be replaced with its native HTML5 equivalent. This change affects how the column editors are accessed and any implementations that rely on them. To give you control over this behavior, the mobile TreeList includes a UseNativeEditorsInMobileMode property on each TreeListEditableColumn. This property can explicitly disable the generation of native controls for a column when it is set to "false".
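As a markup sketch (the column type, DataField, and other attribute values are illustrative rather than taken from this article), disabling native editors for a single column could look like:

<telerik:TreeListBoundColumn DataField="Price" HeaderText="Price" UseNativeEditorsInMobileMode="false">
</telerik:TreeListBoundColumn>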
The second approach is more general and will disable the rendering of native editors for the entire web site. You could achieve this by setting a UseTreeListNativeEditorsInMobileMode option in the web.config file to "false". | https://docs.telerik.com/devtools/aspnet-ajax/controls/treelist/mobile-support/mobile-rendering/data-editing | 2021-01-15T18:55:01 | CC-MAIN-2021-04 | 1610703495936.3 | [array(['images/TreeList_mobile_Edit.png',
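Assuming the option is read from the standard appSettings section (a sketch, not copied from the Telerik documentation), the global switch would look like:

<appSettings>
  <add key="UseTreeListNativeEditorsInMobileMode" value="false" />
</appSettings>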
'radtreelist-mobile-popupediting'], dtype=object)] | docs.telerik.com |
Feature: #86331 - Native URL support for MountPoints¶
See Issue #86331
Description¶
MountPoints allow TYPO3 editors to mount a page (and its subpages) from a different area of the site in the current page tree.
The definitions are as follows:
- MountPoint Page: A page with doktype=7 - a page pointing to a different page (“web mount”) that should act as replacement for this page and possible descendants.
- Mounted Page a.k.a. “Mount Target”: A regular page containing content and subpages.
The idea behind it is to manage content only once and “link” / “mount” to a tree to be used multiple times - while keeping the website visitor under the impression to actually navigate just a regular subpage. There are concerns regarding SEO for having duplicate content, but TYPO3 can be used for more than just simple websites, as Mount Points are an important tool for heavy multi-site installations or Intranet/Extranet installations.
A MountPoint Page has the option to either display the content of the MountPoint Page itself, or the content of the target page, when visiting this page.
Linking to a subpage will result in adding “MP” GET Parameters, and altering the root line (tree structure) of visiting the website, as the “MP” is containing the context. The MP parameter found throughout TYPO3 Core contains the ID of the Mounted Page and the ID of the MountPoint Page - e.g. “13-23” whereas 13 would be the Mounted Page and 23 the MountPoint Page (doktype=7).
Recursive mount points are added to the “MP” parameter with “,”, like “13-23,84-26”. Recursive mount points are defined as follows: A Mounted Page could have a subpage which in turn has a subpage which is again a MountPoint Page.
MountPoint support is now added in TYPO3 v9 with Site Handling and slug handling.
Due to TYPO3’s principles of slug handling where a page only contains one single slug
containing the URL path, and not various slugs for different places where it might be used,
TYPO3 will work by combining the slug of the MountPoint Page and a smaller part of the Mounted Page
or subpages of the Mounted Page, which will be added to the URL string - removing the necessity to actually
deal with the query parameter
MP which will never be added again, as it is part of the URL path now.
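As an illustration with purely hypothetical slugs: if the MountPoint Page has the slug /community and the Mounted Page /events has a subpage with the slug /events/summer-party, that subpage becomes reachable under /community/summer-party, and no MP query parameter appears in the URL.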
Using MountPoint functionality on a website plays an important role for menus as this is the only way to actually link to the subpages in a MountPoint context.
Multi-Site support:
The context for cross-domain sites is also kept, ensuring that the user will never notice that content might be coming from a completely different site / pagetree within TYPO3. Creating links for multi-site support is the same as if a Mounted Page is on the same site.
Impact¶
Limitations:
- Multi-language support Please be aware that multi-language setups are supported in general, but this would only fit if both sites support the same language IDs.
- Slug uniqueness when using Multi-Site setups cannot be ensured If a MountPoint Page has the slug “/more”, mounting a page with “/imprint” subpage, but the MountPoint Page has a regular sibling page with “/more/imprint” a collision cannot be detected, whereas the non-mounted page would always work and a subpage of a Mounted Page would never be reached.
For the sake of completeness, please consider the TYPO3 documentation on the following TypoScript properties related to mount points:
config.MP_defaults
config.MP_mapRootPoints
config.MP_disableTypolinkClosestMPvalue | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.5.x/Feature-86331-NativeURLSupportForMountPoints.html | 2021-01-15T18:49:30 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.typo3.org |
# About Waves Keeper
Waves Keeper is a browser extension that allows secure interaction with Waves-enabled web services.
Seed phrases and private keys are encrypted and stored within the extension and cannot be accessed by online dApps and services, making sure that users' funds are protected from hackers and malicious websites. Completion of a transaction doesn't require entering any sensitive information.
Waves Keeper is designed for convenience, so users can sign transactions with just a couple of clicks. Users can create multiple wallets and switch between them easily. And if a user ever forgets the password to the account, the access can be recovered from the seed phrase.
# Download Waves Keeper
- Google Chrome extension (works with Brave and Yandex.Browser)
- Firefox extension
- Opera extension
- Microsoft Edge extension
# Source Code
Waves Keeper source code is available on GitHub.
What is hybrid identity with Azure Active Directory?
Today, businesses and corporations are becoming more and more a mixture of on-premises and cloud applications. Users require access to those applications both on-premises and in the cloud. Managing users both on-premises and in the cloud poses challenging scenarios.
Microsoft’s identity solutions span on-premises and cloud-based capabilities. These solutions create a common user identity for authentication and authorization to all resources, regardless of location. We call this hybrid identity.
With hybrid identity to Azure AD and hybrid identity management these scenarios become possible.
To achieve hybrid identity with Azure AD, one of three authentication methods can be used, depending on your scenarios. The three methods are:

- Password hash synchronization (PHS)
- Pass-through authentication (PTA)
- Federation (AD FS)
These authentication methods also provide single sign-on capabilities. Single sign-on automatically signs your users in when they are on their corporate devices, connected to your corporate network.
For additional information, see Choose the right authentication method for your Azure Active Directory hybrid identity solution.
Common scenarios and recommendations
Here are some common hybrid identity and access management scenarios with recommendations as to which hybrid identity option (or options) might be appropriate for each.
1 Password hash synchronization with single sign-on.
2 Pass-through authentication and single sign-on.
3 Federated single sign-on with AD FS.
4 AD FS can be integrated with your enterprise PKI to allow sign-in using certificates. These certificates can be soft-certificates deployed via trusted provisioning channels such as MDM or GPO or smartcard certificates (including PIV/CAC cards) or Hello for Business (cert-trust). For more information about smartcard authentication support, see this blog.
License requirements for using Azure AD Connect
Using this feature is free and included in your Azure subscription. | https://docs.microsoft.com/en-us/azure/active-directory/hybrid/whatis-hybrid-identity | 2021-01-15T19:08:55 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.microsoft.com |
As with all Telestream Cloud services, jobs can be submitted through the API or the web console.
When logged in to your account, select the Project that has the template you’d like to use. You will see the QC jobs list. This is where you can keep track of all jobs that have been processed or are currently in progress.
Click the Submit QC Job button to select files for processing. You can either drag & drop source files from your local disk or paste the URL to a media file.
When ready, submit the job to start the upload and quality check process. You can follow the general progress in the jobs list. When the test is finished, we’ll tell you whether it passed or failed, generate a proxy file for preview, and produce a detailed report with all issues. We’ll cover that in detail in the next tutorial.
[Figures: QC jobs view; QC file upload dialog]
Deprecation: #84133 - Deprecate _isHiddenFormElement and _isReadOnlyFormElement¶
See Issue #84133
Description¶
The following properties have been marked as deprecated and should not be used any longer:
renderingOptions._isHiddenFormElement
renderingOptions._isReadOnlyFormElement
Those properties are available for the following form elements of the form framework:
- ContentElement
- Hidden
- Honeypot
Impact¶
The properties mentioned are still available in TYPO3 v9, but they will be dropped in TYPO3 v10.
Affected Installations¶
Any form built with the form framework is affected as soon as those properties have been manually added to the form definition. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.4/Deprecation-84133-Deprecate_isHiddenFormElementAnd_isReadOnlyFormElement.html | 2021-01-15T18:50:49 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.typo3.org |
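For illustration only, a form definition where one of these options was added manually might look like the following sketch (the identifiers and overall structure are hypothetical):

type: Form
identifier: contact-form
renderables:
  -
    identifier: honeypot-1
    type: Honeypot
    renderingOptions:
      _isHiddenFormElement: true

Such definitions keep working in TYPO3 v9 but should be cleaned up before the properties are dropped in v10.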
{"title":"Release note 10.03.16","slug":"release-note-100316","body":"##Access TCGA data on Cavatica\n\n*Note: documentation links for TCGA data on Cavatica will be fully live on Oct. 10th. Thank you for your patience*\n\nThe Cancer Genome Atlas (TCGA) is now available on Cavatica through an integration with the [Seven Bridges Cancer Genomics Cloud (CGC)](). As such, the CGC is the source for authenticating users with the [Database of Genotypes and Phenotypes (dbGaP)]() and authorizing access to TCGA data on Cavatica. To access TCGA on Cavatica, you will first be directed to create an account on the CGC. \n\nYou can access TCGA Open Data on Cavatica after you are authenticated via the CGC and agree to [data use policies](). In addition, you can obtain access to Controlled Data through a dbGaP application. If you have an approved dbGaP application, be sure [update your application]() to list Seven Bridges as the Platform as a Service (PaaS). Security is our priority at Seven Bridges, and we ask that you familiarize yourself with and comply by [TCGA data access standards]() on Cavatica.\n\nOnce you have TCGA access, check out the [Data Browser]() to start querying TCGA data right away. \n\n##job.tree.log for job executions\nAfter each job execution, Cavatica will generate a job.tree.log file. This file contains the structure of the working directory and can be useful in the debugging process. Learn more from our [documentation]() on the Seven Bridges Knowledge Center.","_id":"57f2b10e130b941700a4bd0e","changelog":[],"__v":0,"createdAt":"2016-10-03T19:27:10.594Z","project":"5773dcfc255e820e00e1cd4d","user":{"name":"Emile Young","username":"","_id":"5613e4f8fdd08f2b00437620"},"metadata":{"title":"","description":"","image":[]}} | https://docs.cavatica.org/blog/release-note-100316 | 2021-01-15T17:52:36 | CC-MAIN-2021-04 | 1610703495936.3 | [] | docs.cavatica.org |
Basic Usage
countpattern col: myCol on: 'honda'
Output: Generates a new column containing the number of instances of the string honda that appear in each row of the column myCol.
Syntax and Parameters
countpattern col:column_ref [ignoreCase:true|false] [after:start_point | from: start_point] [before:end_point | to:end_point] [on:'exact_match']
Matching parameters:
NOTE: At least one of the following parameters must be included to specify the pattern to count: after, before, from, on, to.
For more information on syntax standards, see Language Documentation Syntax Notes.
col
Identifies the column to which to apply the transform. You can specify only one column.
countpattern col: MyCol on: 'MyString'
Output: Counts the number of instances of the value MyString in the MyCol column and writes this value to a new column.
Usage Notes:
after
countpattern col: MyCol after: 'entry:'
Output: Counts 1 if there is anything that appears in the MyCol column value after the string entry:. If the value entry: does not appear in the column, the output value is 0.
You can specify the after parameter value using string literals, regular expressions, or Patterns.
Usage Notes:
- The after and from parameters are very similar; from includes the matching value as part of the extracted string.
- after can be used with either to, on, or before. See Pattern Clause Position Matching.
before
countpattern col: MyCol before: '|'
Output:
- Counts 1 if there is a value that appears before the pipe character (|) in the MyCol column and no other pattern parameter is specified. If the before value does not appear in the column, the output value is 0.
- If another pattern parameter such as after is specified, the total count of instances is written to the new column.
Usage Notes:
- The before and to parameters are very similar; to includes the matching value as part of the extracted string.
- before can be used with either from, on, or after. See Pattern Clause Position Matching.
from
Identifies the pattern that marks the beginning of the value to match. Pattern can be a string literal, Patterns, or regular expression. The from value is included in the match.
countpattern col: MyCol from: 'go:'
Output:
- Counts 1 if there are contents in MyCol that occur from go: to the end of the cell, when no other pattern parameter is specified. If go: does not appear in the column, the output value is blank.
- If another pattern parameter such as to is specified, the total count of instances is written to the new column.
Usage Notes:
- The after and from parameters are very similar; from includes the matching value as part of the extracted string.
- from can be used with either to or before. See Pattern Clause Position Matching.
on
Identifies the pattern to count. Pattern can be a string literal, Patterns, or regular expression.
to
Identifies the pattern that marks the ending of the value to match. Pattern can be a string literal, Patterns, or regular expression. The to value is included in the match.
countpattern col:MyCol from:'note:' to: `/`
Output:
- Counts instances from the MyCol column of all values that begin with note: up to the slash character (/).
- If a second pattern parameter is not specified, then this value is either 0 or 1.
Usage Notes:
- The before and to parameters are very similar; to includes the matching value as part of the extracted string.
- to can be used with either from or after. See Pattern Clause Position Matching.
ignoreCase
Indicates whether the match should ignore case or not.
- Set to true to ignore case when matching.
- (Default) Set to false to perform case-sensitive matching.
countpattern col: MyCol on: 'My String' ignoreCase: true
Output: Counts the instances of the following values if they appear in the MyCol column: My String, my string, My string, etc.
Usage Notes:
Tip: For additional examples, see Common Tasks.
Examples
Example - Counting patterns in tweets
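A minimal sketch (the column name and cell values are hypothetical, not taken from this page): suppose a tweet column contains the values Launch day! #boom #rocket and No tags here. Counting the hash marks per row with a simple string literal:

countpattern col: tweet on: '#'

Output: Generates a new column containing 2 for the first row and 0 for the second. A Trifacta Pattern or regular expression could be used instead of the literal for stricter hashtag matching.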
Exporting Brush Presets
You can export your presets to back them up, share them, or install them on a different computer.
[Figure: Export brush presets]
Talk:Plugin
From Joomla! Documentation
These functions may be used to create and manage processes.
3. Be aware that programs which use signal.signal() to register a handler for SIGABRT will behave differently. Availability: Unix, Windows.
(pid, fd), where pid is 0 in the child, the new child's process id in the parent, and fd is the file descriptor of the master end of the pseudo-terminal. For a more portable approach, use the pty module. Availability: Some flavors of Unix.
<sys/lock.h>) determines which segments are locked. Availability: Unix.
startfile() returns as soon as the associated application is launched. There is no option to wait for the application to close, and no way to retrieve the application's exit status. The path parameter is relative to the current directory. If you want to use an absolute path, make sure the first character is not a slash ("/"); the underlying Win32 ShellExecute() function doesn't work if it is. Use the os.path.normpath() function to ensure that the path is properly encoded for Win32. Availability: Windows. New in version 2.0.
If pid is greater than 0, waitpid() requests status information for that specific process. If pid is 0, the request is for the status of any child in the process group of the current process. If pid is -1, the request pertains to any child of the current process. If pid is less than -1, status is requested for any process in the process group -pid (the absolute value of pid).
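A minimal Unix-only sketch of the most common case, waiting for one specific child (the exit status value is arbitrary):

import os

pid = os.fork()
if pid == 0:
    # child: terminate immediately with status 3
    os._exit(3)
else:
    # parent: block until that specific child terminates
    pid, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):
        print "child exited with status", os.WEXITSTATUS(status)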
See About this document... for information on suggesting changes.
You can set up recurring reports in the WhatsUp Gold console. The reports can be sent either in the body of the email message or as an attached Archived Web Page (.mht) file. You can also use the Scheduled Reports feature to set up scheduled reports in the WhatsUp Gold web interface. The reports can be sent as .pdf document attachment. For more information, see Using Scheduled Reports: printing, exporting, and emailing reports.
To create a new Recurring Report:
Important: Recurring reports for workspace reports that include Split Second Graphs display a user rights error. Currently, Split Second Graphs are not supported in recurring reports.
Note: Recurring reports are sent in a fixed format that cannot be modified. They may not appear as expected, depending on your email client and your email preferences. If this is the case, you can send the reports as attachments.
Note: Recurring reports of workspace reports can only be sent as attachments.
You can find this path by selecting a report in the web interface. The URL shown in the address bar is the URL you need to enter in the URL box. You can use "localhost" - or - the configured IP address for the WhatsUp computer in the report URL.
To edit an existing Recurring Report: | http://docs.ipswitch.com/NM/82_WhatsUp%20Gold%20v14.4/03_Help/configuring_recurring_reports.htm | 2013-05-18T16:18:04 | CC-MAIN-2013-20 | 1368696382560 | [] | docs.ipswitch.com |
Gant is a tool for creating Ant-task-based builds using Groovy scripting instead of XML. Try it and I'm sure you'll like it!
This is, in fact, the first actual release. It got numbered 0.2.0 because there was a significant syntax revision, so 0.1.0 was never released.
Introduction
If you've ever work.
Icinga. Icinga is installed and running on the Icinga host.
Defining Parent/Child Relationships
In order for Icinga to be able to distinguish between DOWN and UNREACHABLE states for the hosts that are being monitored, you'll need to tell Icinga how those hosts are connected to each other - from the standpoint of the Icinga daemon. To do this, trace the path that a data packet would take from the Icinga daemon to each individual host. Each switch, router, and server the packet encounters or passes through is considered a "hop" and will require that you define a parent/child host relationship in Icinga. Here's what the host parent/child relationships look like from the viewpoint of Icinga:
Now that you know what the parent/child relationships look like for hosts that are being monitored, how do you configure Icinga to reflect them? The parents directive in your host definitions allows you to do this. Here's what the (abbreviated) host definitions with parent/child relationships would look like for this example:
define host{
        host_name       Icinga          ; <-- The local host has no parent - it is the topmost host
        }

define host{
        host_name       Switch1
        parents         Icinga
        }

Now that you've configured Icinga with the proper parent/child relationships for your hosts, let's see what happens when problems arise. Assume that two hosts - Web and Router1 - go offline...
When hosts change state (i.e. from UP to DOWN), the host reachability logic in Icinga kicks in. The reachability logic will initiate parallel checks of the parents and children of whatever hosts change state. This allows Icinga to quickly determine the current status of your network infrastructure when changes occur.
In this example, Icinga will determine that Web and Router1 are both in DOWN states because the "path" to those hosts is not being blocked.
Icinga will determine that all the hosts "beneath" Router1 are all in an UNREACHABLE state because Icinga can't reach them. Router1 is DOWN and is blocking the path to those other hosts. Those hosts might be running fine, or they might be offline - Icinga doesn't know because it can't reach them. Hence Icinga considers them to be UNREACHABLE instead of DOWN.
UNREACHABLE States and Notifications
By default, Icinga will notify contacts about both DOWN and UNREACHABLE host states. As an admin/tech, you might not want to get notifications about hosts that are UNREACHABLE. You know your network structure, and if Icinga.
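If you decide you do not want UNREACHABLE notifications, the usual approach is to leave the u flag out of the notification options. A sketch (the contact name is illustrative, and other required directives are omitted):

define contact{
        contact_name                    jdoe
        host_notification_options       d,r     ; notify on DOWN and recovery, but not UNREACHABLE
        ...
        }

The same d, u, r flags apply to the notification_options directive in host definitions.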
© 2009-2010 Icinga Development Team, | http://docs.icinga.org/1.0.3/en/networkreachability.html | 2013-05-18T16:37:56 | CC-MAIN-2013-20 | 1368696382560 | [] | docs.icinga.org |
An interpolation between two integers that rounds.
This class specializes the interpolation of Tween<int> to be appropriate for integers by interpolating between the given begin and end values and then rounding the result to the nearest integer.
This is the closest approximation to a true linear tween that is possible with an integer. Compare to StepTween and Tween<double>.
See Tween for a discussion on how to use interpolation objects.
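For example, an IntTween can be driven by an AnimationController; the controller variable below is assumed to already exist and is not part of this page:

final Animation<int> alpha = IntTween(begin: 0, end: 255).animate(controller);

As the controller's value moves from 0.0 to 1.0, alpha.value takes the rounded integer values between 0 and 255.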
- Inheritance
- Object
- Animatable<T>
- Tween<int>
- IntTween
Constructors
- IntTween({int begin, int end})
- Creates an int tween.
Properties
- begin ↔ int
- The value this variable has at the beginning of the animation. [...]read / write, inherited
- end ↔ int
- The value this variable has at the end of the animation. [...]read / write, inherited
Methods
- lerp(double t) → int
- Returns the value this variable has at the given animation clock value. [...]
- animate(
Animation<double> parent) → Animation
- Returns the interpolated value for the current value of the given animation. [...]inherited
- noSuchMethod(
Invocation invocation) → dynamic
- Invoked when a non-existent method or property is accessed. [...]inherited
- toString(
) → String
- Returns a string representation of this object.inherited
Operators
- operator ==(
other) → bool
- The equality operator. [...]inherited | https://docs.flutter.io/flutter/animation/IntTween-class.html | 2018-01-16T17:31:53 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.flutter.io |
This documentation contains User and Kernel functions exported by Vita modules.
It is aimed at people who want to write applications and plugins for the Vita with the vitasdk and want to get an overview of the functions they can use.
The header files of the vitasdk are released under the terms of the MIT license, here is a non-legally binding summary of what you can do.
You can help us to improve this documentation!
Go to vita-headers, improve the header files containing the documentation, or open an issue if you have found a problem with it.
If you have questions about this documentation, don't hesitate to join our IRC chat room #vitasdk on the FreeNode server.
Source of this documentation:
Vitasdk Sample code:
Henkaku Wiki:
Vita Dev Wiki: | https://docs.vitasdk.org/ | 2018-01-16T17:13:34 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.vitasdk.org |
Indent Included Content
Source code snippets from external files are often padded with a leading block indent. This leading block indent is relevant in its original context. However, once inside the documentation, this leading block indent is no longer needed.
The indent attribute controls this leading indentation: when set to 0, the leading block indent is removed entirely; when set to a positive number, the indentation is normalized to that many spaces. For example:
[source,ruby,indent=2]
----
def names
  @name.split ' '
end
----
Produces:
  def names
    @name.split ' '
  end
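The indent attribute is most useful together with the include directive, which is the usual source of unwanted leading indentation. A sketch (the file name example.rb and the tag name are hypothetical):

[source,ruby,indent=0]
----
include::example.rb[tag=names]
----

Here indent=0 strips whatever block indentation the tagged region carries in example.rb.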
Welcome to RISCV-BOOM’s documentation!¶
The Berkeley Out-of-Order Machine (BOOM) is a synthesizable and parameterizable open-source RISC-V out-of-order core written in the Chisel hardware construction language. The goal of this document is to describe the design and implementation of the core as well as provide other helpful information to use the core.
Useful Links¶
The BOOM source code can be found here:.
The main supported mechanism to use the core is to use the Chipyard framework:.
The BOOM website can be found here:.
The BOOM mailing list can be found here:.
Quick-start¶
The best way to get started with the BOOM core is to use the Chipyard project template. There you will find the main steps to setup your environment, build, and run the BOOM core on a C++ emulator. Chipyard also provides supported flows for pushing a BOOM-based SoC through both the FireSim FPGA simulation flow and the HAMMER ASIC flow. Here is a selected set of steps from Chipyard’s documentation:
# Download the template and setup environment
git clone <chipyard-repository-url>
cd chipyard
./scripts/init-submodules-no-riscv-tools.sh

# build the toolchain
./scripts/build-toolchains.sh riscv-tools

# add RISCV to env, update PATH and LD_LIBRARY_PATH env vars
# note: env.sh generated by build-toolchains.sh
source env.sh

cd sims/verilator
make CONFIG=LargeBoomConfig
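After the simulator is built, a RISC-V binary can be run on it. The target and binary path below follow Chipyard's usual flow and are assumptions, not steps quoted from this page:

make CONFIG=LargeBoomConfig run-binary BINARY=$RISCV/riscv64-unknown-elf/share/riscv-tests/isa/rv64ui-p-add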
Table of Contents¶
Introduction:
Core Overview:
- Instruction Fetch
- Branch Prediction
- The Decode Stage
- The Rename Stage
- The Reorder Buffer (ROB) and the Dispatch Stage
- The Issue Unit
- The Register Files and Bypass Network
- The Execute Pipeline
- The Load/Store Unit (LSU)
- The Memory System
Usage:
- Parameterization
- The BOOM Development Ecosystem
- Debugging
- Micro-architectural Event Tracking
- Verification
- Physical Realization | https://docs.boom-core.org/en/stable/index.html | 2021-07-24T04:17:19 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.boom-core.org |
The Dashboard is the first page that appears after you log in to the Bugcrowd platform. It is the hub for researchers to quickly view the most important tasks. It contains a high-level summary of your payment information, current and upcoming programs, tasks related to your participation on Bugcrowd (including submissions), an activity feed for viewing the activity in the platform, and announcements.
The Dashboard comprises of five main areas: Rewards, Engagements, Tasks, Activity, and Announcements.
You can click the
> < icon to collapse or expand the left panel.
Viewing Rewards
The Rewards section displays the rewards earned in the last 30 days and upcoming reward information.
Viewing Engagements
The Engagements section displays the recent engagements, recommended programs, and programs ending soon.
- Recent section displays the programs you have most recently interacted with, such as programs where you submitted vulnerabilities, joined, or accepted an application or invitation.
- Just for you section displays the recommended programs based on your profile.
- Ending Soon section displays the programs that will end in the next two weeks. If there are programs that are not ending soon, then this section is not displayed.
Viewing Tasks
The Tasks tab on the right-side displays the tasks that are due for completion.
You can filter the tasks based on the following:
- Unblock: Displays submissions for which you must respond to unblock it.
- Collaboration: Displays submission collaboration invitations.
- Retest: Displays submissions that required a retest.
- Profile and Account: Displays tasks related to completing your profile and account settings.
- Done: Displays tasks that are completed.
- Dismissed: Displays tasks that are dismissed.
Some tasks such as accepting invitations persist until the invitation is accepted, rejected, task is dismissed, or until the invitation expires. Each invitation displays an expiry time based on the program start and invitation timing. For a program that has not yet started, the invitation will expire 8 days from the program launch day. If the program has started, then it is 8 days from when you received your invitation. If the program is paused or rescheduled, the counter will be paused until the program launches or resumes.
Viewing Activities
The Activity tab provides a chronological list of all activities on your submissions, sorted from the most recent to the oldest across all programs. It lets you stay up to date on the most recent activity in the program, such as comments that have been added to a submission, submission statuses that have been changed, and rewards that have been paid.
To help you identify researchers in the activity feed, rewards, and submission comments, Bugcrowd automatically generates and assigns researchers a unique avatar, if a profile photo does not exist. This allows you to quickly track and differentiate between users in the Activity feed.
The activities are grouped based on Today, Yesterday, This week, Last week, This month, Last month, and then the activities older than Last Month.
You can filter the activities based on the following:
- Rewards
- Blocker
- State Change
- Severity Change
- Disclosure
Viewing Announcements
You can view announcements for the programs you have access to. The announcements are grouped based on Today, Yesterday, This week, Last week, This month, Last month, and then the announcements older than Last Month.
Clicking the announcement title displays announcement view for that program.
Click More to view the announcement details. | https://docs.bugcrowd.com/researchers/onboarding/researcher-dashboard/ | 2021-07-24T05:01:11 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['/assets/images/researcher/researcher-dashboard/dashboard.png',
'dashboard'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/expand-icon.png',
'expand-icon'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/rewards.png',
'rewards'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/recent-programs.png',
'recent-programs'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/just-for-you.png',
'just-for-you'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/ending-soon.png',
'ending-soon'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/tasks.png',
'tasks'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/tasks-filter.png',
'tasks-filter'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/invitation-with-expiry-time.png',
'invitation-with-expiry-time'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/activity.png',
'activity'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/activity-filter.png',
'activity-filter'], dtype=object)
array(['/assets/images/researcher/researcher-dashboard/announcements.png',
'announcements'], dtype=object) ] | docs.bugcrowd.com |
The rightClickItem event is fired whenever the player right clicks with an item in their hand. It does not offer any special getters, but you can still access all members from MCPlayerInteractEvent
The event is cancelable.
If the event is canceled, Item#onItemRightClick will not be called.
The event class can be imported with:
import crafttweaker.api.event.entity.player.interact.MCRightClickItemEvent;
MCRightClickItemEvent extends MCPlayerInteractEvent. That means all methods available in MCPlayerInteractEvent are also available in MCRightClickItemEvent | https://docs.blamejared.com/1.16/es/vanilla/api/event/entity/player/interact/MCRightClickItemEvent/ | 2021-07-24T04:21:28 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.blamejared.com |
Citrix Workspace app and Citrix Receiver
Citrix Workspace app replaces and extends the full capabilities of Citrix Receiver. Citrix recommends using the latest version of Citrix Workspace app to access workspaces. You can also access workspaces using Internet Explorer 11, or the latest version of Edge, Chrome, Firefox, or Safari.
For more information about supported features in Workspace app by platform, refer to the Workspace app feature matrix.
Important lifecycle milestone for Citrix Receiver
Citrix Receiver has reached End of Life and is no longer supported. If you continue to use Citrix Receiver, technical support is limited to the options described in Lifecycle Milestones Definitions.
For more information about End of Life milestones for Citrix Receiver by platform, refer to Lifecycle Milestones for Citrix Workspace app and Citrix Receiver.
Information in this article about Citrix Receiver is provided as a convenience to help you transition your subscribers to using Workspace app.
Supported authentication methods for Citrix Workspace app
The following table shows the authentication methods supported by Citrix Workspace app. The table includes authentication methods relevant to specific versions of Citrix Receiver, which Citrix Workspace app replaces.
For more information about Workspace app support for specific features, refer to the Workspace app feature matrix.
For an overview of TLS and SHA2 support with Citrix Receivers, see CTX23226.
Transitioning from Citrix Receiver to Citrix Workspace app
New customers. If you’re new to the workspace experience, you’ll get the latest version of the user interface as soon as it is available. You can access the workspace experience from your browser or from a local Citrix Workspace app.
Existing customers. If you’ve been using an earlier version of Citrix Workspace, the updated user interface can take around five minutes to display in local Citrix Workspace apps. You may temporarily see an older version of the user interface. Alternatively, you can click the Refresh button in your web browser to update the user interface as needed. If you’ve been using Citrix Receiver, guide users to upgrade to Citrix Workspace app so they can use all the features of Citrix Cloud services.
The following scenarios illustrate what users are likely to see if they are still using Citrix Receiver rather than Citrix Workspace app (recommended).
Citrix Receiver
Important:
Citrix Receiver has reached End of Life. For more information, refer to Important lifecycle milestone for Citrix Receiver in this article.
Users that are still accessing Workspace with Citrix Receiver see the “purple” user interface shown below. They see Virtual Apps and Desktops apps as well as web and SaaS apps from the Citrix Gateway service. Files are not supported in Citrix Receiver and users cannot access them this way.
The access control feature is not supported in Citrix Receiver. Thus, with the same services and access control enabled, users still see the purple user interface, but without web and SaaS apps.
Access control is a feature that delivers access for end users to SaaS, web, and virtual apps with a single sign-on (SSO) experience.
Citrix Workspace app or browser
Users that upgrade to Citrix Workspace app or use a web browser to access Workspace see the new user interface. They can then use all the new functionality, including access to Files.
Azure Active Directory (AAD)
This scenario is for when AAD is enabled as the Workspace authentication method. If users try to access Workspace with Citrix Receiver, they’ll see a message that the device isn’t supported. Once they upgrade to Citrix Workspace app, they can access their workspace.
StoreFront (on-premises deployment)
This scenario is for a StoreFront on-premises environment. If users choose to upgrade from Citrix Receiver to Citrix Workspace app, the only change will be the icon to open Citrix Workspace app.
Government users
Citrix Cloud Government users will continue to see their “purple” user interface when using the Workspace app or when accessing from a web browser. | https://docs.citrix.com/en-us/citrix-workspace/workspace-app.html | 2021-07-24T05:31:57 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['/en-us/citrix-workspace/media/purple-receiver.png',
'Citrix Workspace with Citrix Receiver'], dtype=object)
array(['/en-us/citrix-workspace/media/purple-receiver-access-control.png',
'Citrix Workspace with Citrix Receiver and access control'],
dtype=object)
array(['/en-us/citrix-workspace/media/workspace-new-ui-with-files.png',
'Citrix Workspace app with new user interface'], dtype=object)
array(['/en-us/citrix-workspace/media/receiver-with-aad-or-ad-plus-token.png',
'Citrix Receiver with AAD'], dtype=object)
array(['/en-us/citrix-workspace/media/storefront-on-premises.png',
'Citrix Workspace app with StoreFront on-premises'], dtype=object)] | docs.citrix.com |