Dataset columns:
content: string (length 0 to 557k)
url: string (length 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 to 15)
segment: string (length 13 to 17)
image_urls: string (length 2 to 55.5k)
netloc: string (length 7 to 77)
Event-action model
PubCoder allows you to easily enrich your content with interactivity by defining a series of actions that each object can perform in response to events, like a tap on the screen or the device shaking or tilting. Actions can be combined in lists using a very simple form of visual programming: each action is represented by a "brick" that can be chosen from a menu, and the various bricks can be placed in the list one after another or stuck together, so that they will be performed at the same time.

The Interactivity Panel
The Interactivity panel, at the right of the project window, can be used to define the behavior of the selected object. When an object is selected, the list of events related to that object is displayed on top of the control. These are the starting points to let the object react with some actions. Some events are more generic and usually related to direct user input (e.g. tap, swipe), some are more specific to the object (e.g. a quiz was completed successfully). Selecting an event (e.g. Tap in the picture) displays the corresponding series of actions that will be executed when the event occurs. If the event already contains some actions, the number of contained actions is displayed to the left of the event name (e.g. 7 in the picture). You can add an action by clicking the Add New Action button, which pops up a menu containing a searchable list of all actions that can be applied to the current event. In fact, not every action can be applied to every event: for example, the Drag Object action can only be applied to the Drag event, since a user dragging on screen is required to actually drag an object. Once an action has been added to the list, its properties can be edited. Actions in the Interactivity panel can be dragged to re-arrange their order of execution, and can also be "attached" one to another (like the two Hide Object actions in the picture above) to be executed at the same time, as opposed to one after another. Some actions can target any other object on the page to change its appearance, e.g. moving, rotating, scaling or fading it, or bringing it to the front or backwards. Others act on specific objects: playing or stopping a video, an audio file or a frame-by-frame animation, switching the text of a text box or the image of an image object, and so on. There are actions to change page, open a URL, run JavaScript code, or apply CSS classes, and others that let you loop or run a group of actions, and more. The combinations are really endless, and we keep adding something new very often. See Generic Actions below for a list of common actions. Whatever actions you use, you can quickly preview the entire page by clicking the Preview button in the Project Window toolbar. To remove an action, simply select it and hit the minus (-) button. Also, for each event, you can use the button at the bottom-right of the list to decide whether an action list should be performed whenever the event is fired, only the first time the event is fired, or never.

Generic Events
Most events are generic and apply to the vast majority of objects; most of them happen in response to user actions, such as touching the screen or tilting the device. Here we describe these generic events one by one. For a description of object-specific events, see the help page for each specific object. Accelerometer The user gesture of moving the device in space; accepts only the Float Object action. Drag The user gesture of dragging an object over the screen; accepts only the Drag Object action. Load The page was loaded and is being displayed on the user's device.
Pinch Open The user gesture of simultaneously moving two fingers outwards on the object on a mobile device (on a desktop computer, it is raised when the user double-clicks on the object). Pinch Close The user gesture of simultaneously moving two fingers inwards on the object on a mobile device (on a desktop computer, it is raised when the user double-clicks on the object while pressing the alt key). Read Aloud Started The Read Aloud playback was just started. Also triggered right after page load when read aloud playback was already active. Read Aloud Stopped The Read Aloud playback was just stopped. Shake The user is shaking the device in space. Show Event raised when the object is displayed on the page, either because the page loaded or because the hidden object was shown by another action. Swipe Down The user gesture of swiping the object downwards with their finger (or dragging with the mouse). Swipe Left The user gesture of swiping the object to the left with their finger (or dragging with the mouse). Swipe Right The user gesture of swiping the object to the right with their finger (or dragging with the mouse). Swipe Up The user gesture of swiping the object upwards with their finger (or dragging with the mouse). Tap The user gesture of tapping on the object, that is, briefly touching the object and quickly raising the finger again on a mobile device (or clicking on the object with the mouse on a desktop computer). Touch Down The user gesture of touching the object with their finger (or pressing the mouse button with the cursor over the object). Touch Up The user gesture of raising the finger while touching the object (or releasing the mouse button after it was pressed on the object).

Generic Actions
Most actions can target any other object on the page to change its appearance or layout. Here we describe these generic actions one by one. For a description of object-specific actions, see the help page for each specific object. Hide Object Hides an object on the page, with or without a fade-out effect. Properties Show Object Shows an object that was previously hidden, e.g. by a previous Hide Object action or by having its hidden property enabled. Properties Move Object Moves an object on the page in a rectilinear direction. You can use the handle on the target object to move it to the desired position on stage, or define the X,Y movement via properties. Properties Rotate Object Rotates an object on the page by a certain number of degrees. You can use the handles on the target object to rotate it as desired and change the rotation origin, or define the rotation via properties. Properties Scale Object Scales an object on stage to a fixed scale or by a fixed amount. Properties Drag Object When added to the event handler of a Drag event, allows the user to drag the target object around the screen, meaning that after touching it, the object will follow the user's finger (or mouse cursor) until the finger (or mouse button) is raised. Drop zones can be defined, with corresponding action lists that will be executed if the user drops the object on the area. It is also possible to define an object whose bounds will be used to limit the area where the drag can take place, so that the target object cannot be dragged outside of that area. Properties To define a drop zone, simply click Add Drop Zone in the action inspector, then define the target for this drop zone and the list of actions to execute when the object is dropped onto the area.
Finally, there's a list of actions connected to drops outside of every defined zone. Float Object When added to the event handler of an Accelerometer event, makes the target object float around the screen of the user's device, following gravity and device tilting. Properties Repeat Actions Repeats the sequence of actions that occurred in the same action list before this one. Properties Wait Waits a certain number of seconds before executing the next action in the action list. Properties Start/Stop Read Aloud Switches the playback of the Read Aloud from OFF to ON or vice versa. In exports that rely on the ibooks:readaloud="startstop" attribute, this action must be placed alone in the action list of a Tap or Touch Down event; no other actions can be executed in the same action list. Play Audio File Plays an audio file. Properties Play, Pause, Play/Pause Soundtrack These actions switch the soundtrack playback status accordingly, though the user may be able to manage the soundtrack playback in some reader apps. Add CSS Class to Object Adds a CSS class to the container (usually a DIV) of the target object, altering its class attribute. See the Code Section to see how to define a custom CSS class via code (a small illustrative example is sketched below). Properties Remove CSS Class from Object Removes a CSS class from the container (usually a DIV) of the target object, altering its class attribute. Properties Set Object CSS Style Assigns a value to a certain CSS property of the container (usually a DIV) of the target object, altering its style attribute. Properties Set Object Background Color Applies a specific color to the background of the target object. Properties Set Object Border Applies specific border settings to the container of the target object. Properties Set Object Shadow Applies specific shadow settings to the container of the target object. Properties Scroll Content Scrolls the current page vertically to reach a certain point. For more information, please see Vertical Scrolling Pages. Properties Bring Back to Initial Layer, Bring Forward, Bring to Front, Send Backward, Send to Back These actions allow you to change the z-index of the layer of the target object, to bring it above or below other objects. Go To Next Page, Go To Previous Page Send the user to the next or previous page in your layout. Go To Page Sends the user to a certain page in your layout. Properties Open App Menu Opens the navigation menu of the reader app, allowing the user to modify reader settings or exit the reader. Open Localizations Menu Opens a menu that allows the user to switch language, choosing from the different language renditions included in your exports. Close Reader Closes the reader that is displaying the contents to the user, going back to the PubReader or Shelf screen. Open URL Opens a web page at the specified URL, in the current app or in the user's favorite browser. Properties Run JavaScript Executes custom JavaScript code and passes immediately to the next action in the list.
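The Add CSS Class to Object and Set Object CSS Style actions simply alter the class or style attribute of the object's container, so any CSS class defined in the project's Code Section can be applied. Below is a minimal, hypothetical CSS sketch; the class name and values are invented for illustration and are not part of PubCoder's defaults.

    /* Hypothetical class you might define in the Code Section and apply
       with the "Add CSS Class to Object" action. */
    .highlighted {
      outline: 4px solid #ffcc00;              /* draw attention to the object */
      opacity: 0.85;                           /* slightly fade the object */
      transition: opacity 0.3s ease-in-out;    /* animate the change */
    }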
https://docs.pubcoder.com/pubcoder_events_and_actions.html
2020-10-20T02:50:14
CC-MAIN-2020-45
1603107869785.9
[array(['images/pubcoder_events_and_actions_1.png', '1'], dtype=object) array(['images/pubcoder_events_and_actions_2.gif', '2'], dtype=object) array(['images/pubcoder_events_and_actions_3.gif', '2'], dtype=object) array(['images/pubcoder_events_and_actions_4.png', '2'], dtype=object)]
docs.pubcoder.com
Extract text from an image. Requires that you have training data for the language you are reading. Works best for images with high contrast, little noise and horizontal text. Simple example # Simple example text <- ocr("") cat(text) # Get XML HOCR output xml <- ocr("", HOCR = TRUE) cat(xml) Roundtrip test: render PDF to image and OCR it back to text # Full roundtrip test: render PDF to image and OCR it back to text curl::curl_download("", "R-intro.pdf") orig <- pdftools::pdf_text("R-intro.pdf")[1] # Render pdf to png image img_file <- pdftools::pdf_convert("R-intro.pdf", format = 'tiff', pages = 1, dpi = 400) # Extract text from png image text <- ocr(img_file) unlink(img_file) cat(text) On Windows and MacOS the binary package can be installed from CRAN: install.packages("tesseract") Installation from source on Linux or OSX requires the Tesseract library (see below). On Debian or Ubuntu install libtesseract-dev and libleptonica-dev. Also install tesseract-ocr-eng to run the examples. sudo apt-get install -y libtesseract-dev libleptonica-dev tesseract-ocr-eng On Ubuntu Xenial and Ubuntu Bionic you can use this PPA to get the latest version of Tesseract: sudo add-apt-repository ppa:cran/tesseract sudo apt-get install -y libtesseract-dev tesseract-ocr-eng On Fedora we need tesseract-devel and leptonica-devel: sudo yum install tesseract-devel leptonica-devel On RHEL and CentOS we need tesseract-devel and leptonica-devel from EPEL: sudo yum install epel-release sudo yum install tesseract-devel leptonica-devel On OS-X use tesseract from Homebrew: brew install tesseract Tesseract uses training data to perform OCR. Most systems default to English training data. To improve OCR results for other languages you need to install the appropriate training data. On Windows and OSX you can do this in R using tesseract_download(): tesseract_download('fra') On Linux you need to install the appropriate training data from your distribution. For example, to install the Spanish training data on Debian or Ubuntu: sudo apt-get install tesseract-ocr-spa Alternatively you can manually download training data from GitHub and store it in a path on disk that you pass in the datapath parameter, or set a default path via the TESSDATA_PREFIX environment variable. Note that Tesseract 4 and Tesseract 3 use different training data formats. Make sure to download training data from the branch that matches your libtesseract version.
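As a quick illustration of using downloaded training data, here is a minimal R sketch; the image path is a placeholder you would replace with your own file, and it assumes tesseract_download('fra') (or the equivalent distribution package) has already been run.

    # Minimal sketch: OCR an image in French using the downloaded training data.
    # "document_fr.png" is a placeholder path, not a file shipped with the package.
    library(tesseract)
    fra <- tesseract("fra")                      # engine backed by the French training data
    text <- ocr("document_fr.png", engine = fra)
    cat(text)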
https://docs.ropensci.org/tesseract/
2020-10-20T03:02:35
CC-MAIN-2020-45
1603107869785.9
[]
docs.ropensci.org
Getting Data into Your H2O Cluster The first step toward building and scoring your models is getting your data into the H2O cluster/Java process that's running on your local or remote machine. Whether you're importing data, uploading data, or retrieving data from HDFS or S3, be sure that your data is compatible with H2O. Supported File Formats H2O currently supports the following file types: CSV (delimited, UTF-8 only) files (including GZipped CSV), ORC, SVMLight, ARFF, XLS (BIFF 8 only), XLSX (BIFF 8 only), Avro version 1.8.0 (without multifile parsing or column type modification), Parquet. Notes: H2O supports UTF-8 encodings for CSV files. Please convert UTF-16 encodings to UTF-8 encoding before parsing CSV files into H2O. Direct Hive Import H2O supports direct ingestion of data managed by Hive in Hadoop. This feature is available only when H2O is running as a Hadoop job. Internally, H2O uses metadata in the Hive Metastore database to determine the location and format of a given Hive table. H2O then imports data directly from HDFS, so the limitations of the supported formats mentioned above apply. Data from Hive can be pulled into H2O using the import_hive_table function. H2O can read Hive table metadata in two ways: either via direct Metastore access or via JDBC. Note: When ingesting data from Hive in Hadoop, direct Hive import is preferred over Using the Hive 2 JDBC Driver. Requirements The user running H2O must have read access to Hive and the files it manages. For direct Metastore access, the Hive jars and configuration must be present on the H2O job classpath, either by adding them to yarn.application.classpath (or a similar property for your resource manager of choice) or by adding the Hive jars and configuration to libjars. For JDBC metadata access, the Hive JDBC driver must be on the H2O job classpath. Limitations The imported table must be stored in a format supported by H2O. CSV: The Hive table property skip.header.line.count is currently not supported. CSV files with header rows will be imported with the header row as data. Partitioned tables with different storage formats: H2O supports importing partitioned tables that use different storage formats for different partitions; however, in some cases (for example, a large number of small partitions), H2O may run out of memory while importing, even though the final data would easily fit into the memory allocated to the H2O cluster. Importing Examples Example 1: Access Metadata via Metastore This example shows how to access metadata via the Metastore (a sketch appears at the end of this section). Example 2: Access Metadata via JDBC This example shows how to access metadata via JDBC. # basic import of metadata via JDBC (R) basic_import <- h2o.import_hive_table("jdbc:hive2://hive-server:10000/default", "table_name") # basic import of metadata via JDBC (Python) basic_import = h2o.import_hive_table("jdbc:hive2://hive-server:10000/default", "table_name") Note: The handling of categorical values is different between file ingest and JDBC ingest. The JDBC ingest treats categorical values as Strings. Strings are not compressed in any way in H2O memory, so using the JDBC interface might need more memory and additional data post-processing (converting to categoricals explicitly). fetch_mode: Set to DISTRIBUTED to enable distributed import. Set to SINGLE to force a sequential read by a single node from the database. num_chunks_hint: Optionally specify the number of chunks for the target frame. use_temp_table: Specifies whether a temporary table should be created by select_query.
temp_table_name: The name of the temporary table to be created by select_query. fetch_mode: Set to DISTRIBUTED to enable distributed import. Set to SINGLE to force a sequential read by a single node from the database. Using the Hive 2 JDBC Driver H2O can ingest data from Hive through the Hive v2 JDBC driver by providing H2O with the JDBC driver for your Hive version. A demo showing how to ingest data from Hive through the Hive v2 JDBC driver is available here. The basic steps are described below. Notes: Direct Hive Import is preferred over using the Hive 2 JDBC driver. H2O can only load data from Hive version 2.2.0 or greater, due to a limited implementation of the JDBC interface by Hive in earlier versions. Set up a table with data. beeline -u jdbc:hive2://hive-host:10000/db-name - Create the DB table: CREATE EXTERNAL TABLE IF NOT EXISTS AirlinesTest( fYear STRING , fMonth STRING , fDayofMonth STRING , fDayOfWeek STRING , DepTime INT , ArrTime INT , UniqueCarrier STRING , Origin STRING , Dest STRING , Distance INT , IsDepDelayed STRING , IsDepDelayed_REC INT ) COMMENT 'test table' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION '/tmp'; - Import the data from the dataset. Note that the file must be present on HDFS in /tmp: LOAD DATA INPATH '/tmp/AirlinesTest.csv' OVERWRITE INTO TABLE AirlinesTest; Add the Hive JDBC driver to H2O's classpath. # Add the Hive JDBC driver to H2O's classpath java -cp hive-jdbc.jar:<path_to_h2o_jar> water.H2OApp Initialize H2O in either R or Python and import data. # initialize h2o in R library(h2o) h2o.init(extra_classpath = c("hive-jdbc-standalone.jar")) # initialize h2o in Python import h2o h2o.init(extra_classpath = ["hive-jdbc-standalone.jar"]) After the jar file with the JDBC driver is added, data from the Hive databases can be pulled into H2O using the aforementioned import_sql_table and import_sql_select functions. connection_url <- "jdbc:hive2://localhost:10000/default" select_query <- "SELECT * FROM AirlinesTest;" username <- "username" password <- "changeit" airlines_dataset <- h2o.import_sql_select(connection_url, select_query, username, password) connection_url = "jdbc:hive2://localhost:10000/default" select_query = "SELECT * FROM AirlinesTest;" username = "username" password = "changeit" airlines_dataset = h2o.import_sql_select(connection_url, select_query, username, password) Connecting to Hive in a Kerberized Hadoop Cluster When importing data from Kerberized Hive on Hadoop, it is necessary to configure the h2odriver to authenticate with the Hive instance via a delegation token. Since Hadoop does not generate delegation tokens for Hive automatically, it is necessary to provide the h2odriver with additional configuration. H2O is able to generate Hive delegation tokens in three modes: On the driver side, a token can be generated on H2O cluster start. On the mapper side, a token refresh thread is started, periodically re-generating the token. A combination of both of the above. H2O arguments used to configure the JDBC URL for Hive delegation token generation: hiveHost - The full address of HiveServer2, for example hostname:10000. hivePrincipal - HiveServer2 Kerberos principal, for example hive/[email protected]. hiveJdbcUrlPattern - (optional) Can be used to further customize the way the driver constructs the Hive JDBC URL.
The default pattern used is jdbc:hive2://{{host}}/;{{auth}}, where {{auth}} is replaced by principal={{hivePrincipal}} or auth=delegationToken based on context. Note on libjars: In the examples below, we are omitting the -libjars option of the hadoop jar command because it is not necessary for token generation. You may need to add it to be able to import data from Hive via JDBC. Generating the Token in the Driver The advantage of this approach is that the keytab does not need to be distributed into the Hadoop cluster. Requirements: The Hive JDBC driver is on the h2odriver classpath via the HADOOP_CLASSPATH environment variable. (Only used to acquire the Hive delegation token.) The hiveHost, hivePrincipal and optionally hiveJdbcUrlPattern arguments are present. (See above for details.) Example command: export HADOOP_CLASSPATH=/path/to/hive-jdbc-standalone.jar hadoop jar h2odriver.jar \ -nodes 1 -mapperXmx 4G \ -hiveHost hostname:10000 -hivePrincipal hive/[email protected] \ -hiveJdbcUrlPattern "jdbc:hive2://{{host}}/;{{auth}};ssl=true;sslTrustStore=/path/to/keystore.jks" Generating the Token in the Mapper and Token Refresh This approach generates a Hive delegation token after the H2O cluster is fully started up and then periodically refreshes the token. Delegation tokens usually have a limited life span, and for long-running H2O clusters they need to be refreshed. For this to work, the user's keytab and principal need to be available to the H2O cluster leader node. Requirements: The Hive JDBC driver is on the h2o mapper classpath (either via libjars or YARN configuration). The hiveHost, hivePrincipal and optionally hiveJdbcUrlPattern arguments are present. (See above for details.) The principal argument is set to the user's Kerberos principal. The keytab argument is set, pointing to the user's Kerberos keytab file. The refreshTokens argument is present. Example command: hadoop jar h2odriver.jar [-libjars /path/to/hive-jdbc-standalone.jar] \ -nodes 1 -mapperXmx 4G \ -hiveHost hostname:10000 -hivePrincipal hive/[email protected] \ -principal user/[email protected] -keytab path/to/user.keytab \ -refreshTokens Note on refreshTokens: The provided keytab will be copied over to the machine running the H2O cluster leader node. For this reason, we strongly recommend that both YARN and HDFS be secured with encryption. Generating the Token in the Driver with Refresh in the Mapper This approach is a combination of the two previous scenarios. The Hive delegation token is first generated by the h2odriver and then periodically refreshed by the H2O cluster leader node. This is the best-of-both-worlds approach: the token is generated first in the driver and is available immediately on cluster start; it is then periodically refreshed and never expires. Requirements: The Hive JDBC driver is on the h2o driver and mapper classpaths. The hiveHost, hivePrincipal and optionally hiveJdbcUrlPattern arguments are present. (See above for details.) The refreshTokens argument is present.
Example command: export HADOOP_CLASSPATH=/path/to/hive-jdbc-standalone.jar hadoop jar h2odriver.jar [-libjars /path/to/hive-jdbc-standalone.jar] \ -nodes 1 -mapperXmx 4G \ -hiveHost hostname:10000 -hivePrincipal hive/[email protected] \ -refreshTokens Using a Delegation Token when Connecting to Hive via JDBC When running the actual data load, specify the JDBC URL with the delegation token parameter: my_citibike_data <- h2o.import_sql_table( "jdbc:hive2://hostname:10000/default;auth=delegationToken", "citibike20k", "", "" ) my_citibike_data = h2o.import_sql_table( "jdbc:hive2://hostname:10000/default;auth=delegationToken", "citibike20k", "", "" )
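To make the Hive import paths above concrete, here is a minimal Python sketch using the functions named in this section (import_hive_table, import_sql_select). The host names, database and table names, credentials and keyword arguments are placeholders or assumptions and should be checked against your H2O version.

    # Minimal sketch (Python); names and arguments are placeholders/assumptions.
    import h2o

    h2o.init()  # assumes an H2O cluster is already reachable, e.g. started as a Hadoop job

    # Direct Hive import, resolving metadata via the Metastore: database + table name.
    airlines = h2o.import_hive_table("default", "AirlinesTest")

    # Direct Hive import, resolving metadata via JDBC instead of the Metastore.
    airlines_jdbc = h2o.import_hive_table("jdbc:hive2://hive-server:10000/default", "AirlinesTest")

    # Hive 2 JDBC driver path: pull a plain SQL select through import_sql_select.
    frame = h2o.import_sql_select(
        "jdbc:hive2://hive-server:10000/default",
        "SELECT * FROM AirlinesTest",
        "username", "password",
        fetch_mode="SINGLE",   # assumption: force a sequential single-node read
    )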
http://docs2.h2o.ai/h2o/latest-stable/h2o-docs/getting-data-into-h2o.html
2020-10-20T02:40:32
CC-MAIN-2020-45
1603107869785.9
[]
docs2.h2o.ai
DP-900: Microsoft Azure Data Fundamentals Languages: English, Japanese, Chinese (Simplified), Korean, French, German, Spanish.
https://docs.microsoft.com/en-us/learn/certifications/exams/dp-900?WT.mc_id=thomasmaurer-blog-thmaure
2020-10-20T04:35:36
CC-MAIN-2020-45
1603107869785.9
[]
docs.microsoft.com
InstallShield 2014 Project: This information applies to the following project types: Feature is a general term that refers to a set of components or subfeatures in InstallShield. A subfeature is a feature that is located below another feature—similar to the relationship between a folder and a subfolder. Top-level features are the highest features in the hierarchy. Top-level features are never referred to as subfeatures. How to Refer to Features and Subfeatures in InstallScript Code Some feature functions and setup type dialog functions require you to refer to a single feature, while others require you to refer to multiple features. Referring to Single Features To refer to a single feature, use the feature’s name. To refer to a subfeature, use a path-like expression where the name of each feature in the hierarchy leading to that feature is separated by double backslashes. For example, to specify the subfeature Tutorials under the top-level feature Help Files, use the following expression in your installation script: szFeature = "Help Files\\Tutorials"; To refer to the subfeature CBT under Tutorials, use the following: szFeature = "Help Files\\Tutorials\\CBT"; Note that the name of a feature cannot contain backslashes. Referring to Multiple Features In InstallScript MSI installations, some feature and setup type dialog functions, such as SdFeatureMult, display multiple features and their subfeatures. In these cases, you refer to multiple features by specifying the feature immediately above them in the hierarchy. To refer to multiple top-level features, use a null string (""). For example, if you pass a null string to the SdFeatureMult function, the corresponding dialog displays all the top-level features in your script-created component set in the left window on the SdFeatureMult dialog, depending on the value of the MEDIA system variable. All subfeatures appear in the right window on this dialog. See Also Feature Functions Dialog Functions Dialog Customization Functions
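For illustration, here is a short, hypothetical InstallScript fragment (e.g. inside an event handler) that passes feature strings built as described above to feature functions; the exact prototypes of FeatureSelectItem and SdFeatureMult should be verified against your InstallShield help.

    // Hypothetical fragment; verify function prototypes in the InstallShield help.
    STRING szFeature, svDir;
    NUMBER nResult;

    // Refer to a single subfeature using its double-backslash path.
    szFeature = "Help Files\\Tutorials\\CBT";
    FeatureSelectItem(MEDIA, szFeature, TRUE);

    // Refer to multiple top-level features by passing a null string ("").
    nResult = SdFeatureMult("Setup Type", "Select the features to install:", svDir, "");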
https://docs.revenera.com/installshield21helplib/helplibrary/IHelpIScriptSpecifyFeatures.htm
2020-10-20T02:53:20
CC-MAIN-2020-45
1603107869785.9
[]
docs.revenera.com
Dark Installation Unzip the downloaded file and upload the XML theme file. Change the size of the image from the Blogger Designer. Note: Contact us with your blog name if you want a similar header image. Posts Slider Featured Post / Most Popular In Layout you can delete these widgets and replace them with new ones. Note: These widgets have custom code, so if you delete them and want to add them back, you will have to reinstall the theme. Instagram Feed Visit instagram.pixelunion.net to generate an access token for your Instagram account. Make sure you're logged into your Instagram account in order for it to work. Edit the Instagram widget and add the code in the widget's body. Note: if the feed stops working, generate a new token and replace the old one. Other Widgets Featured Post Edit the featured post gadget at the footer of the blog and choose a post to feature. Share Buttons The share buttons will only work when the blog is public and has the share buttons enabled. Number of posts per page Edit the Blog Posts gadget to choose how many posts are displayed per page.
http://docs.underlinedesigns.com/docs/dark/
2020-10-20T03:36:17
CC-MAIN-2020-45
1603107869785.9
[array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/11/backup.png', None], dtype=object) array(['http://underlinedesigns.com/wp-content/uploads/2017/03/upload.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/menu-1.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/1-b.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/1-c.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/1-c-1.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/1-c-2.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/menu-2.png', None], dtype=object) array(['http://underlinedesigns.com/wp-content/uploads/2017/03/social2.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/header-2.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/header-3.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/Capture-2.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/slider.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/featured-2.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/featured-3.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/11/access.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/insta.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/11/idtoken.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/11/showshare.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/11/postsnum.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/color.png', None], dtype=object) array(['http://docs.underlinedesigns.com/wp-content/uploads/2017/07/mobile.png', None], dtype=object) ]
docs.underlinedesigns.com
Safety Certificate Renewal Reminders PaTMa's Property Manager is designed to make property management as easy as possible. With options to store documents in the cloud and have access to them at any time, it allows for efficient management with all documents compiled where they are easy to find. More specifically, PaTMa is aware of all the recognised documents that need to be provided to tenants before they sign a new tenancy agreement, and these include safety certificates. Hence, it is essential that all safety certificates are kept up to date, and PaTMa enables you to set reminders ahead of their renewal dates. Note: If you have gone through the tenancy checklist, this step will be complete with the gas safety certification and an electrical safety certificate. However, if you wish to add other certificates or start from scratch, here's a step-by-step guide on how to add a safety certificate. Additionally, a notification as displayed below will appear in the property list. This way you'll have plenty of time to evaluate the impact of the mortgage reverting to a standard rate (we've got some exciting tools coming out soon to help with that). If you decide to find a new deal, you'll still be in time to get in touch with your mortgage broker.
https://docs.patma.co.uk/manager/features/certificatereminders/
2020-10-20T03:37:19
CC-MAIN-2020-45
1603107869785.9
[array(['../img/safetycert.png', None], dtype=object)]
docs.patma.co.uk
Release 321 (15 Oct 2019) Warning The server RPM is broken in this release. General Changes Fix incorrect result of round() when applied to a tinyint, smallint, integer, or bigint type with negative decimal places. (#42) Improve performance of queries with LIMIT over information_schema tables. (#1543) Improve performance for broadcast joins by using dynamic filtering. This can be enabled via the experimental.enable-dynamic-filtering configuration option or the enable_dynamic_filtering session property. (#1686) Security Changes Hive Connector Changes Fix reading TEXT file collection delimiter set by Hive versions earlier than 3.0. (#1714) Fix a regression that prevented Presto from using the AWS Glue metastore. (#1698) Allow skipping header or footer lines for CSV format tables via the skip_header_line_count and skip_footer_line_count table properties. (#1090) Rename table property textfile_skip_header_line_count to skip_header_line_count and textfile_skip_footer_line_count to skip_footer_line_count. (#1090) Add support for LZOP compressed (.lzo) files. Previously, queries accessing LZOP compressed files would fail, unless all files were small. (#1701) Add support for bucket-aware read of tables using bucketing version 2. (#538) Add support for writing to tables using bucketing version 2. (#538) Allow caching directory listings for all tables or schemas. (#1668) Add support for dynamic filtering for broadcast joins. (#1686) PostgreSQL Connector Changes Support reading PostgreSQL arrays as the JSON data type. This can be enabled by setting the postgresql.experimental.array-mapping configuration property or the array_mapping catalog session property to AS_JSON. (#682)
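To illustrate two of the changes above, here is a hypothetical SQL snippet; the catalog, schema, table and column names are placeholders, and the property names should be checked against this release's documentation.

    -- Enable dynamic filtering for broadcast joins in the current session.
    SET SESSION enable_dynamic_filtering = true;

    -- Skip a header row when reading a CSV-format Hive table (CSV columns must be varchar).
    CREATE TABLE hive.default.events_csv (
        event_time varchar,
        user_id    varchar
    )
    WITH (format = 'CSV', skip_header_line_count = 1);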
https://docs.starburstdata.com/latest/release/release-321.html
2020-10-20T03:32:07
CC-MAIN-2020-45
1603107869785.9
[]
docs.starburstdata.com
Acknowledgments¶ Taichi depends on other open-source projects, which are shipped with taichi and users do not have to install manually: pybind11, fmt, Catch2, spdlog, stb_image, stb_image_write, stb_truetype, tinyobjloader, ffmpeg, miniz. Halide has been a great reference for us to learn about the Apple Metal API and the LLVM NVPTX backend API.
https://taichi.readthedocs.io/en/stable/acknowledgments.html
2020-10-20T03:20:34
CC-MAIN-2020-45
1603107869785.9
[]
taichi.readthedocs.io
Payment Failed This is a POST request sent to the url you set in the notifyUrl property when the application was rejected or wasn't completed successfully. Request Parameters - txn_type: cart - payment_status: failed - pt_currency: USD - pt_amount: The purchase amount - uuid: The application token Request Body POST / HTTP/1.1 Host: your-notify-url-here.com Content-Type: application/x-www-form-urlencoded txn_type=cart&payment_status=failed&pt_currency=USD&pt_amount=1000&uuid=d7f5b8ee-380c-4559-be18-0311e9922f98
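As a sketch of how a notifyUrl endpoint might consume this postback, here is a minimal, hypothetical Python/Flask handler; the route and the handling logic are assumptions based only on the parameters listed above.

    # Hypothetical notifyUrl endpoint for the payment-failed postback.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/paytomorrow/notify", methods=["POST"])
    def payment_notification():
        txn_type = request.form.get("txn_type")              # e.g. "cart"
        payment_status = request.form.get("payment_status")  # "failed" for this postback
        uuid = request.form.get("uuid")                      # the application token
        amount = request.form.get("pt_amount")
        currency = request.form.get("pt_currency")

        if txn_type == "cart" and payment_status == "failed":
            # Mark the application identified by `uuid` as failed in your own system.
            print(f"Payment failed for application {uuid}: {amount} {currency}")

        return "", 200  # acknowledge receipt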
http://docs.paytomorrow.com/docs/api/payment-failed-postback/
2020-10-20T02:52:54
CC-MAIN-2020-45
1603107869785.9
[]
docs.paytomorrow.com
(The text is based on a text that has been developed …) When rootstocks (M.26, M.9) become infected, much of the scion trunk and major limbs above the graft union very typically remain symptomless. Table of contents:
http://docs.metos.at/Fire+blight+Biology?structure=Disease+model_en
2020-10-20T02:39:04
CC-MAIN-2020-45
1603107869785.9
[]
docs.metos.at
Boosting document relevance To influence the relevance score that the search engine calculates for returned results, add a reserved ID field (177) with a data type of Real to a form. The range of the boost value is 0 to 2000. The default boost is 1.0 (the neutral value). To increase the relevance, an entry must have a document boost value of 1.1 to 2000. The higher the value, the more relevant the entry is. To decrease the relevance, an entry must have a document boost value of 0 to 0.9. The lower the value, the less relevant the entry is.
https://docs.bmc.com/docs/ars1908/boosting-document-relevance-866350317.html
2020-10-20T04:03:55
CC-MAIN-2020-45
1603107869785.9
[]
docs.bmc.com
On the dashboard you will find the image you uploaded on "Images & Snapshots" under your private images. Click on the Launch button and: Select "Boot from image (creates a new volume)." as the instance boot source. Ensure the device size is at least the same size as the image uploaded. If you are importing an existing virtual machine, for its first boot you should choose a flavor that provides at least the same amount of CPU and RAM as the VM had before. Once you confirm the compute instance is booting appropriately, you can resize it to a smaller flavor if you wish. Warning Remember that your VM has been imported exactly as it was before, therefore there might be some things that may prevent you from connecting to it remotely (for example: a host-based firewall blocking connections). You can use the console and your existing user credentials to connect to your compute instance and make adjustments to its configuration as required.
https://docs.catalystcloud.io/images/launching-from-custom-image.html
2020-10-20T02:33:14
CC-MAIN-2020-45
1603107869785.9
[]
docs.catalystcloud.io
Introducing MEF Lightweight Composition and an Updated Composition Provider for ASP.NET MVC [Nick]: - By optimizing for the scenarios appropriate to web applications, a much higher throughput can be achieved (composed graphs/second). - Some tweaks to the lifetime model used in the Composition Provider make it easier to share parts from a web application with alternative hosts (back-end processes, worker roles, etc.). Let's take a look at these in detail. High-Throughput/Concurrent Composition. Broader Lifetime Model. - By default, parts created by the Composition Provider are non-shared. - Adding a [Shared] attribute will make the part shared at the application level, i.e. with singleton semantics. - The [Shared] attribute accepts a parameter describing the 'boundary' within which the part will be shared, so [Shared(Boundaries.HttpRequest)] will cause an instance of the part to be shared/released along with the lifecycle of processing a web request. To see the new lifetime model in action, you can view the demo and samples in the MEF CodePlex repository. These additions apply to the lightweight model only; we are still providing CompositionScopeDefinition with the full CompositionContainer. How Lightweight Composition Works: - The [Export], [Import], [ImportingConstructor], [ImportMany], [PartMetadata] and [MetadataAttribute] attributes, including custom attributes - Class, interface and property exports - Constructor and property imports - IPartImportsSatisfiedNotification for OnImportsSatisfied() - IDisposable To make a highly-optimized implementation possible, however, some features provided by the full CompositionContainer are not supported: - Imports or exports on private fields, properties or constructors - these limit our ability to generate MSIL for composition - Inherited exports, field exports, static exports or method exports - these slow down the part discovery/startup process - Importing into custom collections - this is complicated to implement efficiently but can be revisited in the future - Imports with RequiredCreationPolicy or ImportSource - these are incompatible with the simplified lifetime model - ICompositionService - this is inefficient in server scenarios; IExportProvider is used as a substitute with similar functionality.
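To make the lifetime model concrete, here is a small, hypothetical C# sketch using the attributes described above; the classes are invented for illustration, and the using directives are assumptions that depend on the package providing the lightweight composition attributes and the MVC Composition Provider.

    // Hypothetical part definitions illustrating the lightweight lifetime model.
    using System.Composition;              // assumption: lightweight [Export]/[Shared] attributes
    using System.Composition.Web.Mvc;      // assumption: Boundaries for the MVC Composition Provider

    [Export]                                    // non-shared by default: new instance per import
    public class InvoiceFormatter { }

    [Export, Shared]                            // shared at the application level (singleton semantics)
    public class PriceCatalog { }

    [Export, Shared(Boundaries.HttpRequest)]    // one instance per web request
    public class RequestCache { }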
https://docs.microsoft.com/en-us/archive/blogs/bclteam/introducing-mef-lightweight-composition-and-an-updated-composition-provider-for-asp-net-mvc-nick
2020-10-20T04:25:04
CC-MAIN-2020-45
1603107869785.9
[]
docs.microsoft.com
Conversion tools These tools convert data between legacy genomic file formats and ADAM's schemas, which store data in Parquet. fasta2adam and adam2fasta These commands convert between FASTA and Parquet files storing assemblies using the NucleotideContigFragment schema. fasta2adam takes two required arguments: FASTA: The input FASTA file to convert. ADAM: The path to save the Parquet formatted NucleotideContigFragments to. fasta2adam supports the full set of default options, as well as the following options: -fragment_length: The fragment length to shard a given contig into. Defaults to 10,000bp. -reads: Path to a set of reads that includes sequence info. This read path is used to obtain the sequence indices for ordering the contigs from the FASTA file. -repartition: The number of partitions to save the data to. If provided, forces a shuffle. -verbose: If given, enables additional logging where the sequence dictionary is printed. adam2fasta takes two required arguments: ADAM: The path to a Parquet file containing NucleotideContigFragments. FASTA: The path to save the FASTA file to. adam2fasta only supports the -print_metrics option from the default options. Additionally, adam2fasta takes the following options: -line_width: The line width in characters to use for breaking FASTA lines. Defaults to 60 characters. -coalesce: Sets the number of partitions to coalesce the output to. If -force_shuffle_coalesce is not provided, the Spark engine may ignore the coalesce directive. -force_shuffle_coalesce: Forces a shuffle that leads to the output being saved with the number of partitions requested by -coalesce. This is necessary if the -coalesce would increase the number of partitions, or if it would reduce the number of partitions to fewer than the number of Spark executors. This may have a substantial performance cost, and will invalidate any sort order. _reads: Treats the input as a read file (uses loadAlignments instead of loadFragments), which behaves differently for unpaired FASTQ. -save_as_reads: Saves the output as a Parquet file of AlignmentRecords, as SAM/BAM/CRAM, or as FASTQ, depending on the output file extension. If this option is specified, the output can also be sorted: -sort_reads: Sorts reads by alignment position. Unmapped reads are placed at the end of all reads. Contigs are ordered by sequence record index. -sort_lexicographically: Sorts reads by alignment position. Unmapped reads are placed at the end of all reads. Contigs are ordered lexicographically.
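A hypothetical command-line sketch of the conversions described above; the adam-submit launcher name and the file paths are assumptions to verify against your ADAM installation.

    # FASTA -> Parquet NucleotideContigFragments, sharding contigs into 10 kbp fragments.
    adam-submit fasta2adam sample.fasta sample.contigs.adam -fragment_length 10000

    # Parquet NucleotideContigFragments -> FASTA, wrapping lines at 60 characters.
    adam-submit adam2fasta sample.contigs.adam roundtrip.fasta -line_width 60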
https://adam.readthedocs.io/en/adam-parent_2.11-0.23.0/cli/conversions/
2020-10-20T03:33:53
CC-MAIN-2020-45
1603107869785.9
[]
adam.readthedocs.io
Persistent Cart A persistent shopping cart keeps track of unpurchased items that are left in the cart, and saves your customers' shopping contents for future reference. When a persistent cart is used, both registered customers and guest shoppers are required to either log in to an existing account, or create a new account, before going through checkout. For guest shoppers, a persistent shopping cart is the only way to retrieve information from a previous session. When using a persistent cart, we recommend that you set the lifetime of the server session and the session cookie to a long period of time. See Session Lifetime.
https://docs.magento.com/user-guide/v2.3/sales/cart-persistent.html
2020-10-20T03:40:04
CC-MAIN-2020-45
1603107869785.9
[]
docs.magento.com
AI-900: Microsoft Azure AI Fundamentals Languages: English, Japanese, Chinese (Simplified), Korean, German, French, Spanish Retirement date: none. Price based on the country in which the exam is proctored.
https://docs.microsoft.com/en-us/learn/certifications/exams/ai-900?WT.mc_id=thomasmaurer-blog-thmaure
2020-10-20T04:04:29
CC-MAIN-2020-45
1603107869785.9
[]
docs.microsoft.com
Mobile Admin Mobile Admin allows for convenient management by providing the administrators and sub-administrators with Knox Manage's key device management features. With Mobile Admin's mobile-friendly user interface, you can monitor users and devices on your own mobile devices. In addition, Mobile Admin limits the authority to manage or control the devices depending on the type of the administrator. Mobile Admin basic information Knox Manage offers the following Mobile Admin features: - Dashboard: Provides summarized information about the devices and users. You can also easily monitor the security status of the enrolled devices by viewing the compliance violation and device command history through the dashboard. - Device management: Provides full management capabilities for devices of all OS types. You can view the information of all enrolled devices and control the devices by sending a device command. - User management: Manages all user accounts. You can view the information of the user accounts and control the account status. Meet the requirements listed below to ensure the efficient operation of Mobile Admin. NOTE: If the selected language in Mobile Admin is not supported on the mobile device, Mobile Admin will be displayed in English.
https://docs.samsungknox.com/knox-manage/mobile-admin.htm
2020-10-20T02:50:53
CC-MAIN-2020-45
1603107869785.9
[]
docs.samsungknox.com
Sound Settings Dialog Box The Sound Settings dialog box lets you set the compression settings for the movie you will export. For tasks related to this dialog box, see Exporting QuickTime Movies. - From the top menu, select File > Export > Movie. - In the Export to QuickTime Movie dialog box that opens, click Movie Options. - In the Movie Setting dialog box that opens, click Settings in the Sound section.
https://docs.toonboom.com/help/harmony-14/premium/reference/dialog-box/sound-settings-dialog-box.html
2020-10-20T03:08:50
CC-MAIN-2020-45
1603107869785.9
[array(['../../Resources/Images/HAR/Stage/Export/HAR11/HAR11_export_soundSettings.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Cannot read a document with the specified schema Environment Description After publishing a report to Report Server and previewing it, there is an error displayed in the viewer: "Cannot read a document with the specified schema" Error Message Cannot read a document with the specified schema Solution This error is caused by a mismatch in Telerik Reporting versions: a report that is created or modified with a newer version cannot be processed by an older version. When you modify the report in the Standalone designer of the newer version, the XML schema of the report is updated. The Reporting engine of the older version is not able to read the updated schema and will show the mentioned error. To avoid issues caused by the version mismatch when using both the Telerik Reporting and Report Server products, it is recommended to: 1. Have Telerik Reporting and Report Server of the same version on the machine. If you have upgraded Telerik Reporting to the latest version, consider upgrading Report Server as well. 2. Create and modify reports with the Standalone designer that is shipped with Report Server; this way you can be sure that the version of the designer and Report Server is the same. The designer can be found in the product installation folder: C:\Program Files (x86)\Progress\Telerik Report Server\Telerik.ReportServer.Web\Report Designer.
https://docs.telerik.com/report-server/knowledge-base/cannot-read-document-with-the-specified-schema
2018-04-19T17:24:41
CC-MAIN-2018-17
1524125937015.7
[]
docs.telerik.com
A class for receiving events from a Label. You can register a Label::Listener with a Label using the Label::addListener() method, and it will be called when the text of the label changes, either because of a call to Label::setText() or by the user editing the text (if the label is editable). Destructor. Called when a Label goes into editing mode and displays a TextEditor. Called when a Label is about to delete its TextEditor and exit editing mode.
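For reference, here is a minimal C++ sketch of a component that registers itself as a Label::Listener; the class and member names are invented for illustration.

    // Minimal sketch: reacting to label text changes (names are hypothetical).
    #include <JuceHeader.h>

    class NameEditor : public juce::Component,
                       private juce::Label::Listener
    {
    public:
        NameEditor()
        {
            nameLabel.setEditable (true);
            nameLabel.addListener (this);      // register for Label events
            addAndMakeVisible (nameLabel);
        }

        ~NameEditor() override
        {
            nameLabel.removeListener (this);
        }

    private:
        // Called when the label's text changes, via setText() or user editing.
        void labelTextChanged (juce::Label* labelThatHasChanged) override
        {
            DBG ("New text: " + labelThatHasChanged->getText());
        }

        juce::Label nameLabel;
    };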
https://docs.juce.com/master/classLabel_1_1Listener.html
2018-04-19T17:03:28
CC-MAIN-2018-17
1524125937015.7
[]
docs.juce.com
PyUNLocBoX: Optimization by Proximal Splitting The PyUNLocBoX is a Python package which uses proximal splitting methods to solve non-differentiable convex optimization problems. It is free software, distributed under the BSD license, and available on PyPI. The documentation is available on Read the Docs and development takes place on GitHub. (A Matlab counterpart of the package, the UNLocBoX, also exists.) Acknowledgments The PyUNLocBoX was started in 2014 as an academic open-source project for research purposes at the EPFL LTS2 laboratory.
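As a minimal usage sketch (not taken from the official documentation, so check the function and solver names against your installed version), solving a small LASSO-style problem with the package's proximal splitting solvers could look like this:

    # Minimal sketch: minimize ||x - y||_2^2 + ||x||_1 with forward-backward splitting.
    # The data and starting point are synthetic placeholders.
    import numpy as np
    from pyunlocbox import functions, solvers

    y = np.array([4.0, 5.0, 6.0, 7.0])
    f1 = functions.norm_l2(y=y)                  # smooth data-fidelity term
    f2 = functions.norm_l1(lambda_=1.0)          # non-smooth sparsity-promoting term
    solver = solvers.forward_backward(step=0.5)  # proximal splitting method
    x0 = np.zeros(len(y))

    result = solvers.solve([f1, f2], x0, solver, rtol=1e-4, maxit=100)
    print(result['sol'])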
https://pyunlocbox.readthedocs.io/en/v0.5.2/
2018-04-19T17:19:05
CC-MAIN-2018-17
1524125937015.7
[]
pyunlocbox.readthedocs.io
You manage some system settings by using the App Controller command-line console. You can use the command-line console from the Console tab in either XenCenter or in vSphere. If you enable Secure Shell (SSH) access, you can also open any command prompt, such as PuTTY, and log on to App Controller. The following sections appear in the App Controller command-line console: With Express Setup, you can configure the basic network settings to enable App Controller to work within your network. These settings include: For more information about using these settings, see Setting the App Controller IP Address for the First Time. You can install multiple instances of the App Controller virtual machine (VM) to create a cluster. One App Controller VM acts as the cluster head. All other App Controller VMs in the cluster are called service nodes. Each service node has a local database that is used by the service node only. Updating user information from the service node to the cluster head requires writing to the database. A service node connects to the database on the cluster head by using a secure channel. Citrix recommends deploying two App Controller VMs in a high availability pair. Each VM is a cluster head. If one VM fails, the secondary VM can act as the cluster head. Citrix also recommends using the gateway proxy to establish a secure connection between the service node and the cluster head. When you create the cluster head, you enter a shared key for the cluster. When you join additional VMs to the cluster, you enter the shared key. For more information about clustering, see Creating a Cluster. With high availability, you configure the settings for the primary and secondary App Controller VMs. These settings include: For more information about configuring high availability, see Configuring High Availability. With the System Menu, you can configure or view basic system settings that include: With Troubleshooting, you can access three tools that help you view network settings, view logs, and create a support bundle that you can send to technical support. In Network Utilities, you can do the following: You can configure logs by using the Logging menu. In the menu, you can: You can also create a support bundle to send to technical support staff for evaluation. You can view the date and time in App Controller. The day, date, time, time zone, and year appear. When you install App Controller, 50 GB of disk space is allocated in XenServer for the App Controller VM. You can use the command-line console to view how much disk space App Controller is using. The system disk usage statistics appear. You can use the command-line console to change the default server certificate in App Controller. When you reset the certificate, App Controller removes the passphrase and the new certificate file overwrites the old certificate file. When you reset the default certificate, you must restart App Controller. The certificate resets. You can restart or shut down App Controller by using the command-line console.
https://docs.citrix.com/de-de/xenmobile/8-7/xmob-appc-manage-wrapper-f-con/xmob-appc-maintain-wrapper-con/xmob-appc-maintain-change-time-cli-tsk.html
2018-04-19T17:39:50
CC-MAIN-2018-17
1524125937015.7
[]
docs.citrix.com
Most new users should follow the TurboGears 2.2.2 Standard Installation and then continue on to Quickstarting A TurboGears 2.2.2 Project, after which they should look at the first few basic moves in TurboGears 2 At A Glance. When you feel confident with your understanding of TurboGears at a high level, you should take a look at our TurboGears Book and its 20 Minutes Wiki Tutorial to get started with your first real web application. This is a set of more advanced tutorials that cover some common framework usages. We suggest taking a look at the Explore A Quickstarted Project tutorial for a better grasp of a typical TurboGears web application structure. Sometimes, you don't need a tutorial. Sometimes, you just need to see some sample code, or get a specific answer to a specific question, and tutorials are just too much for you. If that's you, might we suggest checking out our Recipes and FAQ? Those tutorials are related to parts of the framework that were deprecated in the past and are here only for reference or for projects that still rely on previous versions of TurboGears.
https://turbogears.readthedocs.io/en/rtfd2.2.2/tutorials.html
2018-04-19T17:08:40
CC-MAIN-2018-17
1524125937015.7
[]
turbogears.readthedocs.io
MSP Identity Validity Rules As mentioned in the MSP description, MSPs may be configured with a set of root certificate authorities (rCAs), and optionally a set of intermediate certificate authorities (iCAs). An MSP's iCA certificates must be signed by exactly one of the MSP's rCAs or iCAs. An MSP's configuration may contain a certificate revocation list, or CRL. If any of the MSP's root certificate authorities are listed in the CRL, then the MSP's configuration must not include any iCA that is also included in the CRL, or the MSP setup will fail. Each rCA is the root of a certification tree. That is, each rCA may be the signer of the certificates of one or more iCAs, and these iCAs will be the signer either of other iCAs or of user certificates. Here are a few examples:

     rCA1           rCA2      rCA3
    /    \           |         |
  iCA1    iCA2      iCA3      id
  /  \      |        |
iCA11 iCA12 id       id
  |
  id

The default MSP implementation accepts as valid identities X.509 certificates signed by the appropriate authorities. In the diagram above, only certificates signed by iCA11, iCA12, iCA2, iCA3 and rCA3 will be considered valid. Certificates signed by internal nodes will be rejected. Notice that the validity of a certificate is also affected, in a similar way, if one or more organizational units are specified in the MSP configuration. Recall that an organizational unit is specified in an MSP configuration as a pair of two values, say (parent-cert, ou-string), representing the certificate authority that certifies that organisational unit, and the actual organisational unit identifier, respectively. If a certificate C is signed by an iCA or rCA for which an organisational unit has been specified in the MSP configuration, then C is considered valid if, among other requirements, it includes ou-string as part of its OU field. An example of how such a pair is typically declared is sketched below.
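For illustration, such (parent-cert, ou-string) pairs are typically declared in the MSP's config.yaml; the fragment below is an assumption about that layout and should be checked against the Fabric MSP documentation for your release. The path and the OU string are placeholders.

    # Hypothetical config.yaml fragment declaring an organisational unit for an MSP.
    OrganizationalUnitIdentifiers:
      - Certificate: "cacerts/cacert.pem"        # parent-cert: the CA certifying this OU
        OrganizationalUnitIdentifier: "COP"      # ou-string expected in the identity's OU field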
http://hyperledger-fabric.readthedocs.io/en/v1.0.5/msp-identity-validity-rules.html
2018-04-19T17:13:59
CC-MAIN-2018-17
1524125937015.7
[]
hyperledger-fabric.readthedocs.io
JHTML/image From Joomla! Documentation < API15:JHTML The "API15" namespace is an archived namespace. This page contains information for a Joomla! version which is no longer supported. It exists only as a historical reference; it will not be improved and its content may be incomplete and/or contain broken links. Description Writes an <img /> element. Syntax image($url, $alt, $attribs=null) Defined in libraries/joomla/html/html.php Importing jimport( 'joomla.html.html' ); Source Body
function image($url, $alt, $attribs = null)
{
    if (is_array($attribs)) {
        $attribs = JArrayHelper::toString( $attribs );
    }
    if (strpos($url, 'http') !== 0) {
        $url = JURI::root(true).'/'.$url;
    };
    return '<img src="'.$url.'" alt="'.$alt.'" '.$attribs.' />';
}
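A short, hypothetical usage example from a Joomla! 1.5 template or layout; the image path, alt text and attributes are placeholders.

    <?php
    // Hypothetical usage: output an <img /> tag via JHTML::image() (Joomla! 1.5).
    jimport('joomla.html.html');
    echo JHTML::image('templates/my_template/images/logo.png', 'Site logo',
                      array('width' => '200', 'height' => '60'));
    ?>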
https://docs.joomla.org/API15:JHTML/image
2018-04-19T17:28:28
CC-MAIN-2018-17
1524125937015.7
[]
docs.joomla.org
1 Time Scale Gregor assumes that all days have exactly 86,400 seconds. Therefore, it is based fundamentally on mean solar time, or Universal Time, and not on Coordinated Universal Time (UTC), the civil time scale adopted by most of the world. In the interest of reconciling the SI second to a close approximation of mean solar time, UTC occasionally inserts an extra leap second into a day. UTC can also remove seconds but has never done so. The rotation of the Earth is slowing and solar days are getting longer, so there has only ever been a need to add seconds. Since leap seconds are added on an irregular basis, they complicate both the representation of times and arithmetic performed on them. In practice, most computer systems are not faithful to UTC. The POSIX clock, for example, ignores leap seconds. The standard (and non-standard) date and time libraries of most programming languages also ignore them. In truth, although UTC is the de jure international standard, it's rare to find a system that actually implements it and just as rare to find a user who misses it. That said, if there is a demand for proper UTC support, I will consider adding it. Ideally, Gregor would be able to support many different time scales. API suggestions are welcome.
https://docs.racket-lang.org/gregor/time-scale.html
2018-03-17T16:13:14
CC-MAIN-2018-13
1521257645248.22
[]
docs.racket-lang.org
NAP Overview Updated: March 29, 2012 Applies To: Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 Network Access Protection (NAP) is one of the most anticipated features of the Windows Server® 2008 operating system. NAP is a new platform that allows network administrators to define health requirements for computers that connect to the network. NAP client support is included in Windows® 7, Windows Vista®, Windows® XP with Service Pack 3 (SP3), Windows Server 2008, and Windows Server® 2008 R2. Note On NAP client computers running Windows 7, NAP is integrated into Action Center. If a NAP client computer is determined to be noncompliant with network health policies, you can obtain more information by reviewing the Network Access Protection category under Security. NAP client computers that are compliant with health requirements and computers that are not running the NAP Agent service do not display NAP information in Action Center. NAP also includes an application programming interface (API) that developers and vendors can use to integrate their products and leverage this health state validation, access enforcement, and ongoing compliance evaluation. For more information about the NAP API, see Network Access Protection.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd759127(v=ws.11)
2018-03-17T17:01:22
CC-MAIN-2018-13
1521257645248.22
[]
docs.microsoft.com
Working with the Add Publishing Point Wizard The Add Publishing Point Wizard helps you create a new publishing point on your Windows Media server. You can start the Add Publishing Point Wizard by clicking Add Publishing Point (Wizard) on the Action menu of a server or the Publishing Points item in the console tree. You can also start the wizard by clicking the Add Publishing Point button on the Getting Started tab of those items. This section provides information to assist you in completing the Add Publishing Point Wizard. In This Section Naming the publishing point Determining the content type Selecting the publishing point type Using an existing publishing point Identifying the content location Selecting content playback options Verifying your publishing point options Completing the Add Publishing Point Wizard Note The Add Publishing Point Wizard is designed to support common uses of Windows Media Services and does not incorporate advanced features, such as setting up cache/proxy publishing points. To create a new publishing point that uses advanced content sources, on the server Action menu, click Add Publishing Point (Advanced) and complete the dialog box. See Also Concepts Add a publishing point using the advanced method
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731021(v=ws.11)
2018-03-17T17:42:59
CC-MAIN-2018-13
1521257645248.22
[]
docs.microsoft.com
Number Type ⇒ ':number' There is only one number type that supports both floating point and integer numbers. There is no loss of precision in any operation; the engine always stores the data in the most performant way that doesn’t compromise precision. Dates Dates in DataWeave follow the ISO-8601 standard and are defined between '|' characters. The date system supports: DateTime Local DateTime Time Local Time Period TimeZone Date Date Type ⇒ ':date' Represented as 'Year'-'Month'-'Date' The type Date has no time component at all (not even midnight). TimeZone Type ⇒ ':timeZone' Timezones must include a + or a - to be defined as such. |03:00| is a time, |+03:00| is a timezone. DateTime Type ⇒ ':datetime' Date time is the conjunction of 'Date' + 'Time' + 'TimeZone'. Local Date Time Type ⇒ ':localdatetime' Local date time is the conjunction of 'Date' + 'Time'. Period Type ⇒ ':period' Specifies a period of time. Examples |PT9M| ⇒ 9 minutes, |P1Y| ⇒ 1 Year Date decomposition In order to access the different parts of the date, special selectors must be used. Changing the Format of a Date You can specify a date to be in any format you prefer by using the as operator. If you are doing multiple similar conversions in your transform, you might want to define a custom type as a directive in the header and set each date as being of that type. Custom Types You can define your own custom types in the header of your transform, then in the body you can define an element as being of that type. To do so, the directive must be structured as follows: %type name = java definition To then assign an element as being of the custom type you defined, use the operation as :type after defining a field. Defining Types
https://docs.mulesoft.com/mule-user-guide/v/3.9/dataweave-types
2018-03-17T16:13:37
CC-MAIN-2018-13
1521257645248.22
[]
docs.mulesoft.com
Restore a file or a folder from cPanel How to restore a specific file or an entire folder using JetBackup: After accessing your cPanel, you’ll see under the “JetBackup” category an icon labeled “Files Backup”, as shown below: The following page will be presented: This page presents all the available backup dates. For each date there is a “File Manager” link. Clicking a “File Manager” link will let you work with the backed-up files or folders of a specific date. The next page will be: This page has two purposes: 1. Browsing files – same as browsing files in common file browsers. 2. Selecting files or folders to restore or be downloaded to your computer. In most cases, the website files are under the public_html folder, so we’ll open this folder and get the following page: Option A – Restore an entire Folder: To restore an entire folder, all we need to do is tick the checkbox adjacent to the desired folder and click the “Restore Selected Files” button. You could also download the entire folder to your computer by ticking the checkbox adjacent to the desired folder and clicking the “Download Selected Files” button. After the folder has been compressed and prepared for download, you’ll see this on the same page: This is your download link for saving the desired folder to your computer. Option B – Restore a single file: In the same manner as you ticked the checkbox adjacent to the desired folder, you can also tick the checkbox adjacent to a single file to be restored or downloaded. After doing so and clicking the “Restore Selected Files” button, you’ll get the following: Here you are asked to tick the checkbox and confirm that you understand that restoring the file will overwrite the present one. You could also enter your email address in order to be notified when the desired file is restored. To start the process, click the “Restore” button and after a short wait you’ll be notified that: The file (or folder) you wanted to be restored is in process. Please note that the duration of the restore process depends on the file’s or folder’s size. You can see the restore status using the “Queue” option from the main JetBackup category in your cPanel home page: Which will lead you to a designated page of your restore status: And will finally display:
https://docs.jetapps.com/jetbackup/restore-file-or-folder
2018-03-17T16:41:03
CC-MAIN-2018-13
1521257645248.22
[]
docs.jetapps.com
Visualize Account Relationships by Using Account Hierarchy in Lightning Experience On account record pages, the Actions dropdown menu includes the View Account Hierarchy action unless you customized the Salesforce1 and Lightning Experience Actions section of the account page layout before Spring ’17. In that case, add the action to your account page layout. Your users can expand or collapse parts of a hierarchy as they navigate it. They can view up to 2,000 accounts from each point where they enter a hierarchy. By default, account hierarchies display the same columns as the Recently Viewed Accounts standard list view. But if the hierarchy columns don’t show the information that your sales reps need, you can customize them independently of the list view. In Setup, in the Quick Find box, enter Object Manager and then click Object Manager. In Account, click Hierarchy Columns, and then edit the columns. Under Columns Displayed, add or remove fields.
http://releasenotes.docs.salesforce.com/en-us/spring17/release-notes/rn_sales_accounts_hierarchy.htm
2018-03-17T16:36:47
CC-MAIN-2018-13
1521257645248.22
[array(['release_notes/images/account_hierarchy_action.png', 'The View Account Hierarchy action on a record detail page'], dtype=object) array(['release_notes/images/account_hierarchy.png', 'An account hierarchy'], dtype=object) array(['release_notes/images/account_hierarchy_columns.png', 'Using the Object Manager to edit account hierarchy columns'], dtype=object) array(['release_notes/images/account_hierarchy_columns_select.png', 'Choosing the columns to display in account hierarchies'], dtype=object) ]
releasenotes.docs.salesforce.com
setState() schedules an update to a component’s state object. When state changes, the component responds by re-rendering. props (short for “properties”) and state are both just JavaScript objects that trigger a re-render when changed. Why is setState giving me the wrong value? Calls to setState are asynchronous - don’t rely on this.state to reflect the new value immediately after calling setState. Pass an updater function instead of an object if you need to compute values based on the current state (see below for details). Example of code that will not behave as expected: incrementCount() { // Note: this will *not* work as intended. this.setState({count: this.state.count + 1}); } handleSomething() { // this.state.count is 1, then we do this: this.incrementCount(); this.incrementCount(); // state wasn't updated yet, so this sets 2 not 3 } See below for how to fix this problem. Pass a function instead of an object to setState to ensure the call always uses the most updated version of state (see below). Passing an update function allows you to access the current state value inside the updater. Since setState calls are batched, this lets you chain updates and ensure they build on top of each other instead of conflicting: incrementCount() { this.setState((prevState) => { return {count: prevState.count + 1} }); } handleSomething() { // this.state.count is 1, then we do this: this.incrementCount(); this.incrementCount(); // count is now 3 } Learn more about setState It’s a good idea to get to know React first, before adding in additional libraries. You can build quite complex applications using only React. © 2013–present Facebook Inc. Licensed under the Creative Commons Attribution 4.0 International Public License.
http://docs.w3cub.com/react/faq-state/
2018-09-18T17:54:35
CC-MAIN-2018-39
1537267155634.45
[]
docs.w3cub.com
Cloud Gem Framework The Lumberyard Cloud Gem Framework makes it easy to build popular cloud-connected features, such as dynamic content, leaderboards, and daily messages. The Cloud Gem Framework has two components: Cloud Gem Portal – A web application for visually managing and administering your cloud features (like scheduling messages, releasing dynamic content, or deleting a fraudulent leaderboard score) Cloud gems – Modular packages of discrete functionality and assets that include everything necessary for a game developer to include that functionality into their project, including backend and client functionality. Cloud gems can be used out of the box in production, and they come with full source code in case you want to customize their behavior. Topics - Getting Started with the Cloud Gem Framework - Creating a Cloud Gem - Getting Started With Game Development on the Cloud Gem Portal - Making HTTP Requests Using the Cloud Gem Framework - Cloud Gem Framework Resource Manager Hooks - Running AWS API Jobs Using the Cloud Gem Framework - AWS Behavior Context Reflections - Adding AWS Resources to a Cloud Gem - Cloud Gem Framework Service API - Using the Cloud Gem Framework Command Line - Using Shared Code - Cloud Gem Framework and Resource Manager Versioning - Updating Projects and Cloud Gems to Version 1.0.0 of the Cloud Gem Framework
https://docs.aws.amazon.com/lumberyard/latest/userguide/cloud-canvas-cloud-gem-framework-intro.html
2018-09-18T18:19:58
CC-MAIN-2018-39
1537267155634.45
[]
docs.aws.amazon.com
Identifying the Chassis Type of a Computer Microsoft® Windows® 2000 Scripting Guide The chassis is the physical container that houses the components of a computer. Chassis types include the tower configuration, desktop computer, notebook computer, and handheld computer. At first glance, it might seem that the chassis type is interesting information but of minimal use to system administrators. In truth, however, knowing the physical design of the chassis provides valuable information for system administrators. After all, the physical design is a key factor in determining the type of hardware you can install on the computer; for example, disk drives that can be installed on a desktop computer are unlikely to fit in a subnotebook computer. Knowing the chassis type of a computer can also be important for: Applying Group Policy. Group Policy is often applied differently to computers with some chassis types. For example, software is typically installed in full on notebook computers rather than simply installed on first use. This ensures that a mobile user has available all the features of the software package. Planning hardware upgrades. A computer that is going to be upgraded must be able to support the intended upgrade. Hard disks and network adapters designed for desktop computers do not work on notebook computers. Planning hardware moves. If space is limited, you might prefer moving a mini-tower computer to a particular area rather than a full tower computer. Traditionally, the only way to identify the chassis type has been by visual inspection. However, the Win32_SystemEnclosure class can be used to determine the chassis type of a computer. Chassis types are stored as an array consisting of one or more of the values shown in Table 8.6. Table 8.6 Computer Chassis Values Scripting Steps Listing 8.5 contains a script that identifies computer chassis type by querying the Win32_SystemEnclosure class. This query returns a collection consisting of the physical properties of the computer and its housing. For each set of physical properties in the collection, echo the chassis type. To do this, you must set up a For-Next loop to echo the values for the chassis type. The For-Next loop is required because the chassis type is stored as an array. Listing 8.5 Identifying Computer Chassis Type When the script in Listing 8.5 runs, the chassis type is reported as an integer. For example, if the computer has a mini-tower configuration, the value 6 is echoed to the screen. In a production script, a Select Case statement should be used to echo back string values as shown in the following code sample: Case 6 Wscript.Echo "This computer is configured as a mini-tower."
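The VBScript listing referred to above is not reproduced in this excerpt. As a rough sketch of the same WMI query, here is a Python version; it assumes the third-party wmi package on a Windows host (an assumption, since the original article uses VBScript), and the dictionary below covers only a few of the chassis values from Table 8.6:

# Assumes the third-party "wmi" package (pip install wmi) on a Windows host.
import wmi

# A small subset of the Win32_SystemEnclosure chassis values (see Table 8.6).
CHASSIS_NAMES = {
    3: "Desktop",
    6: "Mini Tower",
    9: "Laptop",
    10: "Notebook",
}

c = wmi.WMI()
for enclosure in c.Win32_SystemEnclosure():
    # ChassisTypes is stored as an array, so loop over each value it contains.
    for chassis_type in enclosure.ChassisTypes:
        print(CHASSIS_NAMES.get(chassis_type, "Other chassis type: %d" % chassis_type))

As in the original script, the inner loop is needed because ChassisTypes is an array, and the lookup table plays the role of the Select Case statement suggested for production scripts.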
https://docs.microsoft.com/en-us/previous-versions/tn-archive/ee156537(v=technet.10)
2018-09-18T17:03:42
CC-MAIN-2018-39
1537267155634.45
[]
docs.microsoft.com
Xsheet Column Width Dialog Box The Xsheet Column Width dialog box lets you modify the width of a column in the Xsheet view and use it as the default column width. For tasks related to this dialog box, see Modifying the Look of the Column. - In the Xsheet view, select a column. - From the Xsheet menu, select View > Set Columns Width. The Xsheet Column Width dialog box opens.
https://docs.toonboom.com/help/harmony-14/paint/reference/dialog-box/xsheet-width-dialog-box.html
2018-09-18T17:29:19
CC-MAIN-2018-39
1537267155634.45
[array(['../../Resources/Images/HAR/Stage/Layers/HAR11/HAR11_timing_columnwidth.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Layers/HAR11/HAR11_timing_columnwidth.png', None], dtype=object) ]
docs.toonboom.com
Fundraising Wrapup We started the fundraising drive at the beginning of PyCon, and in a little under 3 weeks you were able to bring us up to our funding goal of $24,000. With the end of April, we’ve wrapped up fundraising for now. You can still contribute however, and give us a head start on our next fundraising goal. We set our goal to cover support costs for the next 3 months, and to set Read the Docs on a path to becoming self-sustaining. It will allow us to dedicate time to supporting and maintaining the service as it continues to grow. So far, we’ve had 157 contributions, and we couldn’t have hit our goal without the help of everyone. The Python Software Foundation graciously provided us with a grant for $8,000 to go towards the continued support efforts. Our largest corporate sponsors include Twilio, Sentry, DreamHost, and Lincoln Loop. Additionally, we have received generous service sponsorships from Elastic Search, MaxCDN, and Gandi. Again, a huge thank you to everyone that made this happen! What to Expect We are organizing this work on a public Trello board; we welcome you to subscribe for updates and to provide us with feedback. The funds will go towards covering the support and maintenance costs of Read the Docs each week. We want to increase our capacity to handle support requests by working with someone on a part-time contract basis, working solely on managing this support burden.
http://blog.readthedocs.com/fundraising-wrapup/
2018-09-18T18:21:43
CC-MAIN-2018-39
1537267155634.45
[]
blog.readthedocs.com
How do I add contacts? Contacts can be anyone whom you wish to be able to view anything that is shared with them, an auditor who needs access to your cap table and all documents associated with it, or an administrator whom you want to input all the data. You want to click on CONTACTS, which is located in the top right corner of the page to the right of 'Documents'. This will take you to your Contacts page. Anyone that is already on your Cap Table will already be listed in your contacts. To add a contact, you want to click on ACTIONS in the top right of the screen and then select ADD NEW CONTACT or, if entering more than one, BULK ADD CONTACTS. This will bring up a form where you can put in your contact's information. Go ahead and fill in your contact's information. Once you have filled out your contact's information, you want to click on ADD CONTACT.
https://docs.equity.gust.com/article/126-how-do-i-add-contacts
2018-09-18T17:48:54
CC-MAIN-2018-39
1537267155634.45
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f76479e4b05e7f887e97b0/images/573f2ae99033603b8d7dd38b/file-zQNjOaNeVa.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f76479e4b05e7f887e97b0/images/57f531ae90336079225d26bd/file-gALEuyMxpz.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f76479e4b05e7f887e97b0/images/578cf607c6979160ca1442f9/file-KCCNiL9bMK.png', None], dtype=object) ]
docs.equity.gust.com
All content with label as5+gridfs+infinispan+lock_striping+testng. Related Labels: publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, archetype, jbossas, nexus, guide, schema, listener, cache, amazon, s3, grid, test, jcache, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, loader, write_through, cloud, mvcc, tutorial, notification, read_committed, jbosscache3x, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, async, transaction, interactive, xaresource, build, searchable, demo, installation, scala, client, non-blocking, migration,, - lock_striping, - testng ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+gridfs+infinispan+lock_striping+testng
2018-09-18T18:00:36
CC-MAIN-2018-39
1537267155634.45
[]
docs.jboss.org
These are external resource text files stored in the Partition CSP folder. On each text line, you enter the directive followed by the directive values. LANSA ships three sample Content Security Policy files (xStrict, xMedium and xLow) with descending levels of restriction. Use these samples as a starting point to create your own Content Security Policy files. Web pages are compatible with the Content Security Policy Level 2 recommendation.
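As a sketch of what one of these text files might contain, the lines below use standard Content Security Policy Level 2 directives, one directive per line as described above; the host name is a placeholder, and the restrictions in the shipped xStrict, xMedium and xLow samples may differ:

default-src 'self'
script-src 'self' https://apis.example.com
img-src 'self' data:
style-src 'self' 'unsafe-inline'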
https://docs.lansa.com/14/en/lansa017/content/lansa/vlwebeng03_0085.htm
2018-09-18T17:11:20
CC-MAIN-2018-39
1537267155634.45
[]
docs.lansa.com
Track or Treat This article is part of our "From the Trenches" collection. It describes the advantages to tracking project work, discusses tracking methods, and explains the difference between tracking time and tracking progress. To see more articles, see "From the Trenches" white papers. Track or treat It's the Halloween season here in North America so I thought we'd talk about something scary: tracking our projects. What? That's not scary you say? Information from the field would beg to differ. Planning not Managing is still so common In many industries and organizations, it is stunningly common that even when formal project management schedules are created, they are left in a planning mode only and never tracked. The exercise of plan and if you must, plan again. Nowhere is this more prevalent than in software development. For all the progress we've made in project management in the software industry, the numbers of projects that are only planned vs. those which are planned and then tracked is enormous. If you are one of those who only plans, the good news is, you're not alone. The bad news is, you're not alone! There are many reasons why tracking projects in some industries is not popular. In some industries, for example, it is quite common to have personnel who specialize in creating bids or in project pricing or in project contracting or in estimating to make the original plan for the project. This is true in many different environments but we see it almost always in construction, heavy engineering, aerospace/defence and large engineering/procurement/construction (EPC) projects. Once the bid is won, a completely new team takes on the tracking and delivery of the project. On large projects, the people who created the original bid have often moved on long ago to make other bids as the time between creating the estimate and closing the contract may be extensive. The project that is just now starting up may be old news to them. So those who do the project management aren't able to track against the original plan because the people who created it and the structure of the plan itself are not available. The most common reason given for not doing project tracking however is that the project is so fluid that tracking the work is too challenging. Some projects are changed so quickly that just keeping up with the plan is an enormous undertaking. If you are spending all your time updating the plan, there is precious little time left to track what you have been planning. This can have an interesting effect which is not necessarily a good one. In environments where the project manager updates the plan over and over and over again based on changing conditions, the project is never really late; never really over budget; never really off track. How could it be? After all, we just updated the plan 20 minutes ago and we're right on track with where we planned. If you're in the software development industry and you're thinking, that sounds a little like Agile, you'd be exactly right. The idea of Agile project management was to build as we design and have the delivery of what we're creating happen iteratively. Our plans would adjust accordingly and we could, at any time, say "The client reports that it's good enough. We can stop here for now." That's completely appropriate for certain kinds of development but for others, it's the stuff of dreams. Most software development environments live with the same project management constraints as every other industry. 
We have deadlines to meet, budgets to respect and a fixed list of scope to deliver. Let's call that traditional project management. Even in primarily Agile environments, my experience has been that Agile management happens within an umbrella of traditional project management. Whatever the incentive to just plan, tracking your project carries the potential for enormous benefits. Let's take a look at the whole tracking concept. What does tracking mean? You might think that project tracking has a very distinct definition and you'd be incorrect. How to track a project depends vastly on what the objectives are. Here are a couple of the more common tracking methods: Guess at a percentage "We're about halfway there," the team leader says and we know that's about 50 percent of what we'd planned. While this is tracking and this is much better than not tracking at all, the quality of this data is quite weak. If I had a plan to complete a task in 10 days and I report that we're about 50 percent complete, project management tools like Microsoft Project and Project Server will make some assumptions for me. They'll figure that based on the limited data they have, you must have spent 5 days of effort so far and have 5 days of effort remaining. Perhaps that's true but it would mask a situation where you are about 50 percent complete but it's taken you 20 days of effort to get there and therefore probably have 20 days of work remaining. Measure how much is left Years ago a dark comedy movie called "The Money Pit", starring Tom Hanks, featured a crew of home contractors who never seemed to be done. The running gag throughout the movie was the answer to "When will you be done?" "Three more weeks" all the contractors would say. But, tracking remaining duration is a much better quality of data than just guessing at a percentage. Remaining duration gives us a sharp focus on what is left to get this piece done and when the next piece that is dependent on this one can get started. There are two ways to think of remaining duration based on how you've set up your tasks. The first is to think of the total task's remaining duration. This would be appropriate if we are not focused on the effort required to complete it. The second is to think of the remaining duration or effort required for each assignment. This would be more appropriate if the tasks are resource driven. But either is a big step up from just guessing at a percentage. Measure how much we've spent "I've spent 10 days so far," is one way to look at progress. Sometimes referred to as LOE or "Level of Effort." Level of Effort is a great way to look at our actual burn rate but it carries a blind side. On the good side of this method, we have a great understanding of how much we've spent on this task so far. On the bad side, we might not have a great understanding of what's left to do. Being in the timesheet business, we deal often with organizations trying to implement this method. At one time our staff thought of this method as only appropriate if coupled with other more sophisticated project tracking techniques but we've been shown that it is often very strong just on its own. "If we could just determine where our time is going," I was told by a client, "that would put us so far ahead of what we've been doing, we could become almost instantly more effective." He was right too. We did implement a timesheet which allowed time to be tracked against planned tasks and that alone made the organizations tremendously more effective.
They were later able to add additional methods of tracking to improve their performance yet further. We're going to use the earned value method The earned-value method was developed about 30 years ago as a way of controlling extremely complex projects, but the fundamental concept is quite simple. If we make a budget for a task, then no matter how much time we spend, we can't earn more than 100% of the budget. Earned value focuses on tracking the "physical" percent complete and that lends itself well to some types of projects and not so much to others. If we're building a road, for example, and we have 100 miles of road to build, then when we're at mile-marker 50, we're half done. If you've spent 75% of the money getting that far, you've got big troubles and the earned value method will make that obvious. That would indicate that you're probably going to go 50% over budget by the time you're done (a short worked example of this arithmetic appears at the end of this article). If you're doing research for a new drug or writing software, then measuring the physical percent complete may be much more elusive. The earned value folks have a whole toolbox of possible ways to get at this kind of progress and of them all, "weighted milestones" would be my favorite. In a weighted milestone project management environment, we set up key milestones of the work and as we arrive at each milestone, we earn the percentage we'd agreed upon, before starting, that the milestone would represent. What's great about this method is there's little debate. Did you complete the milestone? Yes or No? If not, you've earned nothing. If so, you've earned that percentage. The Ivory Snow Project Even if you are using one of these methods, one of the things you should watch out for is what I call the "Ivory Snow" project. These projects advance almost instantly to 99.97% complete and then stay stuck there for the rest of time. How will all these things appear? Regardless of what project management tool you're using, showing progress is often a fairly common element of the display. Here we've got an image from Microsoft Project showing one bar with 50% progress: If that's all we're tracking, we've at least got some notion of where we're headed but modern tools like Project and Project Server can offer so much more. If we set a baseline on the project, we'll be able to compare not only how the task is progressing but how it compares against our original plan. Here we can see that the task was expected to be 50% complete and it is, but it started a week late. On the right side of the bar, we can see that we've spent 50% of the time and (taking weekends into account) we've filled 50% of the bar. If we had entered resource work, we might have had 80 hours of work to date and used 40 hours. That's right on track for this task if we think of it in isolation but even though the task may progress at the pace and burn rate we expected, it is still having a negative impact on any tasks that are downstream. Okay, I'm tracking, now what? Okay, so we've covered some of the basics. You're already in the top 20% of skilled project managers. Seriously. This is already better than 80% of those out there. Now for something fundamental but potentially very impactful. If x, then y What I mean by that is that effective tracking needs a consequence formula: If x happens, then take y action. It's a basic formula but one of the toughest to train people in. Many years ago, I had the privilege of working with a team of nationally certified lifeguards.
These were skilled professionals, but one thing could be practiced yet never really experienced until it actually happened: how would the lifeguard react in an actual emergency? Those in the armed forces will explain a similar challenge. You can train and train and train but you don't really know for sure how someone will react when there is an actual weapon fired in anger towards them. Project management is fortunately not usually a matter of life or death but we have a similar problem with those who track projects. Does the project manager know what they should do when the project isn't tracking exactly as planned? This is something you can think about long in advance. Do you have a contingency budget of time and/or money? Do you have a chain of command for them to get authority to take action? Do you have a communications plan for them to reach the right people when the project is running late or not? And, what results constitute taking action? Is a one day delay worth escalating? How about one week? How about an increase in risk or scope? Setting some standards for this in advance can avoid upset later. Trick or Track Setting up your organization or your project to implement tracking isn't difficult. Virtually every project management product in the industry has some ability to store project progress but there is one corporate cultural aspect of tracking that must still be taken into account to have a good chance of success and that is to not shoot the messenger. Many project managers who I have spoken to over time express concerns that their management find only good news to be acceptable when receiving project reports. A number of years ago I was in a large boardroom of a large multinational firm. We were discussing the impact of the project management tool receiving information from the timesheet we publish. "I don't understand," a senior vice president said, "why, when we get the timesheet hours, that the tasks aren't updating the progress." "If I had a 40 hour task and I put 40 hours of effort from the timesheet into that task, what would you expect the result to be?" I asked. The VP looked confused at the question. "I'd expect it to be complete," he said. "But what if it's not?" I answered. "I don't understand," said the now upset VP. "If it's a 40 hour task and you did 40 hours of work, then it must be over." I wasn't sure what to answer to that but fortunately I was saved by the head of the project group who asked to speak to the VP outside for a moment and presumably explained that life doesn't always follow the plan. Getting management to understand that they can make the biggest impact when the project doesn't go as planned is something that can deliver huge benefits, just as much as management's insistence that all projects must report progress as planned can be crippling. Tracking your project can be a treat not only for those who manage it but also for the entire organization.
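As a footnote to the earned-value discussion earlier in this article, here is a small worked example of the road-building numbers (half the road built, 75% of the money spent), sketched in Python with a hypothetical budget figure:

# Hypothetical budget; the percentages come from the road example in the article.
budget_at_completion = 1_000_000.0          # BAC
percent_complete = 0.50                     # mile-marker 50 of 100
actual_cost = 0.75 * budget_at_completion   # 75% of the money already spent

earned_value = percent_complete * budget_at_completion  # EV = 500,000
cost_performance_index = earned_value / actual_cost     # CPI is roughly 0.67
estimate_at_completion = budget_at_completion / cost_performance_index

print(round(cost_performance_index, 2))   # 0.67 -- earning 67 cents per dollar spent
print(round(estimate_at_completion))      # 1500000 -- about 50% over budget

The standard earned-value formulas (CPI = EV / AC, EAC = BAC / CPI) reproduce the article's conclusion: at this burn rate the project lands roughly 50% over budget.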
https://docs.microsoft.com/en-us/ProjectOnline/track-or-treat-white-paper?redirectSourcePath=%252fpt-br%252farticle%252facompanhar-ou-tratar-white-paper-4d4526a9-0fc9-48ed-b2a1-a11c93f257d1
2018-09-18T18:32:25
CC-MAIN-2018-39
1537267155634.45
[array(['media/5bcf72ea-0d86-4a44-9ab7-609f5be47f62.png', 'Gantt bar with 50 percent progress'], dtype=object) array(['media/8c657eea-b546-41c6-ac77-1803945a9fb3.png', 'Gantt bar with baseline'], dtype=object) ]
docs.microsoft.com
Keyboard Behavior Enable key repeat Key repeat lets you rest your finger on a key and have the character repeat itself; for instance, if you want to draw an ASCII line, or a row of x's, across an email message. Repeat delay controls how long the keyboard waits before it starts repeating the character. This is calibrated in milliseconds, from 10 up to 1000 (a full second). You can adjust the interval with the slider. Once the repeat delay has expired, repeat speed controls how quickly the characters are registered: from 1 per second (quite slow) to 250 per second (which can lead to surprises with the Delete or Backspace keys). The Test area at the bottom of the module helps you instantly try out your settings. Cursor Show Blinking controls whether the cursor blinks on and off, and how quickly. This module changes the cursor behavior in GTK+ programs (including AbiWord, for instance, and Xfce and GNOME accessories), but not in every program you might use. Application Shortcuts When you choose to assign a keyboard shortcut, the following screen greets you. Here, you can type in the name of the desired application (for instance, Clipman) if you know it. Otherwise, click on Open to search for it in a file tree. When you're done, click on OK. You will see the following screen. Simply press the keystrokes you wish to use to launch the application. In this example, if you want to launch Clipman by pressing Ctrl-Shift-F9, simply press Ctrl-Shift-F9 now. The shortcut will take effect immediately. Layout
https://docs.xfce.org/xfce/xfce4-settings/keyboard
2018-09-18T17:16:31
CC-MAIN-2018-39
1537267155634.45
[]
docs.xfce.org
In the Groups > File Import section you can upload files via FTP and import them. These facilities are provided: - Import Files : Import uploaded files in bulk. - Export Files : Create a text file with current data from the the Groups > Files section. - Scan for Files : Create a text file with data for the files located in the managed folder* or directory. These facilities provide everything needed to handle file imports for files that should be handled and protected by Groups File Access. Details about each section are provided in what follows. Import Files Here you can import file data in bulk from a text file, after uploading files via FTP to the folder managed by Groups File Access. On the server used here for example, this folder is: /var/www/groups/wp-content/uploads/groups-file-access … we refer to this as the *managed folder – but this obviously can be a different one depending on your server and the location of the wp_content and uploads folders. The section will show you the corresponding path on your system. You will see that there are two options that can be used during your imports: - Test only : If this option is enabled when you run the import, no file entries will be created. But you will obtain information about what would happen during the import. This includes how many files would have been imported, if any invalid lines were detected and which ones along with any detected errors. - Delete replaced files : If this option is enabled during import, any existing files will be deleted if they are replaced by new ones. Also see the notes on the filenamefield below. The accepted file-format is a plain text file (ideally UTF-8 encoded) with values separated by tabs provided on one line per file and in this order: filename file_id name description max_count group_names Please note that you do not include the above as the first line, it’s just an outline of which fields are expected (required or optional). Description of fields: filename– required – The filename of the file uploaded to the managed folder – only indicate the filename and do not include the full path. A line that refers to the filenameof an existing entry will update the information related to the entry. For an existing entry, if a different filename is indicated, this will replace the existing file. If the option the “Delete replaced files” is enabled, the old file will be deleted from the managed folder. file_id– optional – The Id of an existing file entry. If provided, the existing file will be deleted and the new file will be related to the entry. name– optional – A descriptive name for the file. If left empty, the filename will be used. description– optional – A detailed description of the file. max_count– optional – The maximum number of allowed accesses to the file per user. Leave empty or use 0 for unlimited accesses. group_names– optional – The names of the groups that are allowed to access the file, separated by comma. If empty, the file can not be accessed until a group is assigned. Important: The files must have been uploaded to the managed folder before starting to import. Export Files Here you can create a text file with current data for all files managed in the Groups > Files section. The file format corresponds to the supported import file-format used to import files. You can use this to modify existing file entries in bulk or prepare an import on another instance, for example when you move from a development to a production site. 
This will create a text file (in the supported import file-format) with current data for all files managed in the Groups > Files section. Scan for Files Here you can prepare a text file based on the files located currently in the managed folder. This will create a text file (in the supported import file-format) with current data for all files in the folder managed by Groups File Access. So why would you want to use this? You can simply upload all files you want to import to the managed folder, scan and import them. - Upload files you want to import to the managed folder. - Scan the files. - Review the text file that the scan has created and edit details like filenames, descriptions and groups as desired. - Import them based on the reviewed text file.
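To make the tab-separated import format described above concrete, here is a small Python sketch that writes one valid import line; the filename, name, description and group names are hypothetical placeholders:

# Fields in the order expected by the importer (tab-separated, one file per line).
fields = [
    "report-2015.pdf",       # filename -- must already be uploaded to the managed folder
    "",                      # file_id -- left empty to create a new entry
    "Annual Report 2015",    # name
    "Report for members",    # description
    "0",                     # max_count -- 0 means unlimited accesses
    "Gold,Premium",          # group_names -- comma-separated group names
]
with open("import.txt", "w", encoding="utf-8") as f:
    f.write("\t".join(fields) + "\n")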
http://docs.itthinx.com/document/groups-file-access/file-import/
2018-09-18T17:24:55
CC-MAIN-2018-39
1537267155634.45
[array(['http://docs.itthinx.com/wp-content/uploads/2015/03/File-Import-Steps.png', None], dtype=object) array(['http://docs.itthinx.com/wp-content/uploads/2015/03/Scan-for-Files.png', None], dtype=object) ]
docs.itthinx.com
In Mozilla applications, the -moz-outline-radius-bottomright CSS property can be used to round the bottom-right corner of an element's outline. <p>Look at this paragraph's bottom-right corner.</p> p { margin: 5px; border: solid cyan; outline: dotted red; -moz-outline-radius-bottomright: 2em; } See the -moz-outline-radius property for more information. © 2005–2018 Mozilla Developer Network and individual contributors. Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
http://docs.w3cub.com/css/-moz-outline-radius-bottomright/
2018-09-18T17:53:55
CC-MAIN-2018-39
1537267155634.45
[]
docs.w3cub.com
Administering and deleting backup images The hbase backup command has several subcommands that help you to administer and delete existing backup images.
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/hbase-data-access/content/hbase-backup-management-commands.html
2018-09-18T18:51:02
CC-MAIN-2018-39
1537267155634.45
[]
docs.hortonworks.com
About analyzers¶ Overview¶ An analyzer is a function or callable class (a class with a __call__ method) that takes a unicode string and returns a generator of tokens. Usually a “token” is a word, for example the string “Mary had a little lamb” might yield the tokens “Mary”, “had”, “a”, “little”, and “lamb”. However, tokens do not necessarily correspond to words. For example, you might tokenize Chinese text into individual characters or bi-grams. Tokens are the units of indexing, that is, they are what you are able to look up in the index. An analyzer is basically just a wrapper for a tokenizer and zero or more filters. The analyzer’s __call__ method will pass its parameters to a tokenizer, and the tokenizer will usually be wrapped in a few filters. A tokenizer is a callable that takes a unicode string and yields a series of analysis.Token objects. For example, the provided whoosh.analysis.RegexTokenizer class implements a customizable, regular-expression-based tokenizer that extracts words and ignores whitespace and punctuation. >>> from whoosh.analysis import RegexTokenizer >>> tokenizer = RegexTokenizer() >>> for token in tokenizer(u"Hello there my friend!"): ... print repr(token.text) u'Hello' u'there' u'my' u'friend' A filter is a callable that takes a generator of Tokens (either a tokenizer or another filter) and in turn yields a series of Tokens. For example, the provided whoosh.analysis.LowercaseFilter() filters tokens by converting their text to lowercase. The implementation is very simple: def LowercaseFilter(tokens): """Uses lower() to lowercase token text. For example, tokens "This","is","a","TEST" become "this","is","a","test". """ for t in tokens: t.text = t.text.lower() yield t You can wrap the filter around a tokenizer to see it in operation: >>> from whoosh.analysis import LowercaseFilter >>> for token in LowercaseFilter(tokenizer(u"These ARE the things I want!")): ... print repr(token.text) u'these' u'are' u'the' u'things' u'i' u'want' An analyzer is just a means of combining a tokenizer and some filters into a single package. You can implement an analyzer as a custom class or function,). Note that this only works if at least the tokenizer is a subclass of whoosh.analysis.Composable, as all the tokenizers and filters that ship with Whoosh are. See the whoosh.analysis module for information on the available analyzers, tokenizers, and filters shipped with Whoosh. Using analyzers¶ When you create a field in a schema, you can specify your analyzer as a keyword argument to the field object: schema = Schema(content=TEXT(analyzer=StemmingAnalyzer())) Advanced Analysis¶ Token objects¶ The Token class has no methods. It is merely a place to record certain attributes. A Token object actually has two kinds of attributes: settings that record what kind of information the Token object does or should contain, and information about the current token. Token setting attributes¶ A Token object should always have the following attributes. A tokenizer or filter can check these attributes to see what kind of information is available and/or what kind of information they should be setting on the Token object. These attributes are set by the tokenizer when it creates the Token(s), based on the parameters passed to it from the Analyzer. Filters should not change the values of these attributes. Token information attributes¶ A Token object may have any of the following attributes. The text attribute should always be present. The original attribute may be set by a tokenizer. 
All other attributes should only be accessed or set based on the values of the “settings” attributes above. So why are most of the information attributes optional? Different field formats require different levels of information about each token. For example, the Frequency format only needs the token text. The Positions format records term positions, so it needs them on the Token. The Characters format records term positions and the start and end character indices of each term, so it needs them on the token, and so on. The Format object that represents the format of each field calls the analyzer for the field, and passes it parameters corresponding to the types of information it needs, e.g.: analyzer(unicode_string, positions=True) The analyzer can then pass that information to a tokenizer so the tokenizer initializes the required attributes on the Token object(s) it produces. Performing different analysis for indexing and query parsing¶ Whoosh sets the mode setting attribute to indicate whether the analyzer is being called by the indexer ( mode='index') or the query parser ( mode='query'). This is useful if there’s a transformation that you only want to apply at indexing or query parsing: class MyFilter(Filter): def __call__(self, tokens): for t in tokens: if t.mode == 'query': ... else: ... The whoosh.analysis.MultiFilter filter class lets you specify different filters to use based on the mode setting: intraword = MultiFilter(index=IntraWordFilter(mergewords=True, mergenums=True), query=IntraWordFilter(mergewords=False, mergenums=False)) Stop words¶ “Stop” words are words that are so common it’s often counter-productive to index them, such as “and”, “or”, “if”, etc. The provided analysis.StopFilter lets you filter out stop words, and includes a default list of common stop words. >>> from whoosh.analysis import StopFilter >>> stopper = StopFilter() >>> for token in stopper(LowercaseFilter(tokenizer(u"These ARE the things I want!"))): ... print repr(token.text) u'these' u'things' u'want' However, this seemingly simple filter idea raises a couple of minor but slightly thorny issues: renumbering term positions and keeping or removing stopped words. Renumbering term positions¶ Remember that analyzers are sometimes asked to record the position of each token in the token stream: So what happens to the pos attribute of the tokens if StopFilter removes the words had and a from the stream? Should it renumber the positions to pretend the “stopped” words never existed? I.e.: or should it preserve the original positions of the words? I.e: It turns out that different situations call for different solutions, so the provided StopFilter class supports both of the above behaviors. Renumbering is the default, since that is usually the most useful and is necessary to support phrase searching. However, you can set a parameter in StopFilter’s constructor to tell it not to renumber positions: stopper = StopFilter(renumber=False) Removing or leaving stop words¶ The point of using StopFilter is to remove stop words, right? Well, there are actually some situations where you might want to mark tokens as “stopped” but not remove them from the token stream. For example, if you were writing your own query parser, you could run the user’s query through a field’s analyzer to break it into tokens. In that case, you might want to know which words were “stopped” so you can provide helpful feedback to the end user (e.g. “The following words are too common to search for:”). 
In other cases, you might want to leave stopped words in the stream for certain filtering steps (for example, you might have a step that looks at previous tokens, and want the stopped tokens to be part of the process), but then remove them later. The analysis module provides a couple of tools for keeping and removing stop-words in the stream. The removestops parameter passed to the analyzer’s __call__ method (and copied to the Token object as an attribute) specifies whether stop words should be removed from the stream or left in. >>> from whoosh.analysis import StandardAnalyzer >>> analyzer = StandardAnalyzer() >>> [(t.text, t.stopped) for t in analyzer(u"This is a test")] [(u'test', False)] >>> [(t.text, t.stopped) for t in analyzer(u"This is a test", removestops=False)] [(u'this', True), (u'is', True), (u'a', True), (u'test', False)] The analysis.unstopped() filter function takes a token generator and yields only the tokens whose stopped attribute is False. Note Even if you leave stopped words in the stream in an analyzer you use for indexing, the indexer will ignore any tokens where the stopped attribute is True. Implementation notes¶ Because object creation is slow in Python, the stock tokenizers do not create a new analysis.Token object for each token. Instead, they create one Token object and yield it over and over. This is a nice performance shortcut but can lead to strange behavior if your code tries to remember tokens between loops of the generator. Because the analyzer only has one Token object, of which it keeps changing the attributes, if you keep a copy of the Token you get from a loop of the generator, it will be changed from under you. For example: >>> list(tokenizer(u"Hello there my friend")) [Token(u"friend"), Token(u"friend"), Token(u"friend"), Token(u"friend")] Instead, do this: >>> [t.text for t in tokenizer(u"Hello there my friend")] That is, save the attributes, not the token object itself. If you implement your own tokenizer, filter, or analyzer as a class, you should implement an __eq__ method. This is important to allow comparison of Schema objects. The mixing of persistent “setting” and transient “information” attributes on the Token object is not especially elegant. If I ever have a better idea I might change it. ;) Nothing requires that an Analyzer be implemented by calling a tokenizer and filters. Tokenizers and filters are simply a convenient way to structure the code. You’re free to write an analyzer any way you want, as long as it implements __call__.
https://whoosh.readthedocs.io/en/latest/analysis.html
2018-09-18T18:57:03
CC-MAIN-2018-39
1537267155634.45
[]
whoosh.readthedocs.io
DELETE /api/apps/versions/{AppVersionID}/publicprofile/picture Deletes the avatar associated with an app version's public profile. Authorization Roles/Permissions: App team member, Business Admin This topic includes the following sections: HTTP Method DELETE URL https://{hostname}/api/apps/versions/{AppVersionID}/publicprofile/picture Sample Request The example below shows a request to delete the public profile picture for the specified app version. Request URL https://{hostname}/api/apps/versions/90VsIX3WmpP32sqoYQk8jj4J.acmepaymentscorp/publicprofile/picture Sample request headers DELETE /api/apps/versions/90VsIX3WmpP32sqoYQk8jj4J.acmepaymentscorp/publicprofile/picture HTTP/1.1 Host: {hostname} Accept: text/plain, */*; q=0.01 X-Csrf-Token_{tenant}: {TokenID} Sample request body Not applicable. Request Headers For general information on request header values, refer to HTTP Request Headers. Request Parameters Response If successful, this operation returns HTTP status code 200, with the AppVersionID returned in the response message. Sample Response The sample response below shows successful completion of this operation. Sample response headers HTTP/1.1 200 OK Content-Type: text/plain Date: Mon, 08 Jul 2013 20:46:09 GMT Sample response body 90VsIX3WmpP32sqoYQk8jj4.
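For illustration only, the same request could be issued with the Python requests library; this sketch is not part of the original documentation and assumes a valid session and CSRF token for the {hostname} in question:

# Minimal sketch using the third-party "requests" library.
import requests

url = ("https://{hostname}/api/apps/versions/"
       "90VsIX3WmpP32sqoYQk8jj4J.acmepaymentscorp/publicprofile/picture")
headers = {
    "Accept": "text/plain, */*; q=0.01",
    "X-Csrf-Token_{tenant}": "{TokenID}",  # placeholder values, as in the sample above
}
response = requests.delete(url, headers=headers)
print(response.status_code)  # 200 on success
print(response.text)         # the AppVersionID is returned in the body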
http://docs.akana.com/cm/api/apps/m_apps_deleteAppPublicProfilePicture.htm
2018-09-18T18:37:16
CC-MAIN-2018-39
1537267155634.45
[]
docs.akana.com
In previous sections of the manual we have seen how to load simple sprites from files, but it is also possible to create and modify them within GameMaker: Studio. To do this, open the sprite property window by double clicking on one of your sprites (or by creating a new one) and pressing the button labeled Edit Sprite, which will open up the following window: The main area of the sprite editor is taken up by the various sub-images that make up the sprite. These sub-images all have the same over-all size, with any smaller image having generally transparent pixels around to make it "fit" the maximal size, although this will depend on whether you are importing images from a file or not (see the "File Menu" section below) and the choices you make when doing so. The other sections are explained in more detail below. To get a preview of the sprite you must tick the check-box labelled "Show Preview". This will cause the area beneath to show an animated preview of the current sprite as well as a number of options that affect how this preview looks, all with the aim of getting a good idea of how a finished sprite will look when placed in the game world. You can change the speed at which the animation is displayed by changing the corresponding value in the box, and you can also choose a colour for the background of the sprite (this only affects the preview, not the actual game). Should you wish, you can also specify a background image from your resource tree by clicking on the "Background" menu and so get a more realistic idea of how the sprite will look in the game, with the option of stretching the background to fit the size of the sprite available beneath that too. Note that these options will only be visible if you have transparent areas in your sprite! This bar at the bottom gives you some very basic information about the current sprite you are working with: - Frames: The number of sub-images that the sprite has. - Size: The width and height of the sprite. - Memory: This is the approximate texture memory that the finished sprite will take up on any device, and is calculated as: image_number * width * height * 4. The toolbar contains a number buttons that allow you to change and manipulate the position and the actual sub-images that make up your sprite: Confirm This will close the sprite editor and save any changes you have made. Be aware that there are no confirmation messages asking if you want to save. New Sprite Click on this to create a new sprite. You will be prompted to input a base width and height. Note: This will delete all previous sprite information! Create A Sprite From A File This will open the standard window for loading a sprite (see Defining Sprites) . This will replace any existing sprite images with the loaded one. Add A Sprite From A File This will open the standard window for loading a sprite (see Defining Sprites) , which is then added to the current sprite. If the dimensions of the new sprite do not match up with the current sprite you will be shown these options: Save Strip This will save your current sprite as a *.png strip, ready to be used in another game or some other animation program. Below you can see an example of an animated sprite along with the strip that GameMaker: Studio would produce when saved using this option: Insert Empty Frame This command simply inserts an empty frame into the list of sub-images. This frame will always be inserted before the currently selected sub-image. 
Add Empty Frame This command adds an empty frame onto the end of the current sub-images. Undo This will undo the last action, and you can repeatedly undo consecutive actions with this button, but note that the number of undos that can be performed is limited to 16. Redo If you have used the undo function, you can use this to go back to the state that you undid. This is limited to the number of undos that you have done previously. Cut You can use this to "cut" a sub image out of the list of sub-images. This cut image is stored to the clipboard and can then be pasted into another part of the same sprite, another sprite resource or even into some other program, independent of GameMaker: Studio. Please note that the transparencies may not be the same when pasted into another program. Copy This button will copy the currently selected sub-image into the clipboard for use in another place, either the same sprite, another sprite or even an alternate program. Paste You can use this button to paste whatever image you have previously stored in the clipboard (with cut, or copy) into the current sprite as a new sub-image. If the pasted image is larger or smaller than the current sprite, you will be shown the "Inserting Image" window (see "Add A Sprite From A File", above). Shift Image These buttons will move the currently selected sub-image left or right in the image order for animation. Edit Sub-image This button will open the GameMaker: Studio Image Editor where you can edit on a per-pixel basis the selected sub-image. More on this in the section Editing Sub-images. Pre-Multiply Alpha This button will pre-multiply the alpha of all sub-images of the chosen sprite. This is normally only necessary when dealing with surfaces and drawing sprites to them, or for some specific special effects and for normal sprite use you should not see any noticeable difference between the normal sprite and the pre-multiplied one. Note that this cannot be undone.. For information relating to the different menu options, please refer to the following pages: File Menu Edit Menu Transform Menu Images Menu Animation Menu
http://docs.yoyogames.com/source/dadiospice/001_advanced%20use/more%20about%20sprites/editing%20sprites.html
2018-09-18T17:22:00
CC-MAIN-2018-39
1537267155634.45
[]
docs.yoyogames.com
air mattress alternative picture of sweet home collection all season down topper alternatives to mattresses cheap. Related Post Cot Mattress Pads Portable Beds Camping Cot Mattress Coleman Air Mattress Marcy Weight Bench Cage Home Gym Free Weight Set With Bench Air Mattress Alternative Portable Mattress Folding Cot With Mattress Camping Beds For Adults Weight Benches And Weights Set Magnetic Dry Erase Sheets Cot Mattress Pad Stamina Inline Back Stretch Bench Gun Storage Bench
http://top-docs.co/air-mattress-alternative/air-mattress-alternative-picture-of-sweet-home-collection-all-season-down-topper-alternatives-to-mattresses-cheap/
2018-09-18T17:53:07
CC-MAIN-2018-39
1537267155634.45
[array(['http://top-docs.co/wp-content/uploads/2018/04/air-mattress-alternative-picture-of-sweet-home-collection-all-season-down-topper-alternatives-to-mattresses-cheap.jpg', 'air mattress alternative picture of sweet home collection all season down topper alternatives to mattresses cheap air mattress alternative picture of sweet home collection all season down topper alternatives to mattresses cheap'], dtype=object) ]
top-docs.co
These release notes provide information on the new features, enhancements, resolved escalations, and bug fixes completed in each release for the YouTube card, which is also an Appspace supported card. v 2.0 Release Date: 21 Sept 2020 An updated YouTube card with improved features and support for playing YouTube playlists and live streams. New Features The YouTube card v 2.0 comes with the following features and improvements: - Ability to add and search for YouTube videos, playlists, and live video stream URLs, within the card. - Support for YouTube video subtitle configuration, which also allows for the subtitle language to be set via a device property. - Support for the following keyboard controls during playback: - Space = Start/Stop - Right arrow = Fast Forward (10 sec) - Left arrow = Rewind (10 sec) - Up arrow = Volume Increase (+10%) - Down arrow = Volume Decrease (-10%) - M = Mute On/Off - Support for enabling or disabling audio for video. v 1.5 Release Date: 8 July 2020 The YouTube: 17 July 2020 Resolved Escalations - AE-6111 – The on-screen keyboard does not display on BrightSign devices with touch screen TVs running the YouTube Card in a playlist channel or an advanced channel. v 1.4 Release Date: – Patch Updates v 1.4.5 Release Date: 28 Apr 2020 Support for natural duration on YouTube Card This improvement allows YouTube videos to play till the end, before switching to the next content on the channel playlist, without user intervention. Previously, users were required to set the card duration on the channel playlist to match the YouTube video duration to ensure content is switched only after the video has ended. This manual process of setting the duration at times causes Appspace App to end the YouTube video a couple of seconds earlier or later. Resolved Bugs - CT-1919 – Unable to view YouTube video in channel preview. Resolved Escalations - AE-5955 – Audio unavailable when displaying YouTube Live. v 1.4.4 Release Date: 21 Feb 2020 Resolved Escalations - AE-5792 – When one YouTube card is added to a playlist channel, and the channel is set to loop, the audio does not play on alternate loops, on BrightSign devices. v 1.4.3 Release Date: 4 Sept 2019 A maintenance update to the YouTube card resolves an audio issue where audio from video content is muted by default even when the Media Zone widget audio checkbox has been checked. Resolved Escalations - AE-5395 – Audio does not play during video playback, when running on Appspace App for PWA on Chrome web browsers. - AE-5487 – YouTube card does not load on Appspace App for Chrome OS. v 1.4.2 Release Date: 16 Apr 2019 Resolved Escalations - AE-5068 – Audio on YouTube card is muted by default, when played on BrightSign devices. Technical Limitations & Workaround Audio Issues If audio does not play during YouTube video playback on the YouTube card, this may be due to the policy settings on web browsers, where audio is blocked during video playback. We recommend setting the autoplay policy to allow audio, when using the following browsers: Chrome For enterprise users, this requires the IT Administrator to set the policy to always allow autoplay with audio. More information on how to do this here: (mass deployment policy) For individual users, as the autoplay-policy flag had been removed from Chrome 76, users would be required to set the autoplay-policy flag via program arguments. More information on how to do this here: Safari Navigate to Safari > Preferences > Websites tab. Ensure the “Auto-Play” option is set to “Allow All Auto-Play”. 
More information on how to do this here: Firefox Navigate to Menu > Preferences > Privacy & Security, and scroll down to the Permissions section. Click the “Autoplay” Settings button and select “Allow Audio and Video” as the default for all websites. More information on how to do this here: Video Issues If the display shows “Video unavailable” when running a YouTube playlist, this may be caused by the first video in the playlist not allowing embedding. We recommend ensuring the first video in your YouTube playlist allows embedding. To enable embedding on your YouTube video, access your YouTube Studio, and click “Details > More Options tab > Additional Options section”, and check the “Allow embedding” checkbox. Technical Limitations - Keyboard controls are not supported on LG webOS 3.0 devices. - Intermittent audio issues, such as audio being muted during playback on Surface Pro UWP devices. - Audio from the next video in the playlist is played while the current content is still displayed, on Surface Pro devices. - Closed Captioning on the YouTube 2.0 card does not function on BrightSign devices with firmware version 7.x. Ensure BrightSign devices are updated to firmware 8.x, and running Appspace 2.19 and above, for Closed Captioning to work correctly.
https://docs.appspace.com/latest/release-notes/youtube-card-release-notes/
2021-01-15T23:47:23
CC-MAIN-2021-04
1610703497681.4
[]
docs.appspace.com
Please include Azure DevOps Services as a supported product. Good feedback! It will be good to have all products at one site. Currently, DevOps is supported on the developercommunity site forum. You'll find DevOps forums on developercommunity and Stack Overflow. Share your feedback, or help out by voting for other people's feedback.
https://docs.microsoft.com/en-us/answers/content/idea/24082/please-include-azure-devops-services-in-supported.html
2021-01-16T01:16:01
CC-MAIN-2021-04
1610703497681.4
[]
docs.microsoft.com
Creating Services To create a service, perform the following procedure. To create a service Sign in to the AWS Management Console and open the AWS Cloud Map console at . In the navigation pane, choose Namespaces. On the Namespaces page, choose the namespace that you want to add the service to. On the Namespace: namespace-name page, choose Create service. On the Create service page, enter the applicable values. For more information, see Values That You Specify When You Create Services. Choose Create service. For services that are accessible by DNS queries, you cannot create multiple services with names that differ only by case (such as EXAMPLE and example). Otherwise, these services will have the same DNS name. If you use a namespace that's only accessible by API calls, then you can create services with names that differ only by case.
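The same service can also be created programmatically; the following is a minimal sketch with the AWS SDK for Python (boto3), where the service name, namespace ID, and DNS record settings are placeholders rather than values from this walkthrough:

import boto3

client = boto3.client("servicediscovery")

# Hypothetical namespace ID and service name; a DNS A record with a 60-second TTL.
response = client.create_service(
    Name="example-service",
    NamespaceId="ns-examplenamespace1",
    DnsConfig={"DnsRecords": [{"Type": "A", "TTL": 60}]},
)
print(response["Service"]["Id"])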
https://docs.aws.amazon.com/cloud-map/latest/dg/creating-services.html
2021-07-23T20:32:52
CC-MAIN-2021-31
1627046150000.59
[]
docs.aws.amazon.com
Session Templates allow Project Admins to configure requirements for acquisitions. For example, you can specify: The minimum number of acquisitions with a given label The minimum count and measurement classification of files within the acquisition If a session does not contain the acquisitions, or files, specified in the template, it will be flagged for review. This article explains how to create a session template, gives an example template, and shows how to view sessions that don't match the template requirements. Prefer videos? We also have a webinar showing how to create a session template: See all Flywheel webinars and videos Navigate to a project. Click the Templates tab. Click Create new template. Fill out the template: Session name: Specify the session label that the template would apply to. If empty, it will apply to all sessions in the project. Acquisition count: Configures the number of acquisitions that should match the acquisition label Acquisition label: Configures the acquisition label that the template is checking for. File Count: Configures the number of files that match the file classification File Classification: Configures the required file measurement classifications. Learn more about measurement classifications in Flywheel. Click Add File to have multiple file rules for an acquisition. Click Add Acquisitions to have multiple acquisition rules for a session. Click Save. The template evaluates any existing session in the project. If the sessions don't follow the template, they are flagged. In this example, Flywheel evaluates sessions that are labeled anxiety_protocol_pre. The template will flag any session that does not have: At least 1 acquisition with the label MPRAGE Two files that are classified as T1 This means that the MPRAGE acquisition would have to have a DICOM file and a NIFTI file that both have been classified correctly. Flywheel automatically flags any session that does not meet the requirements set in the template. These are called flagged sessions. To view flagged sessions in a project: From the project, click the Sessions tab. From the Advanced Filter menu, select Only Flagged. Enter a date range to narrow down the results, or leave it blank to view all flagged sessions.
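The flagging logic described above can be pictured as a simple check over a session's acquisitions. The sketch below uses plain Python dictionaries rather than the Flywheel SDK, and the field names are illustrative assumptions, not Flywheel's actual data model:

# A session represented as plain data, for illustration only.
session = {
    "label": "anxiety_protocol_pre",
    "acquisitions": [
        {"label": "MPRAGE", "files": [{"classification": "T1"}, {"classification": "T1"}]},
    ],
}

def matches_template(session, acq_label="MPRAGE", min_acq=1, file_class="T1", min_files=2):
    """Return True if the session has enough matching acquisitions with enough matching files."""
    matching = [
        acq for acq in session["acquisitions"]
        if acq["label"] == acq_label
        and sum(f["classification"] == file_class for f in acq["files"]) >= min_files
    ]
    return len(matching) >= min_acq

print(matches_template(session))  # True -> this session would not be flagged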
https://docs.flywheel.io/hc/en-us/articles/360019519473-Session-Templates
2021-07-23T20:06:09
CC-MAIN-2021-31
1627046150000.59
[array(['/hc/article_attachments/360066966494/SessionTemplateExample.png', 'SessionTemplateExample.png'], dtype=object) ]
docs.flywheel.io
Housekeeping GitLab supports and automates housekeeping tasks within your current repository, such as compressing file revisions and removing unreachable objects. Automatic housekeeping GitLab automatically runs git gc and git repack on repositories after Git pushes. You can change how often this happens or turn it off in Admin Area > Settings > Repository ( /admin/application_settings/repository). Manual housekeeping The housekeeping function runs repack or gc depending on the Housekeeping settings configured in Admin Area > Settings > Repository: a git repack runs after a configured number of pushes, and similarly, when the pushes_since_gc value is 200, a git gc runs. git gc (man page) runs a number of housekeeping tasks, such as compressing file revisions (to reduce disk space and increase performance) and removing unreachable objects which may have been created from prior invocations of git add. git repack (man page) re-organizes existing packs into a single, more efficient pack. Housekeeping also removes unreferenced LFS files from your project on the same schedule as the git gc operation, freeing up storage space for your project. To manually start the housekeeping process: - In your project, go to Settings > General. - Expand the Advanced section. - Select Run housekeeping. How housekeeping handles pool repositories Housekeeping for pool repositories is handled differently from standard repositories. It is ultimately performed by the Gitaly RPC FetchIntoObjectPool. This is the current call stack by which it is invoked:
Repositories::HousekeepingService#execute_gitlab_shell_gc
Projects::GitGarbageCollectWorker#perform
Projects::GitDeduplicationService#fetch_from_source
ObjectPool#fetch
ObjectPoolService#fetch
Gitaly::FetchIntoObjectPoolRequest
To manually invoke it from a Rails console, if needed, you can call project.pool_repository.object_pool.fetch. This is a potentially long-running task, though Gitaly times out in about 8 hours. Do not run git prune or git gc in pool repositories! This can cause data loss in “real” repositories that depend on the pool in question.
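Housekeeping can also be started without the UI through the project housekeeping endpoint of the GitLab REST API. The sketch below assumes that endpoint and uses a placeholder instance URL, project ID, and token:

import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 42                              # placeholder project ID
TOKEN = "glpat-..."                          # placeholder personal access token

# POST /projects/:id/housekeeping starts the same task as the "Run housekeeping" button.
response = requests.post(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/housekeeping",
    headers={"PRIVATE-TOKEN": TOKEN},
)
response.raise_for_status()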
https://docs.gitlab.com/13.12/ee/administration/housekeeping.html
2021-07-23T19:20:14
CC-MAIN-2021-31
1627046150000.59
[]
docs.gitlab.com
ViewMaster is designed to be installed and run on each client. Since the viewers are not downloaded from a server, they load more quickly, and server failure or downtime will not adversely affect productivity. Planning your installation involves understanding which components to install on which servers or clients, and how to handle such issues as load balancing. Review the terms of your license agreement before installing any components; you might have to purchase additional licenses to install certain components on separate computers. To provide server scalability, fail-over protection, and load balancing for the management components, you can install MCS on multiple servers across your network, creating a server cluster.
https://docs.attachmate.com/eVantage/HostAccessServer/3.2/getting_started/vm_installation.htm
2021-07-23T19:05:11
CC-MAIN-2021-31
1627046150000.59
[]
docs.attachmate.com
cupy.polyfit - cupy.polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False)[source] Returns the least squares fit of polynomial of degree deg to the data y sampled at x. - Parameters x (cupy.ndarray) – x-coordinates of the sample points of shape (M,). y (cupy.ndarray) – y-coordinates of the sample points of shape (M,) or (M, K). deg (int) – degree of the fitting polynomial. rcond (float, optional) – relative condition number of the fit. The default value is len(x) * eps. full (bool, optional) – indicator of the return value nature. When False (default), only the coefficients are returned. When True, diagnostic information is also returned. w (cupy.ndarray, optional) – weights applied to the y-coordinates of the sample points of shape (M,). cov (bool or str, optional) – if given, returns the coefficients along with the covariance matrix. - Returns - cupy.ndarray: of shape (deg + 1,) or (deg + 1, K). Polynomial coefficients from highest to lowest degree. - tuple (cupy.ndarray, int, cupy.ndarray, float): Present only if full=True. Sum of squared residuals of the least-squares fit, rank of the scaled Vandermonde coefficient matrix, its singular values, and the specified value of rcond. - cupy.ndarray: of shape (M, M) or (M, M, K). Present only if full=False and cov=True. The covariance matrix of the polynomial coefficient estimates. - Return type - cupy.ndarray Warning numpy.RankWarning: The rank of the coefficient matrix in the least-squares fit is deficient. It is raised if full=False.
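A minimal usage sketch (the sample data here is arbitrary): fit a quadratic to slightly noisy points on the GPU and compare the result against the known coefficients.

import cupy as cp

x = cp.linspace(-3.0, 3.0, 50)
y = 2.0 * x**2 - 1.0 * x + 0.5                 # exact quadratic; coefficients highest degree first
y_noisy = y + 0.01 * cp.random.standard_normal(x.shape)

coeffs = cp.polyfit(x, y_noisy, 2)             # cupy.ndarray of shape (deg + 1,)
print(coeffs)                                  # approximately [ 2.  -1.   0.5]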
https://docs.cupy.dev/en/stable/reference/generated/cupy.polyfit.html
2021-07-23T19:53:04
CC-MAIN-2021-31
1627046150000.59
[]
docs.cupy.dev
Inference Engine (IE) tests infrastructure provides a predefined set of functional tests and utilities. They are used to verify a plugin using the Inference Engine public API. All the tests are written in the Google Test C++ framework. Inference Engine Plugin tests are included in the IE::funcSharedTests CMake target which is built within the OpenVINO repository (see Build Plugin Using CMake guide). This library contains tests definitions (the tests bodies) which can be parametrized and instantiated in plugins depending on whether a plugin supports a particular feature, specific sets of parameters for test on supported operation set and so on. Test definitions are split into tests class declaration (see inference_engine/tests/functional/plugin/shared/include) and tests class implementation (see inference_engine/tests/functional/plugin/shared/src) and include the following scopes of plugin conformance tests: behaviorsub-folder), which are a separate test group to check that a plugin satisfies basic Inference Engine concepts: plugin creation, multiple executable networks support, multiple synchronous and asynchronous inference requests support, and so on. See the next section with details how to instantiate the tests definition class with plugin-specific parameters. Single layer tests ( single_layer_tests sub-folder). This groups of tests checks that a particular single layer can be inferenced on a device. An example of test instantiation based on test definition from IE::funcSharedTests library: convLayerTestParamsSettuple of parameters: Templateplugin functional test instantiation: INSTANTIATE_TEST_CASE_P: subgraph_testssub-folder). This group of tests is designed to tests small patterns or combination of layers. E.g. when a particular topology is being enabled in a plugin e.g. TF ResNet-50, there is no need to add the whole topology to test tests. In opposite way, a particular repetitive subgraph or pattern can be extracted from ResNet-50and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests. Note, such sub-graphs or patterns for sub-graph tests should be added to IE::ngraphFunctionslibrary first (this library is a pre-defined set of small ngraph::Function) and re-used in sub-graph tests after. subgraph_testssub-folder) contains tests for HETEROscenario (manual or automatic affinities settings, tests for QueryNetwork). To use these tests for your own plugin development, link the IE::funcSharedTests library to your test binary and instantiate required test cases with desired parameters values. NOTE: A plugin may contain its own tests for use cases that are specific to hardware or need to be extensively tested. To build test binaries together with other build artifacts, use the make all command. For details, see Build Plugin Using CMake*. Please, refer to Transformation testing guide. Inference Engine Plugin tests are open for contribution. Add common test case definitions applicable for all plugins to the IE::funcSharedTests target within the DLDT repository. Then, any other plugin supporting corresponding functionality can instantiate the new test. All Inference Engine per-layer tests check test layers functionality. They are developed using nGraph functions as input graphs used by tests. In this case, to test a new layer with layer tests, extend the IE::ngraphFunctions library, which is also included in the Inference Engine Developer package, with a new nGraph function including the corresponding operation. 
NOTE: When implementing a new subgraph test, add new single-layer tests for each operation of the subgraph if such test does not exist.
https://docs.openvinotoolkit.org/latest/ie_plugin_api/plugin_testing.html
2021-07-23T19:44:29
CC-MAIN-2021-31
1627046150000.59
[]
docs.openvinotoolkit.org
Playlists The MediaPlayer enables you to create your own playlists and to enable or disable the seeking forward. Creating Playlists Players usually feature a different video based on user action. To implement your own Playlist structures, change the source of the MediaPlayer dynamically. For a runnable example, refer to the demo on creating your own playlists in the MediaPlayer which uses the ListView to create a list that holds the videos right next to the MediaPlayer element. The following example demonstrates how to change the source of the MediaPlayer. function buttonClick() { var player = $("#mediaplayer1").data("kendoMediaPlayer"); player.media({ title: "Our Company Culture - Lesson 2", source: "Video/video2.mp4" }); } Seeking Forward Some applications enforce the user to watch only the currently loaded content without the option to jump forward. The MediaPlayer provides the forwardSeek configuration, which helps to achieve this requirement.
https://docs.telerik.com/kendo-ui/controls/media/mediaplayer/playlists
2021-07-23T20:25:44
CC-MAIN-2021-31
1627046150000.59
[]
docs.telerik.com
Developing with Ganache The Torus website runs on HTTPS. The browser security model does not allow an insecure connection to an HTTP endpoint over HTTPS. Install the following npm package to proxy your HTTP traffic from Ganache through an HTTPS endpoint. The npm package redirects incoming HTTPS traffic to your local Ganache port.
#Install and run ganache-http-proxy
npm install -g ganache-http-proxy
ganache-http-proxy
#Run Ganache on port 8546
8546 is the port where Ganache is running locally.
ganache-cli -p 8546
#Connect to the Ganache localhost instance in Torus
#Accept localhost certificates in your browser
Chrome: Paste the following URL into the address bar.
chrome://flags/#allow-insecure-localhost
https://docs.tor.us/wallet/developing-with-wallet/ganache
2021-07-23T18:48:41
CC-MAIN-2021-31
1627046150000.59
[array(['/assets/images/torus-ganache-localhost-519da3043b77cf87d5c8b38f1fc26cb5.png', 'Select localhost:8545 from in the Network selector under the Settings tab within the Torus wallet'], dtype=object) ]
docs.tor.us
How to set up a free registration wall to build your email list Watch this video to learn how to set up a free registration to build your email list. Text instructions below: - This video will go over why setting a registration wall is critical to your paid subscription success. - Here is a case study on using this approach to quickly build your email list. How to set up your free registration wall - Go to Leaky Paywall/Settings in your dashboard. - Click on the Subscriptions tab. - Create a new subscription level with a price of $0 and it will generate the free level subscription - Set the Subscription length to Forever - Set your Access options to how many articles the free level should give access to each time period... Ex: month (see screenshot below) - Under Leaky Paywall >> Settings make sure you update your Subscribe or Login Message (HTML accepted) to motivate your free registration. Here is an effective initial subscription nag: - Under Leaky Paywall >> Settings also make sure you update your Upgrade Message to motivate anyone registered to upgrade/pay for premium access. This will trigger for a logged in free registered user after they have exhausted their free allotment of articles. Here is a good example: When a new free subscriber decides to register you can also use our Custom Registration fields add-on to capture additional subscriber info on the registration form. You can also let subscribers opt-in to other newsletters (using MailChimp Groups) with our MailChimp add-on.
https://docs.zeen101.com/article/77-can-i-set-a-free-level-to-collect-email-addresses
2021-07-23T18:50:07
CC-MAIN-2021-31
1627046150000.59
[]
docs.zeen101.com
# Enable Members Firma supports both free and paid members. By default, users who sign up will automatically be free members and can sign in later with a magic link. If you set up paid tiers, members who have previously registered for free can pay a subscription to access your website's premium content. # Member Pages This page is where users can sign up on your site. If they are already free members then they will see the subscription plans you have in place and will be able to pay to become paid members. On this page members of your site will be able to sign in. The only thing they will need is their email address in order to receive a magic link with which to sign in. Membership If users visiting this page are not registered or are free members they will see the subscription plans you have in place and will be able to pay to become paid members. Account On this page members will be able to view their account details. If they are free members then they will also see a link to go to the page where the subscription plans are shown. If they are paid members then they will see a list of their subscriptions, each with information such as price, expiration date, payment information and a link to cancel. Newsletter On this page any user can enter their email address to receive in their inbox content from your site periodically. # Set Up The first thing you need to do to allow members on your site is to enable this feature in the Ghost Admin and connect a Stripe account. In this link (opens new window) you can see in detail what are the requirements and steps to correctly enable it. Then you have to follow these additional steps: Step 1 Unzip the theme folder. Step 2 Head to the Labs page in the Ghost Admin, scroll all the way down and press the Upload routes YAML button, then choose the file routes.yaml that is located in the root of the theme folder. Step 3 You must now create the following pages in the Ghost Admin: - Membership - Account - Newsletter To create each page click on Pages in the navigation menu and then click on the New page button. You can assign any title you want to each page and you don't need to add content to them. The important thing is that the slug in the URL on each page matches the one on its corresponding page according to the following table: For example, this is what the settings in the sign up page should look like: Since Ghost generates the slug automatically based on the page title, you must make sure that the value of the Page URL field matches the one in the table above, otherwise you have to change it manually. Remember that you have to do this for every page in the list above. Step 4 The Account, Sign in and Sign up links are automatically included in the main menu, if you also want to have links for the Membership and Newsletter pages you can add the following links in the Navigation section of the Design page in the Ghost Admin: Don't forget to replace YOUR_SITE_URL with your website's URL. Please make sure that the last part of each url matches its respective path in the routes.yaml file, otherwise it will not work. # Portal Portal (opens new window) is a new feature that comes enabled by default in the latest versions of Ghost, among some of its options is the option to display a button to subscribe which is visible on all pages of your site. 
Although this button could be useful in some situations, I recommend that you disable it since Firma already manages everything related to memberships, also because for now the button and the interface that is displayed when you press it are not very customizable and may not work very well with the language and design of your site. To disable it head to your Ghost Admin and go to Settings and click on Portal. In the window that appears you only have to disable the "Show Portal button" option, save the changes and that's it, as shown in the following screenshot:
https://firma-docs.eduardogomez.io/guide/enable-members.html
2021-07-23T17:54:11
CC-MAIN-2021-31
1627046150000.59
[array(['https://res.cloudinary.com/edev/image/upload/v1606861151/firma/CleanShot_2020-12-01_at_23.15.39_2x.png', 'Sign up page settings'], dtype=object) array(['https://res.cloudinary.com/edev/image/upload/v1610823818/firma/CleanShot_2021-01-16_at_20.02.48_2x.png', 'Portal settings'], dtype=object) ]
firma-docs.eduardogomez.io
Embedded resource gateway functionality The Onegini Security Proxy can forward requests to a resource gateway. Also basic resource gateway functionality is available embedded in the Onegini Security Proxy. This embedded resource gateway functionality is responsible for validating an access token present on a resource call and mapping the result of this call on the call to the resource server. Topic guides in this section will explain how to configure the embedded resource gateway functionality and how to customize the request mapper. Embedded resource gateway flow This graph presented below shows the communication between Mobile Application (with Onegini SDK) and Resource Server via the Onegini Security Proxy for a resource call using the embedded resource gateway functionality. Note: The main focus of this graph is to show the role of the embedded resource gateway functionality in the Mobile App - Resource Server communication so the Security Proxy functionality of decrypting incoming request was intentionally not presented. The scenario: - A client application (Onegini SDK) performs a resource call with an access token in the Authorizationheader. - The Onegini Security Proxy token validation functionality validates the access token at the Onegini Token Server. - The original request details containing the token validation result is passed on to Request Mapper component. - The Request Mapper modifies the request (it uses the token validation response which contains i.a. assigned user and scopes for that) in a way that it becomes a valid request (containing all required parameters/headers etc.) to call some specified Resource Server. - The Onegini Security Proxy sends modified request to the Resource Server.
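As a schematic illustration of step 4 (not the actual Onegini Request Mapper implementation), the mapping can be thought of as merging the token-validation output into the outgoing resource request. Everything below, including the header and field names, is assumed for the sake of the example:

def map_request(original_request: dict, token_validation: dict) -> dict:
    """Build the request for the resource server from the original call and the validation result."""
    mapped = dict(original_request)
    headers = dict(mapped.get("headers", {}))
    # Hypothetical header names: pass the resolved user and granted scopes downstream.
    headers["X-User-Id"] = token_validation["user_id"]
    headers["X-Scopes"] = " ".join(token_validation["scopes"])
    headers.pop("Authorization", None)  # in this sketch, the raw access token is not forwarded
    mapped["headers"] = headers
    return mapped

request = {"path": "/resources/profile", "headers": {"Authorization": "Bearer abc123"}}
validation = {"user_id": "user-42", "scopes": ["read"]}
print(map_request(request, validation))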
https://docs.onegini.com/msp/security-proxy/2.03.00/topics/embedded-resource-gateway-functionality/introduction.html
2017-06-22T18:16:59
CC-MAIN-2017-26
1498128319688.9
[array(['../../diagrams/diagram-s27otef66acx9wwmi.png', 'Fetching resources via Onegini Security Proxy'], dtype=object)]
docs.onegini.com
cd <mongodb installation dir>
Type ./bin/mongo to start mongo:
./bin/mongo
Working with the mongo Shell
To display the database you are using, type db:
db
The operation should return test, which is the default database. To switch databases, issue the use <db> helper, as in the following example:
use myNewDatabase
db.myCollection.insertOne( { x: 1 } );
The db.myCollection.insertOne() is one of the methods available in the mongo shell. db refers to the current database. myCollection is the name of the collection. If the mongo shell does not accept the name of the collection, for instance if the name contains a space, hyphen, or starts with a number, you can use an alternate syntax to refer to the collection, as in the following:
db["3test"].find()
db.getCollection("3test").find()
db.myCollection.find().pretty()
> if ( x > 0 ) {
... count++;
... print (x);
... }
You can exit the line continuation mode if you enter two blank lines, as in the following example:
> if (x > 0
...
...
>
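The same operations can also be driven from Python with the PyMongo driver; this is offered as an illustrative parallel to the shell commands above, assuming a MongoDB server on the default localhost port:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["myNewDatabase"]                   # equivalent of: use myNewDatabase
db.myCollection.insert_one({"x": 1})           # equivalent of: db.myCollection.insertOne({x: 1})

# Bracket syntax for awkward collection names, like db["3test"] in the shell:
for doc in db["3test"].find():
    print(doc)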
https://docs.mongodb.com/v3.4/mongo/
2017-06-22T18:27:39
CC-MAIN-2017-26
1498128319688.9
[]
docs.mongodb.com
Thursday, May 31, 2018 Enable the Satis plugin on each Package you want to expose via Satis. A webhook will be installed in GitHub or GitLab to enable the automatic update of your Satis repository information. Note: You must also ensure a Resque worker is running for automatic webhook updating to work. See the section on Resque management for more information. Packages uses the information retrieved from GitHub and GitLab to generate a satis.json configuration. Satis consumes this file, and generates a packages.json file that Composer can use to resolve private dependencies. Notice: packages.json is publicly accessible unless secure_satis: true is specified in config.yml. Visit your Packages installation's landing page and click the Available Packages button to see a listing of packages and versions available in the repository. Note: If secure_satis: true is specified, you will be required to log in before viewing this page. Sometimes you need to manually build or generate the exposed Satis information. You can do this by using the Packages command-line interface. bin/console satis:build In your project's composer.json, add the section repositories if it doesn't already exist. Then add a new composer repository with your Packages URL as the url. { /* ... */ "require": { /* ... */ }, "repositories": [ { "type": "composer", "url": "" } ], /* ... */ } Specify archive: true in config.yml to enable the creation of archives of each version of the available packages in your repository. Note: Run bin/console satis:build to rebuild your entire Satis repository after you change this option.
http://docs.terramarlabs.com/packages/3.2/managing-packages/satis-configuration
2019-04-18T18:23:53
CC-MAIN-2019-18
1555578526228.27
[]
docs.terramarlabs.com
AtmoSwing’s user documentation¶ Analog methods (AMs) allow for the prediction of local meteorological variables of interest (predictand) such as the daily precipitation, on the basis of synoptic variables (predictors). They can rely on outputs of numerical weather prediction models in the context of operational forecasting or outputs of climate models in the context of climate impact studies. AMs require low computing capacity and have demonstrated a useful potential for application in several contexts. AtmoSwing is an open source software written in C++ that implements AMs in a flexible manner so that different variants can be handled dynamically. It comprises four tools: a Forecaster that performs operational forecasts, a Viewer for displaying the results, a Downscaler for climate studies, and an Optimizer for inferring the relationship between the predictand and predictors. The Forecaster handles every required processing internally, such as operational predictor downloading (when possible) and reading, grid interpolation, etc., without external scripts or file conversion. The processing of a forecast is extremely low-intensive in terms of computing infrastructure and can even run on a Raspberry Pi computer. The Viewer displays the forecasts in an interactive GIS environment. It contains several layers of syntheses and details in order to provide a quick overview of the potential critical events in the upcoming days, as well as the possibility for the user to delve into the details of the forecasted predictand and criteria distributions. The Downscaler allows the use of AMs in a climatic context, either for climate reconstruction or for climate change impact studies. When used for future climate studies, it is necessary to pay close attention to the selected predictors, so that they contain the climate change signal. The Optimizer implements different optimization techniques, such as a sequential approach, Monte–Carlo simulation, and a global optimization technique using genetic algorithms. The process of inferring a statistical relationship between predictors and predictand is quite intensive in terms of processing because it requires numerous assessments over decades. To this end, the Optimizer was highly optimized in terms of computing efficiency, is parallelized over multiple threads and scales well on a Linux cluster. This procedure is only required to infer the statistical relationship, which can then be used in forecasting or downscaling at a low computing cost. Content - Getting started - The Forecaster - The Viewer - The Downscaler - The Optimizer - Changelog
https://atmoswing.readthedocs.io/en/latest/
2019-04-18T18:22:41
CC-MAIN-2019-18
1555578526228.27
[]
atmoswing.readthedocs.io
CreateCustomerGateway. Important. Request Parameters The following parameters are for this specific action. For more information about required and optional parameters that are common to all actions, see Common Query Parameters. - BgpAsn For devices that support BGP, the customer gateway's BGP ASN. Default: 65000 Type: Integer Required: Yes - DryRun Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation. Type: Boolean Required: No - IpAddress The Internet-routable IP address for the customer gateway's outside interface. The address must be static. Type: String Required: Yes - Type The type of VPN connection that this customer gateway supports ( ipsec.1). Type: String Valid Values: ipsec.1 Required: Yes Response Elements The following elements are returned by the service. - customerGateway Information about the customer gateway. Type: CustomerGateway object - requestId The ID of the request. Type: String Errors For information about the errors that are common to all actions, see Common Client Errors. Example Example This example passes information to AWS about the customer gateway with the IP address 12.1.2.3 and BGP ASN 65534. Sample Request &Type=ipsec.1 &IpAddress=12.1.2.3 &BgpAsn=65534 &AUTHPARAMS Sample Response <CreateCustomerGatewayResponse xmlns=""> <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId> <customerGateway> <customerGatewayId>cgw-b4dc3961</customerGatewayId> <state>pending</state> <type>ipsec.1</type> <ipAddress>12.1.2.3</ipAddress> <bgpAsn>65534</bgpAsn> <tagSet/> </customerGateway> </CreateCustomerGatewayResponse> See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
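Outside the raw Query API, the same call can be made from the AWS SDKs. The following is a sketch with the AWS SDK for Python (boto3), using the same example values, under the assumption that default credentials and a region are already configured:

import boto3

ec2 = boto3.client("ec2")

# Same example values as the Query request above: BGP ASN 65534, IP 12.1.2.3, type ipsec.1.
response = ec2.create_customer_gateway(
    BgpAsn=65534,
    PublicIp="12.1.2.3",
    Type="ipsec.1",
)
print(response["CustomerGateway"]["CustomerGatewayId"])  # e.g. cgw-b4dc3961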
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateCustomerGateway.html
2019-04-18T19:19:40
CC-MAIN-2019-18
1555578526228.27
[]
docs.aws.amazon.com
The sample endpoint configuration uses the markForSuspension and suspendOnFailure settings, with a maximumDuration of 60000 ms for the suspension. A second sample defines an address endpoint with a timeout duration of 30000 ms. In that setup the behavior applies on the first endpoint, even though the second endpoint is still active. If there is only one service endpoint and the message failure is not tolerable, failovers are still possible with a single address endpoint.
https://docs.wso2.com/display/ESB480/Endpoint+Error+Handling
2019-04-18T19:05:53
CC-MAIN-2019-18
1555578526228.27
[]
docs.wso2.com
View a decision matrix
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/assessments/task/t_ViewADecisionMatrix.html
2019-04-18T18:54:21
CC-MAIN-2019-18
1555578526228.27
[]
docs.servicenow.com
Deploy profiles in System Center Configuration Manager Applies to: System Center Configuration Manager (Current Branch) Profiles must be deployed to one or more collections before they can be used. Use the Deploy Wi-Fi Profile, Deploy VPN Profile, Deploy Exchange ActiveSync Profile, or Deploy Certificate Profile dialog box to configure the deployment of these profiles. As part of the configuration, you define the collection to which the profile is to be deployed and specify how often the profile is evaluated for compliance. Note If you deploy multiple company resource access profiles to the same user, the following behavior occurs: If a conflicting setting contains an optional value, it will not be sent to the device. - If a conflicting setting contains a mandatory value, the default value will be sent to the device. If there is no default value, the entire company resource access profile will fail. For example, if you deploy two email profiles to the same user and the values specified for Exchange ActiveSync host or Email address are different, then both email profiles will fail as they are mandatory settings. Before you can deploy certificate profiles, you must first configure the infrastructure and create certificate profiles. For more information, see the following topics: How to create certificate profiles in System Center Configuration Manager Important When a VPN profile deployment is removed, it is not removed from client devices. If you want to remove the profile from devices, you must manually remove it. Deploying profiles In the System Center Configuration Manager console, choose Assets and Compliance. In the Assets and Compliance workspace, expand Compliance Settings, expand Company Resource Access, and then choose the appropriate profile type, such as Wi-Fi Profiles. In the list of profiles, select the profile that you want to deploy, and then in the Home tab, in the Deployment group, click Deploy. In the deploy profile dialog box, specify the following information: Collection - Click Browse to select the collection where you want to deploy the profile. Generate an alert - Enable this option to configure an alert that is generated if the profile compliance is less than a specified percentage by a specified date and time. You can also specify whether you want an alert to be sent to System Center Operations Manager. - Random delay (hours): (Only for certificate profiles that contain Simple Certificate Enrollment Protocol settings) Specifies a delay window to avoid excessive processing on the Network Device Enrollment Service. The default value is 64 hours. Specify the compliance evaluation schedule for this profile- Specify the schedule by which the deployed profile is evaluated on client computers. The schedule can be either a simple or a custom schedule. Note The profile is evaluated by client computers when the user logs on. Click OK to close the dialog box and to create the deployment. See also How to monitor Wi-Fi, VPN, and email profiles in System Center Configuration Manager How to monitor certificate profiles in System Center Configuration Manager Feedback Send feedback about:
https://docs.microsoft.com/en-us/sccm/protect/deploy-use/deploy-wifi-vpn-email-cert-profiles
2019-04-18T18:19:43
CC-MAIN-2019-18
1555578526228.27
[]
docs.microsoft.com
namespace::clean - Keep imports and functions out of your namespace - NAME - SYNOPSIS - DESCRIPTION - METHODS - IMPLEMENTATION DETAILS - SEE ALSO - THANKS - AUTHORS NAME namespace::clean - Keep imports and functions out of your namespace SYNOPSIS package; DESCRIPTION Keeping packages clean When you define a function, or import one, into a Perl package, it will naturally also be available as a method. This does not per se cause problems, but it can complicate subclassing and, for example, plugin classes that are included via multiple inheritance by loading them as base classes.
http://docs.activestate.com/activeperl/5.24/perl/lib/namespace/clean.html
2019-04-18T18:18:01
CC-MAIN-2019-18
1555578526228.27
[]
docs.activestate.com
Create a unit group and add units to that group Applies To: Dynamics 365 (online), Dynamics 365 (on-premises), Dynamics CRM 2013, Dynamics CRM 2015, Dynamics CRM Online, Dynamics CRM 2016 Units are the quantities or measurements that you sell your products or services in. For example, if you sell gardening supplies, you might sell seeds in units of packets, boxes, and pallets. A unit group is a collection of these different units. In Dynamics 365, you first create a unit group and then create units within that group. Let’s look at both of these tasks, using seeds as our example. On this page Step 1: Create a unit group Step 2: Create units in a unit group Step 1: Create a unit group Make sure that you have the Manager, Vice President, CEO-Business Manager, System Administrator, or System Customizer security role or equivalent permissions. Check your security role Follow the steps in View your user profile. Don’t have the correct permissions? Contact your system administrator. Go to Settings > Product Catalog. Choose Unit Groups. To create a new unit group, choose New. -OR- To edit a unit group, open a unit group from the list. Fill in your information: Name. Type a meaningful name for the unit group. In our example, you would type “Seeds.” Primary Unit. Type the lowest common unit of measure that the product will be sold in. In our example, you would type “packet.” Other examples could include ounces, hours, or tons, depending on your product or service. Choose OK. Note You cannot delete the primary unit in a unit group. Step 2: Create units in a unit group In the unit group you want to add the units to, in the left pane, under Common, choose Units, and then on the Units tab, in the Records group, choose Add New Unit. The unit that you specified as the primary unit earlier is already in the list of units. Fill in your information: Name. Type a meaningful name for the unit. In our example, you would type “box.” Quantity. Type the quantity that this unit will contain. For example, if a box contains 12 packets, you would type “12.” Base Unit. Select a base unit. The base unit will establish the lowest unit of measurement for the unit you’re creating. Using our example, you would select “packet.” If you then create a unit called “pallet,” and one pallet contains 48 boxes, you would type “48” in Quantity and select “box” in Base Unit. Here’s how: Choose Save or Save and Close. Typical next steps - OR - ** **Create product bundles to sell multiple items together Set up a product catalog: Walkthrough
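The base-unit arithmetic behind the example (12 packets per box, 48 boxes per pallet) can be made concrete with a short sketch; this is just the math, not Dynamics 365 code:

# Each unit maps to (quantity, base_unit); the primary unit has no base.
units = {
    "packet": (1, None),
    "box": (12, "packet"),
    "pallet": (48, "box"),
}

def to_primary(unit: str) -> int:
    """Number of primary units (packets) contained in one of the given unit."""
    quantity, base = units[unit]
    return quantity if base is None else quantity * to_primary(base)

print(to_primary("box"))     # 12 packets
print(to_primary("pallet"))  # 576 packets (48 boxes x 12 packets each)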
https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/admins-customizers-dynamics-365/mt826740(v=crm.8)
2019-04-18T19:17:53
CC-MAIN-2019-18
1555578526228.27
[]
docs.microsoft.com
Why are unbound version constraints a bad idea? A version constraint without an upper bound such as *, >=3.4 or dev-master will allow updates to any future version of the dependency. This includes major versions breaking backward compatibility. Once a release of your package is tagged, you cannot tweak its dependencies anymore in case a dependency breaks BC - you have to do a new release but the previous one stays broken. The only good alternative is to define an upper bound on your constraints, which you can increase in a new release after testing that your package is compatible with the new major version of your dependency. For example, instead of using >=3.4 you should use ~3.4, which allows all versions up to 3.999 but does not include 4.0 and above. The ~ operator works very well with libraries that follow semantic versioning. Note: As a package maintainer, you can make the life of your users easier by providing an alias version for your development branch to allow it to match bound constraints.
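To make the bound concrete, here is a small illustration of the difference between >=3.4 and ~3.4 (a sketch of the semantics only, not Composer's actual resolver):

def satisfies_gte(version, minimum=(3, 4)):
    """Unbounded constraint: >=3.4 accepts anything newer, including 4.0 and above."""
    return tuple(version) >= minimum

def satisfies_tilde(version, minimum=(3, 4)):
    """Bounded constraint: ~3.4 means >=3.4 and <4.0."""
    return tuple(version) >= minimum and version[0] < 4

for v in [(3, 4, 0), (3, 9, 9), (4, 0, 0)]:
    print(v, satisfies_gte(v), satisfies_tilde(v))
# (4, 0, 0) satisfies >=3.4 but not ~3.4, which is the point of the upper bound.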
https://docs.phpcomposer.com/faqs/why-are-unbound-version-constraints-a-bad-idea.html
2019-04-18T19:16:37
CC-MAIN-2019-18
1555578526228.27
[]
docs.phpcomposer.com
7.6 Random numbers, file names, GUIDs and strings are sufficiently random Verify that all random numbers, random file names, random GUIDs, and random strings are generated using the cryptographic module’s approved random number generator when these random values are intended to be not guessable by an attacker. Levels: 2, 3
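In Python, for instance, this usually means using the secrets module (backed by the operating system's CSPRNG) rather than random for any value an attacker should not be able to guess; a brief sketch:

import secrets

session_token = secrets.token_urlsafe(32)           # unguessable random string
api_key = secrets.token_hex(32)                     # 64 hex characters from the CSPRNG
upload_name = f"upload_{secrets.token_hex(8)}.bin"  # unguessable file name

print(session_token, api_key, upload_name)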
https://owasp-aasvs.readthedocs.io/en/latest/requirement-7.6.html
2019-04-18T19:01:25
CC-MAIN-2019-18
1555578526228.27
[]
owasp-aasvs.readthedocs.io
Share Rules Across Resources Some applications have many resources that should have similar authorization rules applied to them. This is a common scenario in workflow driven applications that have different user types and a large number of resources. A common set of rules will apply to many resources, with some exceptions. In this guide, we will cover various ways for modeling this scenario with Oso. Setup In this example, we will consider a hypothetical EMR (electronic medical records) application. We’ll discuss a few resources: - Order: A record of an action that medical staff will perform on a patient - Test: A diagnostic test that will be performed on a patient. - Lab: A lab test that will be performed on a patient. These resources are all examples of different types of patient data. Basic Policy Let’s start by considering a basic policy controlling access to these three resources: allow(actor: User, "read", resource: Order) if actor.role = "medical_staff" and actor.treated(resource.patient); allow(actor: User, "read", resource: Test) if actor.role = "medical_staff" and actor.treated(resource.patient); allow(actor: User, "read", resource: Lab) if actor.role = "medical_staff" and actor.treated(resource.patient); Let’s take a look at the first rule in the policy. This allow rule permits an actor to perform the "read" action on an Order if: - The actor’s roleproperty is equal to "medical_staff". - The actor has treated the patient associated with the Orderin question, which is verified by calling the actor’s treated()method. Note the head of the rule. Each argument uses a type specializer to ensure this rule only applies to certain types of resources and actors. This rule indicates that the actor argument must be an instance of the User class and the resource argument must be an instance of the Order class. This policy meets our goal above. We have expressed the same rule for the three types of patient data, but it is a bit repetitive. Let’s try to improve it. Using a Rule to Express Common Behavior Our policy doesn’t just need to contain allow rules. We can write any rules we’d like and compose them as needed to express our policy! can_read_patient_data(actor, "read", resource) if actor.role = "medical_staff" and actor.treated(resource.patient); allow(actor: User, "read", resource) if can_read_patient_data(actor, "read", resource); Now, we’ve taken the repeated logic and expressed it as the can_read_patient_data rule. When the allow rule is evaluated, Oso will check if the can_read_patient_data is satisfied. The policy is much shorter! Unfortunately, we’ve lost one property of our last policy: the specializers. This rule would be evaluated for any type of resource — not just our three examples of patient data above. That’s not what we want. Bringing Back Specializers We can combine this idea with our first policy to make sure only our three patient data resources use the can_read_patient_data rule. allow(actor: User, "read", resource: Order) if can_read_patient_data(actor, "read", resource); allow(actor: User, "read", resource: Test) if can_read_patient_data(actor, "read", resource); allow(actor: User, "read", resource: Lab) if can_read_patient_data(actor, "read", resource); Now, we still have three rules, but the body isn’t repeated anymore. One Rule to Rule Them All We haven’t talked about the application side of this yet. So far, we’ve assumed Order, Lab, and Test are application classes. 
Here’s how they might be implemented:

@oso.polar_class
class PatientData:
    def __init__(self, patient):
        self.patient = patient

@oso.polar_class
class Lab(PatientData):
    pass

@oso.polar_class
class Order(PatientData):
    pass

@oso.polar_class
class Test(PatientData):
    pass

We used inheritance to capture some of the common functionality needed (storing the patient). In a real application these would probably be ORM models. We can use the same idea to shorten our policy even further! allow(actor: User, "read", resource: PatientData) if actor.role = "medical_staff" and actor.treated(resource.patient); Now, this allow rule will be evaluated for any instance that is a subclass of PatientData. Polar understands the class inheritance structure when selecting rules to evaluate! Summary In this guide, we saw an example of an application policy that could result in significant repetition. We tried out a few strategies for representing common policy across many resource types. First, we wrote a custom rule that moved duplicated logic into one place. Then we used specializers and application types to condense our policy even further.
https://docs-preview.oso.dev/v/main/guides/more/inheritance.html
2021-10-15T22:37:42
CC-MAIN-2021-43
1634323583087.95
[]
docs-preview.oso.dev
Topbar Menus Blender Menu - Splash Screen Open the Splash Screen. - About Blender Opens a menu displaying information about Blender. - Version The Blender version. - Date Date when Blender was compiled. - Hash The Git Hash of the build. This can be useful to give to support personal when diagnosing a problem. - Branch Optional branch name. - Release Notes Open the latest release notes. - Credits Open credits website. - License Open License website. - Blender Website Open main Blender website. - Blender Store Open the Blender store. - Development Fund Open the developer fund website. - Install Application Template Install a new application template. File Menu The options to manage files are: - New Ctrl-N Clears the current scene and loads the selected application template. - Open Ctrl-O - - Open Recent Shift-Ctrl-O Displays a list of recently saved blend-files to open. - Revert Reopens the current file to its last saved version. - Recover - - Save Ctrl-S Save the current blend-file. - Save As… Shift-Ctrl-S Opens the File Browser to specify file name and location of save. - Save Copy… Saves a copy of the current file. - Link… Links data from an external blend-file (library) to the current scene. The edition of that data is only possible in the external library. Link and Append are used to load in only selected parts from another file. See Linked Libraries. - Append… Appends data from an external blend-file to the current scene. The new data is copied from the external file, and completely unlinked from it. - Data Previews Tools for managing data-block previews. - Import Blender can use information stored in a variety of other format files which are created by other graphics programs. See Import/Export. - Export Normally you save your work in a blend-file, but you can export some or all of your work to a format that can be processed by other graphics programs. See Import/Export. - External Data External data, like texture images and other resources, can be stored inside the blend-file (packed) or as separate files (unpacked). Blender keeps track of all unpacked resources via a relative or absolute path. See pack or unpack external Data. - Automatically Pack Into .blend This option activates the file packing. If enabled, every time the blend-file is saved, all external files will be saved (packed) in it. - Pack All Into .blend Pack all used external files into the blend-file. - Unpack Into Files Unpack all files packed into this blend-file to external ones. - Make All Paths Relative Make all paths to external files Relative Paths to current blend-file. - Make All Paths Absolute Make all paths to external files absolute. Absolute ones have full path from the system’s root. - Report Missing Files This option is useful to check if there are links to unpacked files that no longer exist. After selecting this option, a warning message will appear in the Info editor’s header. If no warning is shown, there are no missing external files. - Find Missing Files In case you have broken links in a blend-file, this can help you to fix the problem. A File Browser will show up. Select the desired directory (or a file within that directory), and a search will be performed in it, recursively in all contained directories. Every missing file found in the search will be recovered. Those recoveries will be done as absolute paths, so if you want to have relative paths you will need to select Make All Paths Relative. Muista Recovered files might need to be reloaded. 
You can do that one by one, or you can save the blend-file and reload it again, so that all external files are reloaded at once. - Clean Up - Unused Data-Blocks Remove unused data-blocks from both the current blend-file and any Linked Data (cannot be undone). See the Outliner for more information. - Recursive Unused Data-Blocks Remove all unused data-blocks from both the current blend-file and any Linked Data including any indirectly used data-blocks i.e. those only used by unused data-blocks. - Unused Linked Data-Blocks Remove unused data-blocks from only Linked Data. - Recursive Unused Linked Data-Blocks Remove all unused data-blocks from only Linked Data including any indirectly used data-blocks i.e. those only used by unused data-blocks. - Unused Local Data-Blocks Remove all unused data-blocks from only the current blend-file. - Recursive Unused Local Data-Blocks Remove all unused data-blocks from only the current blend-file including any indirectly used data-blocks i.e. those only used by unused data-blocks. - Defaults This menu manages the startup file which is used to store the default scene, workspace, and interface displayed when creating a new file. Initially this contains the startup scene included with Blender. This can be replaced by your own customized setup. - Save Startup File Saves the current blend-file as the startup file. - Load Factory Settings Restores the default startup file and preferences. See also - Quit Ctrl-Q Closes Blender and the file is saved into quit.blend. Edit Menu - Undo/Redo/History See Undo & Redo. - Menu Search Find a menu based on its name. - Operator Search Execute an operator based on its name (Developer Extras only). - Rename Active Item Rename the active object or node; see Rename tool for more information. - Batch Rename Renames multiple data types at once; see Batch Rename tool for more information. - Lock Object Modes Restrict selection to the current mode. - Preferences Open the Preferences window. Render Menu - Render Image F12 Render the active scene at the current frame. - Render Animation Ctrl-F12 Render the animation of the active scene. See also Rendering Animations for details. - Render Audio Mix the scene's audio file to a sound file. See also Rendering audio for details. - View Render F11 Toggle show render view. - View Animation Ctrl-F11 Playback rendered animation in a separate player. See also Animation player for details. Animation player preferences to select different animation players. - Lock Interface Lock interface during rendering in favor of giving more memory to the renderer. Window Menu - New Window Create a new window by copying the current window. - New Main Window Create a new window with its own workspace and scene selection. - Toggle Window Fullscreen Toggle the current window fullscreen. - Next Workspace Switch to the next workspace. - Previous Workspace Switch to the previous workspace. - Show Status Bar Choose whether the Status Bar at the bottom of the window should be displayed. - Save Screenshot Capture a picture of the current Blender window. A File Browser will open to choose where the screenshot is saved. - Save Screenshot (Editor) Capture a picture of the selected Editor. Select the Editor by LMB within its area after running the operator. A File Browser will open to choose where the screenshot is saved. Help Menu See Help System. Workspaces These sets of tabs are used to select the current Workspace, which are essentially predefined window layouts. 
Scenes & Layers These data-block menus are used to select the current active Scene and View Layer.
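The File menu's external-data and startup-file entries described above can also be driven from Blender's Python console. Below is a minimal sketch, assuming the current bpy operator names; check the tooltips in your Blender build if they differ.

```python
# Run inside Blender's Python console; bpy is not available in a plain interpreter.
import bpy

# Pack every external file (textures, sounds, ...) into the .blend file,
# the scripted equivalent of File > External Data > Pack All Into .blend.
bpy.ops.file.pack_all()

# Or keep files external but rewrite their paths relative to the .blend,
# matching File > External Data > Make All Paths Relative.
bpy.ops.file.make_paths_relative()

# Report any links to external files that no longer exist (see the Info editor).
bpy.ops.file.report_missing_files()

# Save the current file as the startup file (File > Defaults > Save Startup File).
# bpy.ops.wm.save_homefile()
```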
https://docs.blender.org/manual/fi/dev/interface/window_system/topbar.html
2021-10-15T23:53:04
CC-MAIN-2021-43
1634323583087.95
[array(['../../_images/interface_window-system_topbar_menus.png', '../../_images/interface_window-system_topbar_menus.png'], dtype=object) array(['../../_images/interface_window-system_topbar_workspaces.png', '../../_images/interface_window-system_topbar_workspaces.png'], dtype=object) array(['../../_images/interface_window-system_topbar_scenes-layers.png', '../../_images/interface_window-system_topbar_scenes-layers.png'], dtype=object) ]
docs.blender.org
Interacting With Running CI Jobs Overview When tests are executed for proposed changes to a repository in a PR, a number of OpenShift clusters may be involved. End-to-end tests for a component of OpenShift Container Platform or any operator deployed on OpenShift require a running OCP cluster to host them. Colloquially, these are known as ephemeral test clusters, as they are created to host the test and torn down once it's over. Furthermore, in OpenShift CI, all test workloads themselves run as Pods on a fleet of long-running OpenShift clusters known as build farm clusters. Developers may also launch short-lived development clusters that incorporate changes from their pull requests but that run outside of CI. For most end-to-end tests, the code that manages the lifecycle of the ephemeral test cluster and the code that orchestrates the test suite run on the build farm cluster. It is possible to follow the logs of test Pods in the build farm clusters and even to interact with the ephemeral test cluster that was launched for a pull request. When a repository builds some component of OpenShift Container Platform itself, the ephemeral test cluster runs a version of OCP that incorporates the changes to that component in the PR for which the test is running. Interacting with the test logs or the ephemeral test cluster itself is useful when debugging test failures or when diagnosing and confirming OCP behavior caused by your changes. Warning: Once your pull request is no longer a work in progress, you should no longer interact with the CI system, as it is possible to alter test outcomes this way. In most cases it's more useful to run tests against a development cluster for which you have a $KUBECONFIG from your local system rather than hijacking the PR's running jobs. For this, follow the directions on how to run the test suites outside of CI. How and Where Do the Tests Run? For each set of unique inputs (like the commits being tested, the version of the tests being run, and the versions of dependencies) a unique Namespace is created in the build farm clusters with a name like ci-op-<hash>. In this project, ci-operator launches Pods that administer the test suites. For example, with a pull request in the openshift-apiserver repository, several jobs are triggered. The initial tests such as ci/prow/images, ci/prow/unit, and ci/prow/verify clone and validate your changes to build the openshift-apiserver artifacts (running make build, make test, for example) and also build an ephemeral release payload that merges the latest versions of other OCP images with those built from this pull request. For more details, see the ci-operator documentation. Once the images are built, jobs that require a test cluster will start. These jobs, such as e2e-aws, e2e-cmd, and e2e-*-upgrade, run the OpenShift installer extracted from the updated release payload to launch an ephemeral test cluster in the configured cloud (GCP, AWS, Azure, or other). All repositories that publish components of OCP run the same central end-to-end conformance suites for OpenShift and Kubernetes. With this testing strategy, a change in any repository making up OpenShift is ensured to be compatible with the over 100 other repositories that make up OpenShift. With every merge in every repository, the integration streams are updated to contain the latest version of each component image. Merges for every repository happen in small pools and undergo a final run of tests to ensure pull requests merging simultaneously are also compatible.
Access the Namespace on Cluster/Project of the Running CI Job It is possible to authenticate to the CI build farms with GitHub OAuth as well as Red Hat Single Sign-On: GitHub OAuth requires that you are a member of the OpenShift organization. Both the author of a pull request and the corresponding Red Hat Kerberos ID are permitted to access the Namespace that runs the jobs. Info: To access the Namespace, your GitHub ID must be set up under PROFESSIONAL SOCIAL MEDIA at Rover People. It takes 24 hours for a modification at Rover People to synchronize to the build farms. From a pull request page on GitHub, you can access the build logs from the Details link next to each job listed in the checks section at the bottom of the PR description page. This gives an overall picture of the test output, but you might want to follow each job and test more closely while it runs. It is especially useful to follow a PR through the CI system if it is updating a test workflow or adding a new test. In that case, you can access the CI cluster console. From the job Details, grep for this line near the top of the Build Logs to locate the Namespace on the build farm cluster where your test is running: As the pull request author, you are the administrator of the project ( ci-op-mtn6xs34 in the example above). You will therefore have access to the link above. To log in from the console, choose GitHub. Once in the console, you can follow the logs from the running pods, or you can grab the login command from the upper right ? -> Command Line Tools -> Copy Login Command. Again, choose the GitHub authentication, then Display Token. Copy the command into your terminal to access the CI build farm cluster. Before running the login command, you might run unset KUBECONFIG if you currently have an active development cluster and local $KUBECONFIG, since this login command will either update the currently set $KUBECONFIG or write/update to ~/.kube/config. The login command will look similar to this: As the project administrator, you may give access to your project to other members of the GitHub OpenShift organization. After logging into the project with oc, use this command to give other members access to the project running your pull request: Info: An hour after your job completes, whether due to success or failure, the project as well as the ephemeral test cluster launched from this project will be torn down and terminated. Access Test Logs From GitHub, you can access the build logs from the Details link next to each job listed in the checks section at the bottom of the pull request description page. This gives an overall picture of the test output, but you might want to follow each job and test more closely while it runs. In that case, from the CI build farm cluster console accessed above, you have a choice between the Administrator and Developer menus. Choose Administrator if it is not already chosen. You can access the Pods running the tests from Workloads -> Pods and then the Logs tab across the top. Pods usually have multiple containers, and each container can be accessed from the dropdown menu above the logs terminal. This is equivalent to running the following from your local terminal, if you are currently logged into the CI cluster: Access the Terminal of the Pod Running the Tests From the console, you can access the running pod through the Terminal tab across the top of the selected pod ( Workloads -> Pods -> Terminal). This will give you a shell from which you can check expected file locations, configurations, volume mounts, etc.
This is useful when setting up a new test workflow to get everything working. This is roughly equivalent to the following from your local terminal, if you are currently logged into the CI cluster: Access the External Cluster Launched With Your Changes If you are debugging a job during which a test cluster was launched ( e2e-aws, e2e-*), you might find it useful to access the ephemeral test cluster. After the installer pod has successfully completed (usually this is ~30 min after the job was triggered), an e2e-*-test pod will launch. From the project accessed above, ( Projects) you can grab the KUBECONFIG for the test cluster and copy it to your local system, to access the test cluster against which the extended test suites are running. Below is how to access the $KUBECONFIG file from the installer pod. Access the Terminal tab of the running test pod ( Workloads -> Pods -> Terminal). Find the kubeconfig file in the *-install-install pod’s /tmp/installer/auth directory. Or, from your local system, if currently logged into the CI cluster, create the following script as extract-kubeconfig.sh: Then, run the script: The following files should be copied to your local system: InfoIt may take some time for the ephemeral test cluster to be installed and ready. The above script will wait when necessary. Once copied to your local system, you can proceed to run oc commands against the test cluster by setting $KUBECONFIG environment variable or passing --kubeconfig to oc. Again, this is intended for work-in-progress pull requests only. The test cluster will be terminated whenever the job completes. It is usually more productive to launch a development cluster using cluster-bot through Slack and manually run openshift-tests suites against that, rather than through a pull request job’s cluster. For how to run the openshift-tests binary and to find more information about the test suites, see the documentation on how to run the test suites outside of CI. How Do I Know What Tests Will Run? It can be quite confusing to find the test command that is running in CI. Jobs are configured in the release repository. The YAML job definitions are generated from the step-registry workflow and/or the ci-operator/config files. For jobs that aren’t configured with the ci-operator/step-registry, you can find test commands in release/ci-operator/config/openshift. For example, the ci/prow/unit test command for openshift-apiserver is make test-unit. For jobs configured with the step-registry workflow, such as all the jobs that require test clusters, you can get more information with the step-registry viewer. There you’ll find detailed overviews of the workflows with a search toolbar for locating specific jobs. The viewer provides links to the code in GitHub, or you can locate the OWNERS of the workflows if you have further questions. How To Run the Test Suites Outside of CI While iterating upon a work-in-progress pull request, for those jobs that install and run tests against a test cluster, it’s useful to run the openshift-tests binary against a development cluster you have running rather than follow a pull request job in the CI cluster. You can run the openshift-tests binary against any development cluster for which you have a KUBECONFIG file. CoreOS Slack offers a cluster-bot tool you can utilize to launch a cloud-based development cluster from one or more pull requests. For more information on what cluster-bot can do, find cluster-bot under Apps in CoreOS Slack or direct message cluster-bot "help" to list its functions. 
For example, to launch a development cluster in AWS from a release payload that includes openshift-apiserver pull request #400, direct message cluster-bot in Slack the following: /msg @cluster-bot launch openshift/openshift-apiserver#400 aws or, to launch from release payload that includes multiple pull requests: /msg @cluster-bot launch openshift/openshift-apiserver#400,openshift/cluster-authentication-operator#501 aws Upon successful install, you will receive login information with a downloadable kubeconfig for the development cluster. If you are not modifying or adding tests to the openshift-tests binary and simply want to run a test suite or single test against a development cluster, you can do the following: openshift-tests run will list all the suites in the binary. You can find individual tests with something like: You can run an individual test or subset with the following (remove the last portion to only list first): If you are adding a test or modifying a test suite, the test binary can be built from openshift/origin repository and all of the e2e individual tests are found in origin/test/extended. First, clone the origin repository with: Then from your local system and origin directory: See the openshift-tests README for more information.
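For the ephemeral-cluster access described earlier, the exact contents of extract-kubeconfig.sh are not reproduced here; below is a rough Python sketch of the same idea, built on oc cp. Only the /tmp/installer/auth location comes from the text above; the namespace, pod name, poll interval, and destination path are illustrative assumptions.

```python
# Poll the install pod until the ephemeral cluster's kubeconfig exists, then copy it locally.
# Assumes `oc` is on PATH and you are already logged in to the CI build farm cluster.
import subprocess
import time

def wait_and_copy_kubeconfig(namespace, install_pod, dest="./ephemeral-kubeconfig"):
    src = f"{namespace}/{install_pod}:/tmp/installer/auth/kubeconfig"
    while True:
        # `oc cp` fails until the installer has actually written the file, so retry.
        result = subprocess.run(["oc", "cp", src, dest], capture_output=True, text=True)
        if result.returncode == 0:
            print(f"kubeconfig copied to {dest}")
            return dest
        print("kubeconfig not ready yet; retrying in 60s")
        time.sleep(60)

if __name__ == "__main__":
    # e.g. the ci-op-* project from your job and its *-install-install pod (names are examples)
    wait_and_copy_kubeconfig("ci-op-mtn6xs34", "e2e-aws-install-install")
```

Once the file is copied, export KUBECONFIG to point at it (or pass --kubeconfig to oc), as described above.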
https://docs.ci.openshift.org/docs/how-tos/interact-with-running-jobs/
2021-10-15T22:33:03
CC-MAIN-2021-43
1634323583087.95
[]
docs.ci.openshift.org
STGA Tokenomics STGA is the governance token for Stargaze Altair on Binance Smart Chain. It ultimately reports to Stargaze Orion on Ethereum Chain, but will largely be self-managed. The initial supply of 27,000,000 Stargaze Altair will be distributed as follows: - 30% will be made available to the STGA treasury over the course of 6 months - 25% will be distributed through liquidity mining to users who stake index tokens or their Uniswap Ether pair LP tokens (ending in 30 days). - 20% will go to the founders, investors and future team members, subject to vesting periods. - 15% will be distributed via the treasury in a manner to be determined by governance. - 10% will be distributed through liquidity mining to users who stake index tokens or Uniswap Ether pair LP tokens After a launch period of 60 days, the ability to mint new STGA tokens will be available to the governance organization. Minting is restricted to a maximum of 10% of the supply (at the time tokens are minted) and may only occur once every 90 days. STGA governance may also disable minting permanently by changing the minter address from the timelock contract to the null address.
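For reference, the allocation percentages above applied to the 27,000,000 initial supply work out as follows; this is just a quick sanity-check script, with the labels paraphrased from the list above.

```python
# Sanity-check the STGA allocation shares against the stated initial supply.
INITIAL_SUPPLY = 27_000_000

allocation = {
    "treasury (over 6 months)": 0.30,
    "liquidity mining (ending in 30 days)": 0.25,
    "founders/investors/future team (vested)": 0.20,
    "treasury, governance-determined": 0.15,
    "liquidity mining (ongoing)": 0.10,
}

assert abs(sum(allocation.values()) - 1.0) < 1e-9  # shares add up to 100%

for name, share in allocation.items():
    print(f"{name:42s} {share:4.0%}  {share * INITIAL_SUPPLY:>12,.0f} STGA")

# Minting cap after the 60-day launch period: at most 10% of the supply at mint
# time, no more than once every 90 days.
print("max first mint:", f"{0.10 * INITIAL_SUPPLY:,.0f} STGA")
```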
https://docs.stargazeprotocol.com/altair/tokenomics/
2021-10-15T22:51:01
CC-MAIN-2021-43
1634323583087.95
[]
docs.stargazeprotocol.com
Vantage follows the MGRS convention of truncating, rather than rounding, values during the conversion. The net effect of the truncation is to shift points to the south and west (the lowest point in each rectangle). MGRS coordinates employ one of two lettering schemes to denote row identifiers in the grid system: MGRS-Old or MGRS-New. During conversions to and from MGRS, the MGRS lettering scheme in the returned coordinates (for the TO_MGRS function) or expected as input (for the FROM_MGRS function) depends on the spatial reference system from or to which the coordinates are being converted. If the spatial reference system is based on the Bessel, Clarke 1866, or Clarke 1880 reference ellipsoid, the MGRS coordinates use the MGRS-Old lettering scheme. Conversions to or from all other spatial reference systems use the MGRS-New scheme.
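To see why truncation biases points to the south and west, compare flooring with rounding at a fixed grid precision. This is only a conceptual illustration with made-up UTM-style coordinates; it is not the TO_MGRS implementation.

```python
import math

def truncate(value, grid_size):
    """Snap a coordinate down to the south-west corner of its grid cell."""
    return math.floor(value / grid_size) * grid_size

easting, northing = 443_157.8, 4_812_962.5   # example values in metres
grid = 100.0                                  # e.g. 100 m MGRS precision

# Truncation always moves toward the lower-left (south-west) corner:
print(truncate(easting, grid), truncate(northing, grid))          # 443100.0 4812900.0

# Rounding could move the point north and/or east, which MGRS avoids:
print(round(easting / grid) * grid, round(northing / grid) * grid) # 443200.0 4813000.0
```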
https://docs.teradata.com/r/1drvVJp2FpjyrT5V3xs4dA/M2jBGz3STjhOPR_X0hbkXQ
2021-10-16T00:59:43
CC-MAIN-2021-43
1634323583087.95
[]
docs.teradata.com
teradataml.analytics.mle.NaiveBayes = class NaiveBayes(builtins.object)
Methods defined here:
__init__(self, formula=None, data=None, data_sequence_column=None, data_order_column=None)
DESCRIPTION: The NaiveBayesMap and NaiveBayesReduce functions generate a model from training data. A virtual data frame of training data is input to the NaiveBayesMap function, whose output is the input to the NaiveBayesReduce function, which outputs the model.
data: Required Argument. The teradataml DataFrame defining the input training data.
data_order_column: Optional Argument. Specifies Order By columns for data. Values to this argument can be provided as a list if multiple columns are used for ordering.
RETURNS: Instance of NaiveBayes. Output teradataml DataFrames can be accessed using attribute references, such as NaiveBayesObj.<attribute_name>. The output teradataml DataFrame attribute name is: result
RAISES: TeradataMlException
EXAMPLES:
# Load the data to run the example
load_example_data("NaiveBayes", "nb_iris_input_train")
# Create teradataml DataFrame object.
nb_iris_input_train = DataFrame.from_table("nb_iris_input_train")
# Run the train function
naivebayes_train = NaiveBayes(formula="species ~ petal_length + sepal_width + petal_width + sepal_length", data=nb_iris_input_train)
# Print the result DataFrame
print(naivebayes_train.result)
__repr__(self)
Returns the string representation for a NaiveBayes class instance.
https://docs.teradata.com/r/xLnbN80h9C6037gi3ildag/C2z_oh2Cr~UDbH3SiO6XjA
2021-10-15T23:49:58
CC-MAIN-2021-43
1634323583087.95
[]
docs.teradata.com
TestCase is the base class for all codeception unit tests. public array|string $appConfig = '@tests/codeception/config/unit.php'.
https://docs.w3cub.com/yii~2.0/yii-codeception-testcase
2021-10-15T23:14:38
CC-MAIN-2021-43
1634323583087.95
[]
docs.w3cub.com
CHT Applications > Reference > translations/ Localization: Localized labels for CHT applications Apps built with CHT Core are localized so that users can use it in the language of their choice. It is currently available in English, French, Hindi, Nepali, Spanish, and Swahili. The goal of this doc is to help our team manage these and future translations. Like the rest of our code, the translation files live in our GitHub repo. These translation files are properties files, which are a series of keys and their corresponding values. We use the English file as our default, and as such it contains the entire set of keys. If any key is missing from another language file, the English value is used. In order to collaboratively edit the translations we use POEditor.com. Translators can be given access to specific languages so that we can more effectively edit language text to be included in CHT Core. Once the text is ready it can be exported from POEditor to GitHub and included in the next release of our app. Note that "keys" in .properties files are referred to as terms in POEditor. New languages must be added and configured in several places: LOCAL_NAME_MAP in api. Use the language code for the key, and the local name followed by the English name for the language in brackets, e.g.: "fr: 'Français (French)'". In order to trace the addition of new terms and also updates to existing translations, the default translation file (messages-en.properties) must be updated directly. Our GitHub repo provides a command line tool (CLI) to import updates into the POEditor app. If you don't have an API token, please contact a Medic developer. Please do not disclose this API token to anyone else. All text in the app is internationalised. <h3 translate>date.incorrect.title</h3>. Because help pages are too large to manage easily through the standard translation mechanism, and we want to include lots of markup, help pages are translated by providing md documents for each language. This isn't yet up and running so ask for help. Much of the app is configurable (e.g. forms and schedules). Because the specifics of the configuration aren't known during development time these can't be provided via messages. Instead we allow configurers to provide a map of locale to value for each translated property. Then use the translateFrom filter to translate from the configured map using the user's language. To be done only by updating messages-en.properties, importing to POEditor through the CLI tool and updating the other language translations through the POEditor app. To be done only by updating messages-en.properties and importing to POEditor through the CLI tool. To be done only by exporting all translations through the CLI tool. If you don't have an API token, please contact a Medic developer. Please do not disclose this API token to anyone else.
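Since any key missing from a language file silently falls back to the English value, it can be handy to diff a language file against messages-en.properties. A small sketch is shown below; the simplistic parser ignores .properties escaping and multi-line values, and the Swahili file name should be adjusted to whichever language file you care about.

```python
# List translation keys present in messages-en.properties but absent from a language file.
def load_keys(path):
    keys = set()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without a key=value pair.
            if not line or line.startswith(("#", "!")) or "=" not in line:
                continue
            keys.add(line.split("=", 1)[0].strip())
    return keys

english = load_keys("messages-en.properties")
other = load_keys("messages-sw.properties")  # adjust to the language being checked

for key in sorted(english - other):
    print("missing (falls back to English):", key)
```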
https://docs.communityhealthtoolkit.org/core/overview/translations/
2021-10-16T00:16:46
CC-MAIN-2021-43
1634323583087.95
[]
docs.communityhealthtoolkit.org
Analysis and Data Visualization - New KPI template – you can now combine up to 10 KPIs into one report, and use the calculated fields functionality of the Pivot visualization to calculate measures based on multiple KPIs (e.g. calculate CTR by dividing clicks by impressions). Note – changing the report resets calculated columns in the visualization. - Publications (email reports) now support all visualizations – send out any type of visualization as an image in your publication emails. Normal tables (not pivot) are still sent as text and not images. - ‘Pretty’ Aliases – you can now use spaces in CQL aliases by wrapping them in quotes. This allows you to create nicer-looking reports with column titles such as “Number of Users” instead of “Number_of_users”.
https://docs.cooladata.com/release-july-2016/
2021-10-15T22:43:28
CC-MAIN-2021-43
1634323583087.95
[]
docs.cooladata.com
. frevvo.. this account provides to login to your tenant as a tenant admin in order to fix your configuration issue. The built-in admin is able to access the.. The following Security Manager changes can be made by clicking the Change button and making a selection from the dropdown. Cloud customers should contact [email protected] to initiate the procedure. Click the button to see details about a field. Changes to the Business Calendar can be made in the Business Calendar section: Changes to the HTTP Authentication Credentials can be made by expanding the section: Make changes to the SharePoint Connector in this section. Click Submit to save your changes. The message "Tenant updated successfully" will display. To setup your tenant, first login to your new tenant as the tenant admin. The tenant admin can add users to the tenant. You need to add at least one designer user in order to begin creating forms. User Authentication Concurrent Users
https://docs.frevvo.com/d/plugins/viewsource/viewpagesrc.action?pageId=22446303
2021-10-15T23:24:28
CC-MAIN-2021-43
1634323583087.95
[]
docs.frevvo.com
Available Gutenberg Blocks Note: In order to utilize the Gutenberg integration with MemberPress, you must be using WordPress 5.2.1 or higher and MemberPress 1.4.7 or higher. Overview How to find MemberPress Blocks First you will want to go to your WordPress dashboard and click on either page or post. Then click "Add new". After you have entered in a title, you will click below the title to where you have the option to create a new block. Click the + sign and scroll down until you find MemberPress at the bottom. Next you will select which type of block you would like to include in the page. There are 4 different blocks currently which are Login Form, Account Form, Registration, and Protected Content. Examples of the 4 Different Available Gutenberg Blocks for MemberPress Login Form Block: By clicking on the MemberPress Login Form Block option, the page/post will automatically have the MemberPress login form added anywhere on the page you decide. You will have an option to use the "MemberPress login redirect URL if you would like the login form to go to the login redirect you set in MemberPress > Settings > Account tab. Account Form Block: If you select the MemberPress Account Form Block, You can easily add the account information to any page or post. Example of the account page after adding it to a post with a Gutenberg/MemberPress block. Registration Block: With this block feature, you can easily add any registration form to any page or post. You can select any membership registration form by clicking the dropdown menu. The registration page will then be visible on your page or post wherever you decide to insert the block. Below is an example post. Protected Content Block: By clicking on the MemberPress Protected Content Block, you can insert blocks of protected content anywhere on the page or post. You can add content like images, more text, audio etc. by clicking on the 3 dots and selecting where you would like to insert the content. You will then click the + sign that appears to decide what type of media/text you would like to add to the block. Here is an example of adding an image to the block that will be protected. Finally be sure to add an Access Rule to the block. We recommend creating a Partial Rule as it is easier to use over various block types. There is also an if Allowed option to hide or show the content to authorized members. When it is set to Show, only authorized members will be allowed to see the content. If it is set to Hide, the content will be hidden from authorized members. You also have the option to have a custom message for each individual block if you wish. Under the "Unauthorized Access" dropdown when clicking on the block. You can choose to "Hide Only" , "Show Message", "Show Login Form", or "Show Login Form and Message" alt=""> Below is a quick demo video showing how to add the various blocks from MemberPress Is this not working how you think it should even after following the instructions in the screenshots? Feel free to send us a Support Ticket!
https://docs.memberpress.com/article/277-available-gutenberg-blocks
2021-10-15T22:45:12
CC-MAIN-2021-43
1634323583087.95
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/5f36d5bb042863444aa05af4/file-yf0pAHSh2Y.png', None], dtype=object) ]
docs.memberpress.com
Shipping Methods - especially integrated API-based shipping methods - require that your products have set values for dimensions and weights. In this help doc, we'll cover how units of measurement are managed within your QPilot Site and how they can be used as conditions for Shipping Rates. Managing Units of Measurement in QPilot You can define default units of measurement for each of your QPilot Sites by editing your site's settings. Measurement Settings Units of Measurement at the Product Level QPilot synchronizes your Product Data, including each product's weight and dimensions. If you have defined measurements at a product level in your connected QPilot site, the product measurements will override the default measurements set in QPilot. Using Multiple Units of Measurement in QPilot Your QPilot Site can convert multiple units of measurement. This means that you can define units of measurement on QPilot Shipping Rates without needing to match them to your site's default units of measurement and they will still be calculated and applied correctly. Defining Units of Measurement in QPilot Shipping Rates You can define the unit of measurement when adding a Weight Restriction to a Shipping Rate.
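Conceptually, comparing a product weight against a Shipping Rate restriction defined in a different unit only requires normalizing both sides to a common unit. The sketch below is illustrative only; the conversion table and function names are not QPilot internals.

```python
# Normalize weights to kilograms before comparing against a rate restriction.
TO_KG = {"kg": 1.0, "g": 0.001, "lb": 0.45359237, "oz": 0.028349523125}

def to_kg(value, unit):
    return value * TO_KG[unit]

def within_weight_restriction(product_weight, product_unit, max_weight, max_unit):
    """True if the product weight does not exceed the restriction, regardless of units."""
    return to_kg(product_weight, product_unit) <= to_kg(max_weight, max_unit)

# A 3 lb product checked against a 2 kg restriction (3 lb ≈ 1.36 kg):
print(within_weight_restriction(3, "lb", 2, "kg"))  # True
```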
https://docs.qpilot.cloud/docs/units-of-measurement
2021-10-15T22:31:52
CC-MAIN-2021-43
1634323583087.95
[array(['https://files.readme.io/164872b-measurementsitesettings.gif', 'measurementsitesettings.gif Measurement Settings'], dtype=object) array(['https://files.readme.io/164872b-measurementsitesettings.gif', 'Click to close... Measurement Settings'], dtype=object) ]
docs.qpilot.cloud
Limits¶ Overview¶ Limit Group requests can be used to set and retrieve information about one or many Limit Groups. Limit Group requests support GET, PUT, DELETE request types. For more about these request types and their uses see the Request Formats and Responses documentation. Requests and Responses¶ List of possible requests for Limit Groups. All PUT and POST requests can return a 400 Bad Request Error message if no message body is passed, or if no command key is present in the message body. All PUT and POST requests may also return a 500 Internal Server Error error message if the command key in the message body contained an invalid command. Get Limit Group Names¶ Gets the names of all Limit Groups in the Repository. Get Limit Groups¶ Gets the Limit Groups for the provided Limit Group names. Get All Limit Groups¶ Gets the names of all Limit Groups in the Repository. Set Limit Group¶ Sets the Limit, Worker List, Allow List Flag, Release Percentage and/or Excluded Workers for an existing Limit Group, or creates a new Limit Group with the provided properties. Allow list flag is boolean Save Limit Group¶ Updates a Limit Group using a JSON object containing all the Limit Group information. Reset Limit Group¶ Resets the counts for a Limit Group. Delete Limit Groups¶ Deletes the Limit Groups for the provided Limit Group names. Limit Group Property Values¶ Values for some Limit Group properties are represented by numbers. Those properties and their possible values are listed below. - Type (LimitGroupType) 0 = General 1 = JobSpecific 2 = MachineSpecific - StubLevel (currently not used) 0 = Slave 1 = Task 2 = Machine
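A hedged sketch of issuing these Limit Group requests from Python with the requests library follows. The Web Service host/port and the resource path are assumptions to be adjusted to your own Deadline Web Service configuration; the "command" key in the PUT body is the part required by the documentation above, while the command name and property keys shown are illustrative.

```python
import requests

BASE_URL = "http://localhost:8082/api/limitgroups"  # assumed host, port, and path

# GET: fetch Limit Group information (see "Get Limit Groups" above).
resp = requests.get(BASE_URL)
resp.raise_for_status()
print(resp.json())

# PUT: the message body must contain a "command" key, otherwise the server
# answers 400 Bad Request (or 500 Internal Server Error for an unknown command).
payload = {
    "command": "setlimitgroup",   # illustrative command name
    "Name": "nuke_license",       # illustrative Limit Group properties
    "Limit": 10,
    "ReleasePercentage": 90,
}
resp = requests.put(BASE_URL, json=payload)
resp.raise_for_status()
```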
https://docs.thinkboxsoftware.com/products/deadline/10.1/1_User%20Manual/manual/rest-limits.html
2021-10-16T00:22:50
CC-MAIN-2021-43
1634323583087.95
[]
docs.thinkboxsoftware.com
Settings Overview for Settings > Taxes > Tax Rates This is the settings overview for the GetPaid > Settings > Taxes > Tax Rates page. This is where you can specify tax rates for specific regions. Tax Rates - Add New Tax Rate - Use this button to add a new custom tax rate for all invoices of a given region. - Reset Tax Rates - Use this button to add EU VAT tax rates. VAT Rate for EU Member States You can enter the VAT rate to be charged for EU member states. You can also edit the rates for each member state individually using the Reset Tax Rates button.
https://docs.wpgetpaid.com/article/388-settings-overview-for-settings-taxes-tax-rates
2021-10-15T22:56:18
CC-MAIN-2021-43
1634323583087.95
[]
docs.wpgetpaid.com
lockmeter Package¶ lockmeter Module¶ Lockstat is the basic tool used to control the kernel’s Lockmeter functionality: e.g., turning the kernel’s data gathering on or off, and retrieving that data from the kernel so that Lockstat can massage it and produce printed reports. See for details. NOTE: if you get compile errors from config.h, referring you to a FAQ, you might need to do ‘cat < /dev/null > /usr/include/linux/config.h’. But read the FAQ first.
https://autotest.readthedocs.io/en/latest/api/autotest.client.profilers.lockmeter.html
2019-04-18T14:26:48
CC-MAIN-2019-18
1555578517682.16
[]
autotest.readthedocs.io
Troubleshooting Neutron This page will list all gotchas and frequently asked questions pertaining to networking & neutron with Platform9 managed Openstack Host Configuration & troubleshooting Gotcha: Restarting networking on CentOS (service network restart or similar) will disconnect your virtual machines from the network (Including Floating IPs/Elastic IPs). To fix this, a "service network restart" should be followed by restart of Platform9 openstack services: "service pf9-neutron-ovs-agent restart" , "service pf9-neutron-l3-agent restart" and "service pf9-neutron-dhcp-agent restart". Note that the l3-agent and dhcp-agent will be present on network node/host in case of a Non-DVR setup. L3-agent will be present on all nodes/hosts in case of DVR. Gotcha: Some info on network namespaces: - Run "ip netns" to list the network namespaces on your host - Run "ip netns exec " to run a command within the network namespace. - Namespaces in neutron are named as "snat- ", "qrouter- ", "dhcp- ". Gotcha: Check your external network reachability by initiating a "ping" to the IP present in your SNAT namespace. You can get the external IP assigned to your router either by looking at the router ports through the UI or by running "ifconfig -a / ip a" within the snat network namespace. Also try pinging the external gateway from within the SNAT network namespace. Security Groups Gotcha: Security groups in Neutron are designed as "allow" rules. All protocols and ports are by default in "Deny" mode. Create a new security group to add Inbound & Outbound allow rules. Rules are fine grained to CIDR (Specific IP can be used by specifying mask of 32), protocols and ports. Gotcha: Default security group is created by Neutron to ensure that there is atleast some connectivity to the instances if spawned without defining a security group explicitly. The Default security group allows "All outbound" from within the VM only. Gotcha: Security groups are scoped to a particular tenant. So if you need to create security groups in multiple tenants, you will need to create a new one for each tenant. This also means that each tenant will get its own "default" security group too.
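The namespace checks described above are easy to wrap in a small script when triaging several hosts. Here is a sketch using the same ip netns commands mentioned in the text; the router ID and external gateway IP are placeholders to substitute from your own environment, and the script must run as root on the network node.

```python
import subprocess

ROUTER_ID = "0000aaaa-bbbb-cccc-dddd-eeeeffff0000"  # placeholder router UUID
EXTERNAL_GATEWAY = "203.0.113.1"                    # placeholder external gateway IP

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True)

# List the network namespaces on the host.
print(run(["ip", "netns"]).stdout)

# Ping the external gateway from inside the SNAT namespace (non-DVR network node).
snat_ns = f"snat-{ROUTER_ID}"
result = run(["ip", "netns", "exec", snat_ns, "ping", "-c", "3", EXTERNAL_GATEWAY])
print("external reachability OK" if result.returncode == 0 else result.stderr)
```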
https://docs.platform9.com/support/networking-neutron-faqs/
2019-04-18T15:09:21
CC-MAIN-2019-18
1555578517682.16
[]
docs.platform9.com
Getting Started With Your IoT Device

The Buddy Platform is custom designed to handle large volumes of data from any IoT (Internet of Things) connected device. Buddy APIs are based on HTTP/REST and can be easily called from devices with basic HTTP connectivity. This walkthrough assumes your device has a library to make HTTP/HTTPS calls, and the code samples will be written using Javascript, but can easily be adapted to any platform.

### Features and Scenarios

With just a few lines of code, the Buddy Platform provides a scalable, easy to use device backend that offers:

* Secure, scalable storage for device data, in a geographical location of your choosing
* Access to a broad set of platform functionality including device registration, binary upload/download, and metrics tracking
* Real-time visualizations of device usage statistics such as:
  * How many unique devices are deployed and at what rate?
  * Where are they?
  * How often/long are the devices being used?
  * What SKU/Model mix is being deployed?
  * What are failure/feature use rates?
* An easy way to make data usable to your business, ranging from a live flatscreen dashboard, to direct import of device telemetry

(Image: **An example of a Buddy-powered Dashboard, available for any app**)

### Types of Data in Buddy

In the Buddy Platform, there are a few different kinds of data that are handled by the API. Knowing these types will help you best decide what is right for your application. All forms of data are stored in the same geographical region as the application itself.

* **Object Data**: Data that is associated with a Buddy object such as a User, a Checkin, or a Picture. This data is structured and consistent. For example, each of those object types has Location, Tag, Created Date, and Last Modified Date properties. Pictures have a "caption" property, and Users have properties like "email", "dateOfBirth", and "gender". Objects can be created, searched, updated, or deleted.

* **Object Metadata**: Arbitrary data (key-value pairs) that can be attached to any object. This allows you to extend the object schema or store application-specific data in a way that is easy to work with.

* **Metrics Events**: Metrics are markers supplied by your application to indicate when something of interest has happened such as "Button_Pressed" or "Firmware_Updated". These are arbitrary keys, and can optionally have values attached, and can also compute event duration, such as the amount of time a user is on a screen or using a feature. This data is then aggregated and can be viewed via the Buddy Dashboard, but can not be accessed by the API.

* **Telemetry Data**: Telemetry data is designed for large-scale data in longer-term data warehousing. Telemetry data is not accessible via the API or the Buddy Dashboard, but is available for custom analysis when desired.

### Integration Steps

Imagine an Internet-connected thermostat. This thermostat connects to the user's home wi-fi for the purpose of allowing the user to manage the thermostat via their mobile phone both in the home and remotely. This device is capable of generating useful data, but needs an endpoint to securely send it. Buddy provides that.

#### Creating the Buddy App

The Buddy App is the secure, sandboxed backend for the device data. Just visit the Buddy Dashboard, create an account, then create an app. When you create this app, you'll be offered a list of geographical regions in which the app can be created. When you create an app in a region _all of the data for that application is stored geographically in that region_.

Once you've created the app, get the application key and ID, which will look something like:

    Application ID: bbbbbc.aaaabbbbcccc
    Application Key: 0000000-0000-0000-0000-000000000

#### Basic Functionality Overview

At its simplest, you'll need to communicate with Buddy for the following:

1. Authenticate the device, and its unique identifier, with the Buddy backend.
2. Configure a [telemetry configuration](/docs/2/IoT%20Telemetry#ConfigureTelemetry) to tell Buddy about the data you'll be sending. This only needs to be done once, but can be modified at any time, even remotely.
3. [Send telemetry](/docs/2/IoT%20Telemetry#AddTelemetryData) data to Buddy.

We'll walk through the steps below using a simple (and imaginary!) Javascript-based HTTP API. Please consider the code below as for demonstration/understanding purposes only. You would need to map these samples to the networking API supported by your specific device.

#### Authenticating the Device

The first thing you'll need to do is use your AppID/AppKey to register the device with Buddy. This registration will return a Device Access Token that will need to be saved for further calls to Buddy.

    function registerDevice() {
        httpApi.makeRequest({
            method: 'POST',
            url: '',
            headers: {
                contentType: 'application/json',
                accept: 'application/json'
            },
            body: {
                appid: myAppId,
                appkey: myAppKey,
                platform: myDeviceType,
                model: myDeviceModelOrSKU,
                uniqueId: myDeviceSerialNumber,
                osVersion: myFirmwareVersion
            },
            success: function(response) {
                // TODO: error handling!
                accessToken = response.result.accessToken;
                accessTokenExpires = response.result.accessTokenExpires;

                // if we receive a service root, use that, otherwise use the default.
                apiUrlRoot = response.result.serviceRoot || '';
            }
        });
    }

If successful, this call returns us:

* An **accessToken** which allows us to call other Buddy APIs.
* A **serviceRoot** which is the URL root to be used for subsequent calls.

Your device needs to save both of these values for subsequent calls, so these values should be cached.

#### Configuring Telemetry

Once the device has been authenticated, ensure that your telemetry configuration has been registered. Telemetry configuration tells the Buddy backend how to process the data you send it.

In this example, let's assume our thermostat sends back the following data points, as JSON:

* The current temperature setting
* What sort of schedule program it is running
* The percentage of time it's spent heating versus cooling

For example, suppose the data looks something like this:

    {
        current_temp: 22,
        customer_id: 131241241,
        program_type: 'manual',
        color_theme: 'coolblue',
        statistics: {
            heating: 72,
            cooling: 12,
            idle: 16
        }
    }

Devices will generate data that has different impacts and value:

* Data we want to store safely for later analysis
* Data that we want to visualize in real time
* Data that contains customer information we _do not_ want to store with the other data.

It's up to your app how often you send data to Buddy, but Buddy helps you handle all three of these types of data.

The **telemetry configuration** allows us to choose how we want to store this. You can see all of the options [in the documentation](/docs/IoT%20Telemetry#ConfigureTelemetry), but for this example, we'll set this basic configuration as follows:

* `filterKeys` describes data that Buddy should remove from the telemetry before persisting
* `metrics` describes a transform that allows us to convert telemetry data into [metrics events](/docs/2/Metrics), which can be displayed in near realtime.

    function setupTelemetryConfig() {
        httpApi.makeRequest({
            method: 'PUT',

            // remove customer ID.
            filterKeys: ['customer_id'],

            // promote current_temp and heating to metrics for
            // realtime visualization
            metrics: {
                current_temp: 'current_temp',
                heating_rate: 'statistics.heating'
            }
        });
    }

Remember, telemetry configuration only needs to be called once before the first data is sent. You don't need to call it each time you send data. And, if you decide you want to start handling data differently at any time, you can modify this, and subsequent data will be processed in accordance with the new configuration.

#### Sending Data

OK now it's time to send data! Most devices send data on some regular schedule, but that's up to you. In any case, sending data is easy, just package the data up and send it to the endpoint you configured above:

    function sendTelemetryData() {
        var deviceData = getMyDeviceData();
        httpApi.makeRequest({
            method: 'POST',
            data: deviceData
        });
    }

Your data has now been stored securely with Buddy, and the appropriate metrics events have been created.

#### Device Management

Most devices also use the Buddy APIs for device management functions. Typically, the device flow is as follows:

1. Device powers on
2. If connected, device checks Buddy metadata for any configuration updates
3. Device makes any requested configuration changes
4. Device caches configuration data

This configuration can be easily managed remotely via the Buddy API. Typically, devices use [metadata](/docs/2/Metadata) to store and retrieve configuration. Metadata is a key-value store for arbitrary data. With our thermostat example, imagine some of the configuration points are:

* How often to send telemetry data
* How long to remain on after user interaction before going to sleep

An example configuration might look like this:

    {
        telemetry: {
            sendIntervalSeconds: 120,
            includeSettings: true
        },
        sleepAfterSeconds: 30,
        firmware: {
            version: '1.1',
            // see Buddy Blob APIs for information on how to use them
            // to manage/distribute firmware updates
            firmwareBlobId: 'bbbbbc.rcdbvlNmjKbj'
        }
    }

Now, we can store this configuration in Buddy by setting it as a metadata value on the `app` object itself.

In the following example, we're doing something slightly more advanced, which is to save a configuration specific to a device model name. This allows multiple configurations to exist side-by-side in the same app. Note that this code would typically be called from outside of the device, but is written in the same format as the other code samples for clarity and simplicity:

    function saveDeviceConfigByModel(modelName, config) {
        httpApi.makeRequest({
            method: 'PUT',
            // note we use the model name as part of the key
            url: apiUrlRoot + '/metadata/app/' + encodeUriComponent("config_" + modelName),
            headers: {
                contentType: 'application/json',
                accept: 'application/json',
                // here, we use the accessToken we received in device registration above
                authorization: 'Buddy ' + accessToken
            },
            body: {
                value: config
            }
        });
    }

From within the device, reading the configuration would look like this:

    function readDeviceConfig() {
        var configName = "config_" + myDeviceModelOrSKU;
        if (NoNetworkConnection()) {
            // use cached or default config;
            return;
        }
        httpApi.makeRequest({
            method: 'GET',
            url: apiUrlRoot + '/metadata/app/' + encodeUriComponent(configName),
            headers: {
                accept: 'application/json',
                // here, we use the accessToken we received in device registration above
                authorization: 'Buddy ' + accessToken
            },
            success: function(data) {
                if (data.result) {
                    updateAndCacheLocalConfig(data.result.value);
                }
            }
        });
    }

By defining settings in this way, you have an easy way to make modifications to device behaviors without having to update firmware or rebuild devices.

#### Tracking Activity with Metrics

There may be other information that you are interested in capturing in real time, such as when a button is pressed or how often a feature is used. The following code sample shows how to record metrics events that correspond to usage of different buttons on our hypothetical Internet-connected thermostat:

    function recordButtonPress(buttonName) {
        httpApi.makeRequest({
            method: 'POST',
            url: apiUrlRoot + '/metrics/events/button_press',
            headers: {
                accept: 'application/json',
                contentType: 'application/json',
                // here, we use the accessToken we received in device registration above
                authorization: 'Buddy ' + accessToken
            },
            body: {
                value: {button_name: buttonName}
            }
        });
    }

These metrics would then be available for analysis in the Buddy Dashboard, including breakdowns by device type, time, or geography.

(Image: **Example of metrics data**)

### Conclusion

With Buddy, a backend for your connected device is just a few lines of code away.

To recap, with just a few lines of code, you now have a backend that:

* Scales automatically with your traffic
* Keeps your data securely, and privately, stored in a geographical location of your choosing
* Allows real-time visualization of how many devices are deployed, where they are, and when they are being used
* Provides a facility for long term storage of telemetry data and retrieval of the data for analytics
* Allows for remote management of device configuration as well as facilities for firmware updates

For any further questions, or to get started, just [contact us](mailto:support@buddy.com)!
http://docs.buddy.com/docs/internet-connected-device-tutorial
2019-04-18T14:20:57
CC-MAIN-2019-18
1555578517682.16
[]
docs.buddy.com
Metacloud Capacity Planning Now that your cloud is up and running, your team receives an emailed weekly capacity report from [email protected]. Your capacity report contains the sections described below to help you interpret the information and plan for adding capacity. Installation Details This section contains the information for your installation, the size of it, as well as how many VCPUs are reserved. Typically, your reservations will not exceed 70-75%, watch this section for large percentage increases in instance launches month-over-month. If you need to add capacity, you may want to add an availability zone (AZ). Each AZ is independent from other zones, with new API endpoints and a different Dashboard URL. Availability zones give you physical isolation and redundancy of resources, with separate power supplies and networking equipment, so that users can achieve high availability if one zone becomes unavailable. Storage Usage (only with external storage) When you have an external storage solution, your capacity report includes a Storage Usage section. It indicates the size, how much is used and how much is available. These are labeled by file system. When physical storage levels hit pre-defined thresholds (over 70% raw capacity), Metacloud Support opens a request to address the issue. Volume Allocation (only with external storage) The Volume Allocation section indicates when your storage is over subscribed, by how much, and indicates available volumes. You may see a larger value in the Raw Storage column than in the Available column. For example, with 2.0 TB of raw storage, you can have 3.0 TB available when oversubscribed by 1.5 times. Flavor Types in Use Even with external storage, your flavors can have ephemeral storage or external storage. This section indicates which flavors are in use, along with name, memory amount, VCPUs, how much storage the root takes up, and how much ephemeral storage is available on that flavor. Instance Counts by Flavor By tracking which instance flavors are most launched, you can trim down the number of flavors you offer. This section of the report gives percentage increases or decreases as well as showing when there was no change. Watch for variances based on user needs, seasonal needs, and which guest operating systems are required. Memory Capacity This section displays Warnings for instances that cannot be launched until the RAM or disk space is available to those instances. Detailed Node Breakdown The following list, at the bottom of Node Breakdown section, indicates what is included in each column. -and reserved_host_memory_mb - ARAM - Total allocated RAM on the host - FRAM - Total free RAM for scheduling on the host - TDSK - Total disk on the host - UDSK - Total usable Disk on the host accounts for disk_allocation_ratio - ADSK - Allocated disk on the host - FDSK - Free disk on the host Some of the reported values are calculated through settings in the Nova configuration file, set by the Metacloud engineering team. The ram_allocation_ratio, cpu_allocation_ratio, and disk_allocation_ratio settings configure a virtual to physical allocation ratio when scheduling instance launches. The reserved_host_memory_mb value is 512 MB by default, for the effective memory available to the virtual machines. Contact Metacloud Support to open a request to further tune those values for your environment. The bottom of the report email also indicates instances with older container versions on a particular node. You must reboot those nodes and hypervisors. 
Reports in the Dashboard You can access and download your capacity reports using the Dashboard. - Log in to the Dashboard and select the Admin tab. - Select Reports to display the Reports panel. - Select the Download Reports tab. Locate the report you want and click Download. A text file named YYYY-MM-DD-capacity-report.txt (as shown below) downloads to your local computer. AZ_NAME Capacity Planning Report: YYYY-MM-DD HH:MM:Z+00:00 Installation Details +-----------------------+--------+ | Field | Value | +-----------------------+--------+ | Total Systems Managed | 13 | | Total Sockets | 16 | | Total Cores | 152 | | Total Instances | 40 | | CPU Allocation Ratio | 6.0 | | Total VCPUs | 1920.0 | | Total VCPUs Reserved | 81 | | Total VCPUs Free | 1839.0 | +-----------------------+--------+ Global Ceph Usage +----------+-----------+--------------+------------+ | Size(GB) | Avail(GB) | Raw Used(GB) | % Raw Used | +----------+-----------+--------------+------------+ | 446904.4 | 446036.5 | 867.9 | 0.19 | +----------+-----------+--------------+------------+ Ceph Pool Allocation +--------------+----------+--------+---------------+---------+ | Name | Used(GB) | Used % | Max Avail(GB) | Objects | +--------------+----------+--------+---------------+---------+ | nova-images1 | 283.6 | 0.19 | 148634.9 | 39816 | +--------------+----------+--------+---------------+---------+ Flavor Types in Use +--------------+------------+-------+----------+---------------+ | Name | Memory(MB) | VCPUs | Root(GB) | Ephemeral(GB) | +--------------+------------+-------+----------+---------------+ | m1.large | 8192.0 | 4 | 80.0 | 0.0 | | m1.medium | 4096.0 | 2 | 40.0 | 0.0 | | m1.small | 2048.0 | 1 | 20.0 | 0.0 | | m1.tiny | 512.0 | 1 | 1.0 | 0.0 | | tdub-jumpbox | 2048.0 | 2 | 160.0 | 10.0 | +--------------+------------+-------+----------+---------------+ Instance Counts by Flavor +--------------+-------+-------+----------+ | Name | Count | VCPUs | Root(GB) | +--------------+-------+-------+----------+ | m1.large | 6 | 4 | 80.0 | | m1.medium | 21 | 2 | 40.0 | | m1.small | 5 | 1 | 20.0 | | m1.tiny | 7 | 1 | 1.0 | | tdub-jumpbox | 1 | 2 | 160.0 | +--------------+-------+-------+----------+ Memory Capacity +------------------------+--------+ | Field | Value | +------------------------+--------+ | Total RAM(GB) | 2557.4 | | Total Usable RAM(GB) | 2410.5 | | Total RAM Reserved(GB) | 151 | | Total RAM Free(GB) | 2259.5 | +------------------------+--------+ Detailed Node Breakdown +------------------------------+-------+------+-------+----------+----------+----------+----------+ | Hypervisor | TCPU | ACPU | FCPU | TRAM(GB) | URAM(GB) | ARAM(GB) | FRAM(GB) | +------------------------------+-------+------+-------+----------+----------+----------+----------+ | mhv1.stage1.mc.metacloud.in | 192.0 | 12 | 180.0 | 255.7 | 241.1 | 21 | 220.1 | | mhv10.stage1.mc.metacloud.in | 192.0 | 2 | 190.0 | 255.7 | 241.1 | 4 | 237.1 | | mhv2.stage1.mc.metacloud.in | 192.0 | 8 | 184.0 | 255.7 | 241.1 | 16 | 225.1 | | mhv3.stage1.mc.metacloud.in | 192.0 | 9 | 183.0 | 255.7 | 241.1 | 16.5 | 224.6 | | mhv4.stage1.mc.metacloud.in | 192.0 | 15 | 177.0 | 255.7 | 241.1 | 28 | 213.1 | | mhv5.stage1.mc.metacloud.in | 192.0 | 8 | 184.0 | 255.7 | 241.1 | 14.5 | 226.6 | | mhv6.stage1.mc.metacloud.in | 192.0 | 12 | 180.0 | 255.7 | 241.1 | 22.5 | 218.6 | | mhv7.stage1.mc.metacloud.in | 192.0 | 9 | 183.0 | 255.7 | 241.1 | 16.5 | 224.6 | | mhv8.stage1.mc.metacloud.in | 192.0 | 4 | 188.0 | 255.7 | 241.1 | 8 | 233.1 | | mhv9.stage1.mc.metacloud.in | 192.0 | 2 | 190.0 | 
255.7 | 241.1 | 4 | 237.1 | +------------------------------+-------+------+-------+----------+----------+----------+----------+ Legend +-------+----------------------------------------------------------------------------------------------+ | Field | Value | +-------+----------------------------------------------------------------------------------------------+ | and reserved_host_memory_mb) | | ARAM | Total allocated RAM on the host | | FRAM | Total free RAM for scheduling on the host | +-------+----------------------------------------------------------------------------------------------+
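The usable-capacity numbers in the report come from the Nova allocation settings mentioned above (cpu_allocation_ratio, ram_allocation_ratio, reserved_host_memory_mb). Below is a small sketch of that arithmetic; the ratio of 6.0 and the 512 MB reservation are quoted from the report, while the host sizes and ram_allocation_ratio are invented for illustration.

```python
# Approximate how schedulable capacity is derived from physical host resources.
cpu_allocation_ratio = 6.0       # from the sample report above
reserved_host_memory_mb = 512    # default mentioned in the report legend
ram_allocation_ratio = 1.0       # illustrative; tuned per environment by Metacloud

physical_threads = 32            # example hypervisor
host_ram_mb = 262_144            # example: 256 GB of physical RAM

total_vcpus = physical_threads * cpu_allocation_ratio
usable_ram_mb = (host_ram_mb - reserved_host_memory_mb) * ram_allocation_ratio

print(f"TCPU (schedulable VCPUs): {total_vcpus:.0f}")
print(f"URAM (usable RAM, GB):    {usable_ram_mb / 1024:.1f}")
```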
http://docs.metacloud.com/4.6/admin-guide/capacity-planning/
2019-04-18T14:41:27
CC-MAIN-2019-18
1555578517682.16
[]
docs.metacloud.com
How To Customize Your NPS Survey "Edit Survey" button in the top right hand corner of the dashboard. This will bring you to a series of screens that will allow you to customize the look and feel of your survey. 3.: 4.. 5. Test It Out With A Live Preview When you advance to the final "Summary" screen after you select your audience targeting, you'll have an option to "Test it out". This will display a real preview on the page..
https://docs.appcues.com/article/356-how-to-customize-your-nps-survey
2019-04-18T14:28:20
CC-MAIN-2019-18
1555578517682.16
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5b69a2060428631d7a89ba97/file-1DVoCrODty.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5b69b7092c7d3a03f89d6f68/file-FeXaeHJnkL.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5b69a1fd0428631d7a89ba96/file-1y7GaqBeJ6.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5b69a3e02c7d3a03f89d6e63/file-oaJpxSMnxP.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5b69b7650428631d7a89bb8b/file-YFnYiD8dfF.png', None], dtype=object) ]
docs.appcues.com
Databricks ML Model Export Important: Databricks ML Model Export is deprecated in Databricks Runtime 5.3 and Databricks Runtime 5.3 ML, and will be removed in version 6.0 of both runtimes. Use MLeap for importing and exporting models instead. Databricks ML Model Export is used to export models and full ML pipelines from Apache Spark. These exported models and pipelines can be imported into other (Spark and non-Spark) platforms to do scoring and make predictions. Model Export is targeted at low-latency, lightweight ML-powered applications. With Model Export, you can: - Use an existing model deployment system - Achieve very low latency (milliseconds) - Use ML models and pipelines in custom deployments The scoring (a.k.a. inference) library takes JSON-encoded features.
{"id": 5923937, // any metadata
 "features": { // MLlib vector format: 0 for sparse vector, 1 for dense vector
 "type": 1, "values": [0.1, 1.3, 8.4, 4.2]}}
The result is also encoded in JSON.
{"id": 5923937, "prediction": 1.0}
- Exporting Apache Spark ML Models and Pipelines - Importing Models into Your Application - Versioning
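A small sketch of producing the request line and consuming the response line in the formats shown above follows. How these records are transported depends on where you deploy the exported model; the helper below is illustrative and is not part of the Databricks scoring library.

```python
import json

def encode_record(record_id, values):
    """Build one JSON-encoded feature record in the shape the scoring library expects."""
    return json.dumps({
        "id": record_id,                          # any metadata
        "features": {"type": 1,                   # 1 = dense MLlib vector (0 = sparse)
                     "values": values},
    })

request_line = encode_record(5923937, [0.1, 1.3, 8.4, 4.2])
print(request_line)

# The scoring result comes back as JSON as well:
response_line = '{"id": 5923937, "prediction": 1.0}'
print(json.loads(response_line)["prediction"])    # -> 1.0
```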
https://docs.databricks.com/spark/latest/mllib/model-export-import.html
2019-04-18T15:30:37
CC-MAIN-2019-18
1555578517682.16
[]
docs.databricks.com
How do I get a free teacher account? All teachers, including school, tertiary and pre-service teachers can get free accounts with access to all our resources for evaluation and professional development. This grants access to: - teacher notes and solutions to problems - free access to all courses and competitions - any professional development resources we publish. If you are not currently teaching at a school You can get free teacher access by registering an account and then emailing us. If you are currently teaching at a school You should register an account and then request to become a verified teacher. For more details on what becoming a verified teacher means, see What is teacher verification?
https://docs.groklearning.io/article/27-how-do-i-get-a-free-teacher-account
2019-04-18T15:21:05
CC-MAIN-2019-18
1555578517682.16
[]
docs.groklearning.io
Command to control the force calibrator.
#include <ForceCalibratorCommand.hpp>
Member documentation:
- Command to calibrate after the sampling process.
- Command to continue sampling. Note that the calibration needs to be started first.
- Command to start the calibration.
- If true, the outlier detector is active.
- Required minimal number of samples for the calibration.
- Number of samples that should be sampled for the calibration.
https://docs.leggedrobotics.com/local_guidance_doc/structrobot__utils_1_1_force_calibrator_command.html
2019-04-18T15:22:48
CC-MAIN-2019-18
1555578517682.16
[]
docs.leggedrobotics.com
After an AS group is disabled, the group will not automatically trigger any scaling actions, but any ongoing scaling action will continue. Scaling policies will not trigger any scaling actions. After you manually change the number of expected instances, no scaling action is triggered even though the number of actual instances does not equal the number of expected instances. The health check continues to check the health status of the instances but does not remove the instances.
https://docs.otc.t-systems.com/en-us/usermanual/as/as_faq_1201.html
2019-04-18T15:47:21
CC-MAIN-2019-18
1555578517682.16
[]
docs.otc.t-systems.com
". Standard CI 'Stages' A core concept in the Build and Test standards is the concept of "stages". Stages refer to various operations that are typically performed on a source code change as it makes its way from being initially written by a developer to being included in an official release. Stages can be 'run' which means that the actions defined for a given stage are performed. The following stages are currently defined: build-artifacts This stage defines how to build the source code into a set of user-consumable artifacts such as RPM packages or Container Images. This stage is run by the CI system when a build of the source code is needed to preform tests or it was either requested manually, build-artifacts-manual This stage defines how to manually build a project from a source TARBALL. It is used when official releases are composed via a manual process. check-patch This stage defines how to perform correctness, quality, functionality or regression checks on new code changes. The CI system run this stage to provide feedback on Gerrit patchs or GitHub pull requests. check-merged This stage is used to perform correctness, quality, functionality or regression checks on the main project source code branches. The CI system runs this stage after a patch is merged in Gerrit or commits are pushed to a branch in GitHub (E.g. via merging a pull request). poll-upstream-sources This stage is used for polling external data sources for information that is needed to perform automated source code updates. An example for such a polling process is when source code builds a container that is based on another container (With e.g. a 'FROM' in in a Dockerfile). The source code typically needs to specify a specific version of the base container so that builds are reproducible, but keeping that version up to date can be cumbersome for developers. The poll stage can be used to query for newer versions and automatically generate appropriate source code changes. The CI system run this stage periodically. This stage is also used in conjunction with the source code dependency functionality that is described below. The automation directory and STDCI configuration file By default, STDCI searches configurations and scripts to execute for each stage under a directory named automation/. This directory needs to be located in the root of your project. If you want to overwrite the default path for different scripts, you can do so by specifying your requirements in STDCI configuration file. For detailed overview and examples of STDCI configuration file, please refer to STDCI Configuration page. STDCI configuration file In order to build or test a project, the CI system requires some extra information. This information includes: -. To specify this information, a STDCI YAML configuration file needs to be placed in the project's root directory with content that resembles the following example: archs: x86_64 distros: - fc25 - fc26 - el7 release_branches: master: ovirt-master ovirt-ansible-1.0: ovirt-4.1 There are several options for the file name. They include: stdci.yaml, automation.yaml, seaci.yaml, ovirtci.yaml All options can be prefixed with . as well as the file suffix can be .yml. STDCI will first search non-hidden files with the order specified above (left to right), and with the .yaml suffix before .yml, then will search for hidden files in the same order. The file can contain the following parameters: - stages - Stages allows you to specify which STDCI stages you want to configure for your project. 
See Standard CI 'Stages' for more info. If the parameter is not specified, STDCI will look for execution scripts matching the default specification. See Attaching functionality to stages for more info.
- substages - Allows you to specify several parallel tasks for a single event. The tasks you specify in substages will be executed on different nodes.
- archs - The architectures on which to run tests and builds for the project (e.g. x86_64 or ppc64le). Multiple values can be given as a list. A single value can also be specified as a plain string. If the parameter is not specified, the project will be built and tested on x86_64.
- distros - The Linux distributions on which to run tests and builds for the project. The following values are currently supported:
  - el6 - For 'CentOS 6'
  - el7 - For 'CentOS 7'
  - fc26 - For 'Fedora 26'
  - fc27 - For 'Fedora 27'
  - fcraw - For 'Fedora rawhide'
  Multiple values can be given as a list. A single value can also be specified as a plain string. If the parameter is not specified, the project will be built and tested on 'CentOS 7'.
- release_branches - A mapping between project branches that should be built and released with oVirt, and the oVirt versions they will be released with. An oVirt version is specified as the name of the change queue that tests and prepares that version (e.g. ovirt-master or ovirt-4.1). A list of change queues can be given for a single branch; in that case, changes merged to it will be submitted to all specified change queues in parallel.
- script - The script section allows you to specify a custom script to run. If not specified, STDCI will search for a default script name that matches your stage: { stage }.sh.{ distro }. Note that if you specify a custom script name, the other complementary configurations should follow this name (.repos, .packages, ...). Refer to Attaching functionality to stages in this doc for more info.
All parameters in the file, as well as the whole file itself, are optional. If not specified, projects will be tested on 'CentOS 7' on 'x86_64' by default (as long as any standard stage scripts are included), and no release branches will be configured. For a detailed overview with examples, please refer to the STDCI Configuration doc.
Attaching functionality to stages
In order to specify what needs to be done in a given stage, one can either explicitly link a script file to a stage in the STDCI YAML file or add a script file with the name of that stage and the .sh extension in the default automation directory. For example, to specify what should be done when the 'check-patch' stage is run, create the following file:
automation/check-patch.sh
Despite the .sh extension, the script file can actually be written in any language as long as the right interpreter is specified in a "shebang" line at the beginning of the script.
Since it is sometimes necessary to do different things on different distributions, it is also possible to specify different scripts per distribution by adding a distribution suffix to the script file name. For example, to have one check-merged script for 'CentOS 7.x', a different script for 'Fedora 26' and a third fallback script for all other distributions, create the following three script files:
automation/check-merged.sh.el7
automation/check-merged.sh.fc26
automation/check-merged.sh
The script files can also be symbolic links, in case it is desired to place the actual script file in a different location or to provide the same functionality for a set of different distributions.
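As an illustration of the layout described above, here is a minimal, hypothetical automation/check-patch.sh. The shebang line selects the interpreter; the use of pytest and the exported-artifacts output path are assumptions made for the sake of the example (result collection is described later in this document under 'Collecting build and test results'):
#!/bin/bash -xe
# Hypothetical check-patch.sh: packages such as pytest would be declared in
# automation/check-patch.packages rather than installed here.
mkdir -p exported-artifacts
python -m pytest tests/ --junitxml=exported-artifacts/check-patch.junit.xml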
Script runtime environment
The scripts are run in isolated, minimal environments. A clone of the project source code is made available inside those environments, and the current working directory for the script is set to the root of the project source code tree. The clone includes the project's Git history, so the git command can be used to query for additional information such as committed changes and tag names. Runtime dependencies can be specified to make build tools and other resources available for the build and test scripts. The way to define those is described in the next chapter.
Declaring build and test dependencies
In order to provide reliable and reproducible test and build results, test and build stage scripts are typically run inside isolated, minimal environments. It is often the case that more software packages and other data are needed in order to perform a certain test or a given build process. It is possible for the stage script to include commands that obtain and install required software and tools, but this standard also specifies a way to declare requirements so that they can be provided automatically and efficiently while the environment for running the build or test script is being prepared.
This standard currently defines and supports several kinds of dependencies:
- Extra source code dependencies - A project can specify that it needs to be tested or built along with the source code of another repository. This can be used, for example, for projects that are mainly derived from the source of other (upstream) projects.
- Package dependencies - A project can specify additional packages it requires for running test or build processes.
- Package repository dependencies - A project can specify package repositories it needs to access in order to perform test and build processes or to install dependent software packages needed for those processes.
- Directory or file mounts - A project can specify that it needs to mount certain files or directories into its testing environment. This can be used to ensure certain cache files are preserved between different test runs or builds of the same project (typically the test environment is destroyed when a build or a test is done), or to gain access to certain system devices or services.
- Environment dependencies - A project can require environment variables to be configured inside its run-time environment. Environment variable values can be provided from multiple locations:
  - Plain: Used to configure variables with custom values.
  - Runtime environment: Used to provide environment variables from the outer environment (for example Jenkins variables such as $BUILD_ID).
  - Secrets and credentials: Used to provide auth tokens, SSH keys, etc. as environment variables.
Dependency definition files
Unless otherwise stated below, project dependencies are defined separately per-script, and can additionally be defined separately per-distribution. Project dependencies are specified via files that are placed in the same directory where the script is and take the following form:
<path-to-script>/<script-name>.<dependency-type>
For example, to define package dependencies for the 'automation/check-patch.sh' script, you place them in the following file:
automation/check-patch.packages
When specifying a per-distribution dependency, a distribution suffix needs to be added.
For example, to define mounts for the 'automation/check-merged.sh' script when it runs on el7, use the following file:
automation/check-merged.mounts.el7
As with script files, multiple files for different distributions can be created, files can be symbolic links, and the file without a distribution suffix is used as the fallback file for distributions where a more specific file was not created. There are no inheritance or inclusion mechanisms between different dependency files; only one file is used to declare dependencies for a given stage run on a given distribution.
Dependency caching
Systems that are based on these build and test standards can utilize caching of the build and test environments to improve performance. Therefore there is no guarantee that the test environment will always contain the latest available versions of required software packages, for example. If there is a need to guarantee installation of the latest version of a certain component, it is recommended to have the stage scripts perform the installation directly instead of relying on the dependency definition files. Doing it this way is almost guaranteed to have a performance impact on the oVirt CI system, for example, so care must be taken to use this technique only where absolutely needed.
Defining extra source code dependencies (AKA "Upstream Sources")
A project can define that source code from other source code repositories will be obtained and merged into its own source code before build or test stages are performed. A project can specify this by including an automation/upstream_sources.yaml file. The file format is as in the following example:
git:
  - url: git://gerrit.ovirt.org/jenkins.git
    commit: a4a34f0f126854137f82701bc24976b825d9d1ae
    branch: master
The git key is used as a placeholder for future functionality; currently only Git source code repositories are supported, but other kinds may be supported in the future. The key points to a list of one or more definitions which contain the following details:
- url - Specifies the URL of the repository from which to obtain the source code.
- commit - Specifies the checksum identifier of the source code commit to take from the specified source code repository.
- branch - Specifies the branch to which the source code commit belongs. This is used to provide automated updates to this file as specified below.
The way source code dependencies are provided is as follows: first, all the files from the repositories given in the definitions in upstream_sources.yaml are checked out in the order in which they are specified in the file, and then the project's source code repository is checked out on top of them. This means that if the same file exists in several repositories, it will be taken from the last specified one, while files from the project's own repository will override all other files.
One needs to specify the exact dependency source code commit to take source code from. This is needed to ensure that building or testing a specific commit of the project provides consistent results that are independent of changes done to dependency source code repositories. The downside of having to specify the exact commit to take from the dependency repository is that it can be cumbersome to maintain the upstream_sources.yaml file over time. Therefore an automated update mechanism exists for it.
The dependency source code repositories will be scanned in a scheduled manner, the latest commits of the specified branches will be detected, and source code patches including the required changes to the file will be created automatically and submitted for developer review. This semi-automated update functionality is done as part of the 'poll-upstream-sources' stage. The stage script is run after updates are made to the upstream_sources.yaml file and updated source code is collected; therefore it can be used to automatically check the results of the automated update process.
Only one upstream_sources.yaml file can be specified per project, therefore it is not possible to specify different source code dependencies for different stages or distributions.
Package dependencies
Package dependencies are specified in dependency definition files with the packages suffix. For example, to specify packages for the build-artifacts stage, create the following file:
automation/build-artifacts.packages
The definition files simply list distribution packages, one per line. Here is an example of the contents of a check-patch.packages.el6 file:
pyxdg
python-setuptools
python-ordereddict
python-requests
pytest
python-jinja2
python-pip
python-mock
python-paramiko
PyYAML
git
Note that the testing environment is very minimal by default, so even packages that are considered to be ubiquitous, such as git, need to be specified. Any of the distribution base packages can be asked for. In CentOS and RHEL, packages from EPEL are also made available. For obtaining packages from other repositories, these must be made available by defining them as repository dependencies.
Package repository dependencies
Package repository dependencies are specified in dependency definition files with the repos suffix. For example, to specify repositories for the build-artifacts stage running on CentOS 7, create the following file:
automation/build-artifacts.repos.el7
The package repository definition file can contain one or more lines of the following format:
[name,]url
where the optional name can be used to refer to the package repository via yum or dnf commands, and the url points to the actual URL of the repository. In oVirt's CI system the name will also be used to detect whether there is a local transactional mirror available for that repo and use it instead of using the repo directly over the internet. It is highly recommended to consult the list of CI mirrors and pick repository names and URLs from there. For more information about the CI transactional mirrors, see the dedicated document.
Directory or file mount dependencies
Directory and file mounts allow you to gain access to files and directories on the underlying testing host from your testing environment. One must be careful when using this feature, since it is easy to make tests unreliable while using it. Directory and file mounts are specified in dependency definition files with the *.mounts suffix. The files consist of one or more lines in the following format:
src_path[:dst_path]
where src_path is the path on the host to mount and dst_path is the path inside the testing environment. If dst_path is unspecified, the path inside the testing environment will be the same as the one on the host. If there is no file on the host at src_path, a new empty directory will be created at that location.
Environment dependencies
Environment dependencies allow you to configure a set of environment variables to be used by your build/test code. This mechanism can be used for:
- Setting custom values for variables.
- Binding variables from the runtime environment, such as Jenkins, to gain more information about your environment.
- Binding secrets and credentials.
Environment dependencies are specified with the *.environment.yaml suffix. The file is a list of mappings where every mapping specifies a single variable. Below is a syntax reference of all the available configurations in your environment.yaml.
Configure a custom value for a variable:
---
- name: 'MY_CUSTOM_VAR'
  value: 123
Will export an environment variable named $MY_CUSTOM_VAR with the value 123.
Bind a value from the run-time environment:
---
- name: 'MY_CUSTOM_VAR'
  valueFrom:
    runtimeEnv: 'BUILD_URL'
Will export an environment variable named $MY_CUSTOM_VAR with the value of $BUILD_URL from the outer environment (the environment where the build runs).
Bind a value from a secret key reference (credentials):
---
- name: 'MY_SECRET_USERNAME'
  valueFrom:
    secretKeyRef:
      name: 'my-secret'
      key: 'username'
- name: 'MY_SECRET_PASSWORD'
  valueFrom:
    secretKeyRef:
      name: 'my-secret'
      key: 'password'
Will export an environment variable named $MY_SECRET_USERNAME with the value of the key username under the secret named my-secret. Will also export an environment variable named $MY_SECRET_PASSWORD with the value of the key password under the same secret named my-secret.
Note: If your project requires an environment variable from a secret key reference (secretKeyRef) and you want to use mock_runner for running STDCI stages locally, you will need to write a local secrets file; see How to write STDCI secrets file.
Request for service access: If your project needs access to an external service, send an email to [email protected] with details of your project and the service you need access to. The CI team will take care of the registration to the service and you will be able to access it through environment.yaml.
Collecting build and test results
Test processes are not very interesting unless one can tell whether they succeeded or not. Build processes are equally uninteresting if one cannot obtain the resulting built artifacts. Systems that support these build and test standards use the success or failure return value of the stage script to determine whether running a stage succeeded or failed. If the build or test stages are run by a CI system, the system gathers up any files or directories placed in the exported-artifacts directory under the project's source code root directory and makes them available for download and inspection.
Specially treated files
A CI system can also provide special treatment to certain files if they are found in exported-artifacts, in order to provide richer output. Following is a list of files the oVirt CI system treats in a special way:
- RPM package files - If any *.rpm package files are found in exported-artifacts, the CI system generates yum metadata files so that the entire directory can be used as a yum repository, and hence so can any HTTP URL at which it is made accessible.
- HTML index file - If an index.html file is found in exported-artifacts, it is included in the CI system's job summary page.
- JUNIT XML report files - If any files with the *.junit.xml extension are found under exported-artifacts or in one of its subdirectories, those files are read as JUNIT test result XML files. The test results are then made available for viewing from the oVirt CI Jenkins server. Test results are also tracked over time, and changes can be tracked and analysed across builds.
- Findbugs XML reports - If any *.xml files are found in the exported-artifacts/findbugs directory, they are read as Findbugs result reports and made available for viewing via the oVirt CI Jenkins UI.
Collecting container images
While container images can be stored as plain files, it is typically not very efficient to do so; instead, containers are typically stored in a dedicated container storage. The convention for the handling of containers by the oVirt CI system is that when building containers, a project would leave them in the build host's container storage and tag them with the exported-artifacts tag. The CI system would then pick up the containers and make them available for use from a dedicated container repository. Instructions on how to access the uploaded container images will be displayed in the job results screen. The dedicated container registry is currently simply an account on DockerHub; this may be subject to change in the future.
Running Build and Test Stages
There are two major ways to run build and test stages:
- Run stages locally on a developer's machine
- Have stages run automatically by a CI system
Running build and test stages locally
Running build and test stages locally can be very useful when developing stage functionality scripts. It can also be useful as a quick way to get a project built or tested without having to worry about the project's specific build or test requirements. The currently available tool for running Standard-CI stages locally is mock_runner.sh. For more details see Using mock_runner to run Standard-CI stages.
Having build and test stages run by a CI system
Having an automated CI system run build stages typically involves submitting the code changes to a central source code management (SCM) system such as Gerrit or GitHub and having the CI system pick up changes from there. oVirt's CI system supports testing code for projects stored on oVirt's own Gerrit server or on GitHub under the oVirt project. There are currently different configuration procedures for projects on Gerrit and GitHub. Work is under way to make the two look and feel the same.
To learn how to set up and use the oVirt CI system with projects hosted on oVirt's Gerrit, please refer to Using oVirt Standard-CI with Gerrit. To learn how to set up and use the oVirt CI system with projects hosted on GitHub, please refer to Using oVirt Standard-CI with GitHub.
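To make the container-collection convention described above concrete, here is a minimal, hypothetical fragment of a build-artifacts script. The image name is made up and the exact tag handling is only a sketch of the convention as described in this document, not a verified recipe:
#!/bin/bash -xe
# Hypothetical build-artifacts.sh fragment: build an image and leave it in the
# host's container storage, tagged "exported-artifacts" so the CI system can
# pick it up and publish it to the shared registry.
docker build -t my-project .
docker tag my-project my-project:exported-artifacts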
https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards/index.html
2019-04-18T15:08:09
CC-MAIN-2019-18
1555578517682.16
[]
ovirt-infra-docs.readthedocs.io
Managing user permissions and roles
TestRail's permission and role system allows you to configure and restrict the project access and permissions of individual users and groups. TestRail comes with built-in roles that can be customized and extended. It is also possible to configure per-project access for users and groups, enabling you to customize TestRail's access control for your needs.
Configuring roles
Roles within TestRail are collections of permissions that can be assigned to users (globally and per project) and groups (per project). TestRail already comes with a few preconfigured useful roles such as Guest, Tester or Lead. You can also change the built-in roles or add your own. To configure TestRail's roles, select Administration > Users & Roles. One role is always the default role in TestRail. The default role is used as a fallback in case you delete roles that are still in use. The default role is also preselected if you add additional users to TestRail.
Assigning roles
Every user has a global role assigned to her. The global role is usually used if you don't specify the access of a user for a specific project. For example, if you choose the built-in Tester role as the global role for a user, the user can add test results to all projects that use the global role as the default access. To change the global role of a user, you can either select the role on the Users & Roles page or change the role when you edit a user account. Please note that you can also assign roles (and thus restrict permissions) to administrators. This can be useful if an administrator wants to hide specific projects or disable some functionality in her user interface. But remember that an administrator can always change her own role, so you cannot count on roles to enforce permissions for administrators.
Assigning groups
Groups can be used to manage a collection of users, e.g. a team of testers, geographical teams or users that belong to a specific client or customer. You can define and configure groups under Administration > Users & Roles.
Project access
You can also specify and override the access for specific projects. To do this, just edit a project in the administration area and select the Access tab. There are two things you can do here: you can specify the Default Access for the project and you can assign the access for specific users or groups. The Default Access is used for all users and groups that don't override the project access. For example, by default, all users have permissions according to their global role (when the Default Access for a project is set to Global Role). However, you can also select that no user should have access to a project (i.e. No Access as Default Access), unless you override the access for a user. You can also use a role as the default access for a project. This allows you to make a project read-only for all users by default, for example. You can also override the project access for specific groups of users, and this applies the configured access/permissions to all users of this group. For example, if you assign Global Role to a group, all users of this group will use their global role. Likewise, if you assign No Access, the users of this group won't have access to this project. If a user is a member of multiple groups, TestRail uses the sum of the permissions of those groups. Please note that the user access for the project (if any) has precedence over the group settings.
The combination of global roles, default project access and user/group-specific access for projects makes TestRail's roles and permissions system very flexible. Please see the next section for some examples of how to configure TestRail for typical scenarios.
Common scenarios
The following examples explain how to configure TestRail to accomplish some common scenarios with regard to roles and permissions.
- Restrict user permissions globally
If you want to restrict the permissions of users, you can assign them the built-in TestRail roles or build your own roles. For example, you can use roles to allow users to add test results but not add any new cases. You can also use roles if you want to prevent users from deleting test cases, test suites or any other entity within TestRail.
- Individual permissions per project
To use individual permissions per user and project, just select and assign a different role to a user for a project. For example, if a user needs the Designer role for most projects, just assign her this role as her Global Role. To override this role for projects where the user needs the Lead role, just select this role on the project's Access page.
- Hide projects from all users but project members
You can also hide projects from users who don't need access to them. To do this, just configure No Access as the Default Access for the project. You can then assign specific roles (or their global role) to users that work on the project.
- Make a project read-only
If you have a project you don't work on anymore but want to keep in TestRail to preserve the history of the testing data, you can make it read-only. To do this, just configure the Guest role (or an equivalent) as the Default Access for the project. Unless you override this role for specific users, all users can now only access the project with read-only permissions.
http://docs.gurock.com/testrail-userguide/howto-permissions
2019-04-18T15:23:33
CC-MAIN-2019-18
1555578517682.16
[]
docs.gurock.com
Virtual Server Backups
A Backup is a snapshot of a virtual server disk moved to an off-site location.
Backup Parameters
Name - Backup name.
Note: Backups created with the Freeze filesystem option enabled are marked with a flag icon appended to their status.
Backup Actions
Backup now - Immediately start a backup process based on a backup definition.
Delete backup - Remove the data of the backup.
Restore backup - Restore server disk data from the backup to the existing virtual server disk or to a disk of another virtual server.
Warning: A backup restore results in the loss of all data on the target virtual server and disk, including all of the virtual server's snapshots.
Warning: Backing up many virtual servers at the same time can have a high impact on the I/O load of the affected compute and backup nodes, potentially causing short- or even long-term unavailability of services. A fair distribution of backup schedules in time can easily eliminate this problem.
A backup definition overview of a particular virtual data center or backup node can be obtained via the compute node backup definition view or the API:
user@laptop:~ $ es get /vm/define/backup -dc admin -full --tabulate
user@laptop:~ $ es get /node/node99.example.com/define/backup -full --tabulate
Backup Definitions
A Backup Definition is a configuration based on which periodic backups are automatically created and deleted.
Backup Definition Parameters
Name - Backup definition name.
Disk ID - Virtual server disk ID.
Backup type - One of:
- Dataset - Backups are created by using ZFS datasets in an incremental way (optimal and recommended).
- File - Backups are created by storing full ZFS datasets into files (can be used to store backups onto remote data storages connected via NFS or Samba).
Backup node - Compute node with backup capabilities enabled.
Storage - Name of the node storage on the chosen backup node.
Schedule - Automatic scheduler configuration in cron format. Use your local time for the hour field (it will be internally converted into UTC).
Retention - Maximum number of backups to keep. After this number is exceeded, the oldest backup associated with the backup definition will be automatically removed.
Active - Whether the backup definition is active.
Description
Compression - Compression of File backups. One of: off (no compression), gzip (fast compression), bzip2 (more effective compression).
Bandwidth limit - Backup speed limit in bytes per second.
Advanced Backup Definition Parameters
Freeze filesystem? - Whether to create application-consistent backups.
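For illustration only, here is a hypothetical value for the Schedule parameter in standard cron format (minute, hour, day of month, month, day of week); the specific time is just an example:
# Hypothetical backup schedule: run every day at 02:30 local time
# (the hour field is converted to UTC internally, as noted above)
30 2 * * *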
https://docs.danubecloud.org/user-guide/gui/servers/backups.html
2019-04-18T15:17:22
CC-MAIN-2019-18
1555578517682.16
[array(['../../_images/backup_list.png', '../../_images/backup_list.png'], dtype=object) array(['../../_images/backup_definition_list.png', '../../_images/backup_definition_list.png'], dtype=object) array(['../../_images/backup_definition_update.png', '../../_images/backup_definition_update.png'], dtype=object) array(['../../_images/backup_definition_update_advance.png', '../../_images/backup_definition_update_advance.png'], dtype=object)]
docs.danubecloud.org
Linux
Settings for Importing Keil uVision Projects
Settings for Importing Renesas High-performance Embedded Projects
Parasoft Project Center
concerto.reporting=true
dtp.user_defined_attributes=Type:Nightly;Project:Project1
Other settings in this group specify the host name of the Parasoft Project Center and determine whether the results sent to Parasoft Project Center are marked as being from a nightly build.
DTP
https://docs.parasoft.com/pages/viewpage.action?pageId=6386568
2019-04-18T14:23:30
CC-MAIN-2019-18
1555578517682.16
[]
docs.parasoft.com