content | url | timestamp | dump | segment | image_urls | netloc
---|---|---|---|---|---|---
stringlengths 0–557k | stringlengths 16–1.78k | timestamp[ms] | stringlengths 9–15 | stringlengths 13–17 | stringlengths 2–55.5k | stringlengths 7–77
Genomic Features¶
Genomic Features are defined segments of a genome. Most often features will code for proteins or RNAs; however, some correspond to pseudogenes or repeat regions. We currently support over 40 Genomic Feature Types.
Learn how to find and use PATRIC Feature Tables in our Feature Tab User Guide.
Genome Annotation¶
Genome annotation refers to the systematic analysis of a genome to identify all protein and RNA coding genes and characterize their functions. PATRIC supports genome annotations from multiple sources, including:
Original annotations from GenBank / RefSeq
Consistent annotations across all bacterial genomes using the RAST annotation pipeline
Genomic Features¶
Genomic Features refer to defined segments of a genome, which often code for proteins and RNAs. Common feature types include:
Gene
CDS
rRNA
tRNA
Misc RNA
Pseudogene
Functional Properties¶
Functional properties refer to the description and ontological terms used to characterize protein functions. Common functional properties assigned to proteins include:
Gene name
Function
GO terms
EC numbers
Protein families
Subsystems
Metabolic pathways
Specialty Genes¶
Specialty genes refer to genes possessing properties that are of special interest to infectious disease researchers. Classes of specialty genes include:
Antibiotic resistance genes
Virulence factors
Transporters
Essential genes
Drug and vaccine targets
Human homologs | https://docs.patricbrc.org/user_guides/data/data_types/genomic_features.html | 2021-09-17T04:22:31 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.patricbrc.org |
In the following steps, we will create a light that moves, changes colors and brightness using Matinee.
Add the Matinee, Hallway and Light
In this step, we will add our Matinee Actor, create a small hallway and add a light.
From the Toolbar menu, select Matinee and click the Add Matinee button.
If the Matinee undo warning message appears, click the Continue button.
The Matinee Editor window will open.
Click for Full View.
Minimize the Matinee window, then from the Modes menu under the BSP tab, drag a Box into the viewport.
Try to drag the box onto the Template Text icon; this will center it on the text.
In the Details panel for the box, set the Brush Settings to the values below.
Fly inside the box and grab the Red arrow of the Translation widget and move it towards the text to remove the pieces of geometry that clip inside.
You can also grab the blue arrow of the Translation widget and pull it up slightly to raise the roof of the box.
From the BSP tab, drag-and-drop another Box into the viewport and set its settings to the values below.
Move the Subtraction box to one end of the Addition box to create an opening in the box.
With the Subtraction box still selected, press Control+W to duplicate it and move it to the other end of the box to create another opening.
From the Lights tab of the Modes menu, drag-and-drop a Point Light into the level and center it near the top of one of the door openings.
Add the Light to Matinee and Setup Tracks
In this step, we will add the light to Matinee and set up our tracks with keyframes.
Open Matinee by clicking on the Matinee Actor in the World Outliner and choosing Open Matinee from the Details panel.
Minimize Matinee, click on the Point Light then re-open Matinee and Right-Click in the Tracks window and select Add New Lighting Group.
In the Name Group box that appears, give it a name such as Light1.
Grab the ending marker at 5.00 and drag it over to 8.00 to increase the length of the Matinee.
Right-Click on the Radius track and select Delete Track, then Right-Click on the Light1 group and select Add New Float Property Track.
We will be adjusting the Attenuation Radius which will affect the range at which the light is displayed.
In the pop-up menu that appears, select LightComponent0.AttenuationRadius and press Ok to add the track.
Click the Movement track, and press Enter to add a keyframe then Right-Click on the keyframe and choose Set Time and set it to 2.
Repeat the previous step and assign keys to 0, 2, 4, 6, and 8.
Repeat the previous two steps for the Intensity, Light Color, and AttenuationRadius tracks.
Adjust Movement and Intensity
In this step, we will set up movement for the light and adjust its intensity.
In Matinee on the Movement track, click on the second keyframe (at 2.00) then minimize Matinee and move the light to the center of the hallway.
You can zoom out, then grab the green arrow of the Translation widget and slide it to the right into the center.
Return to Matinee and click on the third keyframe of the Movement track (at 4.00), minimize Matinee, then move the light to the end of the hallway.
Return to Matinee, click on the fourth keyframe of the Movement track (at 6.00), minimize Matinee and then move the light to the center of the hallway.
In the Details panel for the light, under Transform, find the Mobility section and click the third icon to enable the Movable setting for the light.
In Matinee, Right-click on the second keyframe of the Intensity track (at 2.00) and Set Value to 20,000; do the same for the fourth keyframe (at 6.00).
This will increase the intensity of the light, making it brighter, as it moves to the center of the hallway.
Adjust Light Color and Attenuation Radius
In this step, we will adjust the color of the light as it moves through the hallway as well as its size (or Attenuation Radius).
In Matinee on the Light Color track, click on the first keyframe (at 0.00) and select Set Color; in the Color Picker window select any color.
Repeat the previous step for the third keyframe (at 4.00) and in the Color Picker window, select a different color.
Repeat the previous step for the last keyframe (at 8.00) and in the Color Picker window, select the color that was used in step 1.
Right-click on the second keyframe of the Attenuation Radius track (at 2.00) and Set Value to 250, do the same for the fourth keyframe (at 6.00).
Finishing Up - Building and Playing
In this step, we will finish the Matinee, Build the geometry and lighting, then Play in the editor to see the finished result.
In the World Outliner, select the Matinee Actor and under the Play section, enable Play on Level Load and Looping.
From the main toolbar, click the Build icon, then when building is complete, select the Apply Now button in the lower right portion of the screen.
When building is complete, an Apply Now prompt will appear. Click the Apply Now button.
From the main toolbar, click the Play icon to play in the editor.
When you enter the hallway, you should see the light moving up and down the hallway.
The light will blend between colors as it moves through the hallway and reduce in size as it enters the center of the hallway. | https://docs.unrealengine.com/4.26/en-US/AnimatingObjects/Matinee/HowTo/MHT_5/ | 2021-09-17T05:08:59 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.unrealengine.com |
Formulas have a step type that allows a user to retry a formula execution in the case of an error that necessitates re-executing the entire formula. However, if an email is specified in the Notification Settings for a formula instance, each formula execution failure will result in an error notification email, which, because of the formula execution retry step, results in a non-actionable notification. Imagine if your formula executes 1,000 times in a given day and for whatever reason 20% of the executions fail and are retried. This would lead to 200 unnecessary emails. That does not count any execution retries that fail and are retried again.
One could argue for switching off notifications on the formula instance and using the notify.email() function in a formula script step. However, that will still end up with unnecessary notifications, as the formula execution is still deemed a failure. So how can this be fixed? Please read on.
Instructions
The following steps should help solve the above problem. Please note, some changes need to be made to your formula.
Disable notifications on the formula instance.
If your formula has notifications enabled, as shown below,
disable notifications by removing the email address from the Notification Settings as shown below,
and save the formula instance.
Now to the formula changes. The following formula is one that simulates a step failure and finally executes a Formula Execution Retry step when the previous step fails, i.e., it re-runs the whole formula.
We first add a configuration variable to our formula to set the maximum number of retry attempts for the formula execution retry. This variable will be used to determine if the formula is going to retry its execution or if it has run out of execution attempts. The illustrations below show this configuration variable and an example value for the formula instance.
Pro Tip: When adding a configuration variable to a formula that has existing instances, you do have to edit the instance.
We now need to add a step, check-retry-attempt-count, between the http-request-that-will-fail and retry-formula steps.
The check-retry-attempt-count step is a script step, and should be triggered as the onFailure action of the previous step, i.e. the step that fails and results in a formula execution retry. The JavaScript content of the check-retry-attempt-count step is shown below.
let retryFormulaStep = steps["retry-formula"];

if (retryFormulaStep === undefined || retryFormulaStep === null) {
  throw ("retry the formula");
} else {
  if (retryFormulaStep.attempt < config.maxRetryAttempts) {
    throw ("retry the formula");
  } else {
    notify.email("[email protected]", "<This is the subject line of the email>", "<This is the error message for the email.>");
  }
}
Let's walk through the above logic.
1. The script first tries to find the formula execution retry step by name, in this case retry-formula.
2. For the very first occurrence of an error in the preceding step (the http-request-that-will-fail step), i.e., when the formula execution has not been retried yet, the step value will be undefined or null.
3. This will result in the check-retry-attempt-count step throwing an error, and the formula being retried.
4. If the http-request-that-will-fail step fails once again during the retry attempt, then the check-retry-attempt-count step will check the retry attempt number via the attempt attribute of the retry-formula step.
5. If the attempt number is less than the maximum retry attempts set in the formula instance configuration, then an error will be thrown by the check-retry-attempt-count step, as in #2 above, which will trigger another retry of the formula execution.
6. If the attempt number is equal to the configured maximum retry attempts, then a notification email with the error message can be sent to a specific email address (for multiple email addresses, use comma-separated email addresses). Note: Although the notification can also be achieved in the last step, i.e., the gracefully-end step, the email recipient and the error message may vary based upon which step resulted in this code path, so the check-retry-attempt-count step can be made specific to the trigger of the error message.
When the number of formula execution retries has been exhausted, the check-retry-attempt-count step will end with a success, and can trigger a final script step to gracefully end the formula execution. In this example, the following code is implemented in the gracefully-end step.
done({ success: false });
In summary, when actionable notifications are required from a formula, the
notify.email() function is a better approach to use than enabling the Notifications Settings on a formula instance. | https://docs.cloud-elements.com/home/95cbd50 | 2021-05-06T00:07:53 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c891f476e121ce61d1864a9/n/1552490311318.png',
None], dtype=object) ] | docs.cloud-elements.com |
Important
You are viewing documentation for an older version of Confluent Platform. For the latest, click here.
Architecture.
The picture below shows the anatomy of an application that uses the Kafka Streams library. Let’s walk through some details.
Logical view of a Kafka Streams application that contains multiple stream threads, each of which in turn contains multiple stream tasks.
Kafka Streams uses the concepts of stream partitions and stream tasks as the logical units of its parallelism model. More specifically, Kafka Streams creates a fixed number of stream tasks based on the input stream partitions for the application, with each task being assigned a list of partitions from the input streams (i.e., Kafka topics). The assignment of stream partitions to stream tasks never changes, hence the stream task is a fixed unit of parallelism of the application. Tasks can then instantiate their own processor topology based on the assigned partitions; they also maintain a buffer for each of its assigned partitions and process input data one-record-at-a-time from these record buffers. As a result stream tasks can be processed independently and in parallel without manual intervention.
Slightly simplified, the maximum parallelism at which your application may run is bounded by the maximum number of stream tasks, which itself is determined by maximum number of partitions of the input topic(s) the application is reading from. For example, if your input topic has 5 partitions, then you can run up to 5 applications instances. These instances will collaboratively process the topic's data. If you run a larger number of app instances than partitions of the input topic, the "excess" app instances will launch but remain idle; however, if one of the busy instances goes down, one of the idle instances will resume the former's work. We provide a more detailed explanation and example in the FAQ.
Kafka Streams distributes the stream tasks of an application across the stream threads that run in its application instances. You can start as many threads of the application as there are input Kafka topic partitions so that, across all running instances of an application, every thread (or rather, the stream tasks that the thread executes) processes its records independently. If an instance or thread stops, its stream tasks are migrated to the remaining running threads; in the figure, stream task 2 from instance1-thread1 on the first machine was migrated. For each state store, Kafka Streams maintains a replicated changelog Kafka topic in which it tracks any state updates. | https://docs.confluent.io/3.2.0/streams/architecture.html | 2021-05-06T00:42:42 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['../_images/streams-architecture-overview.jpg',
'../_images/streams-architecture-overview.jpg'], dtype=object)] | docs.confluent.io |
Upgrading an existing Linux v0.2.X installation to v3.1.0¶
Note
This guide assumes you have at least a basic level of familiarity with Linux and the command line.
This document will walk you through migrating an existing v0.2.X version of QATrack+ to a new Ubuntu 18.04 or Ubuntu 20.04 server. Although it may be possible, upgrading an Ubuntu 14.04 or Ubuntu 16.04 server in place is not covered here; migrating to a new server is the recommended approach.
The process to generate and restore a database dump may vary depending on how you have things configured, your operating system version, or the version of database software you are using. The steps below can be used as a guide, but they may need to be tweaked for your particular installation.).
# postgres
sudo -u postgres pg_dump -d qatrackplus > backup-0.2.X.sql

# or for MySQL
mysqldump --user qatrack --password=qatrackpass qatrackplus > backup-0.2.X.sql
and create an archive of your uploads directory:
tar czf qatrack-uploads.tgz qatrack/media/uploads/
On your new server¶
Copy the backup-0.2.X.sql and qatrack-uploads.tgz to your new server, these will be needed below.
Prerequisites¶
Make sure your existing packages are up to date:
sudo apt update
sudo apt upgrade
You will need to have the make command and a few other packages available for this deployment. Install them as follows:
sudo apt install make build-essential python-dev python3-dev python3-tk python3-venv
Installing a Database System¶
If you were using Postgres before, then install it again. Likewise, if your previous server was using a MySQL database, then install MySQL/MariaDB
Installing PostgreSQL (Only required if you were previously using Postgres)¶
If you do not have an existing database server, you will need to install PostgreSQL locally. Run the following commands:
sudo apt-get install postgresql libpq-dev postgresql-client postgresql-client-common
After the installation completes, edit the pg_hba.conf file for your PostgreSQL version and change the instances of peer to md5, then restart the PostgreSQL service.
Restoring your previous database¶
We can now restore your previous database:
sudo -u postgres psql -c "CREATE DATABASE qatrackplus;"
sudo -u postgres psql -d qatrackplus < backup-0.2.X.sql
sudo -u postgres psql -c "CREATE USER qatrack with PASSWORD 'qatrackpass';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE qatrackplus to qatrack;"

# or for MySQL (omit the -p if your mysql installation doesn't require a password for root)
sudo mysql -p -e "CREATE DATABASE qatrackplus;"
sudo mysql -p --database=qatrackplus < backup-0.2.X.sql
sudo mysql -p -e "GRANT ALL ON qatrackplus.* TO 'qatrack'@'localhost';"
Now confirm your restore worked:
# postgres: Should show Count=1234 or similar
PGPASSWORD=qatrackpass psql -U qatrack -d qatrackplus -c "SELECT COUNT(*) from qa_testlistinstance;"

# mysql: Should show Count=1234 or similar
sudo mysql --password=qatrackpass --database qatrackplus -e "SELECT COUNT(*) from qa_testlistinstance;"
Assuming your database restoration was successful, you may now proceed with upgrading the database to v0.3.0.
Installing and configuring Git and checking out the QATrack+ v0.2.9.2 source code¶
Restore your upload files¶
Assuming you are on a new server and have an uploads archive that you want to restore, you should do so now:
# assuming your qatrack-uploads.tgz is in your home directory
cd ~/web/qatrackplus
mv ~/qatrack-uploads.tgz .
sudo tar xzf qatrack-uploads.tgz
Use your favourite text editor to create a local_settings.py file in ~/web/qatrackplus/qatrack/ with the following contents:
# for postgres
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'qatrackplus',
        'USER': 'qatrack',
        'PASSWORD': 'qatrackpass',
        'HOST': '',  # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',  # Set to empty string for default. Not used with sqlite3.
    },
}

# or for mysql
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'qatrackplus',
        'USER': 'qatrack',
        'PASSWORD': 'qatrackpass',
        'HOST': '',  # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',  # Set to empty string for default. Not used with sqlite3.
    },
}
First install virtualenv, then create and activate a new Python 2 environment:
cd ~/web/qatrackplus
sudo apt install python-virtualenv
mkdir -p ~/venvs
virtualenv -p python2 ~/venvs/qatrack2
source ~/venvs/qatrack2/bin/activate
pip install --upgrade pip
Now install the required Python packages:
pip install -r requirements/base.txt

# for postgres
pip install psycopg2-binary

# for mysql
pip install mysqlclient
Creating our virtual environment¶
Create and activate a new Python 3 virtual environment:
mkdir -p ~/venvs
python3 -m venv ~/venvs/qatrack3
source ~/venvs/qatrack3/bin/activate
pip install --upgrade pip
We will now install all the libraries required for QATrack+ with PostgresSQL (be patient, this can take a few minutes!):
# for postgres
pip install -r requirements/postgres.txt

# or for MySQL:
pip install -r requirements/mysql.txt
Next Steps¶
Now that you have upgraded to 0.3.0, you should proceed directly to upgrading to v3.1.0 from v0.3.0; | https://docs.qatrackplus.com/en/stable/install/linux_upgrade_from_02X.html | 2021-05-06T00:51:15 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.qatrackplus.com |
Overview of Episode Review in ITSI
Use Episode Review to see a unified view of all your service-impacting alerts. Episode Review displays episodes (groups of notable events) and their current status.
A notable event represents an anomalous incident detected by an ITSI multi-KPI alert, a correlation search, or anomaly detection algorithms. For example, a notable event can represent:
- An alert that ITSI ingests from a third-party product into the itsi_tracked_alerts index.
- A single KPI (such as cpu_load_percent) that exceeds a pre-defined threshold.
- The result of a multi-KPI alert that correlates the status of multiple KPIs based on multiple trigger conditions.
- The result of a correlation search that looks for relationships between data points.
- An anomaly that has been detected when anomaly detection is enabled.
Use this example workflow to triage and work on episodes in Episode Review:
- An IT operations analyst monitors the Episode Review, sorting and performing high-level triage on newly-created episodes.
- When an episode warrants investigation, the analyst acknowledges the episode, which moves the status from New to In Progress.
- The analyst researches and collects information on the episode using the drilldowns and fields in the episode details. The analyst records the details of their research in the Comments section of the episode.
- If the analyst cannot immediately find the root cause of the episode, the analyst might open a ticket in Remedy or ServiceNow.
- After the analyst has addressed the cause of the episode and any remediation tasks have been escalated or solved, the analyst sets the episode status to Resolved.
- The analyst assigns the episode to a final analyst for verification.
- The final analyst reviews and validates the changes made to resolve the episode, and sets the status to Closed.
When you close an episode created by an aggregation policy, this breaks the episode (no more events can be added to it) even if the breaking criteria specified in the aggregation policy were not met. | https://docs.splunk.com/Documentation/ITSI/4.4.1/User/OverviewofITSINotableEventsReview | 2021-05-06T01:44:10 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
AddMissingIndicator¶
The AddMissingIndicator() adds a binary variable indicating if observations are missing (missing indicator). It adds a missing indicator for both categorical and numerical variables. A list of variables for which to add a missing indicator can be passed, or the imputer will automatically select all variables.
The imputer has the option to select if binary variables should be added to all variables, or only to those that show missing data in the train set, by setting the option how='missing_only'.
import pandas as pd
from sklearn.model_selection import train_test_split
from feature_engine import missing_data_imputers as mdi

# load the house prices dataset and separate into train and test sets
data = pd.read_csv('houseprice.csv')
X_train, X_test, y_train, y_test = train_test_split(
    data.drop(['Id', 'SalePrice'], axis=1), data['SalePrice'],
    test_size=0.3, random_state=0)

# set up the imputer
addBinary_imputer = mdi.AddMissingIndicator(
    variables=['Alley', 'MasVnrType', 'LotFrontage', 'MasVnrArea'])

# fit the imputer
addBinary_imputer.fit(X_train)

# transform the data
train_t = addBinary_imputer.transform(X_train)
test_t = addBinary_imputer.transform(X_test)

train_t[['Alley_na', 'MasVnrType_na', 'LotFrontage_na', 'MasVnrArea_na']].head()
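As a small illustrative sketch (the toy column names below are assumptions, not part of the original example), the how parameter controls whether indicators are added only for variables with missing data or for every variable:
import pandas as pd
from feature_engine.missing_data_imputers import AddMissingIndicator

df = pd.DataFrame({'age': [20, None, 40], 'city': ['a', 'b', 'c']})

# how='missing_only' (default): an indicator is added only for 'age', which shows missing data
print(AddMissingIndicator(how='missing_only').fit(df).transform(df).columns.tolist())
# expected: ['age', 'city', 'age_na']

# how='all': an indicator is added for every variable, whether or not it shows missing data
print(AddMissingIndicator(how='all').fit(df).transform(df).columns.tolist())
# expected: ['age', 'city', 'age_na', 'city_na']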
API Reference¶
- class
feature_engine.missing_data_imputers.
AddMissingIndicator(how='missing_only', variables=None)[source]¶
The AddMissingIndicator() adds an additional column or binary variable that indicates if data is missing.
AddMissingIndicator() will add as many missing indicators as variables indicated by the user, or variables with missing data in the train set.
The AddMissingIndicator() works for both numerical and categorical variables. The user can pass a list with the variables for which the missing indicators should be added as a list. Alternatively, the imputer will select and add missing indicators to all variables in the training set that show missing data.
- Parameters
how (string, defatul='missing_only') –
Indicates if missing indicators should be added to variables with missing data or to all variables.
missing_only: indicators will be created only for those variables that showed missing data during fit.
all: indicators will be created for all variables
variables (list, default=None) – The list of variables to be imputed. If None, the imputer will find and select all variables with missing data. Note: the transformer will first select all variables or all user entered variables and if how=missing_only, it will re-select from the original group only those that show missing data in during fit.
fit(X, y=None)[source]¶
Learns the variables for which the missing indicators will be created.
- Parameters
-
variables\_
the lit of variables for which the missing indicator will be created.
- Type
-
transform(X)[source]¶
Adds the binary missing indicators.
- Parameters
X (pandas dataframe of shape = [n_samples, n_features]) – The dataframe to be transformed.
- Returns
X_transformed – The dataframe containing the additional binary variables. Binary variables are named with the original variable name plus ‘_na’.
- Return type
pandas dataframe of shape = [n_samples, n_features] | https://feature-engine.readthedocs.io/en/0.6.x_a/imputers/AddMissingIndicator.html | 2021-05-06T01:17:52 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['../_images/missingindicator.png',
'../_images/missingindicator.png'], dtype=object)] | feature-engine.readthedocs.io |
Click on the Scroll To Top app to configure the Scroll To Top options displayed on the store, allowing customers to jump back to the top just by clicking that button.
By clicking on the “Scroll To Top”, you will find the settings to provide the Scroll To Top button to the customers by configuration some Settings.
– Status: Enable the status to display the configured Scroll To Top button on the store to allow customers to navigate to the top. | http://docs.appjetty.com/document/scroll-to-top/scroll-to-top-settings/ | 2021-05-06T01:18:54 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.appjetty.com |
Organization Settings
View Organization Settings
Organization Settings allow you to view and edit preferences, some which are established when your
Organization is added to the DOC environment.
In the General section, you can view the Org Name, Slug/ID, ID, and Legacy Platform ID.
The Data Store section allows you to view Data Store details if one has been provisioned for your
Organization..
Edit Organization Settings
To edit Organization Settings:
From the Settings page (General section), click the Edit button.
From the Edit Organization page, make any needed modifications.
You cannot edit the Slug/ID field.
To store updates, click Save. To disregard, click Cancel.
View Data Store
If a Data Store has not been provisioned for your Organization, you can only view the Provision Database
button in the Data Store section.
Enable Data Store
To enable a Data Store:
From the Settings page (Data Store section), click the Provision Database button.
A Redshift User Information modal displays, providing a User Name and Password.
Copy the Password using the icon at the end of this field.
Click the now enabled OK button.
You are returned to the primary Organization Settings page. The Data Store section now displays the
populated Database Name, Host, Port, and User fields. On the Destinations page,
a Redshift Data Lake database also has been created.
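As a rough sketch of how the provisioned Data Store could be queried from Python (every value below is a placeholder assumption; substitute the Database Name, Host, Port, User, and copied Password shown in the Data Store section):
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

conn = psycopg2.connect(
    host='example-cluster.abc123.us-east-1.redshift.amazonaws.com',  # Host field
    port=5439,                                                       # Port field
    dbname='my_org_datastore',                                       # Database Name field
    user='my_org_user',                                              # User field
    password='<password copied from the Redshift User Information modal>',
)

with conn.cursor() as cur:
    # table names depend on the tables you create for your Collections
    cur.execute('SELECT COUNT(*) FROM my_collection_table;')
    print(cur.fetchone())

conn.close()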
You must create a table for each Collection in your Organization. Creating a table adds this table
to the database. The table contains Snapshots specific to Collections. Each Snapshot copies to this table.
Enabling a Data Store creates a Redshift Data Lake database on top of your managed import.io data lake
(or Data Store). | https://docs.import.io/workbench/beta/org/settings.html | 2021-05-06T01:00:36 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['../_images/org-settings1.png', 'org settings1'], dtype=object)] | docs.import.io |
Worksheet.OnShutdown Method
This API supports the Visual Studio Tools for Office infrastructure and is not intended to be used directly from your code.
This member overrides EntryPointComponentBase.OnShutdown().
Namespace: Microsoft.Office.Tools.Excel
Assembly: Microsoft.Office.Tools.Excel.v9.0 (in Microsoft.Office.Tools.Excel.v9.0.dll)
Syntax
'Declaration
Protected Overrides Sub OnShutdown

'Usage
Me.OnShutdown()
protected override void OnShutdown()
.NET Framework Security
- Full trust for the immediate caller. This member cannot be used by partially trusted code. For more information, see Using Libraries from Partially Trusted Code.
See Also
Reference
Microsoft.Office.Tools.Excel Namespace | https://docs.microsoft.com/en-us/previous-versions/yz42t70w(v=vs.100) | 2021-05-06T02:22:43 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.microsoft.com |
spark-streaming (community library)
Summary
Port of Arduino Streaming.h to spark, originally from Mikal Hart
Library Read Me
This content is provided by the library maintainer and has not been validated or approved. Fork of Mikal Hart Streaming library.
Implements C++ Streaming operator (<<) for various print operations.
Changes in this fork include:
1) Updated for Arduino 1.0
2) Changed macro 'BIN' to 'BINARY' to be compatible with ATtiny. NB: The define of 'BIN' in Print.h must be similarly changed to use this library
Browse Library Files | https://docs.particle.io/cards/libraries/s/spark-streaming/ | 2021-05-06T01:51:17 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.particle.io |
After you have integrated Pushwoosh iOS SDK into your project, you need to configure pushes on Apple side. Follow the guide to configure iOS manually or click the link below to use our Auto Configuration feature!
Make sure you’ve create a Distribution Provisioning Profile for your app to submit it to the App Store. Learn more about provisioning profiles here.
Launch Keychain Access, go to Certificate Assistant and click Request a Certificate From a Certificate Authority:
2. Enter the required information and choose Saved to disk. You should leave the CA email field blank. Click Continue and Save the certificate request using the suggested name.
3. Sign into the Apple Developer Portal and open Certificates, Identifiers & Profiles in the Account tab. There, click Add.
4. Choose the type of certificate you need and click Continue in the bottom of the page.
You can select the "Sandbox&Production" certificate type shown on the screenshot below, and choose the appropriate gateway when configuring the iOS platform in Pushwoosh Control Panel. However, you still can create a Sandbox certificate separately from the Production one.
5. Select the App ID of your project on the next page. Then, click Continue.
Skip the About Creating a Certificate Signing Request (CSR): you've done it earlier.
6. Choose the Certificate Signing Request you created previously.
7. Download the certificate and add it to the Keychain Access. Once you click on the certificate, Keychain Access will be launched.
In Keychain Access, right-click the certificate you just added and choose Export.
Save the Personal Information Exchange (.p12) file. You will be prompted to set up a password.
After you type in your password, click "Allow" to finish exporting the Private Key.
In Pushwoosh Control Panel, choose your app and click on platforms.
Click Configure in the iOS row.
In the opened form, choose manual configuration mode:
To configure iOS platform automatically, refer to the Auto Configuration guide.
Fill in the following fields:
Push Certificate (.p12)
Private key password
Framework
Gateway
The Certificate file (.cer) field is optional and can be empty.
Click Save. All done! | https://docs.pushwoosh.com/platform-docs/pushwoosh-sdk/ios-push-notifications/ios-platform-configuration | 2021-05-05T23:46:17 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.pushwoosh.com |
Quick Start¶
If you’re new to Feature-engine this guide will get you started. Feature-engine transformers have the methods fit() and transform() to learn parameters from the data and then modify the data. They work just like any Scikit- learn transformer.
Installation¶
Feature-engine is a Python 3 package and works well with 3.6 or later. Earlier versions have not been tested. The simplest way to install Feature-engine is from PyPI with pip, Python’s preferred package installer.
$ pip install feature-engine
Note, you can also install it using an underscore (_) in the package name, as follows:
$ pip install feature_engine
Note that Feature-engine is an active project and routinely publishes new releases. In order to upgrade Feature-engine to the latest version, use
pip as follows.
$ pip install -U feature-engine
If you’re using Anaconda, you can take advantage of the conda utility to install the Anaconda Feature-engine package:
$ conda install -c conda-forge feature_engine
Once installed, you should be able to import Feature-engine without an error, both in Python and in Jupyter notebooks.
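A quick sanity check from a Python shell (the version attribute is an assumption and may not be present in every release):
import feature_engine
print(feature_engine.__version__)  # e.g. 0.6.x if the install succeeded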
Example Use¶
This is an example of how to use Feature-engine's transformers to perform missing data imputation.
import pandas as pd
from sklearn.model_selection import train_test_split
from feature_engine import missing_data_imputers as mdi

# load the house prices dataset and separate into train and test sets
data = pd.read_csv('houseprice.csv')
X_train, X_test, y_train, y_test = train_test_split(
    data.drop(['Id', 'SalePrice'], axis=1), data['SalePrice'],
    test_size=0.3, random_state=0)

# set up the imputer
median_imputer = mdi.MeanMedianImputer(imputation_method='median',
                                       variables=['LotFrontage', 'MasVnrArea'])

# fit the imputer
median_imputer.fit(X_train)

# transform the data
train_t = median_imputer.transform(X_train)
test_t = median_imputer.transform(X_test)
More examples can be found in the documentation for each transformer and in a dedicated section in the repository with Jupyter notebooks.
Feature-engine with the Scikit-learn’s pipeline¶
Feature-engine’s transformers can be assembled within a Scikit-learn pipeline. This way, we can store our feature engineering pipeline in one object and save it in one pickle (.pkl). Here is an example on how to do it:
from math import sqrt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline as pipe
from sklearn.preprocessing import MinMaxScaler

from feature_engine import categorical_encoders as ce
from feature_engine import discretisers as dsc
from feature_engine import missing_data_imputers as mdi

# load the house prices dataset
data = pd.read_csv('houseprice.csv')

# separate into train and test sets, keeping a small set of variables for the example
X_train, X_test, y_train, y_test = train_test_split(
    data[['LotFrontage', 'MasVnrArea', 'Alley', 'MasVnrType']],
    data['SalePrice'], test_size=0.1, random_state=0)

# set up the pipeline
price_pipe = pipe([
    # add a binary variable to indicate missing information
    ('continuous_var_imputer', mdi.AddMissingIndicator(variables=['LotFrontage'])),

    # replace NA by the median in the 2 variables below, they are numerical
    ('continuous_var_median_imputer', mdi.MeanMedianImputer(
        imputation_method='median', variables=['LotFrontage', 'MasVnrArea'])),

    # replace NA by adding the label "Missing" in categorical variables
    ('categorical_imputer', mdi.CategoricalVariableImputer(
        variables=['Alley', 'MasVnrType'])),

    # encode the categorical variables as numbers, ordered by the target mean
    ('categorical_encoder', ce.OrdinalCategoricalEncoder(
        encoding_method='ordered', variables=['Alley', 'MasVnrType'])),

    # scale the features and fit a Lasso regression to the log of the price
    ('scaler', MinMaxScaler()),
    ('lasso', Lasso(random_state=0)),
])

# train the pipeline and obtain the predictions
price_pipe.fit(X_train, np.log(y_train))
pred_train = price_pipe.predict(X_train)
pred_test = price_pipe.predict(X_test)

# Evaluate
print('Lasso Linear Model train mse: {}'.format(mean_squared_error(y_train, np.exp(pred_train))))
print('Lasso Linear Model train rmse: {}'.format(sqrt(mean_squared_error(y_train, np.exp(pred_train)))))
print()
print('Lasso Linear Model test mse: {}'.format(mean_squared_error(y_test, np.exp(pred_test))))
print('Lasso Linear Model test rmse: {}'.format(sqrt(mean_squared_error(y_test, np.exp(pred_test)))))
Lasso Linear Model train mse: 949189263.8948538
Lasso Linear Model train rmse: 30808.9153313591

Lasso Linear Model test mse: 1344649485.0641894
Lasso Linear Model test rmse: 36669.46256852136
plt.scatter(y_test, np.exp(pred_test))
plt.xlabel('True Price')
plt.ylabel('Predicted Price')
plt.show()
More examples can be found in the documentation for each transformer and in a dedicated section of Jupyter notebooks.
Dataset attribution) | https://feature-engine.readthedocs.io/en/0.6.x_a/quickstart.html | 2021-05-06T00:38:05 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['_images/medianimputation1.png', '_images/medianimputation1.png'],
dtype=object)
array(['_images/pipelineprediction.png', '_images/pipelineprediction.png'],
dtype=object) ] | feature-engine.readthedocs.io |
20.6.5 DAS Spectra data
The DAS Spectrum object stores one spectrum or multiple spectra calculated from the raw DAS data recordings. The spectra are typically calculated as N-point fast Fourier transforms (FFTs) along a window of length FilterWindowSize Raw DAS sample. The FilterWindowSize equals the FFT size N and is the number of output points from the discrete Fourier transform (DFT) calculation.
For each locus in the array of loci specified by StartLocus and NumberOfLoci, the FFT calculation produces FilterWindowSize data points. FFT calculations can be repeated by shifting the filter-window over the raw DAS samples. The windows can be overlapping and the number of samples that overlap is specified by the FilterWindowOverlap attribute. This means that the spectrum values are stored in a 3D array [time:locus:FFT-value]. Figure 20.6.5-1 shows an example of 4 overlapping windows in the spectra object in the example file.
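To make the windowing arithmetic concrete, here is a rough numpy sketch; the array names, sizes and layout below are illustrative assumptions and are not part of the PRODML schema:
import numpy as np

# raw DAS samples laid out as [time sample, locus]; sizes are illustrative only
raw = np.random.randn(4096, 5)   # NumberOfLoci = 5
N = 1024                         # FilterWindowSize = FFT size
overlap = 768                    # FilterWindowOverlap
step = N - overlap

starts = range(0, raw.shape[0] - N + 1, step)
windows = [np.fft.fft(raw[s:s + N, :], axis=0).T for s in starts]  # each window: [locus, FFT value]
spectra = np.stack(windows)                                        # 3D array [time window, locus, FFT value]
print(spectra.shape)             # e.g. (number of windows, 5, 1024)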
Figure 20.6.5-2 shows one of the FFT sub-arrays (FFT index 4 out of 32) for the 4 overlapping windows (index 0 to 3 on the vertical axis) calculated for the 5 loci (index 0 to 4 on the horizontal axis).
| http://docs.energistics.org/PRODML/PRODML_TOPICS/PRO-DAS-000-066-0-C-sv2000.html | 2021-05-06T00:53:14 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['PRODML_IMAGES/PRODML-000-159-0-sv2000.png', None], dtype=object)
array(['PRODML_IMAGES/PRODML-000-160-0-sv2000.png', None], dtype=object)] | docs.energistics.org |
Clearing the Citrix Gateway Configuration
You can clear the configuration settings on Citrix Gateway. You can choose from among the following three levels of settings to clear:
Important: Citrix recommends saving your configuration before you clear the Citrix Gateway configuration settings.
- Basic. Clears all settings on the appliance except for the system IP address, default gateway, mapped IP addresses, subnet IP addresses, DNS settings, network settings, high availability settings, administrative password, and feature and mode settings.
- Extended. Clears all settings except for the system IP address, mapped IP addresses, subnet IP addresses, DNS settings, and high availability definitions.
- Full. Restores the configuration to the original factory settings, excluding the system IP (NSIP) address and default route, which are required to maintain network connectivity to the appliance.
When you clear all or part of the configuration, the feature settings are set to the factory default settings.
When you clear the configuration, files that are stored on Citrix Gateway, such as certificates and licenses, are not removed. The file ns.conf is not altered. If you want to save the configuration before clearing the configuration, save the configuration to your computer first. If you save the configuration, you can restore the ns.conf file on Citrix Gateway. After you restore the file to the appliance and restart Citrix Gateway, any configuration settings in ns.conf are restored.
Modifications to configuration files, such as rc.conf, are not reverted.
If you have a high availability pair, both Citrix Gateway appliances are modified identically. For example, if you clear the basic configuration on one appliance, the changes are propagated to the second appliance.
To clear Citrix Gateway configuration settingsTo clear Citrix Gateway configuration settings
- In the configuration utility, on the Configuration tab, in the navigation pane, expand System and then click Diagnostics.
- In the details pane, under Maintenance, click Clear configuration.
- In Configuration Level, select the level you want to clear and then click Run. | https://docs.citrix.com/en-us/citrix-gateway/12-1/install/ns-maintain-config-settings-viewing-con/ns-maintain-clear-configuration-tsk.html | 2019-07-15T23:24:35 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.citrix.com |
In addition to VPP-managed distribution, the JSS also supports distributing Mac App Store apps and eBooks to computers by associating redeemable VPP codes with apps and eBooks. For more information, see VPP Code Distribution for Computers.
VPP-Managed Distribution for Computers
With VPP-managed distribution, the JSS has full control of your Mac App Store apps. The JSS can be used to automatically update apps in the JSS and on computers. VPP-managed distribution requires computers with OS X v10.11 or later. To distribute Mac App Store apps to computers using VPP-managed distribution, you need a VPP account set up in the JSS. For more information, see Integrating with VPP.
To distribute a Mac App Store app directly to a computer, when configuring the app distribution settings, choose the VPP account that purchased the app for VPP-managed distribution. For more information, see Mac App Store Apps.
The JSS also allows you to distribute Mac App Store apps using redeemable VPP codes to computers with OS X v10.9 or later.
Related Information
For related information, see the following sections in this guide:
Simple VPP Content Searches for Computers
Find out how to search the VPP content in the JSS.
VPP-Managed Distribution for Mobile Devices | https://docs.jamf.com/9.9/casper-suite/administrator-guide/VPP-Managed_Distribution_for_Computers.html | 2019-07-15T21:56:10 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.jamf.com |
API Files
This topic contains detailed info about how APIs can use Legato's interface definition language (IDL). Legato's IDL helps apps be written in multiple, different programming languages.
Also see
Syntax
C Language Support
Related info
ifgen
Definition Files
Extend helloWorld
Overview
Defining an API in a language-specific way limits the reusability of components and can force using a programming language not ideally suited to a particular problem domain or developer skillset. It also leaves inter-process communication (IPC) to be implemented manually, which can be time-consuming and fraught with bugs and security issues.
To simplify things, Legato has an IDL similar to C that helps define APIs so they can be used in multiple, different programming languages.
These IDL files are called API (
.api) files.
They're processed by the ifgen tool that generates function definitions and IPC code in an implementation language chosen by the component writer. Most of the time, developers don't need to know much about
ifgen, because it's automatically run by other build tools, as needed.
An API client:
- import the API into their component (add the .api file to the api: subsection of the requires: section of the component's Component.cdef file)
- include/import the generated code into their program (e.g., in C: #include "interfaces.h")
- call the functions in the API
This automatically will do IPC connection opening/closing, message buffer allocation/release, message passing, synchronization between client and server threads/processes, and sandbox security access control.
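As an illustrative sketch only (the interface name myInterface.api is an assumption, not something defined on this page), a client component's Component.cdef import might look like this, after which the build tools generate the client-side code included via interfaces.h:
requires:
{
    api:
    {
        myInterface.api
    }
}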
An API server:
- export the API from their component (add the .api file to the api: subsection of the provides: section of the component's Component.cdef file)
- include/import the generated code into their program (e.g., in C: #include "interfaces.h")
- implement the functions in the API
The server's functions are called automatically when the client calls the auto-generated client-side versions of those functions. | https://docs.legato.io/latest/apiFiles.html | 2019-07-15T22:36:43 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.legato.io |
Quickstart: Use your own notebook server to get started with Azure Machine Learning
Use your own Python environment and Jupyter Notebook Server to get started with Azure Machine Learning service. For a quickstart with no SDK installation, see Quickstart: Use a cloud-based notebook server to get started with Azure Machine Learning.
This quickstart shows how you can use the Azure Machine Learning service workspace to keep track of your machine learning experiments. You will run Python code that log values into the workspace.
View a video version of this quickstart:
If you don’t have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning service today.
Prerequisites
- A Python 3.6 notebook server with the Azure Machine Learning SDK installed
- An Azure Machine Learning service workspace
- A workspace configuration file (.azureml/config.json).
Get all these prerequisites from Create an Azure Machine Learning service workspace.
Use the workspace
Create a script or start a notebook in the same directory as your workspace configuration file (.azureml/config.json).
Attach to workspace
This code reads information from the configuration file to attach to your workspace.
from azureml.core import Workspace

ws = Workspace.from_config()
Log values
Run this code that uses the basic APIs of the SDK to track experiment runs.
- Create an experiment in the workspace.
- Log a single value into the experiment.
- Log a list of values into the experiment.
from azureml.core import Experiment

# Create a new experiment in your workspace.
exp = Experiment(workspace=ws, name='myexp')

# Start a run and start the logging service.
run = exp.start_logging()

# Log a single number.
run.log('my magic number', 42)

# Log a list (Fibonacci numbers).
run.log_list('my list', [1, 1, 2, 3, 5, 8, 13, 21, 34, 55])

# Finish the run.
run.complete()
View logged results
When the run finishes, you can view the experiment run in the Azure portal. To print a URL that navigates to the results for the last run, use the following code:
print(run.get_portal_url())
This code returns a link you can use to view the logged values in the Azure portal in your browser.
Clean up resources
Important
You can use the resources you've created here as prerequisites to other Machine Learning tutorials and how-to articles.
If you don't plan to use the resources that you created in this article, delete them to avoid incurring any charges.
ws.delete(delete_dependent_resources=True)
Next steps
In this article, you created the resources you need to experiment with and deploy models. You ran code in a notebook, and you explored the run history for the code in your workspace in the cloud.
You can also explore more advanced examples on GitHub or view the SDK user guide.
Feedback | https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-run-local-notebook | 2019-07-15T22:31:27 | CC-MAIN-2019-30 | 1563195524254.28 | [array(['media/quickstart-run-local-notebook/logged-values.png',
'Logged values in the Azure portal'], dtype=object) ] | docs.microsoft.com |
These are the docs for 13.8, an old version of SpatialOS. 14.0 is the newest →
spatial alpha local launch
Start a SpatialOS simulation locally for flexible project layout.
Description
Start a cloud-like SpatialOS environment which runs and manages the specified deployment configuration locally.
spatial alpha local launch [flags]
Options
--enable_inspector_v2 Enable serving of inspector-v2 route. (default true)
--launch_config string Path to launch configuration file to start the deployment with (optional). This flag overrides any launch configuration file specified in the main project configuration file.
--snapshot string Path to the snapshot file to start the deployment with (optional). This flag overrides any snapshot file path specified in the launch configuration file. | https://docs.improbable.io/reference/13.8/shared/spatial-cli/spatial-alpha-local-launch | 2019-07-15T22:56:50 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.improbable.io |
Local Enrollment Using Recon
If you have physical access to the OS X computer that you want to enroll, you can run Recon locally on the computer. This allows you to submit detailed inventory information for the computer. It also allows you to add computers to a site during enrollment.
Enrolling a Computer by Running Recon Locally
On the computer you want to enroll, open Recon and authenticate to the JSS.
(Optional) Enter an asset tag and/or use a bar code scanner to enter bar codes.
The computer name is populated by default.
Enter credentials for a local administrator account that you want to use to manage computers.
This can be an existing or new account. If the account does not already exist, Recon creates it.
Note: If the account you specify does not have SSH (Remote Login) access to the computer, Recon enables SSH during enrollment.
. | https://docs.jamf.com/9.9/casper-suite/administrator-guide/Local_Enrollment_Using_Recon.html | 2019-07-15T21:58:20 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.jamf.com |
GDPR functionality¶
The General Data Protection Regulation requires systems to be upgraded to follow certain rules. Some of the requirements can’t be handled by a drop-in solution, but some can. That’s why LogSentinel supports a number of features – a GDPR register and GDPR-specific logging endpoints.
The GDPR endpoints are under
/api/log-gdpr/. There you can log consent and all requests by data subjects in a way that you can prove to regulators the events that happened. More details can be found in the API console .
We support a GDPR Article 30 register, where a company should enlist all its processing activities (what types of data about what types of data subjects it processes). What does this have to do with audit logs? Since the regulation requires processing of data to be authorized, and the integrity of the data to be guaranteed, audit logs can be mapped to a particular processing activity in the register. That for each processing activity you’ll be able to track the relevant actions. This is done simply by providing an extra GET parameter to the logging call –
gdprCorrelationKey. Each processing activity can be assigned a unique correlation key to make it match the audit log records.
Additionally, you can use the GDPR register (via the
/gdpr API endpoints) to fetch information about processing activities in order to display it to users for the purpose of collecting their consent. It’s best to have the register and your website in sync, so that all consent-dependent activities are covered and the user explicitly agrees to each of them. | https://docs.logsentinel.com/en/latest/gdpr.html | 2019-07-15T23:13:02 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.logsentinel.com |
What is Bing Entity Search API?
The Bing Entity Search API sends a search query to Bing and gets results that include entities and places. Place results include restaurants, hotel, or other local businesses. Bing returns places if the query specifies the name of the local business or asks for a type of business (for example, restaurants near me). Bing returns entities if the query specifies well-known people, places (tourist attractions, states, countries/regions, etc.), or things.
Workflow
The Bing Entity Search API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON. You can use the service using either the REST API, or the SDK.
- Create a Cognitive Services API account with access to the Bing Search APIs. If you don't have an Azure subscription, you can create an account for free.
- Send a request to the API, with a valid search query.
- Process the API response by parsing the returned JSON message.
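A minimal Python sketch of that workflow; the endpoint URL and header name below follow the commonly documented v7.0 pattern but should be confirmed against the API reference before use:
import requests

SUBSCRIPTION_KEY = '<your key from the Azure portal>'
ENDPOINT = 'https://api.cognitive.microsoft.com/bing/v7.0/entities'

params = {'q': 'mount rainier', 'mkt': 'en-US'}
headers = {'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY}

resp = requests.get(ENDPOINT, params=params, headers=headers)
resp.raise_for_status()

data = resp.json()
for entity in data.get('entities', {}).get('value', []):
    print(entity['name'], '-', entity.get('description', ''))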
Next steps
- Try the interactive demo for the Bing Entity Search API.
- To get started quickly with your first request, try a Quickstart.
- The Bing Entity Search API v7 reference section.
- The Bing Use and Display Requirements specify acceptable uses of the content and information gained through the Bing search APIs.
Feedback | https://docs.microsoft.com/en-us/azure/cognitive-services/bing-Entities-Search/overview | 2019-07-15T22:34:05 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.microsoft.com |
Manage express installation files for Windows 10 updates
Configuration Manager supports express installation files for Windows 10 updates. Configure the client to download only the changes between the current month's Windows 10 cumulative quality update and the previous month's update. Without express installation files, Configuration Manager clients download the full Windows 10 cumulative update each month, including all updates from previous months. Using express installation files provides for smaller downloads and faster installation times on clients.
To learn how to use Configuration Manager to manage update content to stay current with Windows 10, see Optimize Windows 10 update delivery.
Important
The OS client support is available in Windows 10, version 1607, with an update to the Windows Update Agent. This update is included with the updates released on April 11, 2017. For more information about these updates, see support article 4015217. Future updates leverage express for smaller downloads. Prior versions of Windows 10, and Windows 10 version 1607 without this update don't support express installation files.
Enable the site to download express installation files for Windows 10 updates
To start synchronizing the metadata for Windows 10 express installation files, enable it in the properties of the software update point.
In the Configuration Manager console, go to the Administration workspace, expand Site Configuration, and select the Sites node.
Select the central administration site or the stand-alone primary site.
In the ribbon, click Configure Site Components, and then click Software Update Point. Switch to the Update Files tab, and select Download both full files for all approved updates and express installation files for Windows 10.
Note
You can't configure the software update point component to only download express updates. The site downloads the express installation files in addition to the full files. This increases the amount of content stored in the content library, and distributed to and stored on your distribution points.
Tip
To determine the actual space being used on disk by the file, check the Size on disk property of the file. The Size on disk property should be considerably smaller than the Size value. For more information, see FAQs to optimize Windows 10 update delivery.
Note
This is a local port that clients use to listen for requests from Delivery Optimization or Background Intelligent Transfer Service (BITS) to download express content from the distribution point. You don't need to open this port on firewalls because all traffic is on the local computer.
Once you deploy client settings to enable this functionality on the client, it attempts to download the delta between the current month's Windows 10 cumulative update and the previous month's update. Clients must run a version of Windows 10 that supports express installation files.
Enable support for express installation files in the properties of the software update point component (previous procedure).
In the Configuration Manager console, go to the Administration workspace, and select Client Settings.
Select the appropriate client settings, and click Properties on the ribbon.
Select the Software Updates group. Configure to Yes the setting to Enable installation of Express Updates on clients. Configure the Port used to download content for Express Updates with the port used by the HTTP listener on the client.
Feedback | https://docs.microsoft.com/en-us/sccm/sum/deploy-use/manage-express-installation-files-for-windows-10-updates | 2019-07-15T22:10:46 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.microsoft.com |
DWORD MsgWaitForMultipleObjects( DWORD nCount, const HANDLE *pHandles, BOOL fWaitAll, DWORD dwMilliseconds, DWORD dwWakeMask );
Parameters
nCount
The number of object handles in the array pointed to by pHandles. The maximum number of object handles is MAXIMUM_WAIT_OBJECTS minus one. If this parameter has the value zero, then the function waits only for an input event.
pHandles.
fWaitAll
If this parameter is TRUE, the function returns when the states of all objects in the pHandles array have been set to signaled and an input event has been received. If this parameter is FALSE, the function returns when the state of any one of the objects is set to signaled or an input event has been received.
dwMilliseconds
The time-out interval, in milliseconds. If a nonzero value is specified, the function waits until the specified objects are signaled or the interval elapses. If dwMilliseconds is zero, the function does not enter a wait state if the specified objects are not signaled; it always returns immediately. If dwMilliseconds is INFINITE, the function will return only when the specified objects are signaled.
dwWakeMask
The input types for which an input event object handle will be added to the array of object handles. This parameter can be any combination of the QS_* flags described for the GetQueueStatus function.
Remarks
The MsgWaitForMultipleObjects function determines whether the wait criteria have been met. If the criteria have not been met, the calling thread enters the wait state until the conditions of the wait criteria have been met or the time-out interval elapses.
The MsgWaitForMultipleObjects function can specify handles of any of the following object types in the pHandles array:
- Change notification
- Console input
- Event
- Memory resource notification
- Mutex
- Process
- Semaphore
- Thread
- Waitable timer
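Since this page only documents the signature, here is a rough Python ctypes sketch of the typical wait-plus-message-pump pattern; the QS_ALLINPUT value and the overall structure are assumptions for illustration rather than production code:
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
kernel32 = ctypes.windll.kernel32

QS_ALLINPUT = 0x04FF       # assumed flag value covering all input types
PM_REMOVE = 0x0001
WAIT_OBJECT_0 = 0
INFINITE = 0xFFFFFFFF

event = kernel32.CreateEventW(None, False, False, None)   # the object we wait on
handles = (wintypes.HANDLE * 1)(event)
msg = wintypes.MSG()

while True:
    result = user32.MsgWaitForMultipleObjects(1, handles, False, INFINITE, QS_ALLINPUT)
    if result == WAIT_OBJECT_0:
        break                                  # the event was signaled
    # result == WAIT_OBJECT_0 + 1 means input arrived, so drain the message queue
    while user32.PeekMessageW(ctypes.byref(msg), None, 0, 0, PM_REMOVE):
        user32.TranslateMessage(ctypes.byref(msg))
        user32.DispatchMessageW(ctypes.byref(msg))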
Requirements
See Also
MsgWaitForMultipleObjectsEx
Synchronization Functions | https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-msgwaitformultipleobjects | 2019-07-15T23:41:01 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.microsoft.com |
Retrieving Mailing Lists
Use the get-list operation to retrieve preferences of specified mailing lists. Use filters to specify mailing lists by name, ID, site ID, or site name. For information on filters, refer to the Available Filters section.
Request Packet Structure
A request XML packet retrieving a mailing list preferences includes the get-list operation node:
<packet version="1.6.7.0"> <maillist> <get-list> ... </get-list> </maillist> </packet>
The get-list node is presented by the MaillistGetListInputType
type (
maillist.xsd), and its graphical representation is as follows:
Note: The interactive schema navigator for all request packets is available here:.
- The filter node is required. It specifies the filtering rule. For information on this filter, refer to the Available Filters section. Data type: MaillistFilterType (maillist.xsd).
Remarks
You can retrieve parameters of multiple mailing lists using different filtering rules in a single packet. Add as many get-list operations as the number of different filtering rules to be applied.
<get-list> ... </get-list> ... <get-list> ... </get-list>
Response Packet Structure
The get-list node of the output XML packet is presented by type MaillistGetListOutputType (maillist.xsd). Its nodes are described below.
- The status node is required. It specifies the execution status of the get-list operation. Data type: string. Allowed values: ok | error.
- The errcode node is optional. It returns the error code if the get-list operation fails. Data type: integer.
- The errtext node is optional. It returns the error message if the get-list operation fails. Data type: string.
- The filter-id node is optional. It holds the filtering rule parameter. For information, refer to the Available Filters section. Data type: anySimpleType.
- The id node is optional. It returns ID of the mailing list in Plesk database if the operation succeeds. Data type: integer.
- The name node is optional. It is required if the operation succeeds. It returns the name of the mailing list. Data type: string.
- The list-status node is optional. It is required if the operation succeeds. It returns the status of the mailing list. Data type: boolean.
Samples
Retrieving information on a single mailing list
This request packet retrieves preferences of the mailing list called MyMailer.
<packet> <maillist> <get-list> <filter> <name>MyMailer</name> </filter> </get-list> </maillist> </packet>
Response:
<packet> <maillist> <get-list> <result> <status>ok</status> <filter-id>MyMailer</filter-id> <id>2</id> <name>MyMailer</name> <list-status>false</list-status> </result> </get-list> </maillist> </packet>
If mailing list MyMailer was not found on the server, the response is as follows:
<packet> <maillist> <get-list> <result> <status>error</status> <errcode>1013</errcode> <errtext>Maillist does not exist</errtext> <filter-id>MyMailer</filter-id> </result> </get-list> </maillist> </packet>
Retrieving information on multiple mailing lists
This request packet retrieves preferences of all mailing lists on the sites with ID 1 and ID 21.
<packet> <maillist> <get-list> <filter> <site-id>1</site-id> <site-id>21</site-id> </filter> </get-list> </maillist> </packet>
Response (if the site with ID 21 was not found on the server, and the site with ID 1 has two active mailing lists):
<packet> <maillist> <get-list> <result> <status>ok</status> <filter-id>1</filter-id> <id>12</id> <name>MailerOne</name> <list-status>true</list-status> </result> <result> <status>ok</status> <filter-id>1</filter-id> <id>17</id> <name>MailerTwo</name> <list-status>true</list-status> </result> <result> <status>error</status> <errcode>1013</errcode> <errtext>Domain does not exist</errtext> <filter-id>21</filter-id> </result> </get-list> </maillist> </packet>
.buttonMarkup( options, overwriteClasses ) Returns: jQuery | version deprecated: 1.4.0
Description: Adds button styling to an element
.buttonMarkup( options, overwriteClasses )
- options
- corners (default: true) Adds the class ui-corner-all when true and removes it when false. This gives the button-styled element rounded corners.
This option is also exposed as a data-attribute:
data-corners="false"
- icon (default: "") Adds an icon class by prefixing the value with the string "ui-icon-" and an icon position class based on the value of the iconpos option.
For example, if the value is "arrow-r" and the value of the iconpos option is "left", then .buttonMarkup() will add the classes ui-icon-arrow-r and ui-btn-icon-left to each of the set of matched elements.
This option is also exposed as a data-attribute:
data-icon="arrow-r"
- iconpos (default: "left") Adds an icon position class by prefixing the value with the string "ui-btn-icon-" when the button-styled element has an icon.
For example, if the value is "right" and the button-styled element has an icon, then the class ui-btn-icon-right will be added to each of the set of matched elements.
This option is also exposed as a data-attribute:
data-iconpos="right"
- iconshadow (default: false) This option is deprecated in 1.4.0 and will be removed in 1.5.0.
Adds the class ui-shadow-icon to each of the set of matched elements when set to true and the button-styled element has an icon.
This option is also exposed as a data-attribute:(version deprecated: 1.4.0)
data-iconshadow="true"
- inline (default: false) Adds the class ui-btn-inline to each of the set of matched elements when set to true.
This option is also exposed as a data-attribute:
data-inline="true"
- mini (default: false) Adds the class ui-mini to each of the set of matched elements when set to true.
This option is also exposed as a data-attribute:
data-mini="true"
- shadow (default: true) Adds the class ui-shadow to each of the set of matched elements when set to true.
This option is also exposed as a data-attribute:
data-shadow="false"
- theme (default: null, inherited from parent) The value is a letter a-z identifying one of the color swatches in the current theme, or null.
This option adds a class constructed by appending the string "ui-btn-" to the value to each of the set of matched elements. If set to null, no class is added, and the swatch is inherited from the element's parent.
For example, a value of "b" will cause the class ui-btn-b to be added to each of the set of matched elements.
This option is also exposed as a data-attribute:
data-theme="b"
- overwriteClasses (default: false) When set to true, .buttonMarkup() discards all classes on each of the set of matched elements and adds classes based on the values passed into the options argument. You can use this feature to increase performance in situations where the element you wish to enhance does not have any classes other than the button styling classes added by .buttonMarkup().
Conversely, when set to false, .buttonMarkup() first parses the existing classes found on each of the set of matched elements and computes a set of existing options based on the presence or absence of classes related to button styling already present. It separately records any classes unrelated to button styling. It then merges the options specified in the options parameter with the computed options such that the options passed in take precedence, and calculates a list of classes that must be present for those options to be expressed in the element's styling. It then re-applies the classes unrelated to button styling as well as the classes that reflect the new set of options. This means that calling .buttonMarkup() on the same element multiple times will have the expected effect:
// Initially corners are turned off
$( "#myAnchor" ).buttonMarkup({ corners: false });

// Later on we turn off shadow - the lack of corners is retained
$( "#myAnchor" ).buttonMarkup({ shadow: false });

// Later still we turn corners back on - the lack of shadow is retained
$( "#myAnchor" ).buttonMarkup({ corners: true });
As of jQuery Mobile 1.4.0, you can style button or a elements by simply adding classes.
Transition to class-based styling
Keeping the following in mind will make it easy for you to transition from button styling based on data attributes to the class-based process:
- When using icons, you must always specify an icon position class along with the icon class, because there is no longer a default icon position. In the example below the class ui-btn-icon-left is added to make sure the icon (ui-icon-arrow-r) will be displayed.
<a href="" class="ui-btn ui-icon-arrow-r ui-btn-icon-left ui-corner-all ui-shadow ui-btn-inline">Example</a>
- Although the style-related data attributes are deprecated, the data attributes related to linking behavior remain unchanged. In the example below the button is styled using classes, but the data attributes related to linking are retained.
<a href="/" data-Home</a>
- We do not recommend mixing styling based on data attributes and class-based styling during the deprecation period.
Button markup
You can use
.buttonMarkup() to style any element as a button that is attractive and useable on a mobile device. It is a convenience function that allows you to manipulate the classes related to button styling. For each element in the set of matched elements this function converts the
options parameter to a list of classes to be applied to the element, while respecting the element's existing classes that are not related to button styling. You may also set the parameter
overwriteClasses to
true for performance reasons. When
overwriteClasses is set to
true the function discards existing classes and applies the classes corresponding to the options provided.
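For instance, a short sketch of such a call (the selector and option values here are illustrative):

$( "a.my-nav-link" ).buttonMarkup({
    icon: "arrow-r",
    iconpos: "right",
    inline: true,
    corners: false
});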
Autoinitialization
The framework will automatically apply button styling to anchors that have the attribute
data-role="button" as well as
button elements, anchors contained directly in bars and controlgroup widgets. You can specify button styling options via data attributes that you add to the anchor or
button element. The data attribute corresponding to each
.buttonMarkup() option is documented in the options of
.buttonMarkup(). The example below shows the markup needed for an anchor-based button.
<a href="index.html" data-role="button">Link button</a>
Produces this anchor-based button:
Button based button:
.buttonMarkup() also automatically enhances
button elements such as the one below:
<button>Button element</button>
Disabled appearance
You can style an anchor as disabled by adding the class
ui-state-disabled.
Note: It is not inherently possible to "disable" anchors. The class
ui-state-disabled merely adds styling to make the anchor look disabled. It does not provide the same level of functionality as the
disabled attribute of a form button. It may still be possible to follow the anchor using navigation methods that do not involve the pointing device.
<a href="index.html" data-role="button" class="ui-state-disabled">Link button</a>
Produces an anchor-based button styled as disabled:
In the case of
button elements, you should apply the
ui-state-disabled class when you set the
button element's
disabled attribute:
// Toggle the class ui-state-disabled in conjunction with modifying the value
// of the button element's "disabled" property
$( "button#myButton" )
    .prop( "disabled", isDisabled )
    .toggleClass( "ui-state-disabled", isDisabled );
Inline buttons
If you want buttons to sit side-by-side but stretch to fill the width of the screen, you can use the content column grids to put normal full-width buttons into 2- or 3-columns.
Mini version.
Adding Icons to Buttons
To add an icon to a button, set the data-icon attribute on the element, specifying the icon to display:
<a href="index.html" data-role="button" data-icon="delete">Delete</a>
Icon set
The following
data-icon attributes can be referenced to create the icons shown below:
Icon positioning
By default, all icons in buttons are placed to the left of the button text.
This default may be overridden using the
data-iconpos attribute to set the icon to the right, above (top) or below (bottom) the text. For example:
<a href="index.html" data-role="button" data-icon="delete" data-iconpos="right">Delete</a>
Mini & Inline
The mini and inline attributes can be added to produce more compact buttons:
Custom Icons
To use custom icons, specify a data-icon value with a unique name (for example, "myapp-email"); the framework will prefix it with "ui-icon-" and apply the resulting class to the button. You can then write a CSS rule that targets that class's :after pseudo-element to specify the icon background source. The framework contains an inline (data URI) SVG test and adds class ui-nosvg to the html element if this is not supported. If you are using SVG icons you can use this class to provide a fallback to external PNG icons.
.ui-icon-myapp-email:after {
    background-image: url('data:image/svg+xml;...');
}
.ui-nosvg .ui-icon-myapp-email:after {
    background-image: url( "app-icon-email.png" );
}
Icon example
Grouped buttons
Occasionally, you may want to visually group a set of buttons. To get this effect, wrap a set of buttons in a container with the data-role="controlgroup" attribute.
Labels
Because the
label element will be associated with each individual
input or
button and will be hidden for styling purposes, we recommend wrapping the buttons in a
fieldset element that has a
legend which acts as the combined label for the group. Using the
label as a wrapper around an input prevents the framework from hiding it, so you have to use the
for attribute to associate the
label with the input.
Theming button-styled elements
j" will be automatically assigned the button theme of "a". Here are examples of the button theme pairings in the default theme. All buttons have the same HTML markup:
Assigning theme swatches
Buttons can be manually assigned any of the button color swatches from the theme to add visual contrast with the container they sit inside by adding the
data-theme attribute on the button markup and specifying a swatch letter.
<a href="index.html" data-role="button" data-theme="b">Swatch b</a>
Here are 2 buttons with icons that have a different swatch letter assigned via the
data-theme attribute. | https://docs.w3cub.com/jquerymobile/buttonmarkup/ | 2019-07-15T22:13:58 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.w3cub.com |
class TSL2561(i2cdrv, addr=0x49, clk=400000)¶
Creates an instance of a new TSL2561. The sensor has a dedicated "Address" pin that allows 1 of 3 available I2C addresses to be selected, depending on whether the pin is tied to ground, left floating, or tied to VDD.
init(gain=1, timing=0, pack=1)¶
Initialize the TSL2561 setting the gain, timing and kind of package.
get_raw_fullspectrum()¶
Retrieves the current raw value read on channel0 (full-spectrum photodiode).
Returns raw_fs
get_raw_infrared()¶
Retrieves the current raw value read on channel1 (infrared photodiode).
Returns raw_ir
get_raw_visible()¶
Retrieves the difference between the current raw value read on channel0 and raw value on channel1 (visible spectrum).
Returns raw_vis = (raw_fs - raw_ir) | https://docs.zerynth.com/latest/official/lib.ams.tsl2561/docs/official_lib.ams.tsl2561_tsl2561.html | 2019-07-15T22:00:46 | CC-MAIN-2019-30 | 1563195524254.28 | [] | docs.zerynth.com |
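A minimal usage sketch is shown below; the import path follows this module's name, while I2C0 and the serial console setup are assumptions based on typical Zerynth examples (check your board's pinmap for the actual I2C peripheral name).

import streams
from ams.tsl2561 import tsl2561

streams.serial()

# create the sensor on the I2C0 peripheral (default address 0x49) and initialize it
light = tsl2561.TSL2561(I2C0)
light.init(gain=1, timing=0, pack=1)

while True:
    full = light.get_raw_fullspectrum()   # channel0, full-spectrum photodiode
    ir = light.get_raw_infrared()         # channel1, infrared photodiode
    vis = light.get_raw_visible()         # channel0 - channel1
    print("full:", full, "ir:", ir, "visible:", vis)
    sleep(1000)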
BadgeOS Integration - Navigation
From PeepSo Docs
Earned badges show in user profiles, under the Badges submenu that appears right below the user profile cover. All of them are listed there. There are 3 menu items and ways for users to view their badges:
- Link on the PeepSo toolbar under Profile > Badges – can be disabled in backend.
- Link in the PeepSo Profile Widget > Badges – can be disabled in backend.
- Link in the User Profile under the cover.
Navigation in Widget, Profile > and under User Cover | https://docs.peepso.com/wiki/BadgeOS_Integration_-_Navigation | 2017-09-19T17:07:05 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.peepso.com |
Activating a Local License by Email
This page is an image-based walkthrough of activating a Toon Boom local license by email. The steps recoverable from the screenshots are: choose the email activation option, generate and save an activation request file, send the generated request file by email to Toon Boom, then input the returned response file and select the license file to complete the activation.
The Permissions API is a set of extensible authorization features that provide capabilities for determining access privileges for application resources. This chapter describes the ACL (Access Control List) features and the management of persistent resource permissions via the PermissionManager. It also explains how the PermissionResolver SPI can be used in conjunction with a custom PermissionVoter implementation, allowing you to plug in your own custom authorization logic.
The Permission interface is used in a number of places throughout the Permissions API, and defines the following methods:
public interface Permission {

    Object getResource();

    Class<?> getResourceClass();

    Serializable getResourceIdentifier();

    IdentityType getAssignee();

    String getOperation();
}
Each permission instance represents a specific resource permission, and contains three important pieces of state:
- The assignee, which is the identity to which the permission is assigned.
- The operation, which is a string value that represents the exact action that the assigned identity is allowed to perform.
- Either a direct reference to the resource (if known), or a combination of a resource class and resource identifier. This value represents the resource to which the permission applies.
To understand better what a permission is for, here are some examples:
- John is allowed to read FileA.txt
- John is allowed to load/read entity Confidential with identifier 123.
- John is allowed to view the /myapp/page page.
- John is allowed to access Mary's profile
- John is allowed to view button 'Delete' on page /myapp/page.
Basically, permissions can be string-based or type-based. In the latter case, if you are using JPA entities, you can also manage permissions for a specific entity given its identifier.
Permissions can also be grouped into roles or groups, or any other IdentityType you want. For example, let's say that we have a role which gives read access to a file. If you grant this role to users, they are going to inherit all privileges/permissions from the role they were granted. You can even grant this role to an entire group, where all members of the group are also going to inherit the privileges.
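As a rough sketch of how this typically looks in code (the PermissionManager/Identity method signatures and the injected objects are illustrative assumptions; consult the PicketLink API documentation for the exact forms):

// 'identityManager', 'permissionManager' and 'identity' are assumed to be injected.
User john = BasicModel.getUser(identityManager, "john");

// String-based resource permission: John is allowed to read FileA.txt
permissionManager.grantPermission(john, "FileA.txt", "read");

// Type-based permission for a JPA entity with identifier 123
permissionManager.grantPermission(john, Confidential.class, 123L, "load");

// Later, for the currently authenticated user:
if (identity.hasPermission("FileA.txt", "read")) {
    // allow the download
}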
Reports section consists of three elements (available in Pro edition only):
Access Report contains a table of all your courses with numbers of visits by learners.
Fig. 'Access Report'
Completion Report contains course completion information for all your courses and learners. Completion date can be shown if respective option is enabled.
Fig. 'Completion Report'
User Grades Report allows to track selected user’s progress in all their courses.
Fig. 'User Grades Report'
It is possible to export the data of all three types of reports. Access Report and Completion Report data is available both in CSV and XLS formats, User Grades Report data is available in XLS format only.
If you have any questions or suggestions regarding our help documentation, please post them to our ticket system. | http://docs.joomlalms.com/reports.htm | 2017-09-19T17:07:44 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.joomlalms.com |
Tingbot OS
Download: check the latest release on Github.
Tingbot OS is a customised Raspbian "Jessie" Linux. If you're familiar with Linux, feel free to SSH in and have a poke around! The default user has username 'pi' and password 'raspberry'.
Apps
Apps on the home screen are stored in
/apps. There are also two symlinks in this folder,
/apps/home and
/apps/startup.
/apps/homeThis symlink points to the app to run when the 'home' button combo is pressed, or when an app exits. By default, this points to our 'home screen', the springboard.
/apps/startupThis symlink points to the app to launch at startup. If your Tingbot is running only one app most of the time, it makes sense to run that at startup. By default this points to
/app/home.
For example, to change the startup link, SSH into the Tingbot and do-
ln -snf /path/to/your/app.tingapp /apps/startup
Logs
When working on a Tingbot app, it can be useful to see the logs of the running app. On Tingbot, you can view the log stream of the current app by using the
tbtail command.
Updates
Updates can be installed from the Springboard settings pane, or SSH in and run the
tbupgrade command.
More info
For more information on Tingbot OS, check out the Github repos-
Tingbot OS
tingbot/tingbot-os
Builds the tingbot-os.deb file and disk images
tbprocessd
tingbot/tbprocessd
Daemon process that manages the running of apps on Tingbot OS
springboard
tingbot/springboard
The Tingbot home screen | http://docs.tingbot.com/reference/tingbot-os/ | 2017-09-19T16:51:04 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.tingbot.com |
Attention
This API is currently available in C++ and Python.
class OEFunc2 : public OEFunc1
The OEFunc2 is an abstract base class. This class defines the interface for functions which take a set of variables and compute a function value, gradients and second derivatives.
bool operator()(const double *x, double *h, double *g)
This method defines the interface for function evaluation along with gradients and second derivatives. The corresponding second derivatives for the given set of variable is returned in the second argument, and the gradients are returned in the third argument.
There should be a one to one correspondence between the elements of the variable and that second derivatives and gradients arrays. Methods that override this operator method must not initialize the second derivatives and the gradient arrays, but rather assume that the arrays has already been initialized. | https://docs.eyesopen.com/toolkits/cpp/oefftk/OEOptClasses/OEFunc2.html | 2017-09-19T17:17:11 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.eyesopen.com |
OEGetXLogPResult(OEMolBase mol, OEFloatArray atomxlogps=None) -> OEXLogPResult
Returns an OEXLogPResult object. The object contains the XLogP for the given molecule as described in the LogP section and a boolean flag indicating whether or not the computation is valid. The main reason the computation may fail is if one or more atoms are not parameterized. The returned value will be equal to the sum on the individual atom contributions plus the linear regression constant -0.127.
The atomxlogps parameter can be used to retrieve the contribution of each atom to the total XLogP as shown in Listing 2. See example in Figure: Example of depicting the atom contributions of the XLogP.
This function exists to make the calling by wrapped languages more convenient. This function should be preferred over the original OEGetXLogP function for wrapped languages.
Listing 2: Example of retrieving individual atom contributions to XLogP
atomXLogP = OEFloatArray(mol.GetMaxAtomIdx()) result = OEGetXLogPResult(mol, atomXLogP) if (result.IsValid()): print("XLogP =", result.GetValue()) for atom in mol.GetAtoms(): idx = atom.GetIdx() print(idx, atomXLogP[idx]) else: print("XLogP failed for molecule")
Example of depicting the atom contributions of the XLogP
See also
The Python script that visualizes the atom contributions of the total XLogP can be downloaded from the OpenEye Python Cookbook | https://docs.eyesopen.com/toolkits/python/molproptk/OEMolPropFunctions/OEGetXLogPResult.html | 2017-09-19T17:18:09 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.eyesopen.com |
Exporter Unit¶
The purpose of exporter units is to provide an easy way to customize the plainbox reports by delegating the customization bits to providers.
Each exporter unit expresses a binding between code (the entry point) and data. Data can be new options, different Jinja2 templates and/or new paths to load them.
File format and location¶
Exporter entry units are regular plainbox units and are contained and shipped with plainbox providers. In other words, they are just the same as job and test plan units, for example.
Fields¶
Following fields may be used by an exporter unit.
id:
- (mandatory) - Unique identifier of the exporter. This field is used to look up and store data so please keep it stable across the lifetime of your provider.
summary:
- (optional) - A human readable name for the exporter. This value is available for translation into other languages. It is used when listing exporters. It must be one line long, ideally it should be short (50-70 characters max).
entry_point:
- (mandatory) - This is a key for a pkg_resources entry point from the plainbox.exporters namespace. Allowed values are: jinja2, text, xlsx, json and rfc822.
file_extension:
- (mandatory) - Filename extension to use when the exporter stream is saved to a file.
options:
(optional) - comma/space/semicolon separated list of options for this exporter entry point. Only the following options are currently supported.
- text and rfc822:
- with-io-log
- squash-io-log
- flatten-io-log
- with-run-list
- with-job-list
- with-resource-map
- with-job-defs
- with-attachments
- with-comments
- with-job-via
- with-job-hash
- with-category-map
- with-certification-status
- json:
Same as for text and additionally:
- machine-json
- xlsx:
- with-sys-info
- with-summary
- with-job-description
- with-text-attachments
- with-unit-categories
- jinja2:
- No options available
data:
- (optional) - Extra data sent to the exporter code, to allow all kind of data types, the data field only accept valid JSON. For exporters using the jinja2 entry point, the template name and any additional paths to load files from must be defined in this field. See examples below.
Example¶
This is an example exporter definition:
unit: exporter
id: my_html
_summary: Generate my own version of the HTML report
entry_point: jinja2
file_extension: html
options: with-foo with-bar
data: {
    "template": "my_template.html",
    "extra_paths": [
        "/usr/share/javascript/lib1/",
        "/usr/share/javascript/lib2/",
        "/usr/share/javascript/lib3/"]
}
The provider shipping such unit can be as follow:
├── data
│   ├── my_template.css
│   └── my_template.html
├── units
├── my_test_plans.pxu
└── exporters.pxu
Note that exporters.pxu is not strictly needed to store the exporter units, but keeping them in a dedicated file is a good practice.
How to use exporter units?¶
In order to call an exporter unit from provider foo, you just need to use in in the launcher.
Example of a launcher using custom exporter unit:
#!/usr/bin/env checkbox-cli
[launcher]
launcher_version = 1
[transport:local_file]
type = file
path = /tmp/submission.html
[exporter:my_html]
unit = com.foo.bar::my_html
[report:local_html]
transport = local_file
exporter = my_html
For more information about generating reports see Generating reports | http://checkbox.readthedocs.io/en/latest/units/exporter.html | 2017-09-19T17:03:48 | CC-MAIN-2017-39 | 1505818685912.14 | [] | checkbox.readthedocs.io |
Conveniences for decoding and encoding url encoded queries.
Plug allows a developer to build query strings that map to Elixir structures in order to make manipulation of such structures easier on the server side. Here are some examples:
iex> decode("foo=bar")["foo"] "bar"
If a value is given more than once, the last value takes precedence:
iex> decode("foo=bar&foo=baz")["foo"] "baz"
Nested structures can be created via
[key]:
iex> decode("foo[bar]=baz")["foo"]["bar"] "baz"
Lists are created with
[]:
iex> decode("foo[]=bar&foo[]=baz")["foo"] ["bar", "baz"]
Maps can be encoded:
iex> encode(%{foo: "bar", baz: "bat"}) "baz=bat&foo=bar"
Encoding keyword lists preserves the order of the fields:
iex> encode([foo: "bar", baz: "bat"]) "foo=bar&baz=bat"
When encoding keyword lists with duplicate keys, the key that comes first takes precedence:
iex> encode([foo: "bar", foo: "bat"]) "foo=bar"
Encoding named lists:
iex> encode(%{foo: ["bar", "baz"]}) "foo[]=bar&foo[]=baz"
Encoding nested structures:
iex> encode(%{foo: %{bar: "baz"}}) "foo[bar]=baz"
Decodes the given binary
Decodes the given tuple and stores it in the accumulator. It parses the key and stores the value into the current accumulator
Encodes the given map or list of tuples
Decodes the given binary.
Decodes the given tuple and stores it in the accumulator. It parses the key and stores the value into the current accumulator.
Parameter lists are added to the accumulator in reverse order, so be sure to pass the parameters in reverse order.
Encodes the given map or list of tuples.
© 2013 Plataformatec
Licensed under the Apache License, Version 2.0. | http://docs.w3cub.com/phoenix/plug/plug.conn.query/ | 2017-09-19T17:05:09 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.w3cub.com |
The commit dialog is used for committing changes to the Team Server. You can enter a message and - if applicable - select related stories.
Enter a message describing the changes you have made. This message may contain multiple lines. If you want to confirm the form by keyboard and you are inside the message box you can use Ctrl+Enter.
Related stories
Tick the boxes next to the stories that are related to your commit. We recommended small sets of changes and then there is usually just one related story.
Changes in model
If there are changes in the model this tab page will show a summary of those changes in the form of a grid.
Changes on disk
If there are changes on disk this tab page will show a summary of those changes in the form of a grid. The tab page will be hidden if there are no disk changes. In the very common case that there are model changes and the only change on disk is the project file (.mpr) it will also be hidden, because it does not add useful information in that case. | https://docs.mendix.com/refguide6/commit-dialog | 2017-09-19T17:06:42 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.mendix.com |
OverviewOverview
Vendor Advance Report Addon for Magento 2 is a CedCommerce Multi-Vendor Marketplace add-on that provides the facility to the vendors and admin to view the reports related to CedCommerce Multi-Vendor Marketplace.
Once the Vendor Advance Report add-on is installed and enabled, the Vendor Advance Report sub-menu appears on the MARKETPLACE menu of the Admin panel and the Advance Report menu appears on the left navigation bar of the Vendor panel.
It is compatible only with the CedCommerce Multi-Vendor Marketplace extension.
This add-on facilitates the vendors to view the following reports:
Admin can view the Sales Report and the Payment Report of all the vendors from the admin panel.
Note: Vendor and Admin can also view the returned order reports if the Vendor RMA add-on is already installed in Magento 2.
molly.wurfl – Device Detection¶
This is a utility app which provides device detection
Configuration¶
- expose_view: If defined, this exposes a single page which allows users to see what their device is being identified as.
Sample:
Application('molly.wurfl', 'device_detection', 'Device detection', display_to_user = False, expose_view = True, ),
Troubleshooting¶
The WURFL database is the part of the Molly framework that needs the most upkeep due to the ever-changing nature of the mobile market. The installation process for Molly will update your Wurfl database to the most recent version at every install and update (except when in development mode), but new devices may not yet appear in the Wurfl, and the Wurfl neglects to cover user agents belonging to desktop browsers. Therefore, Molly maintains a “local patch” to the Wurfl in molly/wurfl/data/local_patch.xml. This patch file format is documented by Wurfl and is merged into the main Wurfl file at update time. This file lives in the main Molly repository, and if you have come across a device which the Wurfl does not recognise, we would encourage you to commit it back to the main Molly Project as a patch so all users can benefit.
When modifying this file, you must first identify the user agent of the device, and if this device is a newer version of an already existing device, the Wurfl ID of the older version of the device (assuming that the newer device inherits the attributes of the older version). You can then simply add a new line like so:
<device user_agent="User-Agent-Of-My-New-Device" fall_back="wurfl_id_of_old_device" id="my_new_wurfl_id"/>
New devices can be added following the format specified in the main Wurfl docs.
The Wurfl will cover most mobile devices eventually, so you should be able to remove this patch after a period of time. Desktop browsers appear to be slower to be updated in the Wurfl desktop patch. | http://molly.readthedocs.io/en/latest/ref/wurfl.html | 2017-11-18T02:27:58 | CC-MAIN-2017-47 | 1510934804518.38 | [] | molly.readthedocs.io |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
If you want to learn why a health check is currently failing, or why it failed most recently, you can get the failure reason for the most recent failure. Send a GET request to the /Route 53 API version/healthcheck/health check ID/lastfailurereason resource.
Namespace: Amazon.Route53
Assembly: AWSSDK.dll
Version: (assembly version)
Container for the necessary parameters to execute the GetHealthCheckLastFailureReason service method.
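A minimal sketch of calling the operation with the .NET SDK is shown below; the health check ID is a placeholder, and the loop over HealthCheckObservations assumes the usual response shape for this operation.

var route53Client = new AmazonRoute53Client();

var response = route53Client.GetHealthCheckLastFailureReason(new GetHealthCheckLastFailureReasonRequest
{
    HealthCheckId = "11111111-2222-3333-4444-555555555555" // placeholder health check ID
});

// Each observation reports what one Route 53 health checker last saw.
foreach (var observation in response.HealthCheckObservations)
{
    Console.WriteLine("{0} {1}: {2}",
        observation.Region,
        observation.IPAddress,
        observation.StatusReport.Status);
}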
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MRoute53IRoute53GetHealthCheckLastFailureReasonGetHealthCheckLastFailureReasonRequestNET45.html | 2017-11-18T03:21:09 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Creates a platform application object for one of the supported push notification services, such as APNS and GCM, to which devices and mobile apps may register. You must specify the attributes relevant to your platform when using the CreatePlatformApplication action. For APNS/APNS_SANDBOX, PlatformCredential is "private key". For GCM, PlatformCredential is "API key". For ADM, PlatformCredential is "client secret". The PlatformApplicationArn that is returned when using CreatePlatformApplication is then used as an attribute for the CreatePlatformEndpoint action. For more information, see Using Amazon SNS Mobile Push Notifications.
Namespace: Amazon.SimpleNotificationService
Assembly: AWSSDK.dll
Version: (assembly version)
Container for the necessary parameters to execute the CreatePlatformApplication service method.
This example shows how to create a mobile push application.
var snsClient = new AmazonSimpleNotificationServiceClient();
var request = new CreatePlatformApplicationRequest
{
    Attributes = new Dictionary<string, string>()
    {
        { "PlatformCredential", "AIzaSyDM1GHqKEdVg1pVFTXPReFT7UdGEXAMPLE" }
    },
    Name = "TimeCardProcessingApplication",
    Platform = "GCM"
};
snsClient.CreatePlatformApplication(request);
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MSNSSNSCreatePlatformApplicationCreatePlatformApplicationRequestNET35.html | 2017-11-18T03:21:23 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.aws.amazon.com |
To install IaaS on your distributed virtual or physical Windows servers, you download a copy of the IaaS installer from the vRealize Automation appliance.
About this task
If you see certificate warnings during this process, continue past them to finish the installation.
Prerequisites
Configure the Primary vRealize Automation Appliance and, optionally, Add Another vRealize Automation Appliance.
- Using an account with administrator privileges, log in to the Windows server.
- Point a Web browser to the following URL on the vRealize Automation appliance.
- Click IaaS Installer.
- Save setup__vrealize-automation-appliance-FQDN@5480 to the Windows server.
Do not change the installer file name. It is used to connect the installation to the vRealize Automation appliance.
- Download the installer file to each IaaS Windows server on which you are installing components.
What to do next
Install an IaaS database, see Choosing an IaaS Database Scenario. | https://docs.vmware.com/en/vRealize-Automation/7.1/com.vmware.vrealize.automation.doc/GUID-B5740EC5-563D-4850-9DC6-335128970E86.html | 2017-11-18T02:38:42 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
Your deployment must meet minimum system resources to install virtual appliances and minimum hardware requirements to install IaaS components on the Windows Server.
For operating system and high-level environment requirements, including information about supported browsers and operating systems, see the vRealize Automation Support Matrix.
The Hardware Requirements table shows the minimum configuration requirements for deployment of virtual appliances and installation of IaaS components. Appliances are pre-configured virtual machines that you add to your vCenter Server or ESXi inventory. IaaS components are installed on physical or virtual Windows 2008 R2 SP1, or Windows 2012 R2 servers.
An Active Directory is considered small when there are up to 25,000 users in the OU to be synced in the ID Store configuration. An Active Directory is considered large when there are more than 25,000 users in the OU. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vrealize.automation.doc/GUID-0E9088C0-DEAF-4B8D-997B-AF316A74AB6B.html | 2017-11-18T02:38:30 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
The plug-in factory defines how Orchestrator finds objects in the plugged-in technology and performs operations on the objects.
To create the plug-in factory, you must implement and extend the IPluginFactory interface from the Orchestrator plug-in API. The plug-in factory class that you create defines the finder functions that Orchestrator uses to access objects in the plugged-in technology. The factory allows the Orchestrator server to find objects by their ID, by their relation to other objects, or by searching for a query string.
The plug-in factory performs the following principal tasks.
Finds objects
You can create functions that find objects according to their name and type. You find objects by name and type by using the IPluginFactory.find() method.
Finds objects related to other objects
You can create functions to find objects that relate to a given object by a given relation type. You define relations in the vso.xml file. You can also create finders to find dependent child objects that relate to all parents by a given relation type. You implement the IPluginFactory.findRelation() method to find any objects that are related to a given parent object by a given relation type. You implement the IPluginFactory.hasChildrenInRelation() method to discover whether at least one child object exists for a parent instance.
Define queries to find objects according to your own criteria
You can create object finders that implement query rules that you define. You implement the IPluginFactory.findAll() method to find all objects that satisfy query rules you define when the factory calls this method. You obtain the results of the findAll() method in a QueryResult object that contains a list of all of the objects found that match the query rules you define.
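A rough Java sketch of those three tasks is shown below. The method names follow the description above, but the exact signatures, the QueryResult population calls, and the myClient helper are illustrative assumptions; see the Orchestrator Plug-In API Reference for the real interface.

public class MyPluginFactory /* implements IPluginFactory */ {

    private final MyTechnologyClient myClient;   // illustrative connection to the plugged-in technology

    public MyPluginFactory(MyTechnologyClient myClient) {
        this.myClient = myClient;
    }

    // Find a single object by its type and id.
    public Object find(String type, String id) {
        if ("Host".equals(type)) {
            return myClient.getHostById(id);
        }
        return null;
    }

    // Find all objects of a type matching a query string you define.
    public QueryResult findAll(String type, String query) {
        QueryResult result = new QueryResult();
        for (Object host : myClient.findHosts(query)) {
            result.addElement(host);              // illustrative: add each match to the result
        }
        return result;
    }

    // Find child objects related to a parent by a relation type declared in vso.xml.
    public java.util.List<?> findRelation(String parentType, String parentId, String relationName) {
        if ("Host".equals(parentType) && "VirtualMachines".equals(relationName)) {
            return myClient.getVmsForHost(parentId);
        }
        return java.util.Collections.emptyList();
    }
}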
For more information about the IPluginFactory interface, all of its methods, and all of the other classes of the plug-in API, see Orchestrator Plug-In API Reference. | https://docs.vmware.com/en/vRealize-Orchestrator/6.0/com.vmware.vrealize.orchestrator-dev.doc/GUIDA07EAFD2-F304-4C8F-AEB9-040575E7BCE0.html | 2017-11-18T02:34:47 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
Add Comments, Resources, and Files
Introduction
You can add several resources, files, and even commonly used comments to each question on a form. Utilizing these tools will cut down on the amount of time you spend filling out a form on your mobile device. Comments, files, and resources are customized to fit your specific needs.
First, let's define what each of these additional areas means and what we recommend for each area.
- Canned Comments: This is where you should write commonly used comments given to the question. You can even associate a specific answer to a canned comment. When the user answers the question, they will be able to choose a canned comment and have it pre-fill the comment field with that canned comment.
- Resources: This is where you should write any reference material, procedures, or regulations that are relevant to the question being asked. When the user answers the question, they will be able to read any relevant resources and even add a resource to the comment field.
- Files: This is where you should add any images, pdfs, or word documents relevant to the question. This can be used to display schematics, manufacturer manuals, dimension tables, or any other diagrams to every user who answers the question. They can even attach images to the question and sketch directly on the attached image.
Adding Comments & Resources
- Within the Form Builder in the web app, click into the form you want to work in.
- Click Comments & Resources below the question you want to add your comment or resource to.
- Click the + button in the section you'd like to add to.
- Click the Create button in the dialog after you've created the comment or resource.
File types accepted:
- Images (.jpg, .png, .gif)
- Microsoft Office Documents (.doc, .xls, .ppt)
- PDFs (.pdf)
- Video (.mp4)
- Audio (.mp3)
- We strongly recommend you keep all uploaded file sizes < 10mb. | http://docs.inspectall.com/article/152-add-comments-resources-and-files | 2017-12-11T02:19:47 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.inspectall.com |
Working with Data¶
Where to Find Data¶
Help us add useful sources of Free data to this list.
Raster data
- ReadyMap.org - Free 15m imagery, elevation, and street tiles for osgEarth developers
- USGS National Map - Elevation, orthoimagery, hydrography, geographic names, boundaries, transportation, structures, and land cover products for the US.
- NASA BlueMarble - NASA’s whole-earth imagery (including topography and bathymetry maps)
- Natural Earth - Free vector and raster map data at various scales
- Virtual Terrain Project - Various sources for whole-earth imagery
- Bing Maps - Microsoft’s worldwide imagery and map data ($)
Elevation data
- CGIAR - World 90m elevation data derived from SRTM and ETOPO (CGIAR European mirror)
- SRTM30+ - Worldwide elevation coverage (including batymetry)
- GLCF - UMD’s Global Land Cover Facility (they also have mosaiced LANDSAT data)
- GEBCO - Genearl Batymetry Chart of the Oceans
Feature data
- OpenStreetMap - Worldwide, community-sources street and land use data (vectors and rasterized tiles)
- Natural Earth - Free vector and raster map data at various scales
- DIVA-GIS - Free low-resolution vector data for any country
Tips for Preparing your own Data¶
Processing Local Source Data
If you have geospatial data that you would like to view in osgEarth, you can usually use the GDAL driver. If you plan on doing this, try loading it as-is first. If you find that it’s too slow, here are some tips for optimizing your data for tiled access.
Reproject your data
osgEarth will reproject your data on-the-fly if it does not have the necessary coordinate system. For instance, if you are trying to view a UTM image on a geodetic globe (epsg:4326). However, osgEarth will run much faster if your data is already in the correct coordinate system. You can use any tool you want to reproject your data such as GDAL, Global Mapper or ArcGIS.
For example, to reproject a UTM image to geodetic using gdalwarp:
gdalwarp -t_srs epsg:4326 my_utm_image.tif my_gd_image.tif
Build internal tiles
Typically formats such as GeoTiff store their pixel data in scanlines. However, using a tiled dataset will be more efficient for osgEarth because of how it uses tiles internally.
To create a tiled GeoTiff using gdal_translate, issue the following command:
gdal_translate -of GTiff -co TILED=YES input.tif output.tif
Take it a step further and use compression to save space. You can use internal JPEG compression if your data contains no transparency:
gdal_translate -of GTiff -co TILED=YES -co COMPRESS=JPG input.tif output.tif
Build overviews
Adding overviews (also called ''pyramids'' or ''rsets'') can sometimes increase the performance of a large data source in osgEarth. You can use the gdaladdo utility to add overviews to a dataset:
gdaladdo -r average myimage.tif 2 4 8 16
Building tile sets with osgearth_conv
Pre-tiling your imagery can speed up load time dramatically, especially over the network. In fact, if you want to serve your data over the network, this is the only way!
osgearth_conv is a low-level conversion tool that comes with osgEarth. One useful application of the tool is tile up a large GeoTIFF (or other input) in a tiled format. Note: this approach only works with drivers that support writing (MBTiles, TMS).
To make a portable MBTiles file:
osgearth_conv --in driver gdal --in url myLargeFile.tif --out driver mbtiles --out filename myData.mbtiles --out format jpg
If you want to serve tiles from a web server, use TMS:
osgearth_conv --in driver gdal --in url myLargeData.tif --out driver tms --out url myLargeData/tms.xml --out format jpg
That will yield a folder (called “myLargeData” in this case) that you can deploy on the web behind any standard web server (e.g. Apache).
Tip: If you are tiling elevation data, you will need to add the --elevation option.
Tip: The jpg format does NOT support transparency. If your data has an alpha channel, use png instead.
Just type osgearth_conv for a full list of options. The --in and --out options correspond directly to properties you would normally include in an Earth file.
Building tile sets with the packager
Another way to speed up imagery and elevation loading in osgEarth is to build tile sets.
This process takes the source data and chops it up into a quad-tree hierarchy of discrete tiles that osgEarth can load very quickly. Normally, if you load a GeoTIFF (for example), osgEarth has to create the tiles at runtime in order to build the globe; Doing this beforehand means less work for osgEarth when you run your application.
osgearth_package
osgearth_package is a utility that prepares source data for use in osgEarth. It is optional - you can run osgEarth against your raw source data and it will work fine - but you can use osgearth_package to build optimized tile sets that will maximize performance in most cases. Usage:
osgearth_package file.earth --tms --out output_folder
This will load each of the data sources in the earth file (file.earth in this case) and generate a TMS repository for each under the folder output_folder. You can also specify options:
--bounds xmin ymin xmax ymax : Bounds to package (in map coordinates; default = entire map)
--out-earth : Generate an output earth file referencing the new repo
--overwrite : Force overwriting of existing files
--keep-empties : Writes fully transparent image tiles (normally discarded)
--db-options : An optional OSG options string
--verbose : Displays progress of the operation
Spatial indexing for feature data
Large vector feature datasets (e.g., shapefiles) will benefit greatly from a spatial index. Using the ogrinfo tool (included with GDAL/OGR binary distributions) you can create a spatial index for your vector data like so:
ogrinfo -sql "CREATE SPATIAL INDEX ON myfile" myfile.shp
For shapefiles, this will generate a ”.qix” file that contains the spatial index information. | http://osgearth.readthedocs.io/en/latest/data.html | 2017-12-11T02:06:32 | CC-MAIN-2017-51 | 1512948512054.0 | [] | osgearth.readthedocs.io |
RNA-Seq Analysis¶
Overview¶
The RNA-Seq Analysis Service provides services for aligning, assembling, and testing differential expression on RNA-Seq data. The service provides three recipes for processing RNA-Seq data: 1) Rockhopper, based on the popular Rockhopper tool for processing prokaryotic RNA-Seq data; 2) Tuxedo, based on the tuxedo suite of tools (i.e., Bowtie, Cufflinks, Cuffdiff); and 3) Host HISAT2, which aligns reads against a selected host genome using HISAT2 and then applies the remainder of the Tuxedo suite.
Parameters¶
Strategy¶
This parameter governs the software used to align, assemble, quantify, and compare reads from different samples.
Rockhopper¶
Runs the Rockhopper software designed for RNA-Seq on prokaryotic organisms. With this strategy selected, Rockhopper will handle all steps (alignment, assembly, quantification, and differential expression testing).
Tuxedo¶
Runs the tuxedo strategy using Bowtie2, Cufflinks, and CuffDiff to align, assemble, and compare samples respectively. This is a similar strategy as used by RNA-Rocket.
Host HISAT2¶
Runs HISAT2 for alignment against the selected host and then uses the remainder of the Tuxedo strategy (Cufflinks and CuffDiff) to assemble and compare samples, respectively.
For a more detailed description of the functionality in App Studio in QuarkXPress, please consult the QuarkXPress App Studio Guide on the Downloads page. It's also located in the Documents folder in the QuarkXPress application folder on your computer.
The basic App Studio publishing process involves the following steps:
The next step is to Test and Preview your content. | http://docs.appstudio.net/display/AppStudio/QuarkXPress+to+App+Studio | 2017-12-11T01:51:13 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.appstudio.net |
DeleteConfigurationSetTrackingOptions
Deletes an association between a configuration set and a custom domain for open and click event tracking. For more information, see Configuring Custom Domains to Handle Open and Click Tracking in the Amazon SES Developer Guide.
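For example, using the AWS CLI (the configuration set name is a placeholder):

aws ses delete-configuration-set-tracking-options --configuration-set-name my-configuration-set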
Auto Answer All Forms
There may be instances where your process requires you to only find exceptions or priorities without individually going through every question in a folder. For such situations we have a feature called "Pencil Whipping". Pencil Whipping auto-answers every unanswered question on every uncompleted form in a folder that has default answers set up. Again, tapping this button automatically answers each question in every form in the folder that meets all of the following:
- Has a default answer
- Is unanswered
- Is on a Form that is marked "Incomplete"
To Auto Answer all Forms in a folder:
- Tap the menu while in a folder.
- Tap Auto Answer All Forms:
- Confirm and watch it do its job:
The library provides 4 predefined skins:
To set the skin for a dhtmlxPopup object, use the setSkin method.
var myPop = new dhtmlXPopup(); myPop.setSkin("dhx_web");
The following priority (from higher to lower) is used to determine the skin to apply:
For example, if the only skin CSS file you include on the page is "dhxpopup_dhx_terrace" and you instantiate dhtmlxPopup without specifying the skin parameter, the "dhx_terrace" skin will be detected automatically.
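A sketch of this automatic-detection case is shown below (the file paths are illustrative): only the dhx_terrace skin CSS is included and no skin is passed, so "dhx_terrace" is applied.

<link rel="stylesheet" type="text/css" href="codebase/skins/dhxpopup_dhx_terrace.css">
<script src="codebase/dhtmlxpopup.js"></script>
<script>
    // no setSkin() call and no skin argument needed
    var myPop = new dhtmlXPopup();
</script>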
Decent Comments is a flexible WordPress plugin that allows you to show comments and provides a lot of useful options. It's a great alternative to your dull Recent Comments widget, but there's a lot more you can do with it. If you want to show comments along with their authors' avatars and an excerpt of their comment, then this is the right plugin for you.
Decent Comments shows what people say.
This tool provides configurable widgets, shortcodes and an API to display comments in sensible ways, including author avatars, links, comment excerpts …
Documentation
Installation
Decent Comments is a free plugin and can be installed directly from your WordPress Dashboard. Simply visit the Plugins section and search for decent comments by itthinx.
Download
You can also download the Decent Comments WordPress Plugin from the Plugin Directory here. | http://docs.itthinx.com/document/decent-comments/ | 2018-07-15T23:01:10 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.itthinx.com |
Purging API cache with surrogate keys
Fastly makes it possible for you to cache your API so you can accelerate the performance of your service-oriented architecture. Of course, caching your API is one thing - efficiently invalidating the API cache is another matter entirely. If you've already enabled API caching and implemented API cache control, you've probably run into this problem, which was aptly described by Phil Karlton:
There are only two hard things in computer science: cache invalidation and naming things.
This guide explains how to use the Fastly API to purge your API cache with surrogate keys. Surrogate keys allow you to reduce the complexity of caching an API by combining multiple cache purges into a single key-based purge.
What's a surrogate key?
Surrogate keys allow you to selectively purge related content. Using the
Surrogate-Key header, you can "tag" an object, such as an image or a blog post, with one or more keys. When Fastly fetches an object from your origin server, we check to see if you've specified a
Surrogate-Key header. If you have, we add the response to a list we've set aside for each of the keys.
When you want to purge all of the responses associated with a key, issue a key purge request and all of the objects associated with that key will be purged. This makes it possible to combine many purges into a single request. Ultimately, it makes it easier to manage categorically related data.
To learn more about surrogate keys and to see how you can integrate them into your application, see our guide on getting started with surrogate keys.
Example: Purging categories
To see how surrogate keys work in conjunction with an API endpoint, imagine you have an online store and an API endpoint that returns the details of a product. When a user wants to get information about a specific product, like a keyboard, the request might look like this:
GET /product/12345
If your API is using Fastly and the response is not already cached, Fastly will make a request to your API's origin server and receive a response like this:
HTTP/1.1 200 OK Content-Type: text/json Cache-Control: private Surrogate-Control: max-age=86400 Surrogate-Key: peripherals keyboards {id: 12345, name: "Uber Keyboard", price: "$124.99"}
You knew that entire product categories would occasionally need to be purged, so you thoughtfully included the
peripherals and
keyboards product categories as keys in the
Surrogate-Key header. When Fastly receives a response like this, we add it to an internal map, strip out the
Surrogate-Key header, cache the response, and then deliver it to the end user.
Now imagine that your company decides to apply a 10% discount to all peripherals. You could issue the following key purge to invalidate all objects tagged with the
peripherals surrogate key:
PURGE /service/:service_id/peripherals
When Fastly receives this request, we reference the list of content associated with the
peripherals surrogate key and systematically purge every piece of content in the list.
Relational dependencies
Your API can use surrogate keys to group large numbers of items that may eventually need to be purged at the same time. Consider the example presented above. The API for your online store could have surrogate keys for product types, specific sales, or manufacturers.
From this perspective, the
Surrogate-Key header provides Fastly with information about relations and possible dependencies between different API endpoints. Wherever there's a relation between two different types of resources in an API, there might be a good reason to keep them categorized by using a surrogate key.
Example: Purging product reviews and action shots
To learn how surrogate keys can help with relational dependencies, imagine that your online store wants to allow buyers to post product reviews and "action shots" depicting the products in use. To support these new features, you'll need to change your API. First, you'll need to create a new
review endpoint:
GET /review/:id POST /review
Next, you'll need to create a new
action_shot endpoint:
POST /product/:id/action_shot
GET /product/:id/action_shot/:shot_id
Since both of the new endpoints refer to specific products, they'll need to be purged when relevant product information changes. Surrogate keys are a perfect fit for this use case. You can implement them by modifying the review and action_shot endpoints to return the following header:
Surrogate-Key: product/:id
This relates each of the endpoints to a specific product in the cache (where :id is the product's unique identifier). When the product information changes, your API issues the following key purge:
PURGE /service/:service_id/product/:id
When Fastly receives this request, we purge each of the related endpoints at the same time.
Variations on a theme
You'll also want to consider using surrogate keys if your API has many different endpoints that all derive from a single source. Any time the source data changes, each of the endpoints associated with it will need to be purged from the cache. By associating each of the endpoints with a surrogate key, a single purge can be issued to purge them from the cache when the source changes.
Example: Purging product images
To understand how this works, imagine that your online store has an API endpoint for retrieving product images in various sizes:
GET /product/:id/image/:size
This endpoint returns an image of the appropriate
:size (e.g.,
small,
medium,
large) for the product of the given
:id. To save disk space, you opt to have the API generate each specifically sized image from a single source image using an imaging library like ImageMagick. Since the sales and marketing team uses the API to upload new product images, you set up the endpoint to include a surrogate key:
Surrogate-Key: product/:id/image
When the PUT endpoint for uploading a product image is called, the API sends the following purge request:
PURGE /service/:service_id/product/:id/image
When Fastly receives this request, we purge all size variations of the product image. | https://docs.fastly.com/guides/api-caching/purging-api-cache-with-surrogate-keys | 2018-07-15T22:35:08 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.fastly.com |
22 Apr 2020
Enabled Editors
This release contains an option for setting preferred and available editors for opening files from the Explore menu. Other general fixes are also included.
Features:
- Option for setting preferred and available editors with _enabled_editors (see the example after this list)
- Improved domain status UI for Cloudflare proxy state
- Date and datetime inputs now show full timezone
- Improved screenshot rendering for Dashboard section
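For example, a front matter or configuration snippet along these lines restricts which editors are offered for a file (the editor names shown are illustrative — check the _enabled_editors documentation for the exact values supported):

_enabled_editors:
  - visual
  - content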
Fixes:
- Images that are downsized and have an EXIF orientation header now correctly resized
- Title field correctly populated after initially creating a new HTML file
- Resolved some edge case issues with editor cookies | https://docs.cloudcannon.com/2020/04/22/enabled-editors/ | 2020-05-25T05:53:44 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.cloudcannon.com |
The aim of this topic is to explain how you can create a simple web application using Telerik OpenAccess ORM and Telerik Rad Controls.
Creating The SofiaCarRental Sample Database
Every application that works with data needs a place to actually save that data, so we first need to create a simple database. A detailed explanation of how to create the database can be found here: Creating the SofiaCarRental sample database
Generating the Sofia Car Rental Data Model
In order for your application to use the data from the database (modify, add, delete), you have to map the database tables to classes in Visual Studio that represent the database objects. To do so, you can follow the step-by-step guide found here: Generating the SofiaCarRental data model
Creating the Car Rental Agency Application
In order to modify the database records, you will need to create a simple application. The application consists of several web pages that perform various operations on the database. You can find out how to create the web application in this topic: Creating the car rental agency application
Querying Objects and References
For every application it is essential to be able to retrieve information from the database. The following topic explains in detail how to write various queries that retrieve all kinds of information from the database: Querying Objects and References
Inserting and Updating Data
Telerik OpenAccess ORM enables you to perform all kinds of modifications to your database. If you want to add new objects or modify existing ones, you can see a step-by-step example here: Inserting and Updating Data
Setting up for installation on a Windows server
You must perform the following tasks before starting the installation on a Microsoft Windows server:
Extracting the installation files
Perform the following steps to extract the installation files:
- Locate the file that you downloaded from the BMC Electronic Product Distribution (EPD) site, or on media if you purchased the product with media.
For information about the EPD site, see Downloading the installation files.
On media, the Microsoft Windows installation files are in the \install\windows subdirectory. For either downloads or media, the file name is bna-server-v.r.mm-win64.zip.
Extract the archive. The following table lists the files contained in the download:
Note
In case of application server or remote device agent upgrade, ensure that you do not extract the archive into the existing Disk1 directory. Either extract into a new directory or delete the existing Disk1 directory before extracting the archive.
Files contained in the download
(Optional) Creating a user account on a Windows server
The BMC Network Automation installation on a Windows server requires a user account (for example, bcan). This account is referred to as the BCAN_USER account. You can create this account either before installation or during installation.
This user account cannot be an administrator account and must have privileges to log on locally.
This account would own all the installed files in BCAN_HOME and BCAN_DATA . It would also be used to initialize and run the embedded postgres service, if you use that option.
Note
You can optionally use the BCAN_USER account for FTP and SCP file transfers. For more information about remote device agents and FTP/SCP file transfers, see Administering remote device agents.
To create the BCAN_USER account and assign the required permissions
- Log on as an Administrator. The BCAN_USER account must be a local account.
- Create the BCAN_USER account under Control Panel > User Accounts as a Limited account.
- Assign a password to the account.
Recommendation
BMC strongly recommends not using the at sign (@) in the password. Some device file transfers might fail because they use the user:password@host/file format. If the password contains an at sign, the file transfer treats all characters after the at sign as the host name.
- Go to Control Panel > Administrative Tools > Local Security Policy.
- Verify that the BCAN_USER account is permitted to log on locally.
- Add BCAN_USER to Local Policies > User Rights Assignment > Allow log on locally.
- Remove BCAN_USER from Local Policies > User Rights Assignment > Deny logon locally.
Note that BCAN_USER might need to be added to the Remote Desktop Group if the installation and upgrades are to be done using a Remote Desktop Connection.
- Ensure that the BCAN_USER account has access to the TFTP, FTP, and SCP directories. This access is the default for a newly created account in Windows.
- Log off as Administrator.
- You must log on using the BCAN_USER account to ensure that the home directory, C:\Users and profile are created. If the home directory is not created, the installation fails.
This step also confirms that the BCAN_USER account has the required user policy rights.
- While logged in as BCAN_USER, open a command prompt and type echo %USERDOMAIN%.
The response to this command is the domain where the BCAN_USER account is validated. During installation you are asked to provide this value.
- Log out as BCAN_USER and log in as Administrator.
- Go to Control Panel > Administration Tools > Services. Ensure that the Secondary Logon, the Windows service is started and has the Startup Type set to Automatic.
Checking required disk space on a Windows server
Installation of the BMC Network Automation server requires approximately 1.2 GB of free disk storage on a Windows server.
Do not install the software on a networked drive. You must install the software on a local drive.
Installing Microsoft .NET 3.5 for TFTP server
To use TFTP as the file transfer protocol for devices, you must install Microsoft .NET Framework 3.5.x. To install the .NET Framework version specific to your OS, see the Microsoft documentation.
Installing Microsoft Visual C++ 2013 (x64)
To use the embedded PostgreSQL database, you must install Microsoft Visual C++ 2013 (x64). For installation instructions, see the Microsoft documentation.
Determining whether to install FTP or SCP on a Windows server
If you plan to use File Transfer Protocol (FTP) or Secure Copy (SCP) for device configuration and software image management, install the FTP server (see Installing an FTP server on Windows) and the SSH/SCP server (see Installing an SSH and SCP server on Windows) per the installation instructions specified before making a configuration snapshot. The software installs a Trivial FTP (TFTP) server only on Windows platforms as part of its installation process.
Checking security software
If your server is running any security software (such as a firewall, anti-malware, anti-virus, or intrusion protection software), you need to ensure the software does not interfere with any of the applications installed by BMC Network Automation.
Ensure all of the following:
- Blocked ports: If you are running the built-in Microsoft Windows firewall or any third-party firewall on the server, you must ensure that all ports that might be required by the software (for example, syslog, TFTP, SSH, FTP) are not blocked.
For more information about how to configure Windows firewall ports used by BMC Network Automation, see Troubleshooting Windows firewall ports.
If you are deploying any remote device agents, you must ensure that the RMI port (default 1099) specified during the installation of the remote device agent is not blocked by any firewall (an example firewall command is shown after this list).
All other security software, such as anti-virus or anti-malware software, must also be configured to ensure that no ports are blocked that might be required by the BMC Network Automation web server or file transfer services.
- TFTP server: Many security software packages can block or quarantine a TFTP server as malware because TFTP is an insecure protocol. Note that installing the TFTP server is an option during the BMC Network Automation installation procedure.
- BCAN_DATA directory:
- File scanning: If an anti-virus software package is installed on the server, set it to exclude virus checking on the BCAN_DATA directory. Otherwise, every file transfer from a device (for example, configuration file backup) is run through the virus checker.
- File permission changes: Anti-virus software also needs to be excluded from scanning the BCAN_DATA directory to prevent file permissions on Postgres database files from being altered. Failure to do so can cause database corruption.
- Locking database files: Ensure that there no application running on the server can lock BCAN_DATA data files, such as file-level backups, because file-level locks can cause database corruption.
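As a sketch of the blocked-ports item above, the following single command is one way to open the default RMI port in the built-in Windows firewall (the rule name is arbitrary; substitute the actual port if you changed it during the remote device agent installation):

netsh advfirewall firewall add rule name="BMC Network Automation RMI" dir=in action=allow protocol=TCP localport=1099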
Enabling Windows 8.3 file names
To successfully install the application server and remote device agent, you must enable Microsoft Windows 8.3 file names before the installation. Perform the following steps to verify or enable Windows 8.3 file names:
- Verify whether the Windows 8.3 file names feature is enabled: In a Windows command prompt enter
fsutil behavior query disable8dot3.
- If the output is
disable8dot=0, then Windows 8.3 file names are enabled.
- If the output is
disable8dot=1, then Windows 8.3 file names are disabled. Continue with the next step to enable Windows 8.3 file names.
- In a Windows command prompt, enable Windows 8.3 file names by entering fsutil behavior set disable8dot3 0.
- Restart Windows.
Disabling data execution prevention
Perform the following steps to disable DEP on Windows:
- Select Start > Control Panel, and open the System utility.
- Select the Advanced tab.
- In the Performance area, click Settings.
- Select the DataExecutionPrevention tab.
- Verify that the Turn on DEP for all programs and services except for those I select option is selected.
Select the appropriate option, step 6 or step 7.
- If Turn on DEP for all programs and services except for those I select is selected, then add the installation program to the list:
- Select Add.
- Browse to the directory where you extracted the installation files in Extracting the installation files, select the installation application, setup.cmd, and then click Open.
The selected program is added to the DEP program area.
- Click Apply, and then click OK.
- In the dialog box that informs you that you must restart your computer for the setting to take effect, click OK.
If Turn on DEP for all programs and services except for those I select is not selected, Click OK to close System Properties.
If you do not correctly configure the DEP feature and terminal services, when you run the installer a wizard panel appears indicating that you need to handle these issues.
Updating Windows Terminal Services options
Microsoft Windows Terminal Services configuration options need to be updated. Perform one of the following tasks depending on your OS version:
Configuring databases for Windows
The following sections describe how to configure PostgreSQL, Oracle, and Microsoft SQL Server databases for Microsoft Windows.
Configuring PostgreSQL database encoding
If you use a remote PostgreSQL database, it must be initialized with UTF-8 encoding. Specify the -encoding UTF-8 option when you initialize the database.
Configuring Oracle and SQL Server databases
Read the topics in this section to understand the tasks that you need to perform on Oracle and SQL Server databases before installing the product on a Windows computer.
SQL Server database user account
BMC recommends creating a user account for use only by BMC Network Automation. BMC Network Automation strictly prohibits using the sa user account.
SQL Server database schema
BMC recommends creating a new schema for BMC Network Automation objects. Confirm that the user login properties has mapping to a user-defined schema.
SQL Server isolation level
On SQL Server, set the
READ COMMITTED SNAPSHOT isolation level of the BMC Network Automation database to
ON using the following statements:
ALTER DATABASE <databaseName> SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE <databaseName> SET READ_COMMITTED_SNAPSHOT ON
SQL and Oracle database user account privileges
The BMC Network Automation Oracle or SQL Server user account must have the following privileges:
Oracle user naming conventions
When creating database users for the BMC Network Automation installation, ensure that the user names meet these requirements:
- User names contain up to 30 characters.
- User names contain only alphanumeric characters from your database character set and the underscore (_), dollar sign ($), and pound sign (#).
- User names do not contain hyphens (-).
- Oracle Database reserved words are not used as user names.
For more information about naming database users, see the guidelines and rules stated for the non-quoted identifiers in the Schema Object Names and Qualifiers section in the Oracle documentation.
Oracle RAC data file path
If your database is an Oracle Real Application Cluster (RAC) using Automatic Storage Management (ASM) to manage the data file, the path to the data file must use the following format:
+DATA_SPACE
or
+DATA_SPACE/path/data_file_name
For example, if the data space name in your Oracle RAC environment is named
DATA, you would enter
+DATA.
Oracle RAC databases that are not using ASM should use the standard format, the absolute file path to the database data file.
Oracle 12c
When performing a fresh installation with Oracle 12c, you must execute one of the following commands to ensure that the pluggable database is started if the Create New User option is selected.
alter pluggable database all open;
or
alter pluggable database <pluggable_db_name> open;
Note
If you want to connect to the database by using a system ID (SID) instead of a service, you must perform the following steps to ensure that the BMC Network Automation installation does not fail:
- Set the USE_SID_AS_SERVICE_listener_name parameter in the listener.ora file.
- Restart the listener.
For details about how to connect to a pluggable database, see the Oracle documentation.
Configuring Microsoft SQL databases
When performing a fresh installation and selecting the Create New Database option, ensure that the SQL Server service log-on account is Local System Account .
Checking IPv6 configuration on Windows
If you are installing the BMC Network Automation server or remote device agent on a Microsoft Windows host computer that either has both the IPv4 and IPv6 protocols or only the IPv6 protocol, confirm that the DNS is properly configured.
To confirm, run the
nslookup command on the local host name and confirm that both IPv4 and IPv6 addresses are configured, as shown in the following example:
Windows nslookup to verify IP addresses
Example
C:\Users\Administrator>nslookup -type=any vw-pun-bpm-qa05
Server: ppat5814.ipv6.bmc.com
Address: 2001:500:100:1100:4d27:9d12:e995:5e59
vl-pun-bna-dv06.ipv6.bmc.com internet address = 10.128.251.112
vl-pun-bna-dv06.ipv6.bmc.com AAAA IPv6 address = 2001:500:100:1100:250:56ff:f
Related topic
Installation fails due to spaces in the installation directory path | https://docs.bmc.com/docs/NetworkAutomation/89/installing/preparing-for-installation/setting-up-the-installation-environment/setting-up-for-installation-on-a-windows-server | 2020-05-25T05:15:27 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.bmc.com |
Information on the "Create New Non-SAP System" Page
FlexNet Manager Suite 2019 R1 (On-Premises Edition)
Complete these details to identify your new non-SAP system.
Tip: You can access this web page in two different ways. See Creating a Non-SAP System via the Indirect Access Page and Creating a Non-SAP System Using the System Landscape Editor for details.
Click Create to save your changes and create the new record. | https://docs.flexera.com/fnms2019r1/EN/WebHelp/reference/SAP-CreateNewNonSAPSystem.html | 2020-05-25T03:36:05 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.flexera.com |
aiohttp-tus¶
tus.io server implementation for aiohttp.web applications.
For uploading large files, please consider using aiotus (Python 3.7+) library instead.
Works on Python 3.6+
Works with aiohttp 3.5+
BSD licensed
Latest documentation on Read The Docs
Source, issues, and pull requests on GitHub
Quickstart¶
Code below shows how to enable tus-compatible uploads on the /uploads URL for an aiohttp.web application. After upload, files will be available in the ../uploads directory.
from pathlib import Path

from aiohttp import web
from aiohttp_tus import setup_tus

app = setup_tus(
    web.Application(),
    upload_url="/uploads",
    upload_path=Path(__file__).parent.parent / "uploads",
)
Chunk Size¶
Please make sure to configure client_max_size for the aiohttp.web Application and supply a proper chunkSize for Uppy.io or any other tus.io client.
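For example, a minimal sketch that raises the request body limit when creating the application (the 64 MB figure and upload path are only illustrations — pick values that match the chunk size your client sends):

from pathlib import Path

from aiohttp import web
from aiohttp_tus import setup_tus

app = setup_tus(
    web.Application(client_max_size=64 * 1024 ** 2),  # allow chunks up to 64 MB
    upload_url="/uploads",
    upload_path=Path("/tmp/uploads"),
)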
CORS Headers¶
To set up CORS headers you need to use cors_middleware from the aiohttp-middlewares package. The aiohttp-cors library is not supported because of the aio-libs/aiohttp-cors#241 issue.
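A minimal sketch of wiring that middleware in might look like this (the origin value is a placeholder, and the exact keyword arguments may differ between aiohttp-middlewares versions — check its documentation):

from aiohttp import web
from aiohttp_middlewares import cors_middleware

app = web.Application(
    middlewares=[cors_middleware(origins=["https://example.com"])],
)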
Reverse proxy and HTTPS¶
When the aiohttp application is deployed behind a reverse proxy (such as nginx) with HTTPS support, you need to use the https_middleware from the aiohttp-middlewares package to ensure that the web.Request instance has the proper scheme.
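A sketch of that setup, assuming the middleware is used with its defaults (consult the aiohttp-middlewares documentation for the options your version accepts):

from aiohttp import web
from aiohttp_middlewares import https_middleware

app = web.Application(middlewares=[https_middleware()])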
Examples¶
The examples/ directory contains several examples which illustrate how to use aiohttp-tus with some tus.io clients, such as tus.py and Uppy.io.
License¶
aiohttp-tus is licensed under the terms of BSD License.
Contents¶
- Usage
- API Reference
- Authors & Contributors
- ChangeLog | https://aiohttp-tus.readthedocs.io/en/latest/ | 2020-05-25T05:31:38 | CC-MAIN-2020-24 | 1590347387219.0 | [] | aiohttp-tus.readthedocs.io |
This user guide contains information on how to use C/C++test - an integrated development testing solution designed to help you improve software quality. The guide consist of two core parts:
Automation User Guide - This guide describes how to use the C/C++test command line interface, which enables you to automate code analysis and test execution within your build environment. It covers configuration options for executing the tool, as well as how to integrate it with supported build systems.
Desktop User Guide - C/C++test integrates with popular IDEs, providing a desktop interface ideal for analyzing and testing code as it's being written. This guide describes how to integrate the tool into the supported IDEs and how to use the features and functionality available on the desktop.
You can use C/C++test within Eclipse and Visual Studio IDEs. Except for the installation section, the screenshots in this manual are examples for Eclipse.
Conventions Used in this Guide
The following conventions have been used in this guide:
[INSTALL_DIR] - Refers to the product installation directory. The default name of this directory is cpptest.
tool - Refers to the Parasoft tool you are using: Jtest, C/C++test, or dotTEST.
Rich Text Editor
The Rich Text Editor is a basic building block for inputting textual content into AEM. It forms the basis of various components, including:
- Text
- Text Image
- Table
Rich Text Editor
The WYSIWYG editing dialog provides a wide range of functionality:
The features available can be configured for individual projects, so might vary for your installation.
In-Place Editing.
Features of the Rich Text Editor
The Rich Text Editor provides a range of featues, these depend on the configuration of the individual component. The features are available for both the touch-optimized and classic UI.
Basic Character Formats
Here you can apply formatting to characters you have selected (highlighted); some options also have short-cut keys:
- Bold (Ctrl-B)
- Italic (Ctrl-I)
- Underline (Ctrl-U)
- Subscript
- Superscript
All operate as a toggle, so reselection will remove the format.
Predefined Styles and Formats
Your installation can include predefined styles and formats. These are available with the Style and Format drop down lists and can be applied to text that you have selected.
A style can be applied to a specific string (a style correlates to CSS):
Whereas a format is applied to the entire text paragraph (a format is HTML-based):
Cut, Copy, Paste
The standard functions of Cut and Copy are available. Several flavors of Paste are provided to cater for differing formats.
- Cut (Ctrl-X)
- Copy (Ctrl-C)
- Paste - This is the default paste mechanism (Ctrl-V) for the component; when installed out-of-the-box this is configured to be "Paste from Word".
- Paste as Text - Strips all styles and formatting to paste only the plain text.
- Paste from Word - This pastes the content as HTML (with some necessary reformatting).
Undo, Redo
Alignment
Your text can be either left, center or right aligned.
Indentation
The indentation of a paragraph can be increased, or decreased. The selected paragraph will be indented, any new text entered will retain the current level of indentation.
Lists
Links
Anchors
Find and Replace
Images
Images can be dragged from the content finder to add them to the text.
AEM also offers specialized components for more detailed image configuration. For example the Image and Text Image components are available.
Spelling Checker
The spelling checker will check all the text in the current component.
Any incorrect spellings will be highlighted:
The spelling checker will operate in the language of the website by taking either the language property of the subtree or extracting the language from the URL. For example the en branch will be checked for English and the de branch for German.
Tables
Tables are available both:
- As the Table component
- From within the Text component. Although tables are available in the RTE, it is recommended to use the Table component when creating tables.
In both the Text and Table components table functionality is available via the context menu (usually the right-mouse-button) clicked within the table; for example:
In the Table component, a specialized toolbar is also available, including various standard rich text editor functions, together with a subset of the table-specific functions.
The table specific functions are:
Table Properties
The basic properties of the table can be configured, before clicking OK to save:
- Width - The total width of the table.
- Height - The total height of the table.
- Border - The size of the table border.
- Cell padding - This defines the white space between the cell content and its borders.
- Cell spacing - This defines the distance between the cells.
Width , Height and certain cell properties can be defined in either:
- pixels
- percentages
Adobe strongly recommends that you define a Width for your table.
Cell Properties
The properties of a specific cell, or series of cells, can be configured:
- Width
- Height
- Horizontal Align - Left, Center or Right
- Vertical Align - Top, Middle, Bottom or Baseline
- Cell type - Data or Header
- Apply to:
- Single cell
- Entire row
- Entire column
Add or Delete Rows
Rows can be added either above or below the current row.
The current row can also be deleted.
Add or Delete Columns
Columns can be added either to the left or right of the current column.
The current column can also be deleted.
Selecting Entire Rows or Columns
Selects the entire current row or column. Specific actions (e.g. merge) are then available.
Merge Cells
- If you have selected a group of cells you can merge these into one.
- If you have have only one cell selected then you can merge it with the cell to either the right or below.
Split Cells
Select a single cell to split it:
- Splitting a cell horizontally will generate a new cell to the right of the current cell, within the current column.
- Splitting a cell vertically will generate a new cell underneath the current cell, but within the current row.
Creating Nested Tables
Creating a nested table will create a new, self-contained table within the current cell.
Certain additional behavior is browser dependent:
- Windows IE: Use Ctrl+primary-mouse-button-click (usually left) to select multiple cells.
- Firefox: Drag the mouse to select a cell range.
Remove Table
This will remove the table from within the Text component.
Special Characters
Special characters can be made available to your rich text editor; these might vary according to your installation.
Use mouseover to see a magnified version of the character, then click for it to be included at the current location in your text.
Source Editing Mode
The source editing mode allows you to see and edit the underlying HTML of the component.
So the text:
Will look as follows in source mode (often the source is much longer, so you will have to scroll):
When leaving source mode, AEM makes certain validation checks (for example, ensuring that the text is correctly contained/nested in blocks). This can result in changes to your edits. | https://docs.adobe.com/content/help/en/experience-manager-64/classic-ui/authoring/classic-page-author-rich-text-editor.html | 2020-05-25T04:45:10 | CC-MAIN-2020-24 | 1590347387219.0 | [array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_basicchars.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_inlineediting.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_basicchars.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_basicchars_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_stylesparagraph.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_styles_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_paragraph_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_cutcopypaste.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_undoredo.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_alignment.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_alignment_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_indent.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_indent_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_lists.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_lists_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_links.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/chlimage_1-12.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_link_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/chlimage_1-13.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_anchor.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_anchor_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/chlimage_1-145.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_findreplace.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_find_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_findreplace_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_image_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_spellchecker.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_spellchecker_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_tablemenu.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_tableproperties_icon.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_tableproperties_dialog.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_cellproperties_icon.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_cellproperties_dialog.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_rows.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_columns.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/chlimage_1-147.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_cellmerge.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_cellmerge-1.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_cellsplit.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/chlimage_1-148.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_removetable.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_specialchars.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_specialchars_use.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/do-not-localize/cq55_rte_sourceedit.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_sourcemode_1.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-classic-ui-authoring/assets/cq55_rte_sourcemode_2.png',
None], dtype=object) ] | docs.adobe.com |
SPChangeQuery class
Defines a query that is performed against the change log in Microsoft SharePoint Foundation.
Inheritance hierarchy
System.Object
Microsoft.SharePoint.SPChangeQuery
Namespace: Microsoft.SharePoint
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)
Syntax
'Declaration
Public NotInheritable Class SPChangeQuery
'Usage
Dim instance As SPChangeQuery
public sealed class SPChangeQuery
Remarks.
Examples
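The original C# listing was not preserved here; the following is a condensed C# sketch of the same pattern shown in the Visual Basic example below (the site URL is a placeholder, and error handling and console formatting are omitted):

using System;
using Microsoft.SharePoint;

using (SPSite siteCollection = new SPSite("http://localhost"))
{
    // Construct a query for group membership additions.
    SPChangeQuery query = new SPChangeQuery(false, false);
    query.FetchLimit = 500;          // limit per round trip
    query.Group = true;              // object type
    query.GroupMembershipAdd = true; // change type

    while (true)
    {
        SPChangeCollection changes = siteCollection.GetChanges(query);
        foreach (SPChangeGroup change in changes)
        {
            Console.WriteLine("User {0} was added to group {1}.", change.UserId, change.Id);
        }
        if (changes.Count < query.FetchLimit)
            break;

        // Otherwise, fetch the next batch.
        query.ChangeTokenStart = changes.LastChangeToken;
    }
}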
Imports System
Imports Microsoft.SharePoint

Module ConsoleApp
   Sub Main()
      Using siteCollection As SPSite = New SPSite("")
         Using rootSite As SPWeb = siteCollection.RootWeb

            ' Construct a query.
            Dim query As New SPChangeQuery(False, False)

            ' Set a limit on the number of changes returned on a single trip.
            query.FetchLimit = 500

            ' Select the object type.
            query.Group = True

            ' Select the change type.
            query.GroupMembershipAdd = True

            ' Get the users and groups for the site collection.
            Dim users As SPUserCollection = rootSite.AllUsers
            Dim groups As SPGroupCollection = rootSite.Groups

            ' Convert to local time.
            Dim timeZone As SPTimeZone = rootSite.RegionalSettings.TimeZone

            ' total changes
            Dim total As Integer = 0

            ' Loop until we reach the end of the log.
            While True
               Dim changes As SPChangeCollection = siteCollection.GetChanges(query)
               total += changes.Count ' running total

               For Each change As SPChangeGroup In changes

                  ' Try to get the group name.
                  Dim groupName As String = String.Empty
                  Try
                     Dim group As SPGroup = groups.GetByID(change.Id)
                     groupName = group.Name
                  Catch ex As SPException
                     groupName = "Unknown"
                  End Try

                  ' Try to get the user name.
                  Dim loginName As String = String.Empty
                  Try
                     Dim user As SPUser = users.GetByID(change.UserId)
                     loginName = user.LoginName
                  Catch ex As SPException
                     loginName = "Unknown"
                  End Try

                  ' Write to the console.
                  Console.WriteLine(vbCrLf + "Date: {0}", _
                                    timeZone.UTCToLocalTime(change.Time).ToString())
                  Console.WriteLine("{0} was added to the {1} group.", _
                                    loginName, groupName)
               Next change

               ' Break out of loop if we have the last batch.
               If changes.Count < query.FetchLimit Then
                  Exit While
               End If

               ' Otherwise, go get another batch.
               query.ChangeTokenStart = changes.LastChangeToken
            End While

            Console.WriteLine(vbCrLf + "Total of {0:#,#} changes", total)

         End Using
      End Using

      Console.Write(vbCrLf + "Press ENTER to continue...")
      Console.ReadLine()

   End Sub
End Module
Thread safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
See also
Reference
Microsoft.SharePoint namespace | https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-server/ms438610(v%3Doffice.15) | 2020-05-25T06:15:24 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.microsoft.com |
SQL Server Express LocalDB
Microsoft SQL Server Express LocalDB is a feature of SQL Server Express targeted to developers. It is available on SQL Server Express with Advanced Services.
Installation media
LocalDB is a feature you select during SQL Server Express installation, and is available when you download the media. If you download the media, either choose Express Advanced or the LocalDB package.
Alternatively, you can install LocalDB through the Visual Studio Installer, as part of the Data Storage and Processing workload, the ASP.NET and web development workload, or as an individual component.
Install LocalDB
Install LocalDB through the installation wizard or by using the SqlLocalDB.msi program. LocalDB is an option when installing SQL Server Express.
Select LocalDB on the Feature Selection/Shared Features page during installation.
An instance of SQL Server Express LocalDB is managed by using the
SqlLocalDB.exe utility. SQL Server Express LocalDB should be used in place of the SQL Server Express user instance feature, which was deprecated.
For more information, see Visual Studio Local Data Overview and Create a database and add tables in Visual Studio.
For more information about the LocalDB API, see SQL Server Express LocalDB Reference.
An instance of LocalDB owned by a built-in account such as NT AUTHORITY\SYSTEM can have manageability issues due to Windows file system redirection. Instead, use a normal Windows account as the owner.
Shared instances of LocalDB
To support scenarios where multiple users of the computer need to connect to a single instance of LocalDB, LocalDB supports instance sharing. An instance owner can choose to allow the other users on the computer to connect.
Start LocalDB and connect to LocalDB
Connect to the automatic instance by using the connection string (localdb)\MSSQLLocalDB.
The naming convention and connection string for LocalDB format changed in SQL Server 2014. Previously, the instance name was a single v character followed by LocalDB and the version number. Starting with SQL Server 2014, this instance name format is no longer supported, and the connection string mentioned previously should be used instead.
Note
- The first time a user on a computer tries to connect to LocalDB, the automatic instance must be both created and started. The extra time for the instance to be created can cause the connection attempt to fail with a timeout message. When this happens, wait a few seconds to let the creation process complete, and then connect again.
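For example, a minimal C# sketch that opens a connection to the automatic instance (assuming the System.Data.SqlClient provider):

using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        var connectionString = @"Server=(localdb)\MSSQLLocalDB;Integrated Security=true;";
        using (var connection = new SqlConnection(connectionString))
        {
            // The automatic instance is created and started on first use,
            // which can take a few seconds the very first time.
            connection.Open();
        }
    }
}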
Create and connect to a named instance
In addition to the automatic instance, LocalDB also supports named instances. Use the SqlLocalDB.exe program to create, start, and stop a named instance of LocalDB. For more information about SqlLocalDB.exe, see SqlLocalDB Utility.
REM Create an instance of LocalDB
"C:\Program Files\Microsoft SQL Server\130\Tools\Binn\SqlLocalDB.exe" create LocalDBApp1

REM Start the instance of LocalDB
"C:\Program Files\Microsoft SQL Server\130\Tools\Binn\SqlLocalDB.exe" start LocalDBApp1

REM Gather information about the instance of LocalDB
"C:\Program Files\Microsoft SQL Server\130\Tools\Binn\SqlLocalDB.exe" info LocalDBApp1
The last line above returns information similar to the following.
Note
If your application uses a version of .NET before 4.0.2 you must connect directly to the named pipe of LocalDB. The Instance pipe name value is the named pipe that the instance of LocalDB is listening on. The portion of the Instance pipe name after LOCALDB# will change each time the instance of LocalDB is started. To connect to the instance of LocalDB by using SQL Server Management Studio, type the instance pipe name in the Server name box of the Connect to Database Engine dialog box. From your custom program you can establish a connection to the instance of LocalDB using a connection string similar to
SqlConnection conn = new SqlConnection(@"Server=np:\\.\pipe\LOCALDB#F365A78E\tsql\query");
Connect to a shared instance of LocalDB
To connect to a shared instance of LocalDB, add \.\ (backslash + dot + backslash) to the connection string.
Troubleshooting
For information about troubleshooting LocalDB, see Troubleshooting SQL Server 2012 Express LocalDB.
Permissions
An instance of SQL Server Express LocalDB is an instance created by a user for their use. Any user on the computer can create a database using an instance of LocalDB, store files under their user profile, and run the process under their own credentials.
Note
LocalDB always runs under the user's security context; that is, LocalDB never runs with credentials from the local Administrators group. This means that all database files used by a LocalDB instance must be accessible using the owning user's Windows account, without considering membership in the local Administrators group.
19 May 2020
TLS 1.3 support and minor fixes
This release adds TLS 1.3 support, and fixes a variety of bugs.
Features:
- Enabled support to connections using TLS 1.3.
Fixes:
- URL view on the Dashboard should now be visually consistent with URL view shown in site settings
- Fixed an issue where site icons were not displaying in the site list
- Using _disable_add now properly prevents editors from creating new items
- Fixed a bug that caused the Visual Editor to sometimes get stuck on “Uploading Changes…”
- Fixed a bug causing the editor to sometimes get stuck on “Collecting Changes” after creating a new post
- Updated the error codes returned from invalid html form submission
- Fixed an issue with image rotation in specific circumstances on Chrome
- Minor Quality of Life improvements
- Minor UI touch-ups | https://docs.cloudcannon.com/2020/05/19/tls-1-3-support-and-minor-fixes/ | 2020-05-25T03:48:27 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.cloudcannon.com |
Displays the properties of the selected document or documents. If only one document is selected in the tree view, the properties that display on this tab are the properties of that selected document. If multiple documents are selected, only those properties with the same value for all documents appear. Any properties with varying values across the documents appear with blank values in these fields.
Selected documents
Displays a list of the documents selected for publishing. You must populate this list by selecting documents before you use the Publish command.
Last Published
Indicates the date on which the document or documents were last published.
Name
Displays the name of the document.
Source
Indicates the authoring tool in which the document was created.
Type
Displays the type of document or documents selected.
Issue Only
Allows you to issue request documents without republishing them. Use this option when no changes were made to the drawing and you only want to add it to a contract.
Even with this option set, you can still publish the documents. If any of the documents have never been published, they must be published, regardless of this setting.
You will receive an error message if you select multiple documents and activate this option when one or more of the selected documents cannot be changed. For example, the error message displays if the selected set of documents includes both a new document (for which this field can be set only to No) and current or locked documents (for which this field can be set only to Yes). The error message prompts you to select a smaller set of documents.
Revision
Displays the current revision number of the selected document or documents.
Revision Scheme
Displays the revision scheme applied to the selected document or documents.
Version
Indicates the current version of the selected document or documents.
Workflow
Indicates the workflow to which the selected document or documents are assigned.
Check and publish released claims for previously deleted items
Specifies that you want to resolve issues where deleted items were restored from an earlier version and the claim on them was released. This check takes additional time and should only be used when deleted items have been restored. This option is not supported in this release. This check box should also be activated when publishing after a backup is restored or when releasing the claim on an object forces another tool to release the claim on a related object that was previously deleted. In this specific case, the tool fetches the object again from As-Built and releases the claim.
Operation
Specifies the operation to perform on the selected document.
Publish Now publishes the selected document immediately.
Background publish publishes the selected document immediately as a separate process, allowing you to perform other tasks at the same time.
Custom
Opens the Custom dialog box. This functionality is available only if defined by your project implementation team. | https://docs.hexagonppm.com/reader/RwTajSoMLfCnFEJWYujp0g/O~ELMGEuPComkfKn2HSGYg | 2020-05-25T05:08:54 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.hexagonppm.com |
Use this to control whether or not the appropriate OnCollisionExit2D or OnTriggerExit2D callbacks should be called when a Collider2D is disabled.
If the Collider2D being disabled has at least a single contact with another Collider2D then with this property set to true, a callback would be produced. With the property set to false, no callback would be produced.
Only "OnCollisionExit2D" or "OnTriggerExit2D" will be called by disabling a Collider2D. | https://docs.unity3d.com/es/2017.1/ScriptReference/Physics2D-callbacksOnDisable.html | 2020-05-25T05:37:02 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.unity3d.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Remove-SQSMessage
-QueueUrl <String>
-ReceiptHandle <String>
-Select <String>
-PassThru <SwitchParameter>
-Force <SwitchParameter>
To select the message to delete, use the ReceiptHandle of the message (not the MessageId you receive when you send the message). The ReceiptHandle is associated with a specific instance of receiving a message. If you receive a message more than once, the ReceiptHandle is different each time you receive a message. When you use the DeleteMessage action, you must provide the most recently received ReceiptHandle for the message.
PS C:\> Remove-SQSMessage -QueueUrl -ReceiptHandle AQEBd329...v6gl8Q==
This example deletes the message with the specified receipt handle from the specified queue.
AWS Tools for PowerShell: 2.x.y.z | https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-SQSMessage.html | 2020-09-18T18:14:14 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.aws.amazon.com |
Audio Playback and Recording
LinkIt Smart 7688 development board has an I2S interface for audio playback and recording. This is not available on LinkIt Smart 7688 Duo development board. You'll need an audio DAC to convert the I2S to analog sound data.
A simple option is to get a LinkIt Smart 7688 breakout board from Seeed Studio, and use it for audio playback and recording.
The recording function is only supported with firmware v0.9.3 and above. The required breakout board version is LinkIt Smart 7688 breakout v2. For the v1 breakout board, hardware rework is required to enable the recording function.
Setup the board
- Attach the LinkIt Smart 7688 development board to the breakout, as shown below:
- Plug an earphone to the audio jack.
- Power up the board.
- Connect it with a USB drive that contains the audio files.
Audio playback
MP3 playback
To play a MP3 file, use madplay:
# madplay "path_to_your_mp3_file"
WAV playback
To play a WAV file, use aplay, as shown below:
# aplay -M "path_to_your_wav_file"
Audio recording
WAV recording
To record an audio file, use arecord, as shown below:
# arecord -f cd -t wav -M /Media/USB-A1/my_recording.wav
For a high bit-rate WAV recording, such as 16bit/44.1k format, record the file to a destination with high I/O speed (e.g. USB drive, SD card, or RAM) instead of the on-board flash. Due to the low writing speed of the on-board flash, users will experience sound jittering and buffer overrun if the recorded file is written to the on-board flash. | https://docs.labs.mediatek.com/resource/linkit-smart-7688/en/tutorials/audio-playback-and-recording | 2020-09-18T16:05:31 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.labs.mediatek.com |
This topic does not apply to MemSQL Helios.
The Toolbox state file maintains a list of hosts that are part of the user’s MemSQL cluster.
There are two types of hosts: Local and Remote. Remote hosts may optionally contain an ssh block that overrides default SSH configuration for that host.
${XDG_DATA_HOME}/memsql-toolbox/toolbox-state.hcl
If XDG_DATA_HOME is not set, ${HOME}/.local/share will be used instead (e.g., /home/alice/.local/share/memsql-toolbox/toolbox-state.hcl).
Example (empty)
The following is an empty state file.
version = 1
Example (3 hosts)
The following is a state file that specifies three host machines and their connection information.
version = 1

host {
  hostname = "memsql-01.example.com"
}

host {
  hostname = "memsql-02.example.com"
  localhost = true
}

host {
  hostname = "memsql-03.example.com"

  ssh {
    host = "memsql-03.example.com"
    port = 23
    privateKey = "/home/bob/.ssh/id_rsa"
    user = "bob"
  }
}

secure = "dGhpc2tleWlzdGhpcnR5dHdvY2hhcmFjdGVyc2xvbmc="
os_info
Overview
Library for detecting the operating system type and version.
Based on os_type. The main difference of
os_info is that this library separates completely incompatible operating
systems by conditional compilation and uses specific system API whenever is
possible.
Usage
To use this crate, add
os_info as a dependency to your project's Cargo.toml:
[dependencies] os_info = "1.3.1"
Example
let info = os_info::get();

// Print full information:
println!("OS information: {}", info);

// Print information separately:
println!("Type: {}", info.os_type());
println!("Version: {}", info.version());
println!("Bitness: {}", info.bitness());
Right now, the following operating system types can be returned:
- Unknown
- Android
- Emscripten
- Linux
- Redhat
- RedHatEnterprise
- Ubuntu
- Debian
- Arch
- CentOS
- Fedora
- Amazon
- Alpine
- Macos
- Redox
- Windows
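For example, you might branch on the returned type. The enum is shown here as os_info::Type with variants matching the list above — verify the exact names for the crate version you use:

use os_info::Type;

fn main() {
    let info = os_info::get();
    match info.os_type() {
        Type::Ubuntu | Type::Debian => println!("Running a Debian-family Linux"),
        Type::Windows => println!("Running Windows"),
        other => println!("Running {}", other),
    }
}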
If you need support for more OS types, I am looking forward to your Pull Request.
Requirements
On Linux based systems this library requires that lsb_release is installed.
License
os_info is licensed under the MIT license. See LICENSE for the details. | https://docs.rs/crate/os_info/1.3.1 | 2020-09-18T17:20:20 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.rs |
Item: Charts
Under the Charts tab, you can create and analyze charts displaying an item’s sales and purchase dynamics.
Charts cannot be generated for newly created items, as there is no data to be processed yet. Only when sales or purchase transactions for a new item take place can information from them be retrieved, analyzed, and rendered in charts.
Charts can be found only for Goods and Fixed asset item types. Charts are fully customizable along the following parameters:
There are the following three types of charts:
- Balances chart
A balances graph displays the item’s sales and purchase dynamics as well as final balances by months.
- Sales chart
A pie chart showing customers who bought the item along with quantities sold to each customer or amounts paid by each customer.
To see the quantities or amounts, hover over a slice of the chart representing a certain customer.
The sales chart is generated based on the Inventory journal.
- Purchases chart
A pie chart that displays vendors from whom the item was purchased along with quantities bought or amounts paid to each vendor.
To see the quantities or amounts, hover over a slice of the chart representing a certain vendor.
The purchases chart is generated based on the Inventory journal.
Display data for the entire specified period or by months
The Sales and Purchases charts can show the data for the entire indicated period or by months.
This guide assumes you have an active product already set up. Please see our guide on product for details on how to set up an active product. From cart to checkout is a 4 step process:
Create a cart
Add product to cart by creating line items
Submit customer's information
Pay for the cart by creating a payment
When a customer visits your store you should create a cart for them so they can start shopping. In Freshcom, cart and order are the same resource; a cart is simply an order with status cart. When an order is first created it will always have the status cart, so creating an initial cart is as simple as creating an empty order.
Create a cart

import freshcom from 'freshcom-sdk'

freshcom.createAndSetAccessToken({
  fresh_token: 'prt-test-aba1d92c-defb-478e-a031-40b5c27171c7',
  grant_type: 'refresh_token'
}).then(function () {
  // (SDK call shown as createOrder for illustration — check the SDK reference for the exact name.)
  return freshcom.createOrder()
}).then(function(response) {
  let cart = response.data
  // Save the order.id in localStorage or cookies for future use
  // ...
})
Once a cart is created, we recommend saving its ID in the browser's local storage or cookies, so that if the customer refreshes the page or opens a new tab their cart does not get lost. As long as you have the cart ID you can retrieve the cart through the Freshcom API.
Adding a product to the cart simply means creating a line item for the cart. This step assumes you have an active product for the customer to purchase. For details on how to set up a product please see our guide on product.
Create a line item

import freshcom from 'freshcom-sdk'

freshcom.createAndSetAccessToken({
  fresh_token: 'prt-test-aba1d92c-defb-478e-a031-40b5c27171c7',
  grant_type: 'refresh_token'
}).then(function () {
  // (SDK call shown as createLineItem for illustration — check the SDK reference for the exact name.)
  return freshcom.createLineItem({
    orderQuantity: 2,
    order: {
      id: '9fc8ee53-906f-48bd-a392-8b0709301699',
      type: 'Order'
    },
    product: {
      id: '74b901b2-32f9-4e3b-8d4d-4eca2360550c',
      type: 'Product'
    }
  }, {
    // Include these relationships from the returned line item.
    // rootLineItems is the top level line items of the order.
    include: 'order.rootLineItems'
  })
}).then(function(response) {
  let cart = response.data.order
  // Update the UI accordingly...
  // ...
})
When you create a line item for the cart, the sub total, taxes and grand total related attributes for the cart will be updated automatically.
Submit the customer's information by simply updating the cart with the appropriate information.
import freshcom from 'freshcom-sdk'

freshcom.createAndSetAccessToken({
  fresh_token: 'prt-test-aba1d92c-defb-478e-a031-40b5c27171c7',
  grant_type: 'refresh_token'
}).then(function () {
  return freshcom.updateOrder({
    id: '9fc8ee53-906f-48bd-a392-8b0709301699'
  }, {
    name: 'Joe Happy',
    email: '[email protected]',
    fulfillmentMethod: 'pickup'
  })
}).then(function(response) {
  let cart = response.data
  // Update the UI accordingly...
  // ...
})
Note that none of the attributes of the order resource are required when creating, but when updating, the name, email and fulfillment method are required.
Freshcom does not reinvent payment processing; instead it uses Stripe under the hood to process payments. Each Freshcom account is connected to its own Stripe account, and thus has its own keys for Stripe. Using your Stripe keys you can take advantage of many Stripe products for collecting payment information, like Stripe Elements. At the moment Freshcom does not accept credit card numbers directly; you must use Stripe.js or Stripe Elements to create a token for the credit card first and use the token to create a payment in Freshcom. In short, paying for a cart is a two step process:
Create a credit card token using Stripe.js or Stripe Elements
Create a payment using the token returned from Stripe
Please see the Stripe.js & Stripe Elements documentation on Stripe for how to obtain a token for a credit card. The code below assumes the token is already obtained.
Create a payment

import freshcom from 'freshcom-sdk'

// Assuming you already obtained a card token from Stripe.
let token = '...'
// Cart from previous steps.
let cart = {...}

freshcom.createAndSetAccessToken({
  fresh_token: 'prt-test-aba1d92c-defb-478e-a031-40b5c27171c7',
  grant_type: 'refresh_token'
}).then(function () {
  return freshcom.createPayment({
    source: token,
    amountCents: cart.grandTotalCents,
    target: cart
  })
}).then(function(response) {
  let order = response.data
  // Update the UI accordingly...
  // ...
})
Once a payment is created, the status of the cart will automatically be changed to opened, indicating a successful transition from cart to an open order.
Incremental content deployment completes successfully with errors mentioning that the name is already is used
I often get the question as to why we have this error when using an incremental Content Deployment: The specified file is already in used.
Simply, that message appears if the authors have created a file, deleted it, and recreated another file with the same name, all before a single incremental content deployment (or Quick Deploy for that file) has occurred. Basically, the incremental deployment will process the "delete" action after the "create" action, and that's the standard error for that. Fortunately, the deployment job will keep going as it's not a fatal error; unfortunately, you'll have a 404 when accessing the document.
If you do not want that issue, simply overwrite files instead of delete+recreate. Otherwise, you'll be stuck having to use Full Content Deployment all the time.
**UPDATE** : The new WSS post-SP1 rollup, KB941422, may contain the fix as it seems to fix a similar issue when you recreated a deleted list. I'm hoping it works for list items as well. **/UPDATE**
Maxime | https://docs.microsoft.com/en-us/archive/blogs/maximeb/incremental-content-deployment-completes-successfully-with-errors-mentioning-that-the-name-is-already-is-used | 2020-09-18T18:01:09 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.microsoft.com |
The Basics
What to expect in this guide
This Basic Guide is the starting point for learning to use our platform: It will lead you through the main steps of text generation. You will learn how to create projects, import and analyze data, write statements and generate text.
On this page you will get an introduction into the topic and you can immediately practice on the platform each step that you have learned. At the end of the seminar you will have a first project based on smartphone data with automatically generated text.
Requirements: For this Basic Guide you don't need any previous knowledge.
Text Projects in Sequence
A text generation project can be divided into individual segments, each of which focuses on a certain task. The tasks in these segments build on each other so that they are usually completed one after the other. In practice however, the tasks are sometimes interlocked, for instance you can switch back to analyze your data after writing some statements.
Data input Text generation is based on structured data. The software can only write about things that can be derived from your data. So the first step is to upload your data.
Data analysis Take a look at the data and check which information you can extract from it.
Text conception You decide which information the text should have, how this information should be formulated and how your text should be structured.
Rule set In the rule set you can access your data and create logical evaluations about it. You can also define which words should be used for the outcomes of those evaluations.
Quality assurance After you finish the configuration of the project, you perform a quality assurance step, where you check the project and its results for correctness.
Text generation Now you are ready to generate your own automated texts.
Start a Text Generation Project and import Data
On the NLG Platform the organizational unit is the "Project". A Project contains everything you need to generate texts:
- the data that you import.
- the ruleset you develop for your texts.
- your statements, e.g. the part of text you write.
Assess and prepare Data for the NLG Platform
The requirements
Data is a basic prerequisite for text generation on the NLG platform. It provides the essential input for the content of your texts. In order to get meaningful and useful texts, the data you use should comply with a few conditions.
Structured format of the data
Data must be provided in a structured format. This means that data should be provided in separate data fields, e.g. you cannot use continuous text as data source. Tables as in Microsoft Excel are fine, for more complex data structures the JSON data format is supported.
Example:
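For instance, one structured record from the smartphone data used later in this seminar might look like this (the field names are illustrative, not prescribed by the platform):

  brand: Samsung
  model: Galaxy Note 4
  display_size_inches: 5.7
  color: black

The same record could equally be delivered as a JSON object with those fields as keys.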
Quality criteria for the data
Structure by itself is not sufficient – data must also meet certain quality requirements:
Technical quality criteria require uniform filling (all data sets express the same fact identically: “black” always has to be “black”) and their ability to be processed by a machine. Data can be machine-processed if, for example, lists are clearly separated, the same units are always used or words are always present in the same grammatical form.
Editorial quality criteria are, for example, correct spelling of the content, the significance of the data fields (a field that always has the same value is textually less relevant than a field that has many values), or whether sufficient data records actually have data in the field so that writing a text is worthwhile.
Start your text generation project:
- Create the project and the collection that you will work with in this tutorial.
- Learn how to upload data to your collection.
- Take a look at one document stored in JSON format.
From data to nodes
The data you have uploaded has to be analyzed and prepared for being used in your text. This is done by defining which fields will be used as variables and as what kind of values. On the NLG Platform, this procedure is called "adding data nodes". The created data nodes are then available for further processing in other areas of the platform.
To help you decide which fields to use in the text, the software first runs an analysis. Then you choose the categories and create nodes that you can use in the next steps of the text generation.
Take a closer look at your imported data and create the data nodes
- Understand how the software analyzes the data.
- Add some nodes for further use in your project.
Write statements and define the variable parts
Statements
In the next step you design a text concept. That describes what kind of information will appear in your texts. On the NLG platform, a statement is the content unit in this text concept. For example, a headline or an introduction are statements that can be found in most projects.
- One statement can contain one or more sentences.
- You can define different text output for different data values within your statements.
- You can set under which conditions a statement will appear.
Containers
A statement is composed of static parts that you have to formulate and of variable parts that are either transferred directly from your data fields or are derived from them. To define the variable parts in your statements you mark these parts as containers at the correct positions in the statement.
Container settings
Within the containers you have several options to specify these variable parts: The outputs of the containers can be simple data values or more complicated things like the content you define in self-created variables. And they offer many possibilities in their configuration: In the container settings you can manage the content, the formatting, and the grammar of each container.
- In the content settings you determine the associated variables.
- If you name the role of a container within a sentence the text can be grammatically adapted automatically when changes are made.
- Set the formatting of a word (e.g. capitalize) to achieve your desired formatting.
Preview Test Object
When writing on the NLG platform you are not using placeholders, but formulating your statement based on a single real data set: "Samsung presents the smartphone Galaxy Note 4.". For this purpose, you can configure suitable data sets as Preview Test Objects:
- in the Data Sources click on the star icon in the list of the documents.
- in the Composer you can choose between the designated preview test object or switch to the test objects configuration to see all options.
Switching between different Preview Test Objects is a good method to review your statements and check how they vary with different documents.
Write a statement and create containers
- Get to know the different elements of the Write Tab.
- Add statements and create containers.
- Adjust the grammatical output.
- Check your possible outputs by changing the Preview Test Object.
Define the logics and conditions
In order to output more than just data values, it is possible to define your own conditions for outputs. In the NLG platform, this is done with nodes and connections: The Transform Tab of the Composer provides a graphical environment for facilitating the formulation of conditions.
Nodes - anatomy and functions
In the Transform Tab nodes look like small boxes and allow you to modify, evaluate and pass on data. It is possible to connect different nodes via the small yellow and grey plugs, the so-called ports. Use nodes whenever you want to add some individual logic to your project.
The outcomes of all nodes will flow into the corresponding Variable nodes. Variable nodes have only an input port and are the only nodes that can be directly used in the text. In the Write Section a list of all Variable nodes is provided for further use in the text.
In the preview field at the base of each node, you can see the output of the current test object. Each time you change the test object, the changes will be applied automatically. Similar to the statement preview, you also have a Node Preview to check whether your conditions are working correctly. The colored indicator (red or green) of the small circles at the lower edge of the nodes indicates whether a condition for the chosen test object is true or false.
Node Types
Connection ports
Like data nodes, the other node types have ports to connect to other nodes. To use the content of a node in another node, simply drag from the output port of the first node to the input port of the corresponding node. Note that the colors of the ports have to match to make a connection. There's one exception for this rule of matching colors: variable nodes have initially a grey port, but if you connect a yellow port e.g. from a condition node, with it, it will change the color to yellow.
Set up conditions and logics
- Get familiar with the layout of the Transform Tab.
- Configure conditions for different field values.
- Process conditions for using them in your text.
- Use branches for creating different output.
Branches
When writing a statement, you have the possibility to branch off at certain points in your statement. This facilitates creating more than one way of expressing your information, or activating different variations of your sentence depending on the data. You can create branches that contain one or more words, phrases, or even entire sentences. The branches appear below each other in the Write section, so you can easily keep track of your construction. For each branch you can set different branching modes to decide which branch to render, so you can easily implement logic while writing your text.
What you can do with branches:
- Set synonyms - words, sentence parts or complete sentences for variation in your texts.
- Define a proper text output for different data values.
- Manage variants for your statements.
Use Branches to put different statements into words
- Practice working with conditions and triggers.
- Create branches that span single words or parts of sentences.
Compose your text
After you have written your statements, you can arrange them: You set the order of their appearance and define under what conditions each statement is triggered or blocked. To create a wider range of different texts, you can also choose to trigger different parts of one statement.
Variable nodes for triggering statements
To switch a statement on or off, you can use a variable node with a condition. This works as a mechanism for activating and deactivating containers, branches, statements or stories. You define your conditions with a condition node in the Transform Tab and pass on the condition to the Write Tab via a variable node for organizing your statements. When creating your stories in the Narrate Tab you can set with a variable node the condition for each story, too.
Tip: When you create a new story check the default trigger setting is "off". So if you want this new story to appear, remember to switch the trigger on.
Manage your composition in the Write and Narrate Tabs
In the Write Tab you organize your statements, name them and set their styles and the conditions when to appear. When you switch to the Narrate Tab, you can put your story together and determine its course by controlling the order of the statements in your texts. To increase the variety of the texts, you can create several stories and vary the order of the statements. In addition, you can assign triggers to the stories that determine the condition under which a story should be used.
Organize your statements and put them into different stories
- Activate and deactivate statements under certain conditions.
- Name your statements and add the associated variable nodes.
- Create different stories.
Review and generate the texts
To control the quality of the generating texts it is recommended to check your outcome under different conditions and documents. In the Review Tab you can look at your data sets and possible produced texts and look for spelling or content mistakes or logic errors. In this view you can also find out, how the statements work together.
The actual generating is the final step in your text project. Depending on your usecase and the stage of your project, you might want to produce different amounts and subsets of your texts. Therefore, we allow different modes of text generation. You can decide whether you would like to start generating single texts or the entire text mass.
How to export the generated texts.
If you want to handle your text exports manually, you can download text exports in the web interface. These files are updated automatically and contain a snapshot of your text production, which can be up to an hour old. The file formats are again JSON, CSV and Excel. You can select the format in the settings of every collection. The UID allows you to match the produced text to your database. The exports contain both raw text and HTML.
Review your project and generate one or more texts out of your ruleset
- Perform a quality assurance in the Review Tab.
- Get an overview over the Results Area.
- Generate your texts.
What's next?
Congrats! You have successfully completed the AX Seminar, now you are no longer a newbie in NLG. It is like learning a new language or getting a driver’s license: To improve your skills, the best thing you can do is to practice.
With the AX Seminar you have acquired basic knowledge of the principles of text generation and skills to work with the NLG Cloud. Now you are able to take the first steps in your own project. Do you have structured data? Ideas for statements? Then start your project right now!
Do you have any special requirements or any further questions? The AX Semantics support team can assist you quickly via chat on the NLG platform. | https://docs.ax-semantics.com/guides/seminar.html | 2020-09-18T15:53:34 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['/assets/img/001_overview.8e1bf0ac.png', 'overview'], dtype=object)
array(['/assets/img/006_definition_nodes.45c2c4cc.png', 'nodes'],
dtype=object) ] | docs.ax-semantics.com |
Configuring the product by using the console
After the product installation is complete, you must configure various databases, repositories, and the email server by using the BMC Decision Support for Server Automation Console.
Before you begin
Before you start configuring the product, ensure that the following requirements are met:
- (Microsoft SQL Server only) Ensure that you have permission to create a database link between
- To access the console, JavaScript must be enabled for your web browser.
- If you want to view the
To enable JavaScript on Microsoft Internet Explorer
- Click Tools > Internet Options.
- On the Security tab, click the Custom level button.
- Navigate to Scripting > Active scripting, and select the Enable option.
To enable JavaScript on Mozilla
- Click Tools > Internet Options.
- On the Content tab, select the Enable JavaScript check box.
To configure the product by using the BMC Decision Support for Server Automation Console
- If you did not configure the product immediately after installation, launch the BMC Decision Support for Server Automation Console.
Log on to the console with the BDSAdmin user credentials.
Click the Proceed with the Configuration button.
The postinstallation configuration wizard starts.
Note
Context-sensitive help is available for the GUI elements on the Console. By default, offline (local) context-sensitive help is available. To see the latest documentation, access the online Help. For more information, see Configuring the online Help.
- On the Database Type tab, view the database type that you are using.
- Click Next.
- Provide information on the Database Prerequisites tab (see (Oracle) Database Prerequisites tab or (SQL Server) Database Prerequisites tab).
If you have already set up the database, select the check box and go to step 8. If you have not created the users and tablespaces (for Oracle) or databases and users (for SQL Server), click the Download Database Scripts button and go to step 7.
- Set up the database:
- If you are using Oracle, create users and tablespaces using the downloaded script and start the configuration process again. For instructions for creating users and tablespaces, see Setting up the Oracle database.
- If you are using SQL Server, create users and databases using the downloaded script and start the configuration process again. For instructions for creating users and databases, see Setting up the SQL Server database.
Click Next.
- Provide information on the Warehouse Database tab (see Warehouse Database tab), and click Next.
- Provide information on the ETL Master Repository tab (see ETL Master Repository tab), and click Next.
- Provide information on the ETL Work Repository tab (see ETL Work Repository tab), and click Next.
- Provide information on the Primary Site tab (see Primary Site tab), and click Next.
Provide information on the Portal Database tab (see Portal Database tab), and click Next.
Provide information on the Email Server tab (see Email Server tab).
- Click Finish.
The Last Task Details page lists the tasks and their status that are getting executed during the configuration process.
- When the product is configured successfully, click either Home or Configuration to activate the ETL Management menu.
Where to go from here
Restarting the RSCD agent
Related topics
Changing the password for the BDSAdmin user ID
Managing databases
Modifying email server details
Modifying Apache Web Server details | https://docs.bmc.com/docs/decisionsupportserverautomation/88/configuring-the-product-by-using-the-console-629184514.html | 2020-09-18T18:17:38 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.bmc.com |
Django 3.1 release notes¶
August 4, 2020
Welcome to Django 3.1!
These release notes cover the new features, as well as some backwards incompatible changes you’ll want to be aware of when upgrading from Django 3.0 or earlier. 3.1 supports Python 3.6, 3.7, and 3.8. We highly recommend and only officially support the latest release of each series.
What’s new in Django 3.1¶
Asynchronous views and middleware support¶
Django now supports a fully asynchronous request path, including:
To get started with async views, you need to declare a view using
async def:
async def my_view(request):
    await asyncio.sleep(0.5)
    return HttpResponse('Hello, async world!')
All asynchronous features are supported whether you are running under WSGI or ASGI mode. However, there will be performance penalties using async code in WSGI mode. You can read more about the specifics in Asynchronous support documentation.
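To exercise the fully asynchronous request path end-to-end, the project needs to be served by an ASGI server. A minimal example, assuming the third-party Uvicorn server is installed and the project is named mysite:

python -m uvicorn mysite.asgi:application

Under a traditional WSGI server the same views still work, only with the performance penalty mentioned above.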
You are free to mix async and sync views, middleware, and tests as much as you want. Django will ensure that you always end up with the right execution context. We expect most projects will keep the majority of their views synchronous, and only have a select few running in async mode - but it is entirely your choice.
Django’s ORM, cache layer, and other pieces of code that do long-running network calls do not yet support async access. We expect to add support for them in upcoming releases. Async views are ideal, however, if you are doing a lot of API or HTTP calls inside your view, you can now natively do all those HTTP calls in parallel to considerably speed up your view’s execution.
Asynchronous support should be entirely backwards-compatible and we have tried to ensure that it has no speed regressions for your existing, synchronous code. It should have no noticeable effect on any existing Django projects.
JSONField for all supported database backends¶
Django now includes
models.JSONField and
forms.JSONField that can be used on all
supported database backends. Both fields support the use of custom JSON
encoders and decoders. The model field supports the introspection,
lookups, and transforms that were previously
PostgreSQL-only:
from django.db import models

class ContactInfo(models.Model):
    data = models.JSONField()

ContactInfo.objects.create(data={
    'name': 'John',
    'cities': ['London', 'Cambridge'],
    'pets': {'dogs': ['Rufus', 'Meg']},
})
ContactInfo.objects.filter(
    data__name='John',
    data__pets__has_key='dogs',
    data__cities__contains='London',
).delete()
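Where the stored values include types the standard JSON encoder cannot handle (dates, Decimals, UUIDs), a custom encoder can be supplied; for example, using Django's own DjangoJSONEncoder (the model here is illustrative):

from django.core.serializers.json import DjangoJSONEncoder
from django.db import models

class Event(models.Model):
    payload = models.JSONField(encoder=DjangoJSONEncoder)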
If your project uses
django.contrib.postgres.fields.JSONField, plus the
related form field and transforms, you should adjust to use the new fields,
and generate and apply a database migration. For now, the old fields and
transforms are left as a reference to the new ones and are deprecated as
of this release.
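In most cases that adjustment is just a change of import path. A sketch, assuming a model field that previously used the PostgreSQL-specific class:

# before
from django.contrib.postgres.fields import JSONField

# after
from django.db.models import JSONField

followed by the usual python manage.py makemigrations and python manage.py migrate to record the (typically no-op at the database level) change.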
DEFAULT_HASHING_ALGORITHM settings¶
The new
DEFAULT_HASHING_ALGORITHM transitional setting allows
specifying the default hashing algorithm to use for encoding cookies, password
reset tokens in the admin site, user sessions, and signatures created by
django.core.signing.Signer and
django.core.signing.dumps().
Support for SHA-256 was added in Django 3.1. If you are upgrading multiple
instances of the same project to Django 3.1, you should set
DEFAULT_HASHING_ALGORITHM to
'sha1' during the transition, in
order to allow compatibility with the older versions of Django. Note that this
requires Django 3.1.1+. Once the transition to 3.1 is complete you can stop
overriding
DEFAULT_HASHING_ALGORITHM.
This setting is deprecated as of this release, because support for tokens, cookies, sessions, and signatures that use SHA-1 algorithm will be removed in Django 4.0.
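For example, while older instances are still running, the transitional value can be pinned in the project settings:

# settings.py, only while pre-3.1 instances are still serving traffic
DEFAULT_HASHING_ALGORITHM = 'sha1'

and removed again once every instance has been upgraded.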
Minor features¶
django.contrib.admin¶
The new
django.contrib.admin.EmptyFieldListFilterfor
ModelAdmin.list_filterallows filtering on empty values (empty strings and nulls) in the admin changelist view.
Filters in the right sidebar of the admin changelist view now contain a link to clear all filters.
The admin now has a sidebar on larger screens for easier navigation. It is enabled by default but can be disabled by using a custom
AdminSiteand setting
AdminSite.enable_nav_sidebarto
False.
Rendering the sidebar requires access to the current request in order to set CSS and ARIA role affordances. This requires using
'django.template.context_processors.request'in the
'context_processors'option of
OPTIONS.
XRegExpis upgraded from version 2.0.0 to 3.2.0.
jQuery is upgraded from version 3.4.1 to 3.5.1.
Select2 library is upgraded from version 4.0.7 to 4.0.13.
django.contrib.auth¶
- The default iteration count for the PBKDF2 password hasher is increased from 180,000 to 216,000.
- The new
PASSWORD_RESET_TIMEOUTsetting allows defining the number of seconds a password reset link is valid for. This is encouraged instead of the deprecated
PASSWORD_RESET_TIMEOUT_DAYSsetting, which will be removed in Django 4.0.
- The password reset mechanism now uses the SHA-256 hashing algorithm. Support for tokens that use the old hashing algorithm remains until Django 4.0.
AbstractBaseUser.get_session_auth_hash()now uses the SHA-256 hashing algorithm. Support for user sessions that use the old hashing algorithm remains until Django 4.0.
django.contrib.contenttypes¶
- The new
remove_stale_contenttypes --include-stale-appsoption allows removing stale content types from previously installed apps that have been removed from
INSTALLED_APPS.
django.contrib.gis¶
relatelookup is now supported on MariaDB.
- Added the
LinearRing.is_counterclockwiseproperty.
AsGeoJSONis now supported on Oracle.
- Added the
AsWKBand
AsWKTfunctions.
- Added support for PostGIS 3 and GDAL 3.
django.contrib.humanize¶
django.contrib.postgres¶
- The new
BloomIndexclass allows creating
bloomindexes in the database. The new
BloomExtensionmigration operation installs the
bloomextension to add support for this index.
get_FOO_display()now supports
ArrayFieldand
RangeField.
- The new
rangefield.lower_inc,
rangefield.lower_inf,
rangefield.upper_inc, and
rangefield.upper_inflookups allow querying
RangeFieldby a bound type.
rangefield.contained_bynow supports
SmallAutoField,
AutoField,
BigAutoField,
SmallIntegerField, and
DecimalField.
SearchQuerynow supports
'websearch'search type on PostgreSQL 11+.
SearchQuery.valuenow supports query expressions.
- The new
SearchHeadlineclass allows highlighting search results.
searchlookup now supports query expressions.
- The new
cover_densityparameter of
SearchRankallows ranking by cover density.
- The new
normalizationparameter of
SearchRankallows rank normalization.
- The new
ExclusionConstraint.deferrableattribute allows creating deferrable exclusion constraints.
django.contrib.sessions¶
- The
SESSION_COOKIE_SAMESITEsetting now allows
'None'(string) value to explicitly state that the cookie is sent with all same-site and cross-site requests.
django.contrib.staticfiles¶
- The
STATICFILES_DIRSsetting now supports
pathlib.Path.
Cache¶
- The
cache_control()decorator and
patch_cache_control()method now support multiple field names in the
no-cachedirective for the
Cache-Controlheader, according to RFC 7234#section-5.2.2.2.
delete()now returns
Trueif the key was successfully deleted,
Falseotherwise.
CSRF¶
- The
CSRF_COOKIE_SAMESITEsetting now allows
'None'(string) value to explicitly state that the cookie is sent with all same-site and cross-site requests.
- The
pathlib.Path.
Error Reporting¶
django.views.debug.SafeExceptionReporterFilternow filters sensitive values from
request.METAin exception reports.
- The new
SafeExceptionReporterFilter.cleansed_substituteand
SafeExceptionReporterFilter.hidden_settingsattributes allow customization of sensitive settings and
request.METAfiltering in exception reports.
- The technical 404 debug view now respects
DEFAULT_EXCEPTION_REPORTER_FILTERwhen applying settings filtering.
- The new
DEFAULT_EXCEPTION_REPORTERallows providing a
django.views.debug.ExceptionReportersubclass to customize exception report generation. See Custom error reports for details.
File Storage¶
FileSystemStorage.save() method now supports pathlib.Path.
FileField and ImageField now accept a callable for storage. This allows you to modify the used storage at runtime, selecting different storages for different environments, for example (a minimal sketch follows below).
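A minimal sketch of the callable-storage pattern (the model, storage locations, and the DEBUG-based switch are illustrative):

from django.core.files.storage import FileSystemStorage, default_storage
from django.db import models

def select_storage():
    from django.conf import settings
    # Any runtime condition works here; DEBUG is just an example.
    return FileSystemStorage(location='/tmp/uploads') if settings.DEBUG else default_storage

class Report(models.Model):
    document = models.FileField(storage=select_storage)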
Forms¶
ModelChoiceIterator, used by
ModelChoiceFieldand
ModelMultipleChoiceField, now uses
ModelChoiceIteratorValuethat can be used by widgets to access model instances. See Iterating relationship choices for details.
django.forms.DateTimeFieldnow accepts dates in a subset of ISO 8601 datetime formats, including optional timezone, e.g.
2019-10-10T06:47,
2019-10-10T06:47:23+04:00, or
2019-10-10T06:47:23Z. The timezone will always be retained if provided, with timezone-aware datetimes being returned even when
USE_TZis
False.
Additionally,
DateTimeFieldnow uses
DATE_INPUT_FORMATSin addition to
DATETIME_INPUT_FORMATSwhen converting a field input to a
datetimevalue.
MultiWidget.widgetsnow accepts a dictionary which allows customizing subwidget
nameattributes.
The new
BoundField.widget_typeproperty can be used to dynamically adjust form rendering based upon the widget type.
Internationalization¶
- The
LANGUAGE_COOKIE_SAMESITEsetting now allows
'None'(string) value to explicitly state that the cookie is sent with all same-site and cross-site requests.
- Added support and translations for the Algerian Arabic, Igbo, Kyrgyz, Tajik, and Turkmen languages.
Management Commands¶
- The new
check --databaseoption allows specifying database aliases for running the
databasesystem checks. Previously these checks were enabled for all configured
DATABASESby passing the
databasetag to the command.
- The new
migrate --checkoption makes the command exit with a non-zero status when unapplied migrations are detected.
- The new
returncodeargument for
CommandErrorallows customizing the exit status for management commands.
- The new
dbshell -- ARGUMENTSoption allows passing extra arguments to the command-line client for the database.
- The
flushand
sqlflushcommands now include SQL to reset sequences on SQLite.
Models¶
- The new
ExtractIsoWeekDayfunction extracts ISO-8601 week days from
DateFieldand
DateTimeField, and the new
iso_week_daylookup allows querying by an ISO-8601 day of week.
QuerySet.explain()now supports:
TREEformat on MySQL 8.0.16+,
analyzeoption on MySQL 8.0.18+ and MariaDB.
- Added
PositiveBigIntegerFieldwhich acts much like a
PositiveIntegerFieldexcept that it only allows values under a certain (database-dependent) limit. Values from
0to
9223372036854775807are safe in all databases supported by Django.
- The new
RESTRICToption for
on_deleteargument of
ForeignKeyand
OneToOneFieldemulates the behavior of the SQL constraint
ON DELETE RESTRICT.
CheckConstraint.checknow supports boolean expressions.
- The
RelatedManager.add(),
create(), and
set()methods now accept callables as values in the
through_defaultsargument.
- The new
is_dstparameter of the
QuerySet.datetimes()determines the treatment of nonexistent and ambiguous datetimes.
- The new
Fexpression
bitxor()method allows bitwise XOR operation.
QuerySet.bulk_create()now sets the primary key on objects when using MariaDB 10.5+.
- The
DatabaseOperations.sql_flush()method now generates more efficient SQL on MySQL by using
DELETEinstead of
TRUNCATEstatements for tables which don’t require resetting sequences.
- SQLite functions are now marked as
deterministicon Python 3.8+. This allows using them in check constraints and partial indexes.
- The new
UniqueConstraint.deferrableattribute allows creating deferrable unique constraints.
Requests and Responses¶
- If
ALLOWED_HOSTSis empty and
DEBUG=True, subdomains of localhost are now allowed in the
Hostheader, e.g.
static.localhost.
HttpResponse.set_cookie()and
HttpResponse.set_signed_cookie()now allow using
samesite='None'(string) to explicitly state that the cookie is sent with all same-site and cross-site requests.
- The new
HttpRequest.accepts()method returns whether the request accepts the given MIME type according to the
AcceptHTTP header.
Security¶
The
SECURE_REFERRER_POLICYsetting now defaults to
'same-origin'. With this configured,
SecurityMiddlewaresets the Referrer Policy header to
same-originon all responses that do not already have it. This prevents the
Refererheader being sent to other origins. If you need the previous behavior, explicitly set
SECURE_REFERRER_POLICYto
None.
The default algorithm of
django.core.signing.Signer,
django.core.signing.loads(), and
django.core.signing.dumps()is changed to the SHA-256. Support for signatures made with the old SHA-1 algorithm remains until Django 4.0.
Also, the new algorithm parameter of the Signer allows customizing the hashing algorithm (a minimal sketch follows below).
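A small sketch of that parameter, for instance to keep producing SHA-1 signatures during a transition window (the value being signed is illustrative):

from django.core.signing import Signer

legacy_signer = Signer(algorithm='sha1')
token = legacy_signer.sign('user-42')
value = legacy_signer.unsign(token)  # raises BadSignature if tampered with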
Templates¶
- The renamed
translateand
blocktranslatetemplate tags are introduced for internationalization in template code. The older
transand
blocktranstemplate tags aliases continue to work, and will be retained for the foreseeable future.
- The
includetemplate tag now accepts iterables of template names.
Tests¶
SimpleTestCasenow implements the
debug()method to allow running a test without collecting the result and catching exceptions. This can be used to support running tests under a debugger.
- The new
MIGRATEtest database setting allows disabling of migrations during a test database creation.
- Django test runner now supports a
test --bufferoption to discard output for passing tests.
DiscoverRunnernow skips running the system checks on databases not referenced by tests.
TransactionTestCaseteardown is now faster on MySQL due to
flushcommand improvements. As a side effect the latter doesn’t automatically reset sequences on teardown anymore. Enable
TransactionTestCase.reset_sequencesif your tests require this feature.
URLs¶
- Path converters can now raise
ValueErrorin
to_url()to indicate no match when reversing URLs.
Utilities¶
filepath_to_uri()now supports
pathlib.Path.
parse_duration()now supports comma separators for decimal fractions in the ISO 8601 format.
parse_datetime(),
parse_duration(), and
parse_time()now support comma separators for milliseconds.
Miscellaneous¶
- The SQLite backend now supports
pathlib.Pathfor the
NAMEsetting.
- The
settings.pygenerated by the
startprojectcommand now uses
pathlib.Pathinstead of
os.pathfor building filesystem paths.
- The
TIME_ZONEsetting is now allowed on databases that support time zones.
Backwards incompatible changes in 3.1¶
Database backend API¶
This section describes changes that may be needed in third-party database backends.
DatabaseOperations.fetch_returned_insert_columns()now requires an additional
returning_paramsargument.
connection.timezoneproperty is now
'UTC'by default, or the
TIME_ZONEwhen
USE_TZis
Trueon databases that support time zones. Previously, it was
Noneon databases that support time zones.
connection._nodb_connectionproperty is changed to the
connection._nodb_cursor()method and now returns a context manager that yields a cursor and automatically closes the cursor and connection upon exiting the
withstatement.
DatabaseClient.runshell()now requires an additional
parametersargument as a list of extra arguments to pass on to the command-line client.
- The
sequencespositional argument of
DatabaseOperations.sql_flush()is replaced by the boolean keyword-only argument
reset_sequences. If
True, the sequences of the truncated tables will be reset.
- The
allow_cascadeargument of
DatabaseOperations.sql_flush()is now a keyword-only argument.
- The
usingpositional argument of
DatabaseOperations.execute_sql_flush()is removed. The method now uses the database of the called instance.
- Third-party database backends must implement support for
JSONFieldor set
DatabaseFeatures.supports_json_fieldto
False. If storing primitives is not supported, set
DatabaseFeatures.supports_primitives_in_json_fieldto
False. If there is a true datatype for JSON, set
DatabaseFeatures.has_native_json_fieldto
True. If
jsonfield.containsand
jsonfield.contained_byare not supported, set
DatabaseFeatures.supports_json_field_containsto
False.
- Third party database backends must implement introspection for
JSONFieldor set
can_introspect_json_fieldto
False.
Dropped support for MariaDB 10.1¶
Upstream support for MariaDB 10.1 ends in October 2020. Django 3.1 supports MariaDB 10.2 and higher.
contrib.admin browser support¶
The admin no longer supports the legacy Internet Explorer browser. See the admin FAQ for details on supported browsers.
AbstractUser.first_name
max_length increased to 150¶
A migration for
django.contrib.auth.models.User.first_name is included.
If you have a custom user model inheriting from
AbstractUser, you’ll need
to generate and apply a database migration for your user model.
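The commands are the usual migration pair, run against the app that holds the custom user model (the app label accounts is illustrative):

python manage.py makemigrations accounts
python manage.py migrate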
If you want to preserve the 30 character limit for first names, use a custom form:
from django import forms
from django.contrib.auth.forms import UserChangeForm

class MyUserChangeForm(UserChangeForm):
    first_name = forms.CharField(max_length=30, required=False)
Miscellaneous¶
- The cache keys used by
cacheand generated by
make_template_fragment_key()are different from the keys generated by older versions of Django. After upgrading to Django 3.1, the first request to any previously cached template fragment will be a cache miss.
- The logic behind the decision to return a redirection fallback or a 204 HTTP response from the
set_language()view is now based on the
AcceptHTTP header instead of the
X-Requested-WithHTTP header presence.
- The compatibility imports of
django.core.exceptions.EmptyResultSetin
django.db.models.query,
django.db.models.sql, and
django.db.models.sql.datastructuresare removed.
- The compatibility import of
django.core.exceptions.FieldDoesNotExistin
django.db.models.fieldsis removed.
- The compatibility imports of
django.forms.utils.pretty_name()and
django.forms.boundfield.BoundFieldin
django.forms.formsare removed.
- The compatibility imports of
Context,
ContextPopException, and
RequestContextin
django.template.baseare removed.
- The compatibility import of
django.contrib.admin.helpers.ACTION_CHECKBOX_NAMEin
django.contrib.adminis removed.
- The django.utils.decorators.classproperty() decorator is made public and moved to django.utils.functional.classproperty().
floatformattemplate filter now outputs (positive)
0for negative numbers which round to zero.
Meta.orderingand
Meta.unique_togetheroptions on models in
django.contribmodules that were formerly tuples are now lists.
- The admin calendar widget now handles two-digit years according to the Open Group Specification, i.e. values between 69 and 99 are mapped to the previous century, and values between 0 and 68 are mapped to the current century.
- Date-only formats are removed from the default list for
DATETIME_INPUT_FORMATS.
- The
FileInputwidget no longer renders with the
requiredHTML attribute when initial data exists.
- The undocumented
django.views.debug.ExceptionReporterFilterclass is removed. As per the Custom error reports documentation, classes to be used with
DEFAULT_EXCEPTION_REPORTER_FILTERneed to inherit from
django.views.debug.SafeExceptionReporterFilter.
- The cache timeout set by
cache_page()decorator now takes precedence over the
max-agedirective from the
Cache-Controlheader.
- Providing a non-local remote field in the
ForeignKey.to_fieldargument now raises
FieldError.
SECURE_REFERRER_POLICYnow defaults to
'same-origin'. See the What’s New Security section above for more details.
checkmanagement command now runs the
databasesystem checks only for database aliases specified using
check --databaseoption.
migratemanagement command now runs the
databasesystem checks only for a database to migrate.
- The admin CSS classes
row1and
row2are removed in favor of
:nth-child(odd)and
:nth-child(even)pseudo-classes.
- The
make_password()function now requires its argument to be a string or bytes. Other types should be explicitly cast to one of these.
- The undocumented
versionparameter to the
AsKMLfunction is removed.
- JSON and YAML serializers, used by
dumpdata, now dump all data with Unicode by default. If you need the previous behavior, pass
ensure_ascii=Trueto JSON serializer, or
allow_unicode=Falseto YAML serializer.
- The auto-reloader no longer monitors changes in built-in Django translation files.
- The minimum supported version of
mysqlclientis increased from 1.3.13 to 1.4.0.
- The undocumented
django.contrib.postgres.forms.InvalidJSONInputand
django.contrib.postgres.forms.JSONStringare moved to
django.forms.fields.
- The undocumented
django.contrib.postgres.fields.jsonb.JsonAdapterclass is removed.
- The
{% localize off %}tag and
unlocalizefilter no longer respect
DECIMAL_SEPARATORsetting.
- The minimum supported version of
asgirefis increased from 3.2 to 3.2.10.
- The Media class now renders
<script>tags without the
typeattribute to follow WHATWG recommendations.
Features deprecated in 3.1¶
PostgreSQL
JSONField¶
django.contrib.postgres.fields.JSONField and
django.contrib.postgres.forms.JSONField are deprecated in favor of
models.JSONField and
forms.JSONField.
The undocumented
django.contrib.postgres.fields.jsonb.KeyTransform and
django.contrib.postgres.fields.jsonb.KeyTextTransform are also deprecated
in favor of the transforms in
django.db.models.fields.json.
The new
JSONFields,
KeyTransform, and
KeyTextTransform can be used
on all supported database backends.
Miscellaneous¶
PASSWORD_RESET_TIMEOUT_DAYSsetting is deprecated in favor of
PASSWORD_RESET_TIMEOUT.
The undocumented usage of the
isnulllookup with non-boolean values as the right-hand side is deprecated, use
Trueor
Falseinstead.
The barely documented
django.db.models.query_utils.InvalidQueryexception class is deprecated in favor of
FieldDoesNotExistand
FieldError.
The
django-admin.pyentry point is deprecated in favor of
django-admin.
The
HttpRequest.is_ajax()method is deprecated as it relied on a jQuery-specific way of signifying AJAX calls, while current usage tends to use the JavaScript Fetch API. Depending on your use case, you can either write your own AJAX detection method, or use the new
HttpRequest.accepts()method if your code depends on the client
AcceptHTTP header.
If you are writing your own AJAX detection method,
request.is_ajax()can be reproduced exactly as
request.headers.get('x-requested-with') == 'XMLHttpRequest'.
Passing
Noneas the first argument to
django.utils.deprecation.MiddlewareMixin.__init__()is deprecated.
The encoding format of cookies values used by
CookieStorageis different from the format generated by older versions of Django. Support for the old format remains until Django 4.0.
The encoding format of sessions is different from the format generated by older versions of Django. Support for the old format remains until Django 4.0.
The purely documentational
providing_argsargument for
Signalis deprecated. If you rely on this argument as documentation, you can move the text to a code comment or docstring.
Calling
django.utils.crypto.get_random_string()without a
lengthargument is deprecated.
The
listmessage for
ModelMultipleChoiceFieldis deprecated in favor of
invalid_list.
Passing raw column aliases to
QuerySet.order_by()is deprecated. The same result can be achieved by passing aliases in a
RawSQLinstead beforehand.
The
NullBooleanFieldmodel field is deprecated in favor of
BooleanField(null=True).
django.conf.urls.url()alias of
django.urls.re_path()is deprecated.
The
{% ifequal %}and
{% ifnotequal %}template tags are deprecated in favor of
{% if %}.
{% if %}covers all use cases, but if you need to continue using these tags, they can be extracted from Django to a module and included as a built-in tag in the
'builtins'option in
OPTIONS.
DEFAULT_HASHING_ALGORITHMtransitional setting is deprecated.
Features removed in 3.1¶.
- A model’s
Meta.orderingdoesn’t affect
GROUP BYqueries.
django.contrib.postgres.fields.FloatRangeFieldand
django.contrib.postgres.forms.FloatRangeFieldare removed.
- The
FILE_CHARSETsetting is removed.
django.contrib.staticfiles.storage.CachedStaticFilesStorageis removed.
- The
RemoteUserBackend.configure_user()method requires
requestas the first positional argument.
- Support for
SimpleTestCase.allow_database_queriesand
TransactionTestCase.multi_dbis removed. | https://docs.djangoproject.com/en/3.1/releases/3.1/ | 2020-09-18T18:12:47 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.djangoproject.com |
Upload STIX files through the REST API
Threat collections enable your Reveal(x) system to identify suspicious IP addresses, hostnames, and URIs found in your network activity. While an ExtraHop-curated threat collection is available) print(r.status_code), verify=False)? | https://docs.extrahop.com/7.8/rest-upload-stix/ | 2020-09-18T17:10:24 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.extrahop.com |
Join the MemSQL Community Today
Get expert advice, develop skills, and connect with others.
Constructor function. This function takes two floats or doubles and returns a
GeographyPoint type. Since all of MemSQL’s topological and measurement functions can equally understand WKT strings and geospatial objects, this constructor is mainly for convenience.
GEOGRAPHY_POINT ( longitude, latitude )
A
GeographyPoint object.
In this example, we use a “persisted” computed column to create an indexed GeographyPoint from a pair of floats. This technique is useful for bulk-loading geospatial data.
memsql> create table foo ( -> id int unsigned primary key, -> lon float, -> lat float, -> location as geography_point(lon, lat) persisted geographypoint, -> index(location) -> ); memsql> insert into foo values (1, 50.01, 40.01); memsql> select * from foo; +----+-------+-------+--------------------------------+ | id | lon | lat | location | +----+-------+-------+--------------------------------+ | 1 | 50.01 | 40.01 | POINT(50.00999831 40.00999832) | +----+-------+-------+--------------------------------+ 1 row in set (0.00 sec) | https://docs.memsql.com/v7.1/reference/sql-reference/geospatial-functions/geography_point/ | 2020-09-18T16:27:21 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.memsql.com |
Welcome to the Django Dynamic Fixtures documentation!¶
Django Dynamic Fixtures is a Django app which gives you the ability to setup fixture-data in a more dynamic way. Static fixtures are sometimes too static in a way that for example even the primary keys are static defined, this can be very hard to maintain especially in bigger projects. Another example; when your application depends on data with a recent timestamp your static fixtures can get ‘outdated’.
For all these issues Django Dynamic Fixtures has a solution and even more!
- Features:
- Write fixtures in Python;
- Load fixtures which are required for your task;
- Manage fixture Dependencies.
Changelog¶
0.2.1
- Added some docs about dry-run mode
- Fixed Django versions in setup.py
0.2.0
- Added time elapsed per fixture
- Dry-run mode
- List available fixtures
- Run all fixtures in an transaction
- Removed support for Django 1.7
- Added support for Django 2.0
Installation¶
First install the package:
$ pip install django-dynamic-fixtures
Add the app to your project’s settings.py file:
# settings.py
INSTALLED_APPS = [
    ...,
    'dynamic_fixtures'
]
Or make sure the app is not loaded on production:
# settings.py
if DEBUG:
    INSTALLED_APPS = INSTALLED_APPS + ['dynamic_fixtures']
Write fixtures¶
All fixtures are written in .py files in the fixtures module of your app.
It is recommended to prefix the fixture files with numbers, just like you probably already know from the Django migrations:
Inside the fixture file you have to create a class called Fixture. This
class should extend from
dynamic_fixtures.fixtures.basefixture.BaseFixture.
In this class you define at least the load method. In this method you are free to set up your fixture data in any way you like:
#my_django_project/my_app/fixtures/0001_create_example_author.py

from dynamic_fixtures.fixtures import BaseFixture
from my_app.models import Author


class Fixture(BaseFixture):

    def load(self):
        Author.objects.create(name="John Doe")
List fixtures¶
To list all existing fixtures you can call the management command load_dynamic_fixtures with an argument –list:
$ ./manage.py load_dynamic_fixtures --list
The output may help to find out the reason why a fixture wasn’t loaded.
Load fixtures¶
To load the fixtures you can call the management command load_dynamic_fixtures:
$ ./manage.py load_dynamic_fixtures
You can also specify which fixtures you want to load. In this case the requested fixture will be loaded plus all depending fixtures. This ensures that you always have a valid data-set:
$ ./manage.py load_dynamic_fixtures my_app 0001_create_example_author
Or load all fixtures for a given app:
$ ./manage.py load_dynamic_fixtures my_app
Dry-run¶
You can test your fixtures in dry-run mode. Add the –dry-run argument to the management command. Fixtures will loaded as without dry-run enabled however the transaction will be rolled back at the end:
$ ./manage.py load_dynamic_fixtures --dry-run
Dependencies¶
It’s also possible to maintain dependencies between fixtures. This can be accomplished in the same way as Django migrations:
#my_django_project/my_app/fixtures/0002_create_example_books.py

from dynamic_fixtures.fixtures import BaseFixture
from my_app.models import Author, Book


class Fixture(BaseFixture):

    dependencies = (
        ('my_app', '0001_create_example_author'),
    )

    def load(self):
        author = Author.objects.get(name='John Doe')
        Book.objects.create(title="About roses and gladiolus", author=author)
        Book.objects.create(title="The green smurf", author=author)
The library takes care that the depending fixture is loaded before this one, so you know for sure that the entity is available in the database.
Gotcha’s¶
A really powerful combination is a combination of this library and Factory Boy. In the example below 50 authors will get created from factories.:
#my_django_project/my_app/fixtures/0001_create_example_author.py

from dynamic_fixtures.fixtures import BaseFixture
from my_app.factories import AuthorFactory


class Fixture(BaseFixture):

    def load(self):
        AuthorFactory.create_batch(size=50)
Planning the initial ETL run
You will need to run ETL after you successfully install and configure the product by using the
Plan the first ETL run carefully because it can be resource intensive and time consuming. The amount of time and database resources needed for the initial data load depends on the data volume in the
To prepare for the first ETL run, perform the reports data warehouse sizing exercise by using sizing tools recommended by BMC. After the sizing exercise is complete, ensure the following requirements:
- The required disk space is allocated for the reports data warehouse.
- CPUs are allocated for the database server.
- Database parameters are set to the correct values (see Recommendations for the Oracle processes parameter and the Microsoft SQL Server documentation for SQL Server parameter recommendations).
The amount of data in your
When loading data from the | https://docs.bmc.com/docs/decisionsupportserverautomation/88/planning-the-initial-etl-run-629184452.html | 2020-09-18T17:31:58 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.bmc.com |
What’s the Shade Cloth Rotary Clothesline Cover like in the wind?
This cover is excellent in windy conditions, being a shade cloth it allows the majority of wind to pass straight through it.
Still Got Questions or Want to Speak to a Real Person?
To see photo's, video's and reviews please visit the Shade Cloth Rotary Clothesline Cover product page here | https://docs.lifestyleclotheslines.com.au/article/458-what-s-the-shade-cloth-rotary-clothesline-cover-like-in-the-wind | 2020-09-18T17:23:48 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.lifestyleclotheslines.com.au |
1 Introduction
This activity can only be used in Nanoflows.
The Call nanoflow activity can be used to call another nanoflow.
Arguments can be passed to the nanoflow and the result can be stored in a variable.
2 Properties
There are two sets of properties for this activity, those in the dialog box on the left, and those in the properties pane on the right:
The Nanoflow Nanoflow
The nanoflow nanoflow, you have to supply an argument of the same type. The values of the arguments are expressed using expressions.
3.3 Return Type
This read-only property indicates whether you will retrieve a variable, object or list.
3.4 Use Return Value
This property determines if the returned value from the called nanoflow should be available in the rest of the current. | https://docs.mendix.com/refguide/nanoflow-call | 2020-09-18T18:02:29 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['attachments/action-call-activities/nanoflow-call.png',
'Nanoflow Call'], dtype=object)
array(['attachments/action-call-activities/nanoflow-call-properties.png',
'Nanoflow Call Properties'], dtype=object) ] | docs.mendix.com |
Site Layout
The site layout is concerned with the layout design of this Cream Magazine theme.
Click on Dashboard>Appearance> Customize>Site Layout to customize the site layout.
This theme has two layouts. They are as follows
1. Full width layout
The full-width layout is the default layout of this theme and has a white background. It focuses on your content and showcases it.
It has a full white background.
2. Boxed layout
Boxed layouts are fixed layouts with defined padding boundaries. They have a boxed design and you can notice the padding.
It has a visibly different background colour, and you can change the background colour.
'SSLL Cream Magazine Screen_Shot.png'], dtype=object)
array(['https://docs.themebeez.com/wp-content/uploads/2020/04/SSLL-02-Cream-Magazine-Screen_Shot-1024x507.png',
'SSLL 02 Cream Magazine Screen_Shot.png'], dtype=object) ] | docs.themebeez.com |
Unable to upgrade to the latest version
From Secure Web Gateway
Issues
Unable to upgrade to the latest SafeSquid version
Prerequisites
Monit service should be up.
Solution
When you are not able to upgrade to the latest version of SafeSquid from the SafeSquid Interface, follow the link How To Upgrade SafeSquid To A Newer Version.
Follow the steps below if you face an upgrade issue.
- Go to SafeSquid console (putty or Linux box)
- Login using the root access.
- Check disk space by using the command : df -kh
/dev/ram1 62M 1.3M 58M 3% /tmp/safesquid
If the file system /dev/ram1 is full, then remove all files from /tmp/safesquid folder.
Command:
root@sabproxy:/tmp/safesquid# rm -rf * | https://docs.safesquid.com/index.php?title=Unable_to_upgrade_to_the_latest_version&printable=yes | 2020-09-18T17:06:10 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.safesquid.com |
To assist you during the focusing, in this mode PI CropVIEW Focus filters the edges of the image and converts the
total amount of edges into a figure that we call “detail level”.
An image with a higher number of edges is sharper or more focused. Actual detail level is an indicator of the
focus accuracy. The higher the value, the more focused the image.
Actual detail level: Displays the detail level of the most recent image.
Last detail level: Displays the detail level of the previously received image, indicates whether you are turning the
camera lens in the right direction.
Best detail level: Displays the highest of all detail levels measured in the process
Reset Detail: Resets the detail level fields.
B/W: Option to display the image in Black & White or in colour.
Connect to Camera: Starts the USB communication between PC and the camera.
Start: Starts the image capturing process.
Stop: Stops the image capturing process.
Status: Here you can see the actual action that the system is performing. | http://docs.metos.at/Pi+CropVIEW+focus+application%3A+focus+tab?structure=CropVIEW | 2020-09-18T18:13:06 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.metos.at |
Audit Logs
The Audit tab lets you view and search audit logs for security and compliance purposes. These logs contain information about activities recorded from every IP address connected to the node.
Searching Audit LogsSearching Audit Logs
You can search for audit logs associated services installed in your cluster. These logs are displayed within the timeline you select.
To search for audit logs, do the following.
- From the left panel, you can apply the following criteria for searching a log file.
- Services: Displays the list of services used in the integrated applications. These services display the time stamp at which the associated logs were last captured at.
- Filters: You can filter with the following criteria. host.name, cmd, proto, allowed.
To search a host name or cmd from the available list, type the hostname or cmd.
Grouping LogsGrouping Logs
You can group the audit. | https://docs.acceldata.dev/docs/logs/auditlog/ | 2020-09-18T16:19:47 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.acceldata.dev |
This page provides the account-level permissions available in AppDynamics. You can set account permissions for custom roles from the Account tab in the Controller Administration UI. Most installations have one account per Controller. Usually, only very large installations or installations that have very distinct sets of users may require multiple accounts.
Multiple accounts are part of the multi-tenant mode of installing the Controller. See Controller Deployment
The following table lists the permissions that can be set at the Account level. | https://docs.appdynamics.com/plugins/viewsource/viewpagesrc.action?pageId=45485047 | 2020-09-18T16:40:14 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.appdynamics.com |
Communication¶
We use IRC as our primary communication during summer training and it also a main communication medium for many FOSS projects.
How to use IRC?¶
Please read the What is IRC? chapter to learn in details.
Register your nickname¶
Remember to register your nickname, you can follow this guide.
Rules to follow¶
Be nice to others.
Always type full English words, no sms speak in any FOSS communications. That means no more ‘u’ or ‘r’, instead type ‘you’ or ‘are’.
Though are few short forms which are acceptable in IRC.
For more Abbreviations Commonly Used on IRC.
Do not write HTML emails¶
Please avoid sending HTML emails, in private emails, or to any mailing list. dgplug mailing list will automatically delete any HTML email sent to the list.
We are sure you want to why is it bad? We will discuss more about it during the training.
For now, we give you this tweet.
Mailing list¶
Please join in our mailing list. Remember not to do top post but only reply inline. To avoid top post use E-mail client (Thunderbird, Evolution).
Top post reply:
Hello, Please refer to <> for yesterday's training logs. The timing of the today's class is 06:30 P.M. (IST). Bar -- Foo<foo at gmail.com> wrote: > i have missed the yesterday's training class. > Where can i get the yesterday class's log? > What is the timing of the today's class?
Inline reply:
Hello, -- Foo<foo at gmail.com> wrote: > i have missed the yesterday's training class. > Where can i get the yesterday class's log? Please refer to <> for yesterday's training logs. > What is the timing of the today's class? The timing of the today's class is 06:30 P.M. (IST). Bar
Rules for the sessions¶
- Do not speak when the session is going on.
- If you have a question type ! and wait for your turn.
- Try to come online 5 minutes before the session starts.
- Address people by their IRC nick.
- Do not use sir and madam.
- Do not use SMS language, write full English words.
How to ask a question?¶
First read this document. Remember to search in DuckDuckGo and then only ask. | https://summertraining.readthedocs.io/en/latest/communication.html | 2020-09-18T17:40:01 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['_images/matt_blaze_html.png', '_images/matt_blaze_html.png'],
dtype=object) ] | summertraining.readthedocs.io |
Simulate far field optical haze enhancement due to nano-texturing of ZnO coated glass through HCL etching (1)
We consider a 600nm layer of ZnO coated on a glass. Textures generated by HCL etching can be emulated through a random set of overlapping conical features etched on the surface. Size of cones are controlled by normally distributed geometrical parameters.
Contents
Simulator(s) Used
REMS is a frequency domain wave optics simulator. REMS solves Maxwell equations in frequency domain for both space and time variables. REMS is a heavily multi-threaded, stable and efficient implementation various formulations of Rigorous Coupled Wave Analysis (RCWA), Fourier Modal Method (FMM), and the S-matrix algorithm.
For more details see Category:REMS.
Model File Tree.
References | https://docs2.kogence.com/docs/Simulate_far_field_optical_haze_enhancement_due_to_nano-texturing_of_ZnO_coated_glass_through_HCL_etching_(1) | 2017-12-11T06:07:59 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs2.kogence.com |
Business metrics capture data from a method's parameters or return values to report on the performance of the business.
For example:
- What is the average value of the credit card total?
- How many credit cards did my application process in a certain time period, regardless of the business transaction?
- What was the average time spent processing a credit card transaction?
- What is the current rate of rejected credit cards?
- Which items were best-sellers in a certain time period?
AppDynamics gathers business metrics using information points. Information points instrument methods in your application code outside the context of a particular business transaction and extract data from code. When the method configured for the information point is invoked, the information point metric is captured.
Accessing Business Metrics
To access the Information Points List, in the left navigation pane click Analyze -> Information Points.
The Information Point List summarizes metrics for all the information points in an application. Business metrics show a value in the # of Custom Metrics column.
From the Information Points List you can:
- Filter the list
- Select an information point
- View one of its metrics in the Metrics Browser
- View a graph all its metrics on a dashboard
Each information point has its own dashboard, which reports KPIs (average response time, load, and errors) for the information point, as well as any custom business metrics defined for the information point.
Business metrics appear in the Information Points tree of the Metric Browser.
Business metrics can be accessed from the AppDynamics REST API, just like any other metric. See To Copy the REST URL for a Metric.
Business Metrics in Call Graphs
You can configure snapshots to display information point invocations in call graphs by setting the enable-info-point-data-in-snapshots node property to true. By default the property is false. See App Agent Node Properties.
When the enable-info-point-data-in-snapshots node property is set, information point calls appear in the User Data section of the call graph.
Configuring Business Metrics Using Information Points
You define an information point based on the class and method signature of the method being monitored. See Configure Business Metric Information Points. For PHP agents see Configure Information Points for PHP.
Business Metrics and Health Rules
You can use any parameter or return value of the method to generate a custom business metric across multiple business transactions. You can then create a custom health rule based on the performance of such a custom metric. See Health Rules.
Metrics for Ignored Errors
Exceptions are expensive to create even in modern JVMs. If a particular exception is ignored but still happens often, it can contribute to a CPU spike in the JVMs. The metrics for ignored errors can provide insights into such situations.
Example
{ // AppDynamics Exit Point interceptor is applied on this method, so we will automatically track this exception service.connect(URL); } catch(ServiceException sEx) { service.connect(backupURL); }
In the above code the 'ServiceException' is not really fatal since the code has logic to handle it, so you may decide to add it to 'ignore' list. But if this exception is not truly an exceptional case and happens every time the code executes, having the metric for ignored errors will help you become aware of the additional overhead. | https://docs.appdynamics.com/display/PRO39/Business+Metrics | 2017-12-11T06:07:48 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.appdynamics.com |
Document Type
Article
Abstract
This study examined the use of social media at work. Undergraduate students and professors were surveyed to try to find a generational relationship between the younger generation’s view of using sites such as Facebook while working and how some participants from an older generation perceived it. We also examined the effects of Facebook outside of work and whether or not postings made there could jeopardize a position at work. The results from our survey and research conclude that social media is an increasing problem because it serves as a distraction and predict that with increasing individual use of social media it will become more of a problem at work if it is not properly managed by the employer.
Recommended Citation
Diercksen, Michael, Matthew DiPlacido, Diane M. Harvey, & Susan M. Bosco. 2013. "Generational Differences in Use of Social Media in Today’s Workplace." Psychology Research, 3 (12): 762-771.
Published in: Psychology Research, Vol. 3, No. 12, 2013. | https://docs.rwu.edu/gsb_fp/13/ | 2017-12-11T05:51:27 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.rwu.edu |
Cloud Discovery enrichment
Cloud Discovery data can now be enriched with Azure Active Directory username data. When you enable this feature, the username received in the discovery traffic logs is matched and replaced by the Azure AD username enabling the following new Cloud Discovery settings.
In the User enrichment tab, to enable Cloud App Security to use Azure Active Directory data to enrich usernames by default, select Enrich discovered user identifiers with Azure Active Directory usernames.
Click Save.
See Also
Control cloud apps with policies
For technical support, visit the Cloud App Security assisted support page.
Premier customers can also choose Cloud App Security directly from the Premier Portal. | https://docs.microsoft.com/en-us/cloud-app-security/cloud-discovery-aad-enrichment | 2017-12-11T06:01:14 | CC-MAIN-2017-51 | 1512948512208.1 | [array(['media/discovery-enrichment.png',
'Enrich Cloud App Security Discovery with Azure AD usernames'],
dtype=object) ] | docs.microsoft.com |
environements. Along with the BMC Digital Workplace data, the tool also transfers the BMC Remedy with Smart IT (Smart IT) data stored in the same database schemas.
Scenario for using the BMC Digital Workplace Data Transfer tool
Allen, an administrator at Calbro Services, is working on deploying BMC Digital Workplace in his organization. In a preproduction environment, he has configured the BMC Digital Workplace application according to the organization's needs. The validation testing is successful and a Go-Live is approved. Allen now wants to configure MyIT in the production environment exactly like the preproduction environment. Instead of manually configuring BMC Digital Workplace in the production environment or manually copying all data between environments, Allen chooses to use the BMC Digital Workplace Data Transfer tool to perform the task. By using the tool, Allen is able to seamlessly transfer all data to the production environment.
Note
The following video shows an older version of BMC Digital Workplace. The previous product name was MyIT. Although there might be minor changes in the UI, the overall functionality remains the same.
The following video (4:25 min) shows the process of exporting and importing data from aBMC Digital Workplace source tenant to a target tenant.
Before you begin
- Ensure that you have permissions to access the databases on the source and target MyIT environments.
- Locate the BMC Digital Workplace Data Transfer tool scripts in the MyIT_Home\SmartIT_MyIT\data-transfer\scripts directory, where MyIT_Home is the BMC Digital Workplace installation directory. The default BMC Digital Workplace installation location is as follows:
- (Windows) C:\Program Files\BMC Software\Smart_IT_MyIT\
- (Linux) /opt/bmc/Smart_IT_MyIT/
- If you are running the data transfer from a computer other than the MyIT host computer, copy the entire contents of the \data-transfer directory to your local computer.
- Ensure that the Social and MyITTomcat services are not being used during the data transfer.
- Ensure that users do not create new data in the source database when the
set_env.bator
set_env.shscripts.
Using a text editor such as Notepad, edit the
set_env.bator
set_env.shscript and enter the values for the different variables.Variables to be set in the set_env.bat or set_env.sh file
The
set_env.bator
set_env.shscripts include a brief description of each variable.
Save and close the
set_env.bator
set_env.shscript.
Edit the \SmartIT_MyIT\data-transfer\connection\db_type.connection.properties file and specifyor
data_dump.batscript.
To export only selected data, as specified in the
export_typevariable in the
set_env.bator
set_env.shscript, run the
run_export.bator
run_export.shscript.
Note
If you run the command again with the
--overwriteoption, it will overwrite the existing output file, or create a new output file if not already present.
Depending on the file name you set in the
output_file_pathvariable, an output file, file_name.zip file is created. The tool uses this file for importing the data in the target database.
Step 2: Import the data to the target tenant database
Tips
For faster data import, run the data import step on the target database server.
In the \scripts directory, locate the the
set_env.bator
set_env.shscript.
Using a text editor such as Notepad, edit the
set_env.bator
set_env.shscript and enter the values for the different variables.Variables to be set in the set_env.bat or set_env.sh file
Save and close the
set_env.bator
set_env.shscript..bator
run_import.shcommand.
The tool transfers the data to the target database.
Step 3: Verify the data transfer
- Run SQL queries to verify that the data in the migrated tables in the source and target databases match.
- Login define: | https://docs.bmc.com/docs/digitalworkplacebasic/1802/transferring-data-between-environments-788654365.html | 2021-04-10T22:13:26 | CC-MAIN-2021-17 | 1618038059348.9 | [array(['/docs/digitalworkplacebasic/1802/files/788654365/788654368/1/1516824107040/Data+transfer+steps.png',
'Data transfer steps'], dtype=object) ] | docs.bmc.com |
The LITA/Ex Libris Student Writing Award is given for the best unpublished manuscript on a topic in the area of libraries and information technology written by a student or students enrolled in an ALA-accredited library and information studies graduate program.
The purpose of this award, established July 2000, is February 28 of each year. The winner will be notified by April submission guidelines and fill out an application form [pdf form]. At the time the article is submitted, the applicant(s) must be currently enrolled in an ALA-accredited program of library and information studies at the Masters or Ph.D. level.
The LITA/Ex Libris Student Writing Awards Committee is not required to select a recipient if, in the opinion of the Committee, no nomination merits the award in a given year.
Submit completed application form(s) [pdf] and the manuscript to the current committee chair, Eric Phetteplace at phette23 at gmail.com.
The award will be presented at the LITA President’s Program at the ALA Annual Conference if the winner(s) is present.
From 2001 through 2006 the award was sponsored by Endeavor and was known as the LITA/Endeavor Student Writing Award. The following distinguished people have received the award to date:
- Peter Murray, Simmons College, 2001
- Rachel Mendez, Emporia State University, 2002
- Joyce Friedlander, Syracuse University, 2003
- Judy Jeng, Rutgers, The State University of New Jersey, 2004
- Kristin Yiotis, San Jose State University, 2005
- Yi Shen, University of Wisconsin-Madison, 2006
- Timothy Dickey, Kent State University, Ohio, 2007
- Robin Sease, University of Washington, 2008
- T. Michael Silver, University of Alberta, 2009
- Andromeda Yelton, Simmons College, 2010
- Abigail McDermott, University of Maryland, 2011
- Cynthia Cohen, San José State University, 2012
- Karen Doerksen, University Alberta, 2013
- Brighid Mooney Gonzales, San Jose State University, 2014
- Heather Terrell, San Jose State University, 2015
- Tanya Johnson, Rutgers School of Communication and Information, 2016
- Megan Ozeran, San Jose State University, 2017
- Colby Lewis, University of Michigan, 2018
Visit the LITA Awards & Scholarships page for information on additional LITA awards. | https://docs.lita.org/development-home-page/committee-roles/lita-ex-libris-student-writing-award/ | 2021-04-10T23:01:22 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.lita.org |
MapProxy server is available for installation along side TileDB Cloud Enterprise in Kubernetes clusters though a helm chart. Installation of MapProxy assumes you have already configured and have a running instance of TileDB Cloud Enterprise. If you do not have TileDB Cloud Enterprise running please see Installation Instructions.
In order to use MapProxy server with TileDB Cloud Enterprise Edition you will need to get access to the private docker registry and the private helm registry. Please contacts your TileDB, Inc account representative for credentials to these services.
Before you install MapProxy server it is important to setup and customize your installation. This involves creating a custom values file for helm. Below is a sample file you can save and edit.
Save this value as
values.yaml . There are several required changes, all sections which require changes are prefixed with a comment of
# REQUIRED:. Examples of the changes needed including setting access token for TileDB Cloud Enterprise, and setting the array URI.
values.yaml# Default values for mapproxy# This is a YAML-formatted file.replicaCount: 1serviceAccount:# The name of the service account to use.# If not set and create is true, a name is generated using the fullname templatename: "default"# REQUIRED: set the TileDB Cloud Configuration. This should be a API token# created in the TileDB Cloud UI and the hostname where TileDB Cloud REST API# is exposedtiledbConfig:token: ""host: "tiledb-cloud-rest"# REQUIRED: Configure mapproxy settings, including setting array_urimapProxConfig:services:demo:wms:wmts:layers:- name: tiledb_prodtitle: TileDB Production Sourcesources: [tiledb_cache]caches:tiledb_cache:sources: []grids: [GLOBAL_WEBMERCATOR]cache:type: tiledb# REQUIRED: Set the array_uri herearray_uri: tiledb_cacheingress:enabled: true# Configure any needed annotations. For instance if you are using a different ingress besides nginx set that hereannotations:kubernetes.io/ingress.class: nginx# REQUIRED: set the mapproxy hostnamehosts:- mapproxy.tiledb.example.com# tls: []# - secretName: chart-example-tls# hosts:# - chart-example.local# By default we set limits to 2 CPUs and 2GB Memory# resources:# limits:# cpu: 2000m# memory: 2Gi# requests:# cpu: 2000m# memory: 2Gi
Once you have created the
values.yaml file you can install MapProxy by running the following helm command.
helm install \--namespace tiledb-cloud \--values values.yaml \tiledb-cloud-mapproxy \tiledb/mapproxy
When new releases of MapProxy server are announced you can easily upgrade your installation by first updating the helm repository:
helm repo update tiledb
After the repository is updated you can run the helm upgrade:
helm upgrade --install \--namespace tiledb-cloud \--values values.yaml \tiledb-cloud-mapproxy \tiledb/mapproxy | https://docs.tiledb.com/main/solutions/tiledb-cloud-enterprise/mapproxy-installation | 2021-04-10T23:01:38 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.tiledb.com |
Using Greenplum Command Center
VMware Tanzu Greenplum Command Center is a management and monitoring tool for VMware Tanzu Greenplum. This topic describes how to install and configure Command Center with VMware Tanzu Greenplum.
About Command Center in VMware Tanzu Greenplum for Kubernetes
Unlike with other VMware Tanzu Greenplum distributions, VMware Tanzu Greenplum for Kubernetes automatically includes the Command Center installation file as part of the Greenplum Docker image. You can easily execute the installer at
/home/gpadmin/tools/installGPCC/gpccinstall-<version> from the master-0 pod, following the instructions at Installing and Upgrading VMware Tanzu Greenplum Command Center
in the Command Center documentation.
The procedure that follows shows how to quickly install Command Center in a new VMware Tanzu Greenplum for Kubernetes installation, including how to open the default Command Center port to access Command Center outside of the Kubernetes cluster.
Prerequisites
The Prerequisites topic in the Command Center documentation describes the full list of prerequisites for running Command Center. The following prerequisites are especially relevant for VMware Tanzu Greenplum for Kubernetes deployments:
- The directory where Greenplum Command Center will be installed must be writable by the
gpadminuser on all Greenplum Database hosts. The default directory,
/usr/local/is not writable by
gpadminin new VMware Tanzu Greenplum for Kubernetes deployments. You can install Command Center to
/greenplumto meet this prerequisite.
Note: Do not install Command Center to
/home/gpadmin, because files in that location are removed when the Greenplum deployment is deleted and restarted.
To install to a different directory, follow the instructions in Selecting and Preparing an Installation Directory for Command Center in the Command Center documentation.
- Use the
PGPASSWORDenvironment variable before invoking the installer, to prevent the
.pgpassfile from being created.
.pgpassis always created in
/home/gpadminand would be removed after the Greenplum instance is deleted and restarted.
- The default Command Center port number, 28080, is not routed by default using the Greenplum service’s external IP address. The procedure that follows shows example instructions for how to route the default port number in a VMware Tanzu Greenplum for Kubernetes deployment.
Limitations
Using the
.pgpassfile is not recommended for VMware Tanzu Greenplum for Kubernetes deployments. The
.pgpassfile is always created in
/home/gpadmin, and the contents of this directory are removed and recreated when you delete and redeploy VMware Tanzu Greenplum for Kubernetes. Instead, set the
PGPASSWORDbefore you install Command Center, and before you run the
gpcccommand to perform actions such as starting, stopping, or displaying the status of Command Center.
After you delete and redeploy a VMware Tanzu Greenplum for Kubernetes deployment, you must manually restart Command Center. You must also reconfigure the Greenplum service to allow the Command Center port number to be accessed from an external IP address. Follow steps 4 and 5 in Installing Command Center to complete these steps.
Installing Command Center
To install Command Center in a new VMware Tanzu Greenplum for Kubernetes deployment:
Set the
PGPASSWORDenvironment variable and execute the Command Center installer,
/home/gpadmin/tools/installGPCC/gpccinstall-<version>, where
<version>is the Command Center version that was installed. The following example commands creates an input file for the installer to specify the
/greenpluminstallation directory, and uses default values for all other installation options:
$ kubectl exec master-0 -- bash -c 'echo -e "path = /greenplum" > /tmp/gpcc-config.txt && PGPASSWORD=changeme /tools/installGPCC/gpccinstall-* -c /tmp/gpcc-config.txt'
If you want to specify a non-default port, language, or security setting, either add those options to the input file or run the installer in interactive mode. The following command runs the interactive installer, which prompts you for all installation options:
$ kubectl exec -it master-0 -- bash -c "PGPASSWORD=changeme /home/gpadmin/tools/installGPCC/gpccinstall-*"
Execute the
gpcc_path.shscript and set the
PGPASSWORDenvironment before starting Command Center. For example:
$ kubectl exec -it master-0 -- bash -c 'source /greenplum/greenplum-cc/gpcc_path.sh && PGPASSWORD=changeme gpcc start'
2020-10-08 18:39:15 Starting the gpcc agents and webserver... 2020-10-08 18:39:18 Agent successfully started on 2/2 hosts 2020-10-08 18:39:18 View Greenplum Command Center at
At this point, Command Center is running on the indicated port, but the port is not mapped to the external IP address for the service. Follow these steps to add the port mapping:
Load the Greenplum service configuration into your text editor:
$ kubectl edit svc greenplum
In the
ports:section of the configuration, add a
name: gpdbentry to identify the existing port configuration for Greenplum Database. For example, change the following stanza:
ports: - nodePort: 32731 port: 5432 protocol: TCP targetPort: 5432
to read:
ports: - name: gpdb nodePort: 32731 port: 5432 protocol: TCP targetPort: 5432
Add a new stanza to the
ports:section to add a port configuration for GPCC. The following shows the completed
ports:section with the new
gpccstanza that configured the default port, 28080:
ports: - name: gpdb nodePort: 32731 port: 5432 protocol: TCP targetPort: 5432 - name: gpcc port: 28080 protocol: TCP targetPort: 28080
Save your changes to the service configuration file.
The service configuration should now show the available port number (28080 in the example above). For example:
$ kubectl get svc greenplum
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE greenplum LoadBalancer 10.102.197.205 10.102.197.205 5432:32731/TCP,28080:30684/TCP 20h
Access Command Center from your browser, using the external IP address and port number you have configured. In the above example, Command Center would be accessed from For a new installation, log in using the
gpmonrole and the password that you set in the
PGPASSWORDenvironment variable.
Getting More Information
For more information about using Command Center, see the VMware Tanzu Greenplum Command Center documentation. | https://greenplum-kubernetes.docs.pivotal.io/2-3/gpcc.html | 2021-04-10T22:09:17 | CC-MAIN-2021-17 | 1618038059348.9 | [] | greenplum-kubernetes.docs.pivotal.io |
The group met 1 time this quarter.
Recent Activities
First Activity
Committee met on 04/17/2019 and discussed committee composition of subcommittees and:
⢠Number of days (keep at 1 day pre conference, 2 full days of forum)
⢠Timing of Sessions (Beginning/End Time each day): Preconference: 1-5pm, Friday: 9am-5:30pm, Saturday: 9am – 3pm
⢠Number of concurrent tracks/rooms** – 6
⢠Number of sessions â 60 sessions (+/- depending on length of selected sessions), 2 Keynotes
** in addition, we may also need a Quiet Room and may need a space for potential sponsors
Meets LITA’s strategic goals for Member Engagement
What will your group be working on for the next three months?
This is still to be determined.
Sponsorship subcommittee (Galen, Joanna, past-chair Melody, Berika) will meet on Friday June 7th at 1pm Central/2pm Eastern to discuss past experiences with planning timeline and ideas for getting sponsors.
Also we will be actively recruiting additional committee members (incl. at least 1 LLAMA/and 1 ALCTS member).
There will definitely be a meeting for the entire committee again after Annual to get them energized about their respective subcommittee charges so that they can begin the planning process.
Is there anything LITA could have provided during this time that would have helped your group with its work?
Appreciated the organization surrounding a new connect space and online roster for 2020 committee!
We still need a Splash page for Forum 2020 as a way to communicate to membership that Forum is happening in Fall 2020.
Please provide suggestions for future education topics, initiatives, publications, resources, or other activities that could be developed based on your group’s work.
(same as last report â would like to create documentation on the LITA Docs site with important milestones/benchmarks for planning Forum so that future chairs know what to expect)
Additional comments or concerns:
Not at this time.
Submitted by Berika Williams on 05/31/2019 | https://docs.lita.org/2019/05/forum-planning-committee-may-2019-report/ | 2021-04-10T21:07:30 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.lita.org |
PrairieLearn Elements for use in
question.html
When writing questions, there exists a core pool of elements that provides common structures associated with assessment items. These elements can be split into three distinct groups: submission, decorative, and conditional. Within this document, all of PrairieLearn's elements are displayed alongside links to sample elements within the example course. To build your own PrairieLearn element, please see Question Element Writing documentation.
Submission elements act as a way to receive a response or input from the student. These elements are traditionally referred to as form input fields. PrairieLearn presently provides the following templated input field elements:
pl-multiple-choice: Selecting only one option from a list.
pl-checkbox: Selecting multiple options from a list.
pl-dropdown: Select an answer from answers in a drop-down box.
pl-order-blocks: Select and arrange given blocks of code or text.
pl-number-input: Fill in a numerical value within a specific tolerance level such as 3.14, -1.921, and so on.
pl-integer-input: Fill in an integer value such as -71, 0, 5, 21, and so on.
pl-symbolic-input: Fill in a symbolic value such as
x^2,
sin(z),
mc^2, and so on.
pl-string-input: Fill in a string value such as "Illinois", "GATTACA", "computer", and so on.
pl-matrix-component-input: Fill in a matrix using grid that has an input area for each element.
pl-matrix-input: Supply a matrix in a supported programming language format.
pl-file-editor: Provide an in-browser code editor for writing and submitting code.
pl-file-upload: Provide a submission area to obtain a file with a specific naming scheme.
pl-threejs: Enables 3D scene display and problem submission.
Decorative elements are meant to improve how the question is displayed to students. Elements under this category include ways to specify question markup, images, files, and code display. The following decorative elements are available:
pl-code: Displays code rendered with the appropriate syntax highlighting.
pl-figure: Embed an image file in the question.
pl-file-download: Enable file downloads for data-centric questions.
pl-variable-output: Displays matrices in code form for supported programming languages.
pl-matrix-latex: Displays matrices using appropriate LaTeX commands for use in a mathematical expression.
pl-python-variable: Display formatted output of Python variables and pandas data frames.
pl-graph: Displays graphs, either using GraphViz DOT notation or with an adjacency matrix.
pl-drawing: Creates an image from pre-defined collection of graphic objects
pl-overlay: Allows layering existing elements on top of one another in specified positions.
pl-external-grader-variables: Displays expected and given variables for externally graded questions.
Conditional elements are meant to improve the feedback and question structure. These elements conditionally render their content depending on the question state. The following Conditional elements are available:
pl-question-panel: Displays the text of a question.
pl-submission-panel: Displays the answer given by the student.
pl-answer-panel: Displays the correct answer to a given question.
pl-hide-in-panel: Hides content in one or more display panels.
pl-external-grader-results: Displays results from questions that are externally graded.
Note: PrairieLearn Elements listed next have been deprecated. These elements are still supported for backwards compatibility, but they should not be used in new questions.
pl-variable-score: Displays a partial score for a submitted element.
  - Deprecated as submission elements in `v3` all have score display options.
pl-prairiedraw-figure: Show a PrairieDraw figure.
- Deprecated: use
  - Deprecated: use `pl-drawing` instead.
Submission Elements
pl-multiple-choice element
A
pl-multiple-choice element selects one correct answer and zero or more
incorrect answers and displays them in a random order as radio buttons.
Sample element
```html
<pl-multiple-choice answers-name="acc" weight="1">
  <pl-answer correct="false">positive</pl-answer>
  <pl-answer correct="true">negative</pl-answer>
  <pl-answer correct="false">zero</pl-answer>
</pl-multiple-choice>
```
Customizations
Inside the
pl-multiple-choice element, each choice must be specified with
a
pl-answer that has attributes:
Example implementations
See also
pl-checkbox element
A
pl-checkbox element displays a subset of the answers in a random order
as checkboxes.
Sample element
```html
<pl-checkbox answers-name="vpos" weight="1">
  <pl-answer correct="true">A-B</pl-answer>
  <pl-answer correct="true">B-C</pl-answer>
  <pl-answer>               C-D</pl-answer>
  <pl-answer correct="true">D-E</pl-answer>
  <pl-answer>               E-F</pl-answer>
  <pl-answer>               F-G</pl-answer>
</pl-checkbox>
```
Customizations
Inside the
pl-checkbox element, each choice must be specified with
a
pl-answer that has attributes:
Details
Two grading methods are available when using `partial-credit="true"`:

- `'EDC'` (Every Decision Counts): in this method, the checkbox answers are considered as a list of true/false answers. If `n` is the total number of answers, each answer is assigned `1/n` points. The total score is the summation of the points for every correct answer selected and every incorrect answer left unselected.
- `'PC'` (Percent Correct): in this method, 1 point is added for each correct answer that is marked as correct and 1 point is subtracted for each incorrect answer that is marked as correct. The final score is the resulting summation of points divided by the total number of correct answers. The minimum final score is set to zero.
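As a worked example (hypothetical numbers), suppose a question has 6 choices of which 3 are correct, and a student selects 2 of the correct choices plus 1 incorrect choice. Under `'EDC'`, 4 of the 6 true/false decisions are right (2 correct selections and 2 incorrect choices left unselected), giving a score of 4/6 (about 0.67). Under `'PC'`, the points are 2 - 1 = 1, divided by the 3 correct answers, giving a score of 1/3 (about 0.33).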
Example implementations
See also
pl-number-input element
Fill in the blank field that allows for numeric value input within specific tolerances.
Sample element
question.html
```html
<pl-number-input answers-name="ans_rtol" label="$x =$">
</pl-number-input>
```
server.py
```python
import random

def generate(data):
    # Generate a random value
    x = random.uniform(1, 2)

    # Answer to fill in the blank input
    data["correct_answers"]["ans_rtol"] = x
```
question.html
```html
<pl-number-input answers-name="ans_sig" comparison="sigfig" digits="2" label="$x =$">
</pl-number-input>
```
server.py
```python
import random

def generate(data):
    # Generate a random value
    x = random.uniform(1, 2)

    # Answer to fill in the blank input
    data["correct_answers"]["ans_sig"] = round(x, 2)
```
Customizations
Example implementations
See also
pl-integer-inputfor integer input
pl-symbolic-inputfor mathematical expression input
pl-string-inputfor string input
pl-dropdown element
Select the correct answer from a drop-down select menu list of potential answers. The potential options are listed in the inner HTML of a `<pl-answer></pl-answer>` HTML tag.
Sample element
question.html
```html
<p> Select the correct word in the following quotes:</p>

The
<pl-dropdown answers-name="aristotle">
  {{#params.aristotle}}
  <pl-answer correct="{{tag}}">{{ans}}</pl-answer>
  {{/params.aristotle}}
</pl-dropdown>
is more than the sum of its parts.

<p></p>

A
<pl-dropdown answers-name="hume">
  <pl-answer correct="true">wise</pl-answer>
  <pl-answer correct="false">clumsy</pl-answer>
  <pl-answer correct="false">reckless</pl-answer>
</pl-dropdown>
man proportions his belief to the evidence.

<p></p>
```
server.py
```python
def generate(data):
    QUESTION1 = 'aristotle'

    data['params'][QUESTION1] = [
        {'tag': 'true', 'ans': 'whole'},
        {'tag': 'false', 'ans': 'part'},
        {'tag': 'false', 'ans': 'inverse'}
    ]

    return data
```
Customizations
Example implementation
pl-order-blocks element
Element to arrange given blocks of code or text that are displayed initially in the source area. The blocks can be moved to the solution area to construct the solution of the problem. In the example below, the source area is denoted by the header "Drag from here" and the solution area is denoted with the header "Construct your solution here".
Sample element
question.html
```html
<p> List all the even numbers in order:</p>
<pl-order-blocks answers-name="order-numbers">
  <pl-answer correct="false">1</pl-answer>
  <pl-answer correct="true">2</pl-answer>
  <pl-answer correct="false">3</pl-answer>
  <pl-answer correct="true">4</pl-answer>
</pl-order-blocks>
```
Customizations
Within the
pl-order-blocks element, each answer block must be specified with a
pl-answer that has the following attributes:
Details
Different grading options are defined via the attribute `grading-method`:

- `ordered`: in this method, the correct ordering of the blocks is defined by the ordering in which the correct answers (defined in `pl-answer`) appear in the HTML file. There is no partial credit for this option.
- `unordered`: in this method, if `n` is the total number of correct blocks, each correct block moved to the solution area is given `1/n` points, and each incorrect block moved to the solution area is subtracted by `1/n` points. The final score will be at least 0 (the student cannot earn a negative score by only moving incorrect answers). Note the ordering of the blocks does not matter. That is, any permutation of the answers within the solution area is accepted. There is partial credit for this option.
- `ranking`: in this method, the `ranking` attribute of the `pl-answer` options is used to check answer ordering. Every answer block X should have a `ranking` integer that is less than or equal to the ranking of the answer block immediately below X. That is, the sequence of `ranking` integers of all the answer blocks should form a nonstrictly increasing sequence. If `n` is the total number of answers, each correctly ordered answer is worth `1/n`, up to the first incorrectly ordered answer. There is partial credit for this option.
- `external`: in this method, the blocks moved to the solution area will be saved in the file `user_code.py`, and the correctness of the code will be checked using the external grader. Depending on the external grader's grading logic, it may be possible to enable or disable partial credit. The attribute `correct` for `pl-answer` can still be used in conjunction with `min-incorrect` and `max-incorrect` for display purposes only, but it is not used for grading purposes. The attributes `ranking` and `indent` are not allowed for this grading method.

Different orderings of the blocks in the source area are defined via the attribute `source-blocks-order`:

- `ordered`: the blocks appear in the source area in the same order they appear in the HTML file.
- `random`: the blocks are shuffled.
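As an illustration of the `ranking` grading method, the following minimal sketch uses block contents and attribute values that are illustrative, not taken from the example course:

```html
<pl-order-blocks answers-name="ranking-demo" grading-method="ranking">
  <pl-answer correct="true" ranking="1">read the input</pl-answer>
  <pl-answer correct="true" ranking="2">process the data</pl-answer>
  <pl-answer correct="true" ranking="3">print the result</pl-answer>
  <pl-answer correct="false">unrelated distractor line</pl-answer>
</pl-order-blocks>
```

Here any solution that places the three correct blocks in nondecreasing `ranking` order receives full credit, and partial credit is awarded up to the first out-of-order block.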
Example implementations
- element/orderBlocks
- demo/autograder/python/orderBlocksRandomParams
- demo/autograder/python/orderBlocksAddNumpy
pl-integer-input element
Fill in the blank field that requires an integer input.
Sample element
question.html
```html
<pl-integer-input answers-name="int_value"></pl-integer-input>
```
server.py
```python
import random

def generate(data):
    # Generate a random whole number
    x = random.randint(1, 10)

    # Answer to fill in the blank input
    data["correct_answers"]["int_value"] = x
```
Customizations
Specifying a non-trivial base
By default, the values are interpreted in base 10. The
base argument may also be used, with a value between 2 and 36, to indicate a different base to interpret the student input, as well as to print the final result.
The
base argument can also accept a special value of 0. In this case, the values will by default be interpreted in base 10, however the student has the option of using different prefixes to indicate a value in a different format:
- The prefixes `0x` and `0X` can be used for base-16 values (e.g., `0x1a`);
- The prefixes `0b` and `0B` can be used for base-2 values (e.g., `0b1101`);
- The prefixes `0o` and `0O` can be used for base-8 values (e.g., `0o777`).
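For example, a minimal sketch that accepts any of these prefixes by setting `base="0"` (the `answers-name` below is illustrative):

```html
<pl-integer-input answers-name="flags_value" base="0"></pl-integer-input>
```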
Integer range
The valid range of values accepted by pl-integer-input is between -9007199254740991 and +9007199254740991 (between -(2^53 - 1) and +(2^53 - 1)). If you need a larger input, one option you can consider is a
pl-string-input with a custom grade function.
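A rough sketch of that approach is shown below; the element name, expected value, and scoring logic are illustrative assumptions, not a drop-in implementation:

```html
<pl-string-input answers-name="big_int"></pl-string-input>
```

```python
def grade(data):
    # Parse the submitted string as an arbitrary-precision Python int
    try:
        submitted = int(data['submitted_answers']['big_int'])
    except (KeyError, ValueError):
        data['partial_scores']['big_int'] = {'score': 0, 'weight': 1}
        return

    correct = 2**70  # example value larger than 2^53 - 1
    score = 1 if submitted == correct else 0
    data['partial_scores']['big_int'] = {'score': score, 'weight': 1}
```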
Example implementations
See also
pl-number-inputfor numeric input
pl-symbolic-inputfor mathematical expression input
pl-string-inputfor string input
pl-symbolic-input element
Fill in the blank field that allows for mathematical symbol input.
Sample element
question.html
```html
<pl-symbolic-input answers-name="symbolic_math" variables="x, y"></pl-symbolic-input>
```
server.py
```python
import prairielearn as pl
import sympy

def generate(data):
    # Declare math symbols
    x, y = sympy.symbols('x y')

    # Describe the equation
    z = x + y + 1

    # Answer to fill in the blank input stored as JSON.
    data['correct_answers']['symbolic_math'] = pl.to_json(z)
```
Customizations
Details
Correct answers are best created as
sympy expressions and converted to json using
pl.to_json(data_here).
It is also possible to specify the correct answer simply as a string, e.g.,
x + y + 1.
Do not include
i or
j in the list of
variables if
allow-complex="true". Do not include any other reserved name in your list of
variables (
e,
pi,
cos,
sin, etc.) The element code will check for (and disallow) conflicts between your list of
variables and reserved names.
Example implementations
See also
pl-number-inputfor numeric input
pl-integer-inputfor integer input
pl-string-inputfor string input
pl-string-input element
Fill in the blank field that allows for string value input.
Sample element
question.html
```html
<pl-string-input answers-name="string_value" label="Prairie"></pl-string-input>
```
server.py
```python
def generate(data):
    # Answer to fill in the blank input
    data["correct_answers"]["string_value"] = "Learn"
```
Customizations
Example implementations
See also
pl-symbolic-inputfor mathematical expression input
pl-integer-inputfor integer input
pl-number-inputfor numeric input
pl-matrix-component-input element
A `pl-matrix-component-input` element displays a grid of input fields with the same shape as the variable stored in `answers-name` (only 2D arrays of real numbers can be stored in `answers-name`).
Sample element
question.html
```html
<pl-matrix-component-input answers-name="matrixA"></pl-matrix-component-input>
```
server.py
```python
import prairielearn as pl
import numpy as np

def generate(data):
    # Generate a random 3x3 matrix
    mat = np.random.random((3, 3))

    # Answer to each matrix entry converted to JSON
    data['correct_answers']['matrixA'] = pl.to_json(mat)
```
Customizations
Details
The question will only be graded when all matrix components are entered, unless the
allow-blank attribute is enabled.
Example implementations
See also
pl-matrix-inputfor a matrix formatted in an implemented programming language
pl-number-inputfor a single numeric input
pl-symbolic-inputfor a mathematical expression input
pl-matrix-input element
A
pl-matrix-input element displays an input field that accepts a matrix
(i.e., a 2-D array) expressed in a supported programming language
format (either MATLAB or Python's numpy).
Sample element
question.html
```html
<pl-matrix-input answers-name="matrixB"></pl-matrix-input>
```
server.py
```python
import prairielearn as pl
import numpy as np

def generate(data):
    # Randomly generate a 2x2 matrix
    matrixB = np.random.random((2, 2))

    # Answer exported to question.
    data['correct_answers']['matrixB'] = pl.to_json(matrixB)
```
Customizations
Details
pl-matrix-input parses a matrix entered in either
MATLAB or
Python formats.
The following are valid input format options:
MATLAB format:
[1.23; 4.56]
Python format:
[[1.23], [4.56]]
Note: A scalar will be accepted either as a matrix of size 1 x 1 (e.g.,
[1.23] or
[[1.23]]) or just as a single number (e.g.,
1.23).
In the answer panel, a
pl-matrix-input element displays the correct answer, allowing the user to switch between matlab and python format.
In the submission panel, a
pl-matrix-input element displays either the submitted answer (in the same format that it was submitted, either MATLAB or Python), or a note that the submitted answer was invalid (with an explanation of why).
Example implementations
See also
pl-matrix-component-inputfor individual input boxes for each element in the matrix
pl-number-inputfor a single numeric input
pl-symbolic-inputfor a mathematical expression input
pl-file-editor element
Provides an in-browser file editor that's compatible with the other file elements and external grading system.
Sample element
```html
<pl-file-editor file-name="fib.py" ace-mode="ace/mode/python">
def fib(n):
    pass
</pl-file-editor>
```
Customizations
Details
When using
auto-resize, consider specifying a custom
min-lines or pre-populating the code editor window with a code sample.
This will initialize the editor area with a sufficient number of lines to display all of the code simultaneously without the need for scrolling.
The
focus attribute defaults to
"false". Setting this to true will cause the file editor element to automatically capture the cursor focus when the question page is loaded, which may also cause the page to scroll down so that the file editor is in view, bypassing any written introduction. This may have negative implications for accessibility with screen readers, so use caution. If you have multiple file editors on the same question page, only one element should have
focus set to true, or else the behavior may be unpredictable.
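For instance, a sketch of an editor that pre-populates starter code and sizes itself to fit (the file name and starter code are illustrative):

```html
<pl-file-editor file-name="solution.py" ace-mode="ace/mode/python" auto-resize="true" min-lines="8">
def solve(values):
    # TODO: replace with your implementation
    pass
</pl-file-editor>
```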
Example implementations
See also
pl-file-uploadto receive files as a submission
pl-external-grader-resultsto include output from autograded code
pl-codeto display blocks of code with syntax highlighting
pl-string-inputfor receiving a single string value
pl-file-upload element
Provides a way to accept file uploads as part of an answer. They will be stored in the format expected by externally graded questions.
Sample element
```html
<pl-file-upload file-names="foo.py, bar.c"></pl-file-upload>
```
Customizations
Example implementations
See also
pl-file-editorto provide an in-browser code environment
pl-external-grader-resultsto include output from autograded code
pl-codeto display blocks of code with syntax highlighting
pl-string-inputfor receiving a single string value
pl-threejs element
This element displays a 3D scene with objects that the student can (optionally) translate and/or rotate. It can be used only for output (e.g., as part of a question that asks for something else to be submitted). Or, it can be used for input (e.g., comparing a submitted pose of the body-fixed objects to a correct orientation). Information about the current pose can be hidden from the student and, if visible, can be displayed in a variety of formats, so the element can be used for many different types of questions.
Sample element
```html
<pl-threejs answer-name="a">
  <pl-threejs-stl file-name="robot.stl" frame="body" scale="0.1"></pl-threejs-stl>
  <pl-threejs-stl file-name="robot.stl" frame="body" scale="0.025" position="[-1,1,2]" orientation="[0,0,30]"></pl-threejs-stl>
  <pl-threejs-txt frame="body" position="[-1,1,2.6]" orientation="[0,0,30]">mini-me</pl-threejs-txt>
</pl-threejs>
```
Customizations
A
pl-threejs-stl element inside a
pl-threejs element allows you to add a mesh described by an
stl file to the scene, and has these attributes:
A
pl-threejs-txt element inside a
pl-threejs element allows you to add whatever text appears between the
<pl-threejs-txt> ... </pl-threejs-txt> tags as a mesh to the scene, and has these attributes:
Details
Note that a 3D scene is also created to show each submitted answer. This means that if there are many submitted answers, the page will load slowly.
Example implementations
See also
Decorative Elements
pl-code element
Display an embedded or file-based block of code with syntax highlighting and line callouts.
Sample element
```html
<pl-code language="python">
def square(x):
    return x * x
</pl-code>
```
Customizations
Details
The
pl-code element uses the Pygments library for syntax highlighting, a full list of supported languages can be found here.
Common Pitfalls
The HTML specification disallows inserting special characters onto the page (i.e. `<`, `>`, `&`), and using these characters with inline code may break rendering. To fix this, either escape the characters (`&lt;`, `&gt;`, `&amp;`), or load code snippets from external files into `pl-code` with the `source-file-name` attribute.
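For example, both of the following sketches avoid the problem (the snippet and file name are illustrative):

```html
<!-- escape the special characters inline -->
<pl-code language="c">
if (a &lt; b &amp;&amp; c &gt; 0) { run(); }
</pl-code>

<!-- or keep the snippet in a separate file -->
<pl-code language="c" source-file-name="snippet.c"></pl-code>
```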
Example implementations
See also
pl-python-variable element
Displays the value of a Python variable, with formatted display of Pandas DataFrames.
Sample elements
Display Python variable value
question.html
```html
<pl-python-variable params-name="variable"></pl-python-variable>
```
server.py
```python
import prairielearn as pl

def generate(data):
    data_dictionary = {'a': 1, 'b': 2, 'c': 3}
    data['params']['variable'] = pl.to_json(data_dictionary)
```
Display of a Pandas DataFrame
question.html
```html
<pl-python-variable params-name="df"></pl-python-variable>
```
server.py
```python
import prairielearn as pl
import pandas as pd

def generate(data):
    d = {'col1': [1, 2], 'col2': [3, 4]}
    df = pd.DataFrame(data=d)
    data['params']['df'] = pl.to_json(df)
```
Customizations
Details
As of right now, the element supports displaying either Pandas DataFrames as an HTML table or Python objects via
repr(). When setting a parameter to a DataFrame, use PrairieLearn's built in
pl.to_json().
Example implementations
See also
pl-codeto display blocks of code with syntax highlighting
pl-variable-outputfor displaying a matrix or element in code form.
pl-figure element
Display a statically or dynamically generated image.
Sample element
```html
<!-- show a figure from an existing file -->
<pl-figure file-name="figure.png"></pl-figure>

<!-- show a figure from a file that is generated by code -->
<pl-figure file-name="figure.png" type="dynamic"></pl-figure>
```
Customizations
Dynamically generated figures
If `type="dynamic"`, then the contents of the image must be returned by a function `file()` defined in `server.py`. For example, to generate the `figure.png` for the dynamic `pl-figure` above, this code might appear in `server.py` to generate a "fake" `figure.png`:
```python
import io
import matplotlib.pyplot as plt

def file(data):
    if data['filename'] == 'figure.png':
        plt.plot([1, 2, 3], [3, 4, -2])
        buf = io.BytesIO()
        plt.savefig(buf, format='png')
        return buf
```
If
file() does not return anything, it will be treated as if
file() returned the empty string.
Example implementations
See also
pl-file-downloadto allow for files to be downloaded.
pl-codeto show code as text with syntax highlighting.
pl-file-download element
Provide a download link to a static or dynamically generated file.
Sample element
```html
<!-- allow students to download an existing file -->
<pl-file-download file-name="data.txt"></pl-file-download>

<!-- allow students to download a file that is generated by code -->
<pl-file-download file-name="data.txt" type="dynamic"></pl-file-download>

<!-- allow students to open an existing file in a new tab -->
<pl-file-download file-name="data.txt" force-download="false"></pl-file-download>
```
Customizations
Details
If `type="dynamic"`, then the contents of the file must be returned by a function `file()` defined in `server.py`. For example, this code might appear in `server.py` to generate a file called `data.txt`:
```python
def file(data):
    if data['filename'] == 'data.txt':
        return 'This data is generated by code.'
```
If
file() does not return anything, it will be treated as if
file() returned the empty string.
Example implementations
See also
pl-variable-output element
Displays a list of variables that are formatted for import into the supported programming languages (e.g. MATLAB, Mathematica, Python, or R).
Sample element
question.html
```html
<pl-variable-output>
  <variable params-name="matrixC">C</variable>
  <variable params-name="matrixD">D</variable>
</pl-variable-output>
```
server.py
```python
import prairielearn as pl
import numpy as np

def generate(data):
    # Create fixed matrices
    matrixC = np.matrix('5 6; 7 8')
    matrixD = np.matrix('-1 4; 3 2')
    # Random matrices can be generated with:
    # mat = np.random.random((2, 2))

    # Export each matrix as a JSON object for the question view.
    data['params']['matrixC'] = pl.to_json(matrixC)
    data['params']['matrixD'] = pl.to_json(matrixD)
```
Customizations
Attributes for
<pl-variable-output>:
Attributes for
<variable> (one of these for each variable to display):
Details
This element displays a list of variables inside
<pre> tags that are formatted for import into
either MATLAB, Mathematica, Python, or R (the user can switch between them). Each variable must be
either a scalar or a 2D numpy array (expressed as a list). Each variable will be prefixed by the
text that appears between the
<variable> and
</variable> tags, followed by
=. Below
are samples of the format displayed under each language tab.
MATLAB format:
A = [1.23; 4.56]; % matrix
Mathematica format:
A = [1.23; 4.56]; (* matrix *)
Python format:
```python
import numpy as np
A = np.array([[1.23], [4.56]]) # matrix
```
R format:
```r
A = c(1.23, 4.56) # vector
A = matrix(c(1.23, 4.56, 8.90, 1.23), nrow = 2, ncol = 2, byrow = TRUE) # matrix
```
If a variable
v is a complex object, you should use
import prairielearn as pl and
data['params'][params-name] = pl.to_json(v).
Example implementations
See also
pl-matrix-latexfor displaying the matrix using LaTeX commands.
pl-matrix-component-inputfor individual input boxes for each element in the matrix
pl-matrix-inputfor input values formatted in a supported programming language.
pl-matrix-latex element
Displays a scalar or 2D numpy array of numbers in LaTeX using mathjax.
Sample element
question.html
```html
$$C = <pl-matrix-latex params-name="matrixC"></pl-matrix-latex>$$
```
server.py
```python
import prairielearn as pl
import numpy as np

def generate(data):
    # Construct a matrix
    mat = np.matrix('1 2; 3 4')

    # Export matrix to be displayed in question.html
    data['params']['matrixC'] = pl.to_json(mat)
```
Customizations
Details
Depending on whether
data['params'] contains either a scalar or 2D numpy array of numbers,
one of the following will be returned.
- scalar
- a string containing the scalar not wrapped in brackets.
- numpy 2D array
- a string formatted using the `bmatrix` LaTeX style.
Sample LaTeX formatting:
\begin{bmatrix} ... & ... \\ ... & ... \end{bmatrix}
As an example, consider the need to display the following matrix operations:
x = [A][b] + [c]
In this case, we would write:
```html
${\bf x} = <pl-matrix-latex params-name="matrixA"></pl-matrix-latex>
<pl-matrix-latex params-name="vectorB"></pl-matrix-latex> +
<pl-matrix-latex params-name="vectorC"></pl-matrix-latex>$
```
Example implementations
See also
pl-variable-outputfor displaying the matrix in a supported programming language.
pl-matrix-component-inputfor individual input boxes for each element in the matrix
pl-matrix-inputfor input values formatted in a supported programming language.
pl-graph element
Using the viz.js library, create Graphviz DOT visualizations.
Sample elements
question.html
```html
<pl-graph>
digraph G {
  A -> B
}
</pl-graph>
```
question.html
```html
<pl-graph params-name-matrix="matrix" params-name-labels="labels"></pl-graph>
```
server.py
```python
import prairielearn as pl
import numpy as np

def generate(data):
    mat = np.random.random((3, 3))
    mat = mat / np.linalg.norm(mat, 1, axis=0)
    data['params']['labels'] = pl.to_json(['A', 'B', 'C'])
    data['params']['matrix'] = pl.to_json(mat)
```
Customizations
Example implementations
Extension API
Custom values for
params-type can be added with element extensions. Each custom type is defined as a function that takes as input the
element and
data values and returns processed DOT syntax as output.
A minimal type function can look something like:
```python
def custom_type(element, data):
    return "graph { a -- b; }"
```
In order to register these custom types, your extension should define the global
backends dictionary. This will map a value of
params-type to your function above:
backends = { 'my-custom-type': custom_type }
This will automatically get picked up when the extension gets imported. If your extension needs extra attributes to be defined, you may optionally define the global
optional_attribs array that contains a list of attributes that the element may use.
For a full implementation, check out the
edge-inc-matrix extension in the exampleCourse.
See also
- External:
`viz.js` graphing library
pl-figurefor displaying static or dynamically generated graphics.
pl-file-downloadfor allowing either static or dynamically generated files to be downloaded.
pl-drawing element
Creates a canvas (drawing space) that can display images from a collection of pre-defined drawing objects. Users can also add drawing objects to the canvas for grading.
See the
pl-drawing documentation for details.
pl-overlay element
The overlay element allows existing PrairieLearn and HTML elements to be layered on top of one another in arbitrary positions.
Sample element
```html
<pl-overlay width="400" height="400">
  <pl-background>
    <pl-drawing width="398" height="398">
      <pl-drawing-initial>
        <pl-triangle x1="50" y1="350" x2="350" y2="350" x3="350" y3="50"></pl-triangle>
      </pl-drawing-initial>
    </pl-drawing>
  </pl-background>
  <pl-location left="200" top="375">
    $$3$$
  </pl-location>
  <pl-location left="375" top="200">
    $$3$$
  </pl-location>
  <pl-location left="170" top="170">
    <pl-number-input answers-name="c"></pl-number-input>
  </pl-location>
</pl-overlay>
```
pl-overlay Customizations
pl-location Customizations
pl-background Customizations
The
pl-background child tag does not have any extra attributes that need to be set. All relevant positioning and sizing information is obtained from the tag's contents.
Details
An overlay is pre-defined as an "overlay area" with a static size. By default, elements that exceed these boundaries will get partially or totally cut off. A background can be specified by wrapping HTML in a `<pl-background>` tag; in this case the overlay will automatically size itself to fit the background, and a `width` and `height` do not need to be specified. Floating child elements are wrapped with a `<pl-location>` tag that specifies the position relative to some defined edge of the overlay area using `left`, `right`, `top`, and `bottom`. Anything inside the location tag will be displayed at that position. Children are layered in the order they are specified, with later child elements being displayed on top of those defined earlier.
Example implementations
pl-external-grader-variables element
Displays variables that are given to the student, or expected for the student to define in externally-graded questions. The list of variables should be stored in
data['params'] and has the following format:
data["params"]["names_for_user"] = [ {"name": "var1", "description": "Human-readable description.", "type": "type"}, {"name": "var2", "description": "...", "type": "..."} ] data["params"]["names_from_user"] = [ {"name": "result1", "description": "...", "type": "..."} ]
Sample element
question.html
```html
<p>The setup code gives the following variables:</p>
<pl-external-grader-variables params-name="names_for_user"></pl-external-grader-variables>
<p>Your code snippet should define the following variables:</p>
<pl-external-grader-variables params-name="names_from_user"></pl-external-grader-variables>
```
server.py
```python
def generate(data):
    data["params"]["names_for_user"] = [
        {"name": "n", "description": r"Dimensionality of $\mathbf{A}$ and $\mathbf{b}$.", "type": "integer"},
        {"name": "A", "description": r"Matrix $\mathbf{A}$.", "type": "numpy array"},
        {"name": "b", "description": r"Vector $\mathbf{b}$.", "type": "numpy array"}
    ]
    data["params"]["names_from_user"] = [
        {"name": "x", "description": r"Solution to $\mathbf{Ax}=\mathbf{b}$.", "type": "numpy array"}
    ]
```
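For context, a matching student submission could be as short as the sketch below; the placeholder values for `n`, `A`, and `b` are only there so the snippet runs standalone, since the real autograder injects them from the setup code:

```python
import numpy as np

# Stand-ins for the variables the setup code provides (names_for_user).
n = 3
A = np.eye(n) + 0.1 * np.random.random((n, n))
b = np.random.random(n)

# The variable the grader expects the student to define (names_from_user).
x = np.linalg.solve(A, b)
```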
Customizations
Example implementations
- demo/autograder/codeEditor
- demo/autograder/codeUpload
- demo/autograder/python/square
- demo/autograder/python/numpy
- demo/autograder/python/pandas
- demo/autograder/python/plots
- demo/autograder/python/random
Conditional Elements
pl-question-panel element
Displays the contents of question directions.
Sample element
<pl-question-panel> This is question-panel text. </pl-question-panel>
Details
Contents are only shown during the question input portion. When a student either makes a submission or receives the correct answer, the information between these tags is hidden. If content exists outside of a question panel, then it will be displayed in the submission and answer panels as well.
Example implementations
See also
- `pl-submission-panel` for changing how a submitted answer is displayed.
- `pl-answer-panel` for displaying the question's solution.
- `pl-hide-in-panel` to hide contents in one or more display panels.
pl-submission-panel element
Customizes how information entered by a user is displayed before grading.
Sample element
<pl-submission-panel> This is submission-panel text. </pl-submission-panel>
Details
Contents are only shown after the student has submitted an answer. This answer may be correct, incorrect, or invalid.
Example implementations
See also
- `pl-question-panel` for displaying the question prompt.
- `pl-answer-panel` for displaying the question's solution.
- `pl-hide-in-panel` to hide contents in one or more display panels.
- `pl-external-grader-results` for showing the results from an externally graded code question.
pl-answer-panel element
Provides information about the question's answer after the student is no longer able to submit answers for grading.
Sample element
<pl-answer-panel> This is answer-panel text. </pl-answer-panel>
Details
Contents are only displayed when the answer panel is requested. Common reasons that trigger the display of this element are:
- The question is fully correct
- There are no more submission attempts
- The time limit for the assessment has expired.
Example implementations
See also
- `pl-question-panel` for displaying the question prompt.
- `pl-submission-panel` for changing how a submitted answer is displayed.
- `pl-hide-in-panel` to hide contents in one or more display panels.
- `pl-external-grader-results` for showing the results from an externally graded code question.
pl-hide-in-panel element
Hides the contents so that they are not displayed in specific panels ("question", "submission", or "answer").
Sample element
```html
<pl-hide-in-panel submission="true" answer="true">
    This text will be hidden in the submission panel and answer panel.
</pl-hide-in-panel>
```
Customizations
Details
Hide the element contents in those panels for which the corresponding attribute is `true`. This is the reverse of `pl-question-panel`, `pl-submission-panel`, or `pl-answer-panel`, all of which explicitly show the element contents only in a specific panel.
Example implementations
See also
- `pl-question-panel` for displaying the question prompt.
- `pl-submission-panel` for changing how a submitted answer is displayed.
- `pl-answer-panel` for displaying the question's solution.
- `pl-external-grader-results` for showing the results from an externally graded code question.
pl-external-grader-results element
Displays results from externally-graded questions.
Sample element
<pl-external-grader-results></pl-external-grader-results>
Details
It expects results to follow the reference schema for external grading results.
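As a rough illustration of the kind of data this element renders, a grading script might assemble a results file along the following lines; the field names are recalled from the external-grading documentation and should be checked against the linked reference schema:

```python
import json

# Hypothetical grading results -- verify field names against the reference schema.
results = {
    "gradable": True,               # False would mark the submission as invalid
    "score": 0.75,                  # overall fraction between 0 and 1
    "output": "Ran 4 tests, 3 passed.",
    "tests": [
        {"name": "shape check", "points": 1, "max_points": 1, "output": "ok"},
        {"name": "value check", "points": 2, "max_points": 3, "output": "1 case failed"},
    ],
}

with open("results.json", "w") as f:
    json.dump(results, f)
```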
Example Implementations
See also
Deprecated Elements
Note: The following PrairieLearn Elements have been deprecated. These elements are still supported for backwards compatibility, but they should not be used in new questions.
pl-variable-score element
Display the partial score for a specific answer variable.
WARNING: This element is deprecated and should not be used in new questions.
Sample element
<pl-variable-score ...></pl-variable-score>
Customizations
pl-prairiedraw-figure element
Create and display a prairiedraw image.
WARNING: This element is deprecated and should not be used in new questions.
Sample element
<pl-prairiedraw-figure script-name="..." param-names="..."></pl-prairiedraw-figure>
Customizations
Details
The provided `script-name` corresponds to a file located within the directory for the question. Parameter names are keys stored in `data["params"]` in `server.py` (i.e., those available for templating within `question.html`).

Source: https://prairielearn.readthedocs.io/en/latest/elements/
The growing interest in free online flash games has driven a rise in children's online play. Kids, especially young girls, can easily be found playing dress-up games online for hours at a time. Dress-up games are colorful, fun activities in which the player is asked to dress a virtual model, usually a celebrity of some kind. Online dress-up games serve as a tool with which young girls learn to combine colors and fabrics into suitable outfits, and they also let girls share their experiences in group activities with others. Often children will get together to play a game just to show off their results, to play in a group, or to compete with one another, simply for the fun of it.

Girl dress-up games are probably the most sought-after and most played arcade game category among younger girls. These games aim to entertain and to make a profit, but at the same time they exercise certain areas of the brain that need to be developed in every child. As far as benefits go, helping the development of the brain's intellectual and motor functions is one of the most significant.

Online dress-up games offer a great number of options for children to choose from, such as the design of the clothes, shoes, bags, purses and make-up. These choices feature appealing color palettes and fashionable designs. Girls use them to exercise their creativity and put together outfits that are harmonious and in good taste. The model to be dressed in the game starts out practically in her underwear; all the clothing and accessories are set to the side and left to the child's imagination to put on the model. The game depends on the girl's ability and desire to use her creativity and imagination.

Another aspect of online dress-up games is their ability to work on the child's short-term memory. Some dress-up games are about matching your design to a snapshot you glimpsed beforehand. The child has to memorize the snapshot and then dress the girl with the pieces scattered in different places. In this process the child exercises short-term memory, trying to recall the outfit seen before. As some of us may know, good short-term memory is critical for solving mathematical equations, and is especially useful in algebra and geometry.

Other games in this same genre are about dressing the girl within a specified time frame. This feature requires the player to show mental agility and coordination, thus improving the brain's motor functions. The child must keep the task in mind and complete the puzzle-like activity under a time limit.

Source: http://www.ogi-docs.com/benefits-of-playing-girl-dress-up-games-online/
Custom outdoor signs are important marketing tools for growing your business. An effective sign will not only alert potential customers to your business, it will generate interest in your products and services. Quality outdoor signage is essential to driving traffic and boosting sales.

Public awareness and advertising are vital to the success of your business. To make money, you need to entice customers to come inside and spend money. There are a variety of methods you can use to inform the public about your services or products and let them know where you are located. One of the most cost-effective ways to reach many potential customers every day is through outdoor signage.

Outdoor signage is one of the most important investments a business owner can make in marketing and advertising the company. Often, this may be the only interaction your potential customers have with your business. Keep the following characteristics in mind as you plan and design an effective outdoor sign for your business.

Clear Message:

The message on your sign should be concise enough that people passing by in cars or on foot can read it. Keep your words and phrases short and the meaning clear. Use your sign to direct or inform customers. Ideally, your message should spark the interest of your target audience and entice them to stop in.

Use Images:

Images can be used to easily convey what services or products your business offers. For example, a toy store might show a picture of a stuffed animal or toy train to tell passers-by that they can buy toys at that store.

Keep It Simple:

Don't crowd your sign with numerous images and lengthy phrases. Use just enough text and imagery to effectively convey your message. Keep in mind that it needs to be read quickly, as people don't usually stop to look at signs.

Entice Shoppers:
Your sign is like a silent salesperson drawing potential buyers into your business. It can be used to highlight sales and promotions that generate interest and draw crowds.

[Image: Portable Pavement Sign – 22" x 33" Poster Displays with Lenses]

Source: http://www.ogi-docs.com/outdoor-signs-help-to-grow-your-business/