content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
db.updateUser()
On this page
Definition
db.updateUser( username, update, writeConcern )¶
Updates the user's profile on the database on which you run the method. An update to a field completely replaces the previous field's values. This includes updates to the user's
rolesarray.Warning
When you update the
rolesarray, you completely replace the previous array's values. To add or remove roles without replacing all the user's existing roles, use the
db.grantRolesToUser()or
db.revokeRolesFromUser()methods.
The
db.updateUser()method uses the following syntax:Tip
Starting in version 4.2 of the
mongoshell,ongoshell.
The
db.updateUser()method has the following arguments.
The
updatedocument specifies the fields to update and their new values. All fields in the
updatedocument are optional, but must include at least one field.
The
updatedocument has the following fields:
Roles
In the
roles field, you can specify both
built-in roles and user-defined
roles.
To specify a role that exists in the same database where
db.updateUser():.
The
db.updateUser() method wraps the
updateUser
command.
Behavior
Replica set
If run on a replica set,
db.updateUser() is executed using
"majority" write concern by default.
Encyption
By default,
db.updateUser() sends all specified data to the MongoDB
instance in cleartext, even if using
passwordPrompt(). Use
TLS transport encryption to protect communications between clients
and the server, including the password sent by
db.update.
Required AccessPassword
db.updateUser() method completely replaces the
user's
customData and
roles data:
The user
appClient01 in the
products database now has the following
user information:
Update User to Use
SCRAM-SHA-256 Credentials Only
To use SCRAM-SHA-256, the
featureCompatibilityVersion must be set to
4.0. For more
information on featureCompatibilityVersion, see View FeatureCompatibilityVersion and
setFeatureCompatibilityVersion.
The following operation updates a user who currently have both
SCRAM-SHA-256 and
SCRAM-SHA-1 credentials to have only
SCRAM-SHA-256 credentials.
- If the password is not specified along with the
mechanisms, you can only update the
mechanismsto a subset of the current SCRAM mechanisms for the user.
- If the password is specified along with the
mechanisms, you can specify any supported SCRAM mechanism or mechanisms.
- For
SCRAM-SHA-256, the
passwordDigestormust be the default value
"server". | https://docs.mongodb.com/v5.0/reference/method/db.updateUser/ | 2022-01-16T18:35:10 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)
array(['/v5.0/assets/link.png', 'icons/link.png'], dtype=object)] | docs.mongodb.com |
When working with your data in New Relic, you may want to view aggregated data for an application across clusters, environments, or data centers, while at the same time be able to view each of your application instance's data individually..
Roll up app data
Normally, when two instances report with the same app name, agent language, and license key, New Relic aggregates their data into a single New Relic-monitored app..
Prevent duplicate transaction events
By default, an app with multiple names will generate multiple events for transactions (a duplicate transaction for each name). For example, if you give your app three names, that's three times the number of events for transactions.
To avoid duplicate events, disable collection for each of the duplicate app names:
- Go to one.newrelic.com or one.eu.newrelic.com > More > Manage Insights Data.
- Toggle data collection on/off for duplicate app names, then save.
Roll up browser data
When you use multiple names to report application data, any browser monitoring data from that application will also be grouped into multiple applications using the same configuration.
Important
Session trace data will only report to the first application listed. Other browser data will populate into each of the up to three applications, but session trace data will be limited to the most specific application..
Other options to organize your apps
If you do not want to apply multiple names to your apps, you can organize them with tags.. | https://docs.newrelic.com/docs/apm/agents/manage-apm-agents/app-naming/use-multiple-names-app | 2022-01-16T18:51:55 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.newrelic.com |
Defining Domain-Independent Parsing Grammars in Dataflows
To define domain-independent parsing grammars in a dataflow:
- In Enterprise Designer, add an Open Parser stage to your dataflow.
- Double-click the Open parser stage on the canvas.
- Click Define Domain Independent Grammar on the Rules tab.
- Use the Grammar Editor to create the grammar rules. You can type commands and variables into the text box or use the commands provided in the Commands tab. For more information, see Grammars.
- To cut, copy, paste, and find and replace text strings in your parsing grammar, right-click in the Grammar Editor and select the appropriate command.
- To check the parsing grammar you have created, click Validate.
The validate feature lists any errors in your grammar syntax, including the line and column where the error occurs, a description of the error, and the command name or value that is causing the error.
- Click the Preview tab to test the parsing grammar.
- When you are finished creating your parsing grammar, click OK. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/DNM/source/OpenParser/OpenParser_DomainIndependentParsingGrammar.html | 2022-01-16T20:00:08 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.precisely.com |
Writes the value of each of its arguments to the file. The arguments must be Strings or Numbers. To write other values, use
tostring() or string.format() before calling
File:write().
For security reasons, you are not allowed to write files in the
system.ResourceDirectory (the directory where the application is stored). You must specify either
system.DocumentsDirectory,
system.TemporaryDirectory, or
system.CachesDirectory in the system.pathForFile() function when opening the file for writing.
File:write( arg1 [, arg2] [, ...] )
-- | http://docs.coronalabs.com/api/type/File/write.html | 2022-01-16T19:28:01 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.coronalabs.com |
Integrating with the OS¶
- how to redirect output
- executing OS commands from within
cmd2
- editors
- paging
- exit codes
- Automation and calling cmd2 from other CLI/CLU tools via commands at invocation and quit
Invoking With Arguments¶
Typically you would invoke a
cmd2 program by typing:
$ python mycmd2program.py
or:
$ mycmd2program.py
Either of these methods will launch your program and enter the
cmd2 command
loop, which allows the user to enter commands, which are then executed by your
program.
You may want to execute commands in your program without prompting the user for
any input. There are several ways you might accomplish this task. The easiest
one is to pipe commands and their arguments into your program via standard
input. You don’t need to do anything to your program in order to use this
technique. Here’s a demonstration using the
examples/example.py included in
the source code of
cmd2:
$ echo "speak -p some words" | python examples/example.py omesay ordsway
Using this same approach you could create a text file containing the commands
you would like to run, one command per line in the file. Say your file was
called
somecmds.txt. To run the commands in the text file using your
cmd2 program (from a Windows command prompt):
c:\cmd2> type somecmds.txt | python.exe examples/example.py omesay ordsway
By default,
cmd2 programs also look for commands pass as arguments from the
operating system shell, and execute those commands before entering the command
loop:
$ python examples/example.py help Documented commands (type help <topic>): ======================================== alias help macro orate quit run_script set shortcuts edit history mumble py run_pyscript say shell speak (Cmd)
You may need more control over command line arguments passed from the operating
system shell. For example, you might have a command inside your
cmd2
program which itself accepts arguments, and maybe even option strings. Say you
wanted to run the
speak command from the operating system shell, but have
it say it in pig latin:
$ python example/example.py speak -p hello there python example.py speak -p hello there usage: speak [-h] [-p] [-s] [-r REPEAT] words [words ...] speak: error: the following arguments are required: words *** Unknown syntax: -p *** Unknown syntax: hello *** Unknown syntax: there (Cmd)
Uh-oh, that’s not what we wanted.
cmd2 treated
-p,
hello, and
there as commands, which don’t exist in that program, thus the syntax
errors.
There is an easy way around this, which is demonstrated in
examples/cmd_as_argument.py. By setting
allow_cli_args=False you can so
your own argument parsing of the command line:
$ python examples/cmd_as_argument.py speak -p hello there ellohay heretay
Check the source code of this example, especially the
main() function, to
see the technique. | https://cmd2.readthedocs.io/en/0.9.21/features/os.html | 2022-01-16T19:23:18 | CC-MAIN-2022-05 | 1642320300010.26 | [] | cmd2.readthedocs.io |
Creating an account
We make it easy to get started with Aptible Comply. (To learn more about Aptible Comply or to set up a demo, head to our Product Page.)
Once you've been in touch with an Aptible Comply representative, follow the steps below to create an account.
- Visit our signup page
- Enter your name, email address and create a password
Verify your email
When you create a new Aptible account, you'll receive an email from [email protected] to verify your email address. You will not be able to invite teammates to the platform or perform certain actions until you verify your email address.
You can resend the verification email if necessary via the top banner once you log into the app.
Upon creating an account, you'll be prompted to set you your organization and choose a framework.
Updated over 1 year ago
Learn more about the compliance frameworks we support: | https://docs.aptible.com/docs/creating-an-account | 2022-01-16T19:20:22 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.aptible.com |
Change default values
Before you begin configuring Splunk Enterprise for your environment, review the following default settings.
Set or change environment variables
Use operating system environment variables to modify specific default values for the Splunk Enterprise services.
- On *nix, use the
setenvor
exportcommands to set a particular variable. For example:To modify the environment permanently, edit your shell initialization file, and add entries for the variables you want Splunk Enterprise to use when it starts up.
# export SPLUNK_HOME = /opt/splunk02/splunk
- On Windows, use the
setenvironment variable in either a command prompt or PowerShell window:To set the environment permanently, use the "Environment Variables" window, and add an entry to the "User variables" list.
C:\> set SPLUNK_HOME = "C:\Program Files\Splunk"
Several environment variables are available:
Note: You can set these environment variables in the
splunk-launch.conf or
web.conf. This is useful when you run more than one Splunk software instance on a host. See splunk-launch.conf.
Change network ports
Splunk Enterprise configures default TCP ports during installation:
-.
The default network ports are recommendations, and might not represent what your Splunk Enterprise instance is using. During the Splunk Enterprise installation, if any default port is detected as in-use, you are prompted to provide alternative port assignments.
Splunk instances that are receiving data from forwarders must be configured with a receiver port. The receiver port only listens for incoming data from forwarders. Configuration of the receiver port does not occur during installation. For more information, see Enable a receiver in the Forwarding Data Manual.
Use Splunk Web
To change the ports from their installation settings:
- Log into Splunk Web as the admin user.
- Click Settings.
- Click Server settings.
- Click General settings.
- Change the value for either Management port or Web port, and click Save.
Use Splunk CLI
To change the port settings using the Splunk CLI, use the CLI command
set. For example, this that is displayed within Splunk Web, and the name that is sent to other Splunk Servers in a distributed deployment. The name is chosen from either the DNS or IP address of the Splunk Server host by default.
Use Splunk Web
To change the Splunk server name:
- Log into Splunk Web as the admin user.
- Click Settings.
- Click Server settings.
- Click General settings.
- Change the value for Splunk server name, and click Save.
Use Splunk CLI
To change the server name using the CLI, use the
set servername command. For example, this sets the server name to foo:
splunk set servername foo
Set minimum free disk space
The minimum free disk space setting controls how low storage space in the datastore location can fall before Splunk software stops indexing. Splunk software resumes indexing when available space exceeds this threshold.
Use Splunk Web
To set minimum free storage space:
- Log into Splunk Web as the admin user.
- Click Settings.
- Click Server settings.
- Click General settings.
- Change the value for Pause indexing if free disk space (in MB) falls below, and click Save.
Use Splunk CLI
To change the minimum free space value using the CLI, use the
set minfreemb command. For example, this sets the minimum free space to 2000 MB:
splunk set minfreemb 2000
Set the default time range
The default time range for ad hoc searches in the Search & Reporting App is set to Last 24 hours. A Splunk Enterprise administrator can set the default time range globally, across all apps. Splunk Cloud Platform customers cannot configure this setting directly. The setting is stored in
$SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf file in the
[general_default] stanza.
This setting applies to all Search pages in Splunk Apps, not only the Search & Reporting App. This setting applies to all user roles.
This setting does not apply to dashboards.
Use Splunk Web
- Log into Splunk Web as the admin user.
- Click Settings.
- Click Server settings.
- Click Search Preferences.
- From the Default search time range drop-down, select the time that you want to use and click Save.
Time range settings in the
ui_prefs.conf file
You might already have a time range setting defined that you have in the
ui-prefs.conf file. See ui-prefs.conf.
Other default settings
The Settings screen offers additional pages with default settings for you to change. Explore the screen to see the range of! | https://docs.splunk.com/Documentation/Splunk/7.2.5/Admin/Changedefaultvalues | 2022-01-16T19:08:36 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Bot feedback and messages
Feedback from the VIP Code Analysis Bot will be posted on reviewed pull requests based on the results of the automated scans: PHPCS analysis, PHP linting, and SVG analysis. Feedback from the VIP Code Analysis Bot can be handled in several ways:
- Address the feedback by amending the relevant code according to suggestions from the Bot.
- Ignore the feedback, then instruct PHPCS to ignore the issue (in case of PHPCS feedback).
- Dismiss the review using the GitHub pull request interface.
Many issues noted in feedback will be correct and should be addressed, but as with all automated feedback there can be some incorrectly flagged issues that are safe to ignore (false positives). There may also be some issues that the bot feedback misses (false negatives). All feedback provided by the Bot should be carefully evaluated.
A more detailed explanation of errors and warnings for each severity level is available for interpreting PHPCS feedback.
Maximum number of active comments
The Bot is configured to post a maximum number of 18 comments per pull request review. If more than 18 comments are needed for the Bot to report the total issues found, those additional comments will be posted in separate reviews. The Bot is configured to ensure that there are no more than 100 “active” comments in each pull request. “Active” comments are comments made by the Bot and are not outdated.
This comment limitation is in place to limit the number of calls to the GitHub API.
GitHub API communication error
If the Bot has a problem communicating with the GitHub API, it will post a message to pull requests saying that there has been a GitHub API communication error and that a human should be contacted.
In most cases this error occurs due to problems with the GitHub API itself. The message usually disappears when a pull request is scanned again, which happens when new commits are pushed to the pull request. If the problem persists, check the GitHub status page for reported issues with the GitHub API. | https://docs.wpvip.com/technical-references/vip-code-analysis-bot/feedback/ | 2022-01-16T19:59:46 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.wpvip.com/wp-content/uploads/sites/2/2021/10/vipgoci-about-feedback-and-messages.png',
None], dtype=object) ] | docs.wpvip.com |
OpenPGP Commands
Acronyms and their definitions are listed at the bottom of the Base Commands page.
ykman openpgp [OPTIONS] COMMAND [ARGS]…
Examples
Set the retries for PIN, Reset Code and Admin PIN to 10:
$ ykman openpgp set-retries 10 10 10
Require touch to use the authentication key:
$ ykman openpgp set-touch aut on
ykman openpgp access set-retries [OPTIONS] PIN-RETRIES RESET-CODE-RETRIES ADMIN-PIN-RETRIES
ykman openpgp keys set-touch [OPTIONS] KEY POLICY
Arguments
The touch policy is used to require user interaction for all operations using the private key on the YubiKey. The touch policy is set individually for each key slot. To see the current touch policy, run:
$ ykman openpgp info
ykman openpgp reset [OPTIONS]
Options
To get in touch with Yubico Support, go to.
To get in touch with Yubico Support, click here. | https://docs.yubico.com/software/yubikey/tools/ykman/OpenPGP_Commands.html | 2022-01-16T18:09:43 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.yubico.com |
AuthenticateDecryptResponse ClassNamespace: Yubico.YubiKey.Piv.Commands Assembly: Yubico.YubiKey.dll
The response to the authenticate: decrypt command, containing the plaintext result of the YubiKey's private key operation.
public sealed class AuthenticateDecryptResponse : AuthenticateResponse, IYubiKeyResponseWithData<byte[]>, IYubiKeyResponse
Implements
Remarks
This is the partner Response class to AuthenticateDecryptCommand.
The data returned by
GetData is a byte array,
containing the decrypted data. The data will be the same size as the key.
That is, for a 1024-bit RSA key, the decrypted data is 128 bytes, and for
a 2048-bit key, the decrypted data is 256 bytes.
The data returned is almost certainly formatted, either using PKCS 1 v. 1.5 or OAEP. It is the responsibility of the caller to extract the actual plaintext from the formatted data. For example, if the data to encrypt had originally been 32 bytes (possibly a 256-bit AES key) formatted using PKCS 1 v.1.5, and the RSA key is 1024 bits, then the decrypted data will look like this:
00 02 <93 random, non-zero bytes> 00 <32-byte plaintext>
OAEP is much more complicated. To learn about this formatting, see RFC 8017.(); | https://docs.yubico.com/yesdk/yubikey-api/Yubico.YubiKey.Piv.Commands.AuthenticateDecryptResponse.html | 2022-01-16T18:31:07 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.yubico.com |
Tutorial: Build an API Gateway REST API with AWS integration
Both the Tutorial: Build a Hello
World REST API with Lambda proxy integration and Build an API Gateway REST API with Lambda
integration topics.
All AWS services support dedicated APIs to expose their features. However, the
application protocols or programming interfaces are likely to differ from service to
service. An API Gateway API with the
AWS integration has the advantage of providing a
consistent application protocol for your client to access different AWS services.
In this walkthrough, we create an API to expose Amazon SNS. For more examples of integrating an API with other AWS services, see Amazon API Gateway tutorials and workshops.
Unlike the Lambda proxy integration, there is no corresponding proxy integration for other AWS services. Hence, an API method is integrated with a single AWS action. For more flexibility, similar to the proxy integration, you can set up a Lambda proxy integration. The Lambda function then parses and processes requests for other AWS actions.
API Gateway does not retry when the endpoint times out. The API caller must implement retry logic to handle endpoint timeouts.
This walkthrough builds on the instructions and concepts in Build an API Gateway REST API with Lambda integration.If you have not yet completed that walkthrough, we suggest that you do it first.
Topics
Prerequisites
Before you begin this walkthrough, do the following:
Complete the steps in Prerequisites for getting started with API Gateway.
Ensure that the IAM user has access to create policies and roles in IAM. You need to create an IAM policy and role in this walkthrough.
Create a new API named
MyDemoAPI. For more information, see Tutorial: Build a REST API with HTTP non-proxy integration.
Deploy the API at least once to a stage named
. For more information, see Deploy the API in Build an API Gateway REST API with Lambda integration.
test
Complete the rest of the steps in Build an API Gateway REST API with Lambda integration.
Create at least one topic in Amazon Simple Notification Service (Amazon SNS). You will use the deployed API to get a list of topics in Amazon SNS that are associated with your AWS account. To learn how to create a topic in Amazon SNS, see Create a Topic. (You do not need to copy the topic ARN mentioned in step 5.)
Step 1: Create the resource
In this step, you create a resource that enables the AWS service proxy to interact with the AWS service.
To create the resource
.
Choose MyDemoAPI.
In the Resources pane, choose the resource root, represented by a single forward slash (
/), and then choose Create Resource.
For Resource Name, enter
MyDemoAWSProxy, and then choose Create Resource.
Step 2: Create the GET method
In this step, you create a GET method that enables the AWS service proxy to interact with the AWS service.
To create the GET method
In the Resources pane, choose /mydemoawsproxy, and then choose Create Method.
For the HTTP method, choose GET, and then save your choice.
Step 3: Create the AWS service proxy execution role
In this step,.
To create the AWS service proxy execution role and its policy
Sign in to the AWS Management Console and open the IAM console at
.
Choose Policies.
Do one of the following:
If the Welcome to Managed Policies page appears, choose Get Started, and then choose Create Policy.
If a list of policies appears, choose Create policy.
Choose JSON and then enter the following text.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Resource": [ "*" ], "Action": [ "sns:ListTopics" ] } ] }
Choose Review policy.
Enter a name and description for the policy.
Choose Create policy.
Choose Roles.
Choose Create Role.
Choose AWS Service under Select type of trusted entity and then choose API Gateway.
Choose Next: Permissions.
Choose Next: Tags.
Choose Next: Review.
For Role Name, enter a name for the execution role (for example,
APIGatewayAWSProxyExecRole), optionally enter a description for this role, and then choose Create role.
In the Roles list, choose the role you just created. You may need to scroll down the list.
For the selected role, choose Attach policies.
Select the check box next to the policy you created earlier (for example,
APIGatewayAWSProxyExecPolicy) and choose Attach policy.
The role you just created has the following trust relationship that enables API Gateway assume to role for any actions permitted by the attached policies:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "apigateway.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
For Role ARN, note of the Amazon Resource Name (ARN) for the execution role. You need it later. The ARN should look similar to:
arn:aws:iam::123456789012:role/APIGatewayAWSProxyExecRole, where
123456789012is your AWS account ID.
Step 4: Specify method settings and test the method
In this step, you specify the settings for the GET method so that it can interact with an AWS service through an AWS service proxy. You then test the method.
To specify settings for the GET method and then test it
In the API Gateway console, in the Resources pane for the API named
MyDemoAPI, in /mydemoawsproxy, choose GET.
Choose Integration Request, and then choose AWS Service.
For AWS Region, choose the name of the AWS Region where you want to get the Amazon SNS topics.
For AWS Service, choose SNS.
For HTTP method, choose GET.
For Action, enter
ListTopics.
For Execution Role, enter the ARN for the execution role.
Leave Path Override blank.
Choose Save.
In the Method Execution pane, in the Client box, choose TEST, and then choose Test. If successful, Response Body displays a response similar to the following:
{ 5: Deploy the API
In this step, you deploy the API so that you can call it from outside of the API Gateway console.
To deploy the API
In the Resources pane, choose Deploy API.
For Deployment stage, choose
test.
For Deployment description, enter
Calling AWS service proxy walkthrough.
Choose Deploy.
Step 6: Test the API
In this step, you go outside of the API Gateway console and use your AWS service proxy to interact with the Amazon SNS service.
In the Stage Editor pane, next to Invoke URL, copy the URL to the clipboard. It should look like this:
https://
my-api-id.execute-api.
region-id.amazonaws.com/
test
Paste the URL into the address box of a new browser tab.
Append
/mydemoawsproxyso that it looks like this:
https://
my-api-id.execute-api.
region-id.amazonaws.com/
test/mydemoawsproxy
Browse to the URL. Information similar to the following should be displayed:
{ 7: Clean up
You can delete the IAM resources the AWS service proxy needs to work.
If you delete an IAM resource an AWS service proxy relies on, that AWS service proxy and any APIs that rely on it will no longer work. Deleting an IAM resource cannot be undone. If you want to use the IAM resource again, you must re-create it.
To delete the associated IAM resources
Open the IAM console at
.
In the Details area, choose Roles.
Select APIGatewayAWSProxyExecRole, and then choose Role Actions, Delete Role. When prompted, choose Yes, Delete.
In the Details area, choose Policies.
Select APIGatewayAWSProxyExecPolicy, and then choose Policy Actions, Delete. When prompted, choose Delete.
You have reached the end of this walkthrough. For more detailed discussions about creating API as an AWS service proxy, see Tutorial: Create a REST API as an Amazon S3 proxy in API Gateway, Tutorial: Create a Calc REST API with two AWS service integrations and one Lambda non-proxy integration, or Tutorial: Create a REST API as an Amazon Kinesis proxy in API Gateway. | https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-aws-proxy.html | 2022-01-16T20:23:12 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.aws.amazon.com |
Locoia
Website
Search…
Overview
Account and User Settings
Reset your Password
Invoices and Payments
Automation
Flow Builder
Flow Debugger
Flow Marketplace
Connectors & Helpers
Connectors
Helpers
Authentication Types Available
Building Connectors
Building Connector Actions
Webhook Trigger for Connectors
Data Mapping and Env Variables
Embed - White Label Portal
1. Embed Setup
2. Embed Flow Building
Data and Dashboards
Dashboards & Insights
Charts
Forecasts
Best Practice Guides
Integration Best Practices
Integration Check List
CSV Files in Excel
Multi-Tenant Flows
On-Premise Integrations
Database Connection Setup
Data and General Security
Using Tags
API
Locoia API Authentication
Create Connector Authentication
Status of Services
Data Privacy
Imprint
GitBook
Data and General Security
Locoia allows a wide array of data privacy settings and is not only built to satisfy GDPR and DSGVO, but to delivery exceptional data privacy handling features. Some features at a glance:
Encryption of all data, authentication and tokens
All traffic SSL / https secured
Custom log data retention policies per account and flow
Virtual Private cloud
Two factor authentication at login
Login activity monitoring and notification of suspicious activity
Deletion of Flow data and Flow debugging data output
All Flow Run data that you can find in flow debugging is system-wide deleted from any logging after 10 days. You can customize this on the account-level (your company) for all flows of your account if you have more strict requirements or need to keep the data a little bit longer. The range is from 0 to 90 days (0 (zero) meaning not stored at all). Additionally, you can set the data deletion time on each flow individually to any number of days from 1 to 90. You can also set the number of days to 0 (zero), which essentially means that data will not stored at all.
Flow User Input data
All data that is manually inout by a user is stored only in encrypted form in the database. The database itself is encrypted as well and can only be access in a VPC (Virtual Private Cloud) and cannot be accessed otherwise from the outside.
Tokens, secrets and API keys
All tokens, secrets, API keys and the like that are entered in the
Connector Auth
section are stored only in encrypted form using an additional secret key and cannot be extracted or called in any form other than for the purpose of flow execution at runtime. Sensitive keys and tokens are removed from all logs by default.
Cache or data stored in environment at runtime
For each execution and access, a separate server instance is spun up and afterwards destroyed. Therefore, data is only present in the environment for the time needed and not a second longer.
Login behavior monitoring and notification
To provide additional security, we send you emails once you login from a new device or browser we haven't seen you logging in from.
User activity monitoring security
If we see any suspicious user activity, random execution of functions as well as repeated wrong password entry within only a few seconds, we send you notification messages and lock your account for security reasons. In those case, please raise a ticket so that we can investigate, maintain secure usage and give you access again.
Best Practice Guides - Previous
Database Connection Setup
Next - Best Practice Guides
Using Tags
Last modified
3mo ago
Copy link
Contents
Deletion of Flow data and Flow debugging data output
Flow User Input data
Tokens, secrets and API keys
Cache or data stored in environment at runtime
Login behavior monitoring and notification
User activity monitoring security | https://docs.locoia.com/best-practice-guides/untitled-1 | 2022-01-16T19:51:56 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.locoia.com |
–
Exporting reports
To automatically export the report results and send them out by email, schedule a recurring task.
- In the Pega Customer Decision Hub portal, click .
- In the Public categories list, select Interaction History.
- Click Take action on report, as in the following figure:
The Take action on report icon
- Click Schedule.
- In the Task Scheduling section, configure when the report should be sent out, as in the following figure:
Sample report schedule
- In the Task Output Processing section, select the file format and recipients for the report.You can only send the report to users who have operator accounts in Pega Customer Decision Hub.
- Click Submit. | https://docs.pega.com/pega-customer-decision-hub-user-guide/86/exporting-reports | 2022-01-16T20:08:25 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Accessibility and Pega Platform
Pega Platform uses the Accessible Rich Internet Application (ARIA) roles to support accessibility. WAI-ARIA roles are a World Wide Web Consortium (W3C) ontology that specifies roles, states, or properties for each element. WAI-ARIA roles provide semantic information about features, structures, and behaviors allowing assistive technologies to convey information to users.
There are two concepts that apply to Pega application development: the main content area and the skip to content link. These concepts affect how assistive technologies interact with the content of an application web page.
Main content area
The main content area is where the application displays the most important content. When the user uses a tab key to navigate the page, this is where the focus lands when the Skip to main content link is selected.
When designing an application, a Adding WAI-ARIA roles to a Dynamic Layout may be specified as the main content area.
By default, the dynamic container or the center panel of a screen layout are marked as the main content area if either of those elements is included in the interface. This behavior cannot be altered in development.
Content links
A Adding a main content link to a dynamic layout enables application users to tab to the main content area of a page. When the user presses the tab key when the page loads, the Go to main content link appears. If the user presses the Enter key, the application will skip directly to the main content area of the page. Alternatively, the user can tab through the link and continue on to those application elements between the Go to main content link and the main content area.
- Adding WAI-ARIA roles to a Dynamic Layout
WAI-ARIA roles are added to a dynamic layout to provide semantic information about the role of the dynamic layout on the page. The settings for WAI-ARIA roles appear on the General tab of the dynamic layout properties modal dialog. To add an ARIA role to a dynamic layout:
- Adding a main content link to a dynamic layout
A main content link allows a user navigating an application with the keyboard to tab through the interface to pass over non-essential elements and move directly to the most important area of the page. Tabbing past navigation, banners, and non-essential content saves the user time in reaching the main content area. | https://docs.pega.com/user-experience/84/accessibility-and-pega-platform | 2022-01-16T20:28:52 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
This is an old revision of the document!
Installing Slackware AArch64 on the Pinebook Pro
Requirements
Needs organising
the most important thing is to install the nvme disk, enable the serial console (head phone jack), and
install slackware to the new driver FIRST
after that you can wipe your internal eMMC
the default uboot is looking for Arm Trusted Firmware
*drive, not driver haha
*** COLD BOOT - hold down for about 10 seconds after installer
* Disconnect any USB devices that aren't required for the OS installation
you just need the installer on an sd card and the nvme disk installed into the pinebook case it might be a good idea to disable the eMMC module while you have the case open if you plan to use the serial cable, you need to switch it on (same switch to reenable the head phone jack) #2 nvme connection #9 headphone / UART switch #24 switch for eMMC | https://docs.slackware.com/doku.php?id=slackwarearm:inst_sa64_cur_rk3399_pinebookpro&rev=1633020382 | 2022-01-16T18:31:39 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.slackware.com |
Code blocks with syntax highlighting¶
Quick Reference¶
- To insert a snippet as code with syntax highlighting, use the
code-blockdirective or the shorthand
::.
- You can explicitly set the language in the
code-block, you cannot in the shorthand.
- If you do not explicitly set the language, the default language (as set with the Highlight directive) is used. If no highlight directive was used, the default set in /Includes.rst.txt is used.
- It is recommended to use the short form (
::) instead of code-block explicitly.
- Always use syntactically correct code in a code block.
- Use placeholders in angle brackets (
<placeholder-name>) to refer to a place in the code where you don’t care about the exact value.
The following examples all do the same thing:
Use the shorthand
::(recommended):
See following example:: $a = 'b';
- How this looks:
See following example:
$a = 'b';
You can use this, if the default language is already set to PHP with the highlight directive in the current file (or in Includes.rst.txt).
Set the language (PHP) in the
code-block:
See following example: .. code-block:: php $a = 'b';
Use
code-blockwithout setting the language:
See following example: .. code-block:: $a = 'b';
You can use this, if you already set the language PHP with the highlight directive in the current file (or in Includes.rst.txt).
Using the ‘::’ notation (recommended)¶
It’s nice to use this notation and the preferred way to create a code block in case the highlighting is preset as desired (with the Highlight directive) and you don’t need the special options of the Code block directive.
However, the behavior of the ‘::’ notation is “sort of intelligent”. So let’s explain it here. Background: “Sphinx” is based on “Docutils”. Docutils handles the basic parse, transform and create output process for single files. Sphinx builds on this and adds the ability to handle multi file documentation projects. The ‘::’ notation is already part of Docutil’s reST specification for creating “literal blocks”.
Quoting the important part of the specification:).
Example of form 1: Expanded¶
Source:
Paragraph ... :: Literal Block
Result:
Paragraph …
Literal Block
In words: The paragraph will appear as written. The code block just follows. Both colons and one empty line will be removed and not lead to any special output.
Example of form 2: Partially minimized¶
Source:
Paragraph ... :: Literal Block
Result:
Paragraph …
Literal Block
In words: The paragraph will appear as written after both colons together with the preceding whitespace (!) have been removed from the end of the line. The code block then just follows.
Code block directive¶
Use codeblock with language PHP:
.. code-block:: php $a = 'b';
Use codeblock without specifying language:
.. code-block:: $a = 'b';
This uses whatever language has last been set with the Highlight directive
in the current file or in
Includes.rst.txt.
Use syntactically correct code¶
Attention
Please: No syntax errors!
Syntax highlighting only works if the lexer can parse the code without errors. In other words: If there’s a syntax error in the code the highlighting will not work.
Wrong:
.. code-block:: php $a = array( 'one' => 1, ... );
Correct:
.. code-block:: php $a = array( 'one' => 1, // ... );
Sphinx uses Pygments for highlighting. On a machine that has Pygments
installed the command
pygmentize -L will list all available lexers.
Highlight directive¶
You can set the default language with the
highlight directive. All following
code blocks will use the language as specified in the
highlight directive for
syntax highlighting.
If all of your code blocks in one file have the same language, it is easier to just set this once at the beginning of the file.
This way, you don’t need to set the language for each code-block (
..
code-block:: LANG) explicitly and can use the shorthand notation.
Use reStructuredText highlighting:
.. highlight:: rst
Use PHP highlighting:
.. highlight:: php
For TYPO3 we have adopted the convention that each reStructuredText source file
imports the
Documentation/Includes.rst.txt file at the top. And in the
included file - in general - we set PHP as default language for highlighting.
Exception: In the TypoScript manuals we are using
typoscript as default.
You can use the
..highlight:: LANG directive as often as you want. Each one
remains valid up to the next or up to the end of the single file it is used
in.
Highlight language ‘guess’¶
Note that there is a - pseudo - language ‘guess’ as well. This should use the highlighter for the first language that Pygments finds to have no syntax error.
Some more examples¶
Add line numbers to code snippet¶
Source¶
.. code-block:: php :linenos: $GLOBALS['TYPO3_CONF_VARS']['FE']['addRootLineFields'] .= ',tx_realurl_pathsegment'; // Adjust to your needs $domain = 'example.org'; $rootPageUid = 123; $rssFeedPageType = 9818; // pageType of your RSS feed page
Turn off highlighting: Method 1¶
Source:¶
A description: .. code-block:: none $ tree vendor/composer ├── ClassLoader.php ├── LICENSE ├── autoload_classmap.php ├── ... └── installed.json
Turn off highlighting: Method 2¶
Source:¶
.. highlight:: none A description:: $ tree vendor/composer ├── ClassLoader.php ├── LICENSE ├── autoload_classmap.php ├── ... └── installed.json
Available lexers¶
You can use any of the following names of lexers:
| bash, sh, ksh, shell | for example all mean the same lexer:
abap | abnf | ada, ada95, ada2005 | adl | agda | ahk, autohotkey | alloy | ampl | antlr-as, antlr-actionscript | antlr-cpp | antlr-csharp, antlr-c# | antlr-java | antlr-objc | antlr-perl | antlr-python | antlr-ruby, antlr-rb | antlr | apacheconf, aconf, apache | apl | applescript | arduino | as, actionscript | as3, actionscript3 | aspectj | aspx-cs | aspx-vb | asy, asymptote | at, ambienttalk, ambienttalk/2 | autoit | awk, gawk, mawk, nawk | basemake | bash, sh, ksh, shell | bat, batch, dosbatch, winbatch | bbcode | bc | befunge | blitzbasic, b3d, bplus | blitzmax, bmax | bnf | boo | boogie | brainfuck, bf | bro | bugs, winbugs, openbugs | c-objdump | c | ca65 | cadl | camkes, idl4 | cbmbas | ceylon | cfc | cfengine3, cf3 | cfm | cfs | chai, chaiscript | chapel, chpl | cheetah, spitfire | cirru | clay | clean | clojure, clj | clojurescript, cljs | cmake | cobol | cobolfree | coffee-script, coffeescript, coffee | common-lisp, cl, lisp | componentpascal, cp | console, shell-session | control, debcontrol | coq | cpp, c++ | cpp-objdump, c++-objdumb, cxx-objdump | cpsa | crmsh, pcmk | croc | cryptol, cry | csharp, c# | csound, csound-orc | csound-document, csound-csd | csound-score, csound-sco | css+django, css+jinja | css+erb, css+ruby | css+genshitext, css+genshi | css+lasso | css+mako | css+mako | css+mozpreproc | css+myghty | css+php | css+smarty | css | cucumber, gherkin | cuda, cu | cypher | cython, pyx, pyrex | d-objdump | d | dart | delphi, pas, pascal, objectpascal | dg | diff, udiff | django, jinja | docker, dockerfile | doscon | dpatch | dtd | duel, jbst, jsonml+bst | dylan-console, dylan-repl | dylan-lid, lid | dylan | earl-grey, earlgrey, eg | easytrieve | ebnf | ec | ecl | eiffel | elixir, ex, exs | elm | emacs, elisp, emacs-lisp | erb | erl | erlang | evoque | extempore | ezhil | factor | fan | fancy, fy | felix, flx | fish, fishshell | flatline | fortran | fortranfixed | foxpro, vfp, clipper, xbase | fsharp | gap | gas, asm | genshi, kid, xml+genshi, xml+kid | genshitext | glsl | gnuplot | go | golo | gooddata-cl | gosu | groff, nroff, man | groovy | gst | haml | handlebars | haskell, hs | haxeml, hxml | hexdump | hsail, hsa | html+cheetah, html+spitfire, htmlcheetah | html+django, html+jinja, htmldjango | html+evoque | html+genshi, html+kid | html+handlebars | html+lasso | html+mako | html+mako | html+myghty | html+php | html+smarty | html+twig | html+velocity | html | http | hx, haxe, hxsl | hybris, hy | hylang | i6t | idl | idris, idr | iex | igor, igorpro | inform6, i6 | inform7, i7 | ini, cfg, dosini | io | ioke, ik | irc | isabelle | j | jade | jags | jasmin, jasminxt | java | javascript+mozpreproc | jcl | jlcon | js+cheetah, javascript+cheetah, js+spitfire, javascript+spitfire | js+django, javascript+django, js+jinja, javascript+jinja | js+erb, javascript+erb, js+ruby, javascript+ruby | js+genshitext, js+genshi, javascript+genshitext, javascript+genshi | js+lasso, javascript+lasso | js+mako, javascript+mako | js+mako, javascript+mako | js+myghty, javascript+myghty | js+php, javascript+php | js+smarty, javascript+smarty | js, javascript | jsgf | json | jsonld, json-ld | jsp | julia, jl | kal | kconfig, menuconfig, linux-config, kernel-config | koka | kotlin | lagda, literate-agda | lasso, lassoscript | lcry, literate-cryptol, lcryptol | lean | less | lhs, literate-haskell, lhaskell | lidr, literate-idris, lidris | lighty, lighttpd | limbo | liquid | live-script, livescript | llvm | logos | logtalk | lsl | lua | make, makefile, mf, bsdmake | mako | mako | maql | mask | mason | mathematica, mma, nb | matlab | matlabsession | minid | modelica | modula2, m2 | monkey | moocode, moo | moon, moonscript | mozhashpreproc | mozpercentpreproc | mql, mq4, mq5, mql4, mql5 | mscgen, msc | mupad | mxml | myghty | mysql | nasm | ncl | nemerle | nesc | newlisp | newspeak | nginx | nimrod, nim | nit | nixos, nix | nsis, nsi, nsh | numpy | objdump-nasm | objdump | objective-c++, objectivec++, obj-c++, objc++ | objective-c, objectivec, obj-c, objc | objective-j, objectivej, obj-j, objj | ocaml | octave | odin | ooc | opa | openedge, abl, progress | pacmanconf | pan | parasail | pawn | perl, pl | perl6, pl6 | php, php3, php4, php5 | pig | pike | pkgconfig | plpgsql | postgresql, postgres | postscript, postscr | pot, po | pov | powershell, posh, ps1, psm1 | praat | prolog | properties, jproperties | protobuf, proto | ps1con | psql, postgresql-console, postgres-console | puppet | py3tb | pycon | pypylog, pypy | pytb | python, py, sage | python3, py3 | qbasic, basic | qml, qbs | qvto, qvt | racket, rkt | ragel-c | ragel-cpp | ragel-d | ragel-em | ragel-java | ragel-objc | ragel-ruby, ragel-rb | ragel | raw | rb, ruby, duby | rbcon, irb | rconsole, rout | rd | rebol | red, red/system | redcode | registry | resource, resourcebundle | rexx, arexx | rhtml, html+erb, html+ruby | roboconf-graph | roboconf-instances | robotframework | rql | rsl | rst, rest, restructuredtext | rts, trafficscript | rust | sass | sc, supercollider | scala | scaml | scheme, scm | scilab | scss | shen | silver | slim | smali | smalltalk, squeak, st | smarty | sml | snobol | sourceslist, sources.list, debsources | sp | sparql | spec | splus, s, r | sql | sqlite3 | squidconf, squid.conf, squid | ssp | stan | swift | swig | systemverilog, sv | tads3 | tap | tcl | tcsh, csh | tcshcon | tea | termcap | terminfo | terraform, tf | tex, latex | text | thrift | todotxt | trac-wiki, moin | treetop | ts, typescript | turtle | twig | typoscript | typoscriptcssdata | typoscripthtmldata | urbiscript | vala, vapi | vb.net, vbnet | vcl | vclsnippets, vclsnippet | vctreestatus | velocity | verilog, v | vgl | vhdl | vim | wdiff | x10, xten | xml+cheetah, xml+spitfire | xml+django, xml+jinja | xml+erb, xml+ruby | xml+evoque | xml+lasso | xml+mako | xml+mako | xml+myghty | xml+php | xml+smarty | xml+velocity | xml | xquery, xqy, xq, xql, xqm | xslt | xtend | xul+mozpreproc | yaml+jinja, salt, sls | yaml | zephir |
Tip: Try the Pygments Demo at
Literalinclude¶
There also is a literalinclude directive.
Placeholders¶
Placeholders in this context are named tags in code and
example URLs where the exact value does not matter,
but is referenced in the surrounding documentation.
Use the Backus-Naur form
<placeholder-name> for placeholders in code and
URLs, i.e. use angle brackets to encapsulate the placeholder name.
For example in PHP
Set up a controller class to handle user interaction with the entity data model: .. code-block:: php class <Entity>Controller extends ActionController { .. } where `<Entity>` corresponds to the entity data model class name.
or on the command line
Importing a TYPO3 dump file is as simple as running: .. code-block:: bash typo3/sysext/core/bin/typo3 impexp:import <file> where `<file>` can be the absolute path on the server or the relative path in the TYPO3 project folder.
or in an example URL
The TYPO3 backend normally appends the session token ID to the URL as follows: :samp:`<token-id>`.
In the XML and HTML markup languages, which make extensive use of angle
brackets, the comment tag
<!-- placeholder-name --> is used to insert
placeholders. A
<placeholder-name> looks like a regular element and would lead to confusion. | https://docs.typo3.org/m/typo3/docs-how-to-document/main/en-us/WritingReST/Codeblocks.html | 2022-01-16T18:29:12 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.typo3.org |
Managing translations for backend¶
This sections highlights the different ways to translate and manage XLIFF files.
Fetching translations getLabelsPath().
The Languages module with some active languages and status of extensions language packs
Language packs can also be fetched using the command line.
/path/to/typo3/bin/typo3 language:update
Local translations:frontend/Resources/Private/Language getLabelsPath() or next to the base translation file in extensions, for example in
typo3conf/ext/myext/Resources/Private/Language/.
Custom languages¶
Supported languages describes the languages which are supported by default.- ”.
See also
Configure
typo3Language for using custom languages in the frontend,
see Adding Languages for details. | https://docs.typo3.org/m/typo3/reference-coreapi/main/en-us/ApiOverview/Internationalization/ManagingTranslations.html | 2022-01-16T19:25:58 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['../../_images/ManageLanguagePacks.png',
'../../_images/ManageLanguagePacks.png'], dtype=object)
array(['../../_images/InternationalizationXliffWithVirtaal.png',
'Virtaal screenshot'], dtype=object)
array(['../../_images/InternationalizationLabelOverride.png',
'Custom label'], dtype=object)
array(['../../_images/CustomLanguage.png.png',
'../../_images/CustomLanguage.png.png'], dtype=object)] | docs.typo3.org |
Object2Vec¶
The Amazon SageMaker Object2Vec algorithm.
- class
sagemaker.
Object2Vec(role, instance_count=None, instance_type=None, epochs=None, enc0_max_seq_len=None, enc0_vocab_size=None, enc_dim=None, mini_batch_size=None, early_stopping_patience=None, early_stopping_tolerance=None, dropout=None, weight_decay=None, bucket_width=None, num_classes=None, mlp_layers=None, mlp_dim=None, mlp_activation=None, output_layer=None, optimizer=None, learning_rate=None, negative_sampling_rate=None, comparator_list=None, tied_token_embedding_weight=None, token_embedding_storage_type=None, enc0_network=None, enc1_network=None, enc0_cnn_filter_width=None, enc1_cnn_filter_width=None, enc1_max_seq_len=None, enc0_token_embedding_dim=None, enc1_token_embedding_dim=None, enc1_vocab_size=None, enc0_layers=None, enc1_layers=None, enc0_freeze_pretrained_embedding=None, enc1_freeze_pretrained_embedding=None, **kwargs)¶
Bases:
sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase
A general-purpose neural embedding algorithm that is highly customizable.
It can learn low-dimensional dense embeddings of high-dimensional objects. The embeddings are learned in a way that preserves the semantics of the relationship between pairs of objects in the original space in the embedding space.
Object2Vec is
Estimatorused for anomaly detection.
This Estimator may be fit via calls to
fit(). There is an utility
record_set()that can be used to upload data to S3 and creates
RecordSetto be passed to the fit call.
After this Estimator is fit, model data is stored in S3. The model may be deployed to an Amazon SageMaker Endpoint by invoking
deploy(). As well as deploying an Endpoint, deploy returns a
Predictorobject that can be used for inference calls using the trained model hosted in the SageMaker Endpoint.
Object2Vec Estimators can be configured by setting hyperparameters. The available hyperparameters for Object2Vec are documented below.
For further information on the AWS Object2Vec’.
epochs (int) – Total number of epochs for SGD training
enc0_max_seq_len (int) – Maximum sequence length
enc0_vocab_size (int) – Vocabulary size of tokens
enc_dim (int) – Optional. Dimension of the output of the embedding layer
mini_batch_size (int) – Optional. mini batch size for SGD training
early_stopping_patience (int) – Optional. The allowed number of consecutive epochs without improvement before early stopping is applied
early_stopping_tolerance (float) – Optional. The value used to determine whether the algorithm has made improvement between two consecutive epochs for early stopping
dropout (float) – Optional. Dropout probability on network layers
weight_decay (float) – Optional. Weight decay parameter during optimization
bucket_width (int) – Optional. The allowed difference between data sequence length when bucketing is enabled
num_classes (int) – Optional. Number of classes for classification training (ignored for regression problems)
mlp_layers (int) – Optional. Number of MLP layers in the network
mlp_dim (int) – Optional. Dimension of the output of MLP layer
mlp_activation (str) – Optional. Type of activation function for the MLP layer
output_layer (str) – Optional. Type of output layer
optimizer (str) – Optional. Type of optimizer for training
learning_rate (float) – Optional. Learning rate for SGD training
negative_sampling_rate (int) – Optional. Negative sampling rate
comparator_list (str) – Optional. Customization of comparator operator
tied_token_embedding_weight (bool) – Optional. Tying of token embedding layer weight
token_embedding_storage_type (str) – Optional. Type of token embedding storage
enc0_network (str) – Optional. Network model of encoder “enc0”
enc1_network (str) – Optional. Network model of encoder “enc1”
enc0_cnn_filter_width (int) – Optional. CNN filter width
enc1_cnn_filter_width (int) – Optional. CNN filter width
enc1_max_seq_len (int) – Optional. Maximum sequence length
enc0_token_embedding_dim (int) – Optional. Output dimension of token embedding layer
enc1_token_embedding_dim (int) – Optional. Output dimension of token embedding layer
enc1_vocab_size (int) – Optional. Vocabulary size of tokens
enc0_layers (int) – Optional. Number of layers in encoder
enc1_layers (int) – Optional. Number of layers in encoder
enc0_freeze_pretrained_embedding (bool) – Optional. Freeze pretrained embedding weights
enc1_freeze_pretrained_embedding (bool) – Optional. Freeze pretrained embedding weights
*.
negative_sampling_rate¶
An algorithm hyperparameter with optional validation.
Implemented as a python descriptor object.
comparator_list¶
An algorithm hyperparameter with optional validation.
Implemented as a python descriptor object.
tied_token_embedding_weight¶
An algorithm hyperparameter with optional validation.
Implemented as a python descriptor object.
token_embedding_storage_type¶
An algorithm hyperparameter with optional validation.
Implemented as a python descriptor object.
create_model(vpc_config_override='VPC_CONFIG_DEFAULT', **kwargs)¶
Return a
Object2Vec Object2VecModel constructor.
- class
sagemaker.
Object2VecModel(model_data, role, sagemaker_session=None, **kwargs)¶
Bases:
sagemaker.model.Model
Reference Object2Vec s3 model data.
Calling
deploy()creates an Endpoint and returns a Predictor that calculates anomaly scores for datapoints.
Initialization for Object2Vec. | https://sagemaker.readthedocs.io/en/stable/algorithms/object2vec.html | 2022-01-16T18:48:59 | CC-MAIN-2022-05 | 1642320300010.26 | [] | sagemaker.readthedocs.io |
Edit One API Key's Access List
>
4
Edit the API Access List.
You cannot modify an existing API Key access list entry. You must delete and re-create it.
- Click to the right of the IP address to remove it.
Add the new IP address from which you want Atlas to accept API requests for this API Key. Use one of the two options:
- Click Add access list Entry and type an IP address, or
- Click Use Current IP Address if the host you are using to access Atlas will also make API requests using this API Key.
- Click Save. | https://docs.atlas.mongodb.com/tutorial/configure-api-access/project/change-one-api-key-access-list/ | 2022-01-16T19:13:53 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)] | docs.atlas.mongodb.com |
Support for Full-Content QR Codes (e.g. Vendor Data) for the Swiss Market
Feature details
It's now possible to capture the full content of QR codes, which effectively means that you can import the content of QR codes containing vendor data and other long sections of text into Microsoft Dynamics 365 Business Central using Continia Document Capture. If the captured value contains more than the Business Central limit of 250 characters, only the first 250 characters will be displayed in the user interface, but you can retrieve the full value using GetFullText on the CDC Document Value table.
The feature has been deployed for the Swiss market but can also be made available to other markets. For more information, please contact your dedicated Continia partner.
Note
Note that in order to use this feature in the on-premises version of Business Central, you must update the on-premises Document Capture service to the latest version. | https://docs.continia.com/en-us/continia-document-capture/new-and-planned/documents-and-templates/support-for-full-content-qr-codes-eg-vendor-data-for-the-swiss-market | 2022-01-16T19:42:10 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.continia.com |
Virtual Users and Virtual User Scripts
It is important to understand the distinction between a virtual user (VU) and a virtual user scripts.
A virtual user scripts defines a series of actions that a user performs when interacting with a client/server application; whereas a VU runs virtual user scripts during test execution. | https://docs.eggplantsoftware.com/epp/9.4.0/ePP/vuvusandvus.htm | 2022-01-16T19:54:13 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.eggplantsoftware.com |
Connect
Learn how to connect K8ssandra deployments with clients and Apache Cassandra® interfaces.
For information about connecting clients and accessing Cassandra interfaces in K8ssandra:
See the quickstarts for expedited installation and configuration steps, including port-forward and introductory Ingress details.
Explore the Stargate APIs for access to Cassandra resources. Stargate is installed with K8ssandra and provides CQL, REST, GraphQL, and document-based API endpoints.
Learn about the Traefik ingress deployment details for K3d, Kind, and Minikube environments, plus Traefik ingress deployment details for the monitoring and repair features provided with K8ssandra.
Feedback
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve.
Last modified May 3, 2021: Documentation Information Architecture Updates (#700) (99ae1dc) | https://docs.k8ssandra.io/tasks/connect/ | 2022-01-16T18:14:54 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.k8ssandra.io |
Here you will learn how to manage an incident in the SiteConnect Web Portal
Index:
Course/Corrective Actions
Please watch the following brief video on how to manage incidents in the SiteConnect Web Portal
Managing Incidents (4:19)
If you still require assistance after watching then please keep on reading...
For information on how to record an incident, please refer to the following article
How to Report an Incident
Incidents Menu
Click the Incidents tab on the left hand side.
You will see a list of all of your existing incidents. Click Manage on the incident that you want to manage.
This will bring up the following Overview dialog box where you can edit the details of the incident itself.
On this page you will also see the following tabs:
- LTI- Lost Injury
- Investigation
- Causes/Corrections
- Witnesses
- Notes
Click on each one to bring up the menu for them
Overview:
In the Overview section you can change any of the details entered when the incident was initially Reported.
This includes:
- Summary of the incident
- Date of the incident
- Types of treatment and incident
- Drug/Alcohol testing requirements
- Worksafe notification requirements
- Status of incident (Completed etc.)
- Investigation requirement and notes
- LTI status
- Casual Findings
You can also add any necessary files and the users affected from the incident from here.
LTI- Lost Time Injury
A Lost Time Injury is an injury that leads to an employee taking extended medical leave from work duties.
Once you click on the LTI- Lost Time Injury tab on the top of the Incident manager, click New LTI record to record a new LTI injury.
You will then see the following tabs on the top of the dialog box that appears where you can fill in the following details:
- Affected Person- person who has been injured, their contact number and their email address
- Health Provider- Health providers name, main contact, description, contact number and email address
- ACC Case Manager- Office name, managers name, description, contact number and email address
- Return to Work- Estimated date of return to work, actual date employee returns and description
To select a estimated date and an actual date, click the calendar icon on each of the text boxes, select a date then click the tick.
When you Save a Return to Work timeframe a new option will appear on the top entitled Time Records.
When clicked, this will bring up a new menu whereby you can create a New Time Record.
Once clicked, a dialog box will appear where you can select a date for a new LTI Time Record. To set a date, click the calendar icon to bring up a series of dates to choose from.
Once you select the right date, click the tick to save this date.
You can then select the LTI days taken from the date you have selected and the amount of hours.
PLEASE NOTE- this is only recorded on a monthly basis. This means that you will only be able to select the remainder of the days in the month that the date you have selected is in.
If the LTI days and hours for this injury expand over this month then you will need to add another Time Record/date by Saving then adding another Time Record.
Once you have created all necessary Time Records as well as any other changes you can click Save and this will be added to the LTI for the incident.
You can also Manage/add any Files for the LTI using the Manage Files box.
Investigation
Click the Investigation tab on the top of the Incident Management menu to bring up the following menu:
From here you can fill in the following items.
- What happened before the incident
- How did the incident occur
- What happened after the incident
You can also select an Investigator from your team and a Reviewer.
For both options, this will bring up a box whereby you can search for or select your employees from a list for this role.
Once your employee has been selected, your investigators name will appear in the Investigation menu.
This will also happen for the Reviewer that is selected.
Courses/Corrective Actions
Click the Courses/Corrective Actions tab to bring up the following screen.
To add a new action, click Add Cause. This will bring up the Incident Cause Form.
From here you can fill in the Cause, Type (choose from Primary or Secondary from drop down menu), Corrective Action to be taken and a description (what specifically makes this a cause).
You can then add another Corrective Action to take for this incident if required by clicking Add Corrective Action.
This will add another action to this list.
You can also select an Assignee for each corrective action and appropriate Files if necessary.
Witnesses
Click the Witnesses tab to bring up the following screen. Click Add Witness to create a new witness for this incident.
This will bring up the Incident Witness Form where you can fill in the following:
- Witness Full Name
- Company Name
- Phone Number
- Statement
You can also Add any appropriate Files by clicking Add File and Select the Witness from your networked Users by clicking Select Witness. This also does not have to be a networked user and you can fill in the name of anybody to Save this.
Once you have filled in the name of the Witness you can click Save to save this witness.
You can also delete the Witness at any time by clicking the Bin icon in the bottom right hand corner.
Notes
Click the Notes tab to bring up the following screen. Click Add Note to create a new witness for this incident.
This will bring up the Incident Note Form
You can write in your note in the Value text box.
Once you have filled in the text box you can either Save your note for this incident or Delete the note using the Bin icon on the lower right hand corner of the form.
NOTE- for more information on how to create a incident, please see the following article:
If you need any further help or have any questions please contact the support team by email [email protected] or Ph: 0800 748 763 | https://docs.sitesoft.com/how-to-manage-an-incident | 2022-01-16T20:20:50 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://play.vidyard.com/mT83Kqmwzyi85hgUSoFrHC.jpg',
'Managing Incidents'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Incidens%20tab.png?width=195&name=Incidens%20tab.png',
'Incidens tab'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Manage%20Incident.png?width=688&name=Manage%20Incident.png',
'Manage Incident'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Incident%20tabs.png?width=688&name=Incident%20tabs.png',
'Incident tabs'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Files%20and%20Affected%20Users.png?width=688&name=Files%20and%20Affected%20Users.png',
'Files and Affected Users'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/New%20LTI%20Record.png?width=688&name=New%20LTI%20Record.png',
'New LTI Record'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/LTI%20forms.png?width=688&name=LTI%20forms.png',
'LTI forms'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Health%20Provider.png?width=688&name=Health%20Provider.png',
'Health Provider'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/ACC%20Case%20Manager.png?width=688&name=ACC%20Case%20Manager.png',
'ACC Case Manager'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Return%20to%20work%20notes.png?width=688&name=Return%20to%20work%20notes.png',
'Return to work notes'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Calander%20Return%20to%20work.png?width=560&name=Calander%20Return%20to%20work.png',
'Calander Return to work'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Time%20Records%20appears.png?width=688&name=Time%20Records%20appears.png',
'Time Records appears'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Time%20Records%20menu.png?width=688&name=Time%20Records%20menu.png',
'Time Records menu'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/LTI%20Time%20Records%20Set%20up.png?width=688&name=LTI%20Time%20Records%20Set%20up.png',
'LTI Time Records Set up'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/tick.png?width=545&name=tick.png',
'tick'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/LTI%20Days%20and%20Hours.png?width=647&name=LTI%20Days%20and%20Hours.png',
'LTI Days and Hours'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Save%20LTI%20Time%20Record.png?width=688&name=Save%20LTI%20Time%20Record.png',
'Save LTI Time Record'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Save%20Incident-1.png?width=688&name=Save%20Incident-1.png',
'Save Incident-1'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Manage%20files%20LTI.png?width=688&name=Manage%20files%20LTI.png',
'Manage files LTI'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Investigation%20menu.png?width=688&name=Investigation%20menu.png',
'Investigation menu'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Investogator%20and%20Reviewer.png?width=688&name=Investogator%20and%20Reviewer.png',
'Investogator and Reviewer'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Employee%20list%20Investigator.png?width=688&name=Employee%20list%20Investigator.png',
'Employee list Investigator'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Investigator%20name.png?width=515&name=Investigator%20name.png',
'Investigator name'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Reviewer%20add.png?width=411&name=Reviewer%20add.png',
'Reviewer add'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Add%20cause.png?width=688&name=Add%20cause.png',
'Add cause'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Incident%20Causes%20Form.png?width=589&name=Incident%20Causes%20Form.png',
'Incident Causes Form'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Add%20Corrective%20Actipn.png?width=197&name=Add%20Corrective%20Actipn.png',
'Add Corrective Actipn'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Two%20actions.png?width=582&name=Two%20actions.png',
'Two actions'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Select%20Assignee.png?width=506&name=Select%20Assignee.png',
'Select Assignee'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Add%20Witness.png?width=688&name=Add%20Witness.png',
'Add Witness'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Incident%20Witness%20Form.png?width=685&name=Incident%20Witness%20Form.png',
'Incident Witness Form'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Ssledct%20Witness.png?width=530&name=Ssledct%20Witness.png',
'Ssledct Witness'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Save%20witness.png?width=509&name=Save%20witness.png',
'Save witness'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Delete%20witness.png?width=674&name=Delete%20witness.png',
'Delete witness'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Add%20NOte.png?width=688&name=Add%20NOte.png',
'Add NOte'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Incident%20Note%20Form.png?width=688&name=Incident%20Note%20Form.png',
'Incident Note Form'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Save%20or%20deletge%20note.png?width=688&name=Save%20or%20deletge%20note.png',
'Save or deletge note'], dtype=object) ] | docs.sitesoft.com |
!
Vagrant Cloud Authentication Features Guide
1. Log in to Vagrant Cloud1. Log in to Vagrant Cloud
Log in to your Vagrant Cloud Account
2. Account Settings2. Account Settings
Once logged in, please navigate to your Account Settings page to access security features
3. Enabling Two Factor Authentication3. Enabling Two Factor Authentication
Vagrant cloud gives you the option to enable Two Factor Authentication either using SMS or TOTP single use code after you enter your username & password during login.
Generally speaking, using TOTP is safer since it does not require sending data over a network to use Select that option and make sure to have the Trusona App installed on your mobile device<<
5. Finalize5. Finalize
Enter the code from the app into the screen, then click to continue It should now show that Two Factor Authentication is enabled
Setup complete! The next time you log in to Vagrant Cloud and are prompted for a One-time passcode, you can use the Trusona app to log in.
You will also be prompted to save backup codes for account access should you not have access to the app. Make sure to store them someplace securely. | https://docs.trusona.com/totp/vagrant_cloud/ | 2022-01-16T18:46:39 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.trusona.com/images/totp-integration-images/vagrant_cloud/scan.png',
'Scanning the code'], dtype=object) ] | docs.trusona.com |
public class BatchStatement extends Statement
Statementso they get executed as a batch.
Note: BatchStatement is not supported with the native protocol version 1: you will get an
UnsupportedFeatureException whenHost, getLastHost, getNowInSeconds, getOutgoingPayload, getPartitioner, getReadTimeoutMillis, getRetryPolicy, getSerialConsistencyLevel, isBatchIdempotent, isLWT, isTracing, setConsistencyLevel, setDefaultTimestamp, setFetchSize, setHost, setIdempotent, setLastHost, setNowInSeconds,)
This is a shortcut method that calls
add(com.datastax.driver.core.Statement) on all the statements from
statements.
statements- the statements to add.
public Collection<Statement> getStatements()
public BatchStatement clear()
BatchStatement.
public int size() BatchStatement setSerialConsistencyLevel(ConsistencyLevel serialConsistency)
This is only supported with version 3 or higher of the native protocol. If you call this
method when version 2 is in use, you will get an
UnsupportedFeatureException when)(). | https://java-driver.docs.scylladb.com/stable/api/com/datastax/driver/core/BatchStatement.html | 2022-01-16T18:16:04 | CC-MAIN-2022-05 | 1642320300010.26 | [] | java-driver.docs.scylladb.com |
Manage publishing time and deadline¶
In the dashboard for an assignment you can see and edit the publish time and the deadline time. When creating a new assignment the publish time is by default 6 hours from the time of creation. On the dashboard you can chose to publish an assignment now, or set the time to be sometime in the future.
From the dashboard you can also manage the general deadline for the assignment. When setting this if affects all groups unless you have given a group another deadline as described in Manage deadline
| https://devilry.readthedocs.io/en/master/user/admin/assignment/admin_manage_publish_deadline_assignment.html | 2022-01-16T18:18:57 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['../../../_images/admin-manage-publish-deadline-assignment-page.png',
'../../../_images/admin-manage-publish-deadline-assignment-page.png'],
dtype=object) ] | devilry.readthedocs.io |
Outline Properties
Microsoft Project has an outline structure that lets users get a quick overview of a project. Aspose.Tasks for .NET supports this functionality and lets developers control the outline number - where the task appears in a hierarchy - and the outline level - which level of the hierarchy the task is in.
Working with Outline Properties
The Tsk class exposes the OutlineNumber and OutlineLevel properties for managing outlines associated with a class:
- OutlineNumber (string).
- OutlineLevel (integer).
Outlines in Microsoft Project
In Microsoft Project, outline number and outline level properties can be viewed on the Task Entry form by adding the columns:
- On the Insert menu, select columns.
- Add the OutlineNumber and OutlineLevel columns.
Getting Outline Properties in Aspose.Tasks
The following example shows how to get the outline level and outline number information about a task using Aspose.Tasks. | https://docs.aspose.com/tasks/net/outline-properties/ | 2022-01-16T19:34:13 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.aspose.com |
Metrics¶
The Basics¶
Resources within OpenStack create numerous metrics that can be used to monitor the state of these resources. These metrics are stored in a scale-out metrics platform, providing for high-speed, scalable storage for massive metrics collection.
In the Genesis Public Cloud, most metrics are captured every 30 seconds which are then aggregated over time. 30 second measures are kept for 35 days, 5 minute measures are kept for 105 days, and 1 hour metrics are kept for 750 days.
The storage hierarchy is:
- Resources - OpenStack objects such as servers, networks, routers, load balancers, etc.
- Metrics - Object metrics such as a server’s cpu usage, memory usage, and vcpu count
- Measures - Metric values measured every 30 seconds
Resources¶
The resource ID in the metrics database is the same as the OpenStack resource ID. For example, if a server’s ID is 9d2810dd-63d2-4bef-a19f-2e38e8d4f925, the metric’s resource ID is also 9d2810dd-63d2-4bef-a19f-2e38e8d4f925.
To view a list of all resources in a project, run the following:
openstack metric resource list
The types of resources that are collected can be found using:
openstack metric resource-type list
The most common resource types include:
ceph_account floating_ip image instance instance_disk instance_network_interface network volume router
To filter the list of resources to a particular resource type (such as a VM, also called an instance), run the following:
openstack metric resource list --type instance
Metrics¶
Resources have metrics whose values are measured every 30 seconds. To view the metrics associated with a resource use the following:
openstack metric resource show <resource id>
For example, if you have a server with id 588858cf-cc5a-4a95-8ef1-74dc12f5d2fe, list the respective metrics using:
openstack metric resource show 588858cf-cc5a-4a95-8ef1-74dc12f5d2fe
which will produce:
+-----------------------+-------------------------------------------------------------------+ | Field | Value | +-----------------------+-------------------------------------------------------------------+ | created_by_project_id | f0a94de0d6184e10814660d470721c31 | | created_by_user_id | 39ef552637e74e32abdf25dd7454231f | | creator | 39ef552637e74e32abdf25dd7454231f:f0a94de0d6184e10814660d470721c31 | | ended_at | None | | id | 588858cf-cc5a-4a95-8ef1-74dc12f5d2fe | | metrics | cpu: 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2 | | | disk.root.size: 6ce00ba2-0c6a-4bb3-9d24-23b4ee291bb9 | | | memory.usage: 18316275-0a54-48da-8362-dd1c3be48a1b | | | powered_on_instance: 8e684330-bc4a-482f-aaf1-4e1d0607619a | | | vcpus: 8c4fe067-f2ca-4e18-9549-f1bfe06a5ed4 | | original_resource_id | 588858cf-cc5a-4a95-8ef1-74dc12f5d2fe | | project_id | 5e79a78de75c4cbba82bd26d60119ccf | | revision_end | None | | revision_start | 2019-11-17T13:00:30.690133+00:00 | | started_at | 2019-11-17T12:34:41.869975+00:00 | | type | instance | | user_id | abae2cd0ddc642c8b05ab95c4ff0697c | +-----------------------+-------------------------------------------------------------------+
To view the properties of a metric, such as the “cpu” metric above, run:
openstack metric show 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2
This will produce:
+--------------------------------+-------------------------------------------------------------------+ | Field | Value | +--------------------------------+-------------------------------------------------------------------+ | archive_policy/name | ceilometer-high | | creator | 39ef552637e74e32abdf25dd7454231f:f0a94de0d6184e10814660d470721c31 | | id | 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2 | | name | cpu | | resource/created_by_project_id | f0a94de0d6184e10814660d470721c31 | | resource/created_by_user_id | 39ef552637e74e32abdf25dd7454231f | | resource/creator | 39ef552637e74e32abdf25dd7454231f:f0a94de0d6184e10814660d470721c31 | | resource/ended_at | None | | resource/id | 588858cf-cc5a-4a95-8ef1-74dc12f5d2fe | | resource/original_resource_id | 588858cf-cc5a-4a95-8ef1-74dc12f5d2fe | | resource/project_id | 5e79a78de75c4cbba82bd26d60119ccf | | resource/revision_end | None | | resource/revision_start | 2019-11-17T13:00:30.690133+00:00 | | resource/started_at | 2019-11-17T12:34:41.869975+00:00 | | resource/type | instance | | resource/user_id | abae2cd0ddc642c8b05ab95c4ff0697c | | unit | ns | +--------------------------------+-------------------------------------------------------------------+
The “unit” field indicates that this metric’s cpu is measures in nanoseconds, meaning the number of nanoseconds of CPU time that has passed.
Measures¶
To view the measured values for the cpu metric above, run:
openstack metric measures show 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2
Depending on how long the server has been powered on, this list may be quite long.
To narrow down the list of measures, specify the start and stop date:
openstack metric measures show 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2 \ --start "2019-11-18" \ --stop "2019-11-19"
The number of nanoseconds that have passed is not super useful - rather it would be better to retrieve the nanoseconds between each measure. OpenStack’s metric system will return the difference using the –aggregation parameter. For example:
openstack metric measures show 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2 \ --start "2019-11-18" \ --stop "2019-11-19" \ --aggregation rate:mean
To view less granular measures, you can choose from 300 seconds or 3600 second granularities, such as:
openstack metric measures show 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2 \ --start "2019-11-18" \ --stop "2019-11-19" \ --granularity 300
The aggregation method works with the granularity, so both can be used:
openstack metric measures show 1ef0ab22-1332-4e7e-b30f-ecb5eb12c7d2 \ --start "2019-11-18" \ --stop "2019-11-19" \ --aggregation rate:mean \ --granularity 300
However, what you get is the “mean” value of the 10 x 30-second intervals over the 300 seconds, which might not be what you are looking for. You may need to multiply the results by 10 since there are 10 x 30-second intervals in 300 seconds to get an accurate result. | https://docs.genesishosting.com/genesis-public-cloud/metrics.html | 2022-01-16T20:10:15 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.genesishosting.com |
Downloads for ThoughtSpot
If you are looking for ThoughtSpot clients or API files, you’ve come to the right place. Click to download the driver you need, or link to the appropriate guide.
JDBC Drivers
ThoughtSpot provides the following JDBC drivers:
JDK 1.8
JDK 1.7
JDK 1.6
See JDBC Driver Overview on instructions for installing and configuring JDBC drivers.
ODBC Drivers
JavaScript API
For the JavaScript API, see the JavaScript API library. | https://docs.thoughtspot.com/software/6.1/downloads.html | 2022-01-16T18:26:40 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.thoughtspot.com |
Set the relay host for SMTP (email)
ThoughtSpot uses emails to send critical notifications to ThoughtSpot Support. A relay host for SMTP traffic routes the alert and notification emails coming from ThoughtSpot through an SMTP email server.
You can configure the relay host using tscli or through the Admin Console..
$ tscli monitoring set-config --email <prod-alerts@thoughtspot.
Additional resources
As you develop your expertise in emails and alerts, we recommend the following ThoughtSpot U course:
See other training resources at: | https://docs.thoughtspot.com/software/6.1/set-up-relay-host.html | 2022-01-16T19:42:28 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['_images/admin-portal-smtp.png', 'Admin Console - SMTP'],
dtype=object)
array(['_images/admin-portal-smtp-configure.png', 'Configure SMTP'],
dtype=object) ] | docs.thoughtspot.com |
Date: Sun, 16 Jan 2022 18:24:44 +0000 (GMT) Message-ID: <226060457.105174.1642357484289@9c5033e110b2> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_105173_2010893370.1642357484289" ------=_Part_105173_2010893370.1642357484289 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This example illustrates how you can apply the following functio= ns to generate new and random data in your dataset:
RANDBETWEEN- Generate a random Integer value between two = specified Integers. See RAND= BETWEEN Function.
PI- Generate the value of pi to 15 decimal points. See PI Function.
ROUND- Round a decimal value to the nearest Integer or to= a specified number of digits. See ROUND.
Transform:
To begin, you can use the following steps to generate the area and circu= mference for each product, rounded to three decimal points:=20
derive type:single value: ROUND(PI()= * (POW(radius_in, 2)), 3) as: 'area_sqin'=20
derive type:single value: ROUND(PI()= * (2 * radius_in), 3) as: 'circumference_in'
For quality purposes, the company needs two tests points along the circu=
mference, which are generated by calculating two separate random locations =
along the circumference. Since the
RANDBETWEEN function only c=
alculates using Integer values, you must first truncate the values from
derive type:single value: TRUNC(circ= umference_in) as: 'trunc_circumference_in'
Then, you can calculate the random points using the following:=20
derive type:single value: RANDBETWEE= N(0, trunc_circumference_in) as: 'testPt01_in'=20
derive type:single value: RANDBETWEE= N(0, trunc_circumference_in) as: 'testPt02_in'
Results:
After the
trunc_circumference_in column is dropped, the dat=
a should look similar to the following: | https://docs.trifacta.com/exportword?pageId=136167568 | 2022-01-16T18:24:44 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.trifacta.com |
The final step is to open a pull request on Bitbucket. Go back to the landing page for our fork and look at the sidebar again. You will see some new options:
This time, click the Create Pull Request link. You’ll be taken to a page that automatically summarises the outgoing changesets and shows you a diff of them (a ‘diff’ shows the differences or changes between two versions). You can also add a description (a good idea!) and any reviewers:
Finally, you’ll be redirected to your pull request. You can share this link with anyone you want, including others on your own team so that they can contribute: | https://docs.unity3d.com/2020.3/Documentation/Manual/ContributingPullRequest.html | 2022-01-16T20:22:16 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.unity3d.com |
Web Preview tool
This button enables you to see a preview of what the form will look like when displayed by the Web client. The preview does not take into account any active DVD on the form.
Click Web Preview to display the form in the Web Preview tab.
Grid view tool
Selection tool | https://docs.microfocus.com/SM/9.60/Codeless/Content/tailor/forms_creation/reference/web_preview.htm | 2022-01-16T19:53:25 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.microfocus.com |
Use cases
Collect payments
Note: Only business accounts have the ability to collect payments. If you’re looking to collect funds from other accounts, this guide covers the flow for your use case.
Facilitate account-to-account transfers
Overview If you’re looking for a way to facilitate a payment between two accounts on your platform (what we describe as an account-to-account transfer), this guide will cover the flow for your use case.
Pay out money
At Moov, we think of payouts as any instance where you need to send funds as compensation (e.g., you need to pay an independent contractor or service provider).
Transfer funds to yourself
When the same person or company is on both sides of a transfer, we call that a self-to-self transfer. Some examples of self-to-self transfers include:
Transfer wallet-to-wallet
Wallet-to-wallet transfers provide flexibility for how and when you access funds. Some examples of when a wallet-to-wallet transfer is useful: | https://docs.moov.io/use-cases/ | 2022-01-16T18:09:04 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.moov.io |
Modifying modify a domain.
- In Enterprise Designer, go to .
- Click the Domains tab.
- Select a domain in the list and then click Modify. The Modify Domain dialog box displays.
- Change the description information.
- If you only want to modify the description of the domain, click OK. If you have made updates to the template domain and now want to add those changes to the domain you are modifying, then continue to the next step.
- Select Use another domain as a template to inherit changes made to the domain template.
- Select a domain pattern template from the list. When you click OK in the next step, the domain pattern will be modified. The modified domain pattern will contain all of the culture-specific parsing grammars defined in the domain pattern template that you selected. Any parsing grammar in the selected domain pattern will be overwritten with the parsing grammar from the domain pattern template.
- Click OK.
To see how this works, do the following:
- Create a domain pattern named NameParsing and define parsing grammars for Global Culture, en, and en-US.
- Create a domain pattern named NameParsing2 and use NameParsing as a domain pattern template. NameParsing2 is created as an exact copy and contains parsing grammars for Global Culture, en, and en-US.
- Modify the culture-specific parsing grammars for NameParsing by changing some of the grammar rules in the Global Culture grammar and add en-CA as a new culture.
- Select NameParsing2 on the Domains tab, click Modify, and again use NameParsing as the domain pattern template.
The results will be:
- The Global Culture parsing grammar will be updated (overwriting your changes if any have been made).
- The cultures en and en-US will remain the same (unless they have been modified in the target domain, in which case they would then revert back to the Name Parsing version).
- A culture-specific grammar for en-CA will be added. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/DNM/source/DomainEditor/DomainEditor_Domains_Modifying.html | 2022-01-16T18:33:55 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.precisely.com |
When the defined ESCAPE character is in the pattern string, it must be immediately followed by an underscore, percent sign, or another ESCAPE character.
In a left-to-right scan of the pattern string the following rules apply when ESCAPE is specified:
- Until an instance of the ESCAPE character occurs, characters in the pattern are interpreted at face value.
- When an ESCAPE character immediately follows another ESCAPE character, the two character sequence is treated as though it were a single instance of the ESCAPE character, considered as a normal character.
- When an underscore metacharacter immediately follows an ESCAPE character, the sequence is treated as a single underscore character (not a wildcard character).
- When a percent metacharacter immediately follows an ESCAPE character, the sequence is treated as a single percent character (not a wildcard character).
- When an ESCAPE character is not immediately followed by an underscore metacharacter, a percent metacharacter, or another instance of itself, the scan stops and an error is reported. | https://docs.teradata.com/r/ITFo5Vgf23G87xplzLhWTA/X66B~W2a0xeokbrYtyYrPw | 2022-01-16T18:54:09 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.teradata.com |
Ventures & Adventures
Search…
Help Documentation
UPC Code Manager
Getting Started
Announcements
Poppy - Account Creation Popup
Getting Started
Enabling Customer Accounts
Settings
Poppy - FAQ
Philosophy
GitBook
Enabling Customer Accounts
Poppy is a
Shopify
Account Creation Popup.
In order to get the most value out of Poppy, it is recommended that you enable customer accounts for your Store.
Customer accounts for your store can be set to one of three options:
Accounts are disabled
- Customers will only be able to check out as guests.
Accounts are optional
- Customers will be able to check out with a customer account or as a guest.
Accounts are required
- Customers will only be able to check out if they have a customer account.
In order to get the most value out of Poppy, it is recommended that you set your store to one of either "
Accounts are optional
" or "
Accounts are required
" in your store's Checkout settings.
If you set your store's checkout settings to
Accounts are disabled
Poppy will not send a welcome email to your customers prompting them to finish setting up their account.
You can set your account settings in your store's Checkout settings screen. From within Shopify, navigate to Settings --> Checkout and select either "
Accounts are optional
" or "
Accounts are required
"
To configure account settings in Shopify Admin select "Settings" and then "Checkout"
Poppy - Account Creation Popup - Previous
Getting Started
Next - Poppy - Account Creation Popup
Settings
Last modified
1yr ago
Copy link | https://docs.ventures-adventures.com/poppy-exit-intent-popup/poppy-getting-started/enabling-customer-accounts | 2022-01-16T19:46:11 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.ventures-adventures.com |
Configuration quick reference¶
Setting configuration options¶
You can set an option by:
Passing it on the command line with the switch version (like
--some-option)
Passing it as a keyword argument to the runner constructor, if you are creating the runner programmatically
Putting it in one of the included config files under a runner name, like this:
runners: local: python_bin: python3.6 # only used in local runner emr: python_bin: python3 # only used in Elastic MapReduce runner
See Config file format and location for information on where to put config files.
Options that can’t be set from mrjob.conf (all runners)¶
There are some options that it makes no sense to set in the config file.
These options can be set via command-line switches:
These options can be set by overriding attributes or methods in your job class:
These options can be set by overriding your job’s
configure_args() to call the appropriate method:
All of the above can be passed as keyword arguments to
MRJobRunner.__init__()
(this is what makes them runner options), but you usually don’t want to
instantiate runners directly.
Other options for all runners¶
These options can be passed to any runner without an error, though some runners may ignore some options. See the text after the table for specifics.
LocalMRJobRunner takes no additional options, but:
- bootstrap_mrjob is
Falseby default
- cmdenv uses the local system path separator instead of
:all the time (so
;on Windows, no change elsewhere)
- python_bin defaults to the current Python interpreter
In addition, it ignores hadoop_input_format, hadoop_output_format, hadoop_streaming_jar, and jobconf
InlineMRJobRunner works like
LocalMRJobRunner, only it also ignores
bootstrap_mrjob, cmdenv, python_bin,
upload_archives, and upload_files. | https://mrjob.readthedocs.io/en/stable/guides/configs-reference.html | 2022-01-16T19:40:20 | CC-MAIN-2022-05 | 1642320300010.26 | [] | mrjob.readthedocs.io |
Resolving error creating smart device projects in Visual Studio 2005:
-. | https://docs.microsoft.com/en-us/archive/blogs/astebner/resolving-error-creating-smart-device-projects-in-visual-studio-2005 | 2020-03-28T22:15:52 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
Working with Security Alerts
This article explains the basics of how to work with Azure ATP security alerts.
Review security alerts on the attack timeline
After logging in to the Azure ATP portal, you're automatically taken to the open Security Alerts Timeline. Security alerts are listed in chronological order, with the newest alert on the top of the timeline.
Each security alert has the following information:
Entities involved, including users, computers, servers, domain controllers, and resources.
Times and time frame of the suspicious activities which initiated the security alert.
Severity of the alert: High, Medium, or Low.
Status: Open, closed, or suppressed.
Ability to:
Share the security alert with other people in your organization via email.
Download the security alert in Excel format.
Note
- When you hover your mouse over a user or computer, a mini entity profile is displayed. The mini-profile provides additional information about the entity and includes the number of security alerts that the entity is linked to.
- Clicking on an entity, takes you to the entity profile of the user or computer.
Security alert categories
Azure ATP security alerts are divided into the following categories or phases, like the phases seen in a typical cyber-attack kill chain.
- Reconnaissance alerts
- Compromised credential alerts
- Lateral movement alerts
- Domain dominance alerts
- Exfiltration alerts
Preview detections
The Azure ATP research team constantly works on implementing new detections for newly discovered attacks. Because Azure ATP is a cloud service, new detections are released quickly to enable Azure ATP customers to benefit from new detections as soon as possible.
These detections are tagged with a preview badge, to help you identify the new detections and know that they are new to the product. If you turn off preview detections, they will not be displayed in the Azure ATP console - not in the timeline or in entity profiles - and new alerts won’t be opened.
By default, preview detections are enabled in Azure ATP.
To disable preview detections:
- In the Azure ATP console, click the settings cog.
- In the left menu, under Preview, click Detections.
- Use the slider to turn the preview detections on and off.
Filter security alerts list
To filter the security alert list:
In the Filter by pane on the left side of the screen, select one of the following options: All, Open, Closed, or Suppressed.
To further filter the list, select High, Medium, or Low.
Suspicious activity severity
Low
Indicates activities that can lead to attacks designed for malicious users or software to gain access to organizational data.
Medium
Indicates activities that can put specific identities at risk for more severe attacks that could result in identity theft or privileged escalation
High
Indicates activities that can lead to identity theft, privilege escalation, or other high-impact attacks
Managing security alerts
You can change the status of a security alert by clicking the current status of the security alert and selecting one of the following Open, Suppressed, Closed, or Deleted. To do this, click the three dots at the top right corner of a specific alert to reveal the list of available actions.
Security alert status
Open: All new security alerts appear in this list.
Close: Is used to track security alerts that you identified, researched, and fixed for mitigated.
Suppress: Suppressing an alert means you want to ignore it for now, and only be alerted again if there's a new instance. This means that if there's a similar alert Azure ATP doesn't reopen it. But if the alert stops for seven days, and is then seen again, a new alert is opened.
Delete: If you Delete an alert, it is deleted from the system, from the database and you will NOT be able to restore it. After you click delete, you'll be able to delete all security alerts of the same type.
Exclude: The ability to exclude an entity from raising more of a certain type of alerts. For example, you can set Azure ATP to exclude a specific entity (user or computer) from alerting again for a certain type of activity, such as a specific admin who runs remote code or a security scanner that does DNS reconnaissance. In addition to being able to add exclusions directly on the security alert as it is detected in the time line, you can also go to the Configuration page to Exclusions, and for each security alert you can manually add and remove excluded entities or subnets (for example for Pass-the-Ticket).
Note
The configuration pages can only be modified by Azure ATP admins.
See Also
Feedback | https://docs.microsoft.com/en-us/azure-advanced-threat-protection/working-with-suspicious-activities | 2020-03-28T20:55:42 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['media/atp-sa-timeline.png',
'Azure ATP security alerts timeline image'], dtype=object)
array(['media/preview-detection-vpn.png', 'preview detection vpn'],
dtype=object)
array(['media/preview-detections.png', 'preview detections'], dtype=object)
array(['media/atp-sa-actions.png',
'Azure ATP Actions for security alerts'], dtype=object)] | docs.microsoft.com |
Builtin features¶
This section describes built-in features of the LMIShell.
Configuration file¶
The LMIShell has a tiny configuration file with location ~/.lmishellrc. In configuration file, you can set these properties:
# location of the history used by interactive mode history_file = "~/.lmishell_history" # length of history file, -1 for unlimited history_length = -1 # default value for cache usage use_cache = True # default value for exceptions use_exceptions = False # default value for indication_cert_file indication_cert_file = "" # default value for indication_key_file indication_key_file = ""
NOTE: indication_cert_file and indication_key_file are used by Synchronous methods, if the given method waits for an indication using LMIIndicationListener. Both configuration options may contain path to X509 certificate and private key in PEM format, respectively. If the configuration options are not set, SSL connection will not be used.
Inspecting a script¶
If you want to inspect a script after it has been interpreted by the LMIShell, run this:
$ lmishell -i some_script.lmi # some stuff done >
NOTE: Prefered extension of LMIShell’s scripts is .lmi.
LMI Is Instance¶
LMIShell is able to verify, if a LMIInstance or LMIInstanceName object passed to lmi_isinstance() is a instance of LMIClass.
The function is similar to python’s isinstance():
> lmi_isinstance(inst, cls) True/False >
LMI Associators¶
LMIShell can speed up associated objects’ traversal by manual joining, instead of calling LMIInstance.associators(). The call needs to get a list of association classes, for which the referenced objects will be joined. The list must contain objects of LMIClass.
See following example:
> associators = lmi_associators(list_of_association_classes) > | https://openlmi.readthedocs.io/en/latest/openlmi-tools/shell/builtins.html | 2020-03-28T20:45:20 | CC-MAIN-2020-16 | 1585370493120.15 | [] | openlmi.readthedocs.io |
Release Notes¶
20.0.2 (2020-01-24)¶
20.0.1 (2020-01-21)¶
20.0 (2020-01-21)¶
Deprecations and Removals¶
Remove wheel tag calculation from pip and use
packaging.tags. This should provide more tags ordered better than in prior releases. (#6908)
Deprecate setup.py-based builds that do not generate an
.egg-infodirectory. (#6998)
The pip>=20 wheel cache is not retro-compatible with previous versions. Until pip 21.0, pip will continue to take advantage of existing legacy cache entries. (#7296)
Deprecate undocumented
--skip-requirements-regexoption. (#7297)
Deprecate passing install-location-related options via
--install-option. (#7309)
Use literal “abi3” for wheel tag on CPython 3.x, to align with PEP 384 which only defines it for this platform. (#7327)
Remove interpreter-specific major version tag e.g.
cp3-none-anyfrom consideration. This behavior was not documented strictly, and this tag in particular is not useful. Anyone with a use case can create an issue with pypa/packaging. (#7355)
Wheel processing no longer permits wheels containing more than one top-level .dist-info directory. (#7487)
Support for the
git+git@form of VCS requirement is being deprecated and will be removed in pip 21.0. Switch to
git+
git+ssh://.
git+git://also works but its use is discouraged as it is insecure. (#7543)
Features¶
Default to doing a user install (as if
--userwas passed) when the main site-packages directory is not writeable and user site-packages are enabled. (#1668)
Warn if a path in PATH starts with tilde during
pip install. (#6414)
Cache wheels built from Git requirements that are considered immutable, because they point to a commit hash. (#6640)
Add option
--no-python-version-warningto silence warnings related to deprecation of Python versions. (#6673)
Cache wheels that
pip wheelbuilt locally, matching what
pip installdoes. This particularly helps performance in workflows where
pip wheelis used for building before installing. Users desiring the original behavior can use
pip wheel --no-cache-dir. (#6852)
Display CA information in
pip debug. (#7146)
Show only the filename (instead of full URL), when downloading from PyPI. (#7225)
Suggest a more robust command to upgrade pip itself to avoid confusion when the current pip command is not available as
pip. (#7376)
Define all old pip console script entrypoints to prevent import issues in stale wrapper scripts. (#7498)
The build step of
pip wheelnow builds all wheels to a cache first, then copies them to the wheel directory all at once. Before, it built them to a temporary direcory and moved them to the wheel directory one by one. (#7517)
Expand
~prefix to user directory in path options, configs, and environment variables. Values that may be either URL or path are not currently supported, to avoid ambiguity:
--find-links
--constraint,
-c
--requirement,
-r
-
Bug Fixes¶
Correctly handle system site-packages, in virtual environments created with venv (PEP 405). (#5702, #7155)
Fix case sensitive comparison of pip freeze when used with -r option. (#5716)
Enforce PEP 508 requirement format in
pyproject.toml
build-system.requires. (#6410)
Make
ensure_dir()also ignore
ENOTEMPTYas seen on Windows. (#6426)
Fix building packages which specify
backend-pathin pyproject.toml. (#6599)
Do not attempt to run
setup.py cleanafter a
pep517build error, since a
setup.pymay not exist in that case. (#6642)
Fix passwords being visible in the index-url in “Downloading <url>” message. (#6783)
Change method from shutil.remove to shutil.rmtree in noxfile.py. (#7191)
Skip running tests which require subversion, when svn isn’t installed (#7193)
Fix not sending client certificates when using
--trusted-host. (#7207)
Make sure
pip wheelnever outputs pure python wheels with a python implementation tag. Better fix/workaround for #3025 by using a per-implementation wheel cache instead of caching pure python wheels with an implementation tag in their name. (#7296)
Include
subdirectoryURL fragments in cache keys. (#7333)
Fix typo in warning message when any of
--build-option,
--global-optionand
--install-optionis used in requirements.txt (#7340)
Fix the logging of cached HTTP response shown as downloading. (#7393)
Effectively disable the wheel cache when it is not writable, as is the case with the http cache. (#7488)
Correctly handle relative cache directory provided via --cache-dir. (#7541)
Vendored Libraries¶
Upgrade CacheControl to 0.12.5
Upgrade certifi to 2019.9.11
Upgrade colorama to 0.4.1
Upgrade distlib to 0.2.9.post0
Upgrade ipaddress to 1.0.22
Update packaging to 20.0.
Upgrade pkg_resources (via setuptools) to 44.0.0
Upgrade pyparsing to 2.4.2
Upgrade six to 1.12.0
Upgrade urllib3 to 1.25.6
Improved Documentation¶
Document that “coding: utf-8” is supported in requirements.txt (#7182)
Explain how to get pip’s source code in Getting Started (#7197)
Describe how basic authentication credentials in URLs work. (#7201)
Add more clear installation instructions (#7222)
Fix documentation links for index options (#7347)
Better document the requirements file format (#7385)
19.3.1 (2019-10-17)¶
19.3 (2019-10-14)¶
Deprecations and Removals¶
Features¶
Print a better error message when
--no-binaryor
--only-binaryis given an argument starting with
-. (#3191)
Make
pip showwarn about packages not found. (#6858)
Support including a port number in
--trusted-hostfor both HTTP and HTTPS. (#6886)
Redact single-part login credentials from URLs in log messages. (#6891)
Implement manylinux2014 platform tag support. manylinux2014 is the successor to manylinux2010. It allows carefully compiled binary wheels to be installed on compatible Linux platforms. The manylinux2014 platform tag definition can be found in PEP599. (#7102)
Bug Fixes¶
Abort installation if any archive contains a file which would be placed outside the extraction location. (#3907)
pip’s CLI completion code no longer prints a Traceback if it is interrupted. (#3942)
Correct inconsistency related to the
hg+filescheme. (#4358)
Fix
rmtree_errorhandlerto skip non-existing directories. (#4910)
Ignore errors copying socket files for local source installs (in Python 3). (#5306)
Fix requirement line parser to correctly handle PEP 440 requirements with a URL pointing to an archive file. (#6202)
The
pip-wheel-metadatadirectory does not need to persist between invocations of pip, use a temporary directory instead of the current
setup.pydirectory. (#6213)
Fix
--trusted-hostprocessing under HTTPS to trust any port number used with the host. (#6705)
Switch to new
distlibwheel script template. This should be functionally equivalent for end users. (#6763)
Skip copying .tox and .nox directories to temporary build directories (#6770)
Fix handling of tokens (single part credentials) in URLs. (#6795)
Fix a regression that caused
~expansion not to occur in
--find-linkspaths. (#6804)
Fix bypassed pip upgrade warning on Windows. (#6841)
Fix ‘m’ flag erroneously being appended to ABI tag in Python 3.8 on platforms that do not provide SOABI (#6885)
Hide security-sensitive strings like passwords in log messages related to version control system (aka VCS) command invocations. (#6890)
Correctly uninstall symlinks that were installed in a virtualenv, by tools such as
flit install --symlink. (#6892)
Don’t fail installation using pip.exe on Windows when pip wouldn’t be upgraded. (#6924)
Use canonical distribution names when computing
Required-Byin
pip show. (#6947)
Don’t use hardlinks for locking selfcheck state file. (#6954)
Ignore “require_virtualenv” in
pip config(#6991)
Fix
pip freezenot showing correct entry for mercurial packages that use subdirectories. (#7071)
Fix a crash when
sys.stdinis set to
None, such as on AWS Lambda. (#7118, #7119)
Vendored Libraries¶
Upgrade certifi to 2019.9.11
Add contextlib2 0.6.0 as a vendored dependency.
Remove Lockfile as a vendored dependency.
Upgrade msgpack to 0.6.2
Upgrade packaging to 19.2
Upgrade pep517 to 0.7.0
Upgrade pyparsing to 2.4.2
Upgrade pytoml to 0.1.21
Upgrade setuptools to 41.4.0
Upgrade urllib3 to 1.25.6
19.2.3 (2019-08-25)¶
19.2.2 (2019-08-11)¶
19.2.1 (2019-07-23)¶
19.2 (2019-07-22)¶
Deprecations and Removals¶
Features¶
Credentials will now be loaded using keyring when installed. (#5948)
Fully support using
--trusted-hostinside requirements files. (#3799)
Update timestamps in pip’s
--logfile to include milliseconds. (#6587)
Respect whether a file has been marked as “yanked” from a simple repository (see PEP 592 for details). (#6633)
When choosing candidates to install, prefer candidates with a hash matching one of the user-provided hashes. (#5874)
Improve the error message when
METADATAor
PKG-INFOis None when accessing metadata. (#5082)
Add a new command
pip debugthat can display e.g. the list of compatible tags for the current Python. (#6638)
Display hint on installing with --pre when search results include pre-release versions. (#5169)
Report to Warehouse that pip is running under CI if the
PIP_IS_CIenvironment variable is set. (#5499)
Allow
--python-versionto be passed as a dotted version string (e.g.
3.7or
3.7.3). (#6585)
Log the final filename and SHA256 of a
.whlfile when done building a wheel. (#5908)
Include the wheel’s tags in the log message explanation when a candidate wheel link is found incompatible. (#6121)
Add a
--pathargument to
pip freezeto support
--targetinstallations. (#6404)
Add a
--pathargument to
pip listto support
--targetinstallations. (#6551)
Bug Fixes¶
Set
sys.argv[0]to the underlying
setup.pywhen invoking
setup.pyvia the setuptools shim so setuptools doesn’t think the path is
-c. (#1890)
Update
pip downloadto respect the given
--python-versionwhen checking
"Requires-Python". (#5369)
Respect
--global-optionand
--install-optionwhen installing from a version control url (e.g.
git). (#5518)
Make the “ascii” progress bar really be “ascii” and not Unicode. (#5671)
Fail elegantly when trying to set an incorrectly formatted key in config. (#5963)
Prevent DistutilsOptionError when prefix is indicated in the global environment and --target is used. (#6008)
Fix
pip installto respect
--ignore-requires-pythonwhen evaluating links. (#6371)
Fix a debug log message when freezing an editable, non-version controlled requirement. (#6383)
Extend to Subversion 1.8+ the behavior of calling Subversion in interactive mode when pip is run interactively. (#6386)
Prevent
pip install <url>from permitting directory traversal if e.g. a malicious server sends a
Content-Dispositionheader with a filename containing
../or
..\\. (#6413)
Hide passwords in output when using
--find-links. (#6489)
Include more details in the log message if
pip freezecan’t generate a requirement string for a particular distribution. (#6513)
Add the line number and file location to the error message when reading an invalid requirements file in certain situations. (#6527)
Prefer
os.confstrto
ctypeswhen extracting glibc version info. (#6543, #6675)
Improve error message printed when an invalid editable requirement is provided. (#6648)
Improve error message formatting when a command errors out in a subprocess. (#6651)
Vendored Libraries¶
Upgrade certifi to 2019.6.16
Upgrade distlib to 0.2.9.post0
Upgrade msgpack to 0.6.1
Upgrade requests to 2.22.0
Upgrade urllib3 to 1.25.3
Patch vendored html5lib, to prefer using collections.abc where possible.
Improved Documentation¶
Document how Python 2.7 support will be maintained. (#6726)
Upgrade Sphinx version used to build documentation. (#6471)
Fix generation of subcommand manpages. (#6724)
Mention that pip can install from git refs. (#6512)
Replace a failing example of pip installs with extras with a working one. (#4733)
19.1.1 (2019-05-06)¶
Features¶
19.1 (2019-04-23)¶
Features¶
Configuration files may now also be stored under
sys.prefix(#5060)
Avoid creating an unnecessary local clone of a Bazaar branch when exporting. (#5443)
Include in pip’s User-Agent string whether it looks like pip is running under CI. (#5499)
A custom (JSON-encoded) string can now be added to pip’s User-Agent using the
PIP_USER_AGENT_USER_DATAenvironment variable. (#5549)
For consistency, passing
--no-cache-dirno longer affects whether wheels will be built. In this case, a temporary directory is used. (#5749)
Command arguments in
subprocesslog messages are now quoted using
shlex.quote(). (#6290)
Prefix warning and error messages in log output with WARNING and ERROR. (#6298)
Using
--build-optionsin a PEP 517 build now fails with an error, rather than silently ignoring the option. (#6305)
Error out with an informative message if one tries to install a
pyproject.toml-style (PEP 517) source tree using
--editablemode. (#6314)
When downloading a package, the ETA and average speed now only update once per second for better legibility. (#6319)
Bug Fixes¶
The stdout and stderr from VCS commands run by pip as subprocesses (e.g.
git,
hg, etc.) no longer pollute pip’s stdout. (#1219)
Fix handling of requests exceptions when dependencies are debundled. (#4195)
Make pip’s self version check avoid recommending upgrades to prereleases if the currently-installed version is stable. (#5175)
Fixed crash when installing a requirement from a URL that comes from a dependency without a URL. (#5889)
Improve handling of file URIs: correctly handle… and don’t try to use UNC paths on Unix. (#5892)
Fix
utils.encoding.auto_decode()
LookupErrorwith invalid encodings.
utils.encoding.auto_decode()was broken when decoding Big Endian BOM byte-strings on Little Endian or vice versa. (#6054)
Fix incorrect URL quoting of IPv6 addresses. (#6285)
Redact the password from the extra index URL when using
pip -v. (#6295)
The spinner no longer displays a completion message after subprocess calls not needing a spinner. It also no longer incorrectly reports an error after certain subprocess calls to Git that succeeded. (#6312)
Fix the handling of editable mode during installs when
pyproject.tomlis present but PEP 517 doesn’t require the source tree to be treated as
pyproject.toml-style. (#6370)
Fix
NameErrorwhen handling an invalid requirement. (#6419)
Vendored Libraries¶
Updated certifi to 2019.3.9
Updated distro to 1.4.0
Update progress to 1.5
Updated pyparsing to 2.4.0
Updated pkg_resources to 41.0.1 (via setuptools)
19.0.3 (2019-02-20)¶
19.0.2 (2019-02-09)¶
Bug Fixes¶
Fix a crash where PEP 517-based builds using
--no-cache-dirwould fail in some circumstances with an
AssertionErrordue to not finalizing a build directory internally. (#6197)
Provide a better error message if attempting an editable install of a directory with a
pyproject.tomlbut no
setup.py. (#6170)
The implicit default backend used for projects that provide a
pyproject.tomlfile without explicitly specifying
build-backendnow behaves more like direct execution of
setup.py, and hence should restore compatibility with projects that were unable to be installed with
pip19.0. This raised the minimum required version of
setuptoolsfor such builds to 40.8.0. (#6163)
Allow
RECORDlines with more than three elements, and display a warning. (#6165)
AdjacentTempDirectoryfails on unwritable directory instead of locking up the uninstall command. (#6169)
Make failed uninstalls roll back more reliably and better at avoiding naming conflicts. (#6194)
Ensure the correct wheel file is copied when building PEP 517 distribution is built. (#6196)
The Python 2 end of life warning now only shows on CPython, which is the implementation that has announced end of life plans. (#6207)
19.0.1 (2019-01-23)¶
19.0 (2019-01-22)¶
Deprecations and Removals¶
Deprecate support for Python 3.4 (#6106)
Start printing a warning for Python 2.7 to warn of impending Python 2.7 End-of-life and prompt users to start migrating to Python 3. (#6148)
Remove the deprecated
--process-dependency-linksoption. (#6060)
Remove the deprecated SVN editable detection based on dependency links during freeze. (#5866)
Features¶
Implement PEP 517 (allow projects to specify a build backend via pyproject.toml). (#5743)
Implement manylinux2010 platform tag support. manylinux2010 is the successor to manylinux1. It allows carefully compiled binary wheels to be installed on compatible Linux platforms. (#5008)
Improve build isolation: handle
.pthfiles, so namespace packages are correctly supported under Python 3.2 and earlier. (#5656)
Include the package name in a freeze warning if the package is not installed. (#5943)
Warn when dropping an
--[extra-]index-urlvalue that points to an existing local directory. (#5827)
Prefix pip’s
--logfile lines with their timestamp. (#6141)
Bug Fixes¶
Avoid creating excessively long temporary paths when uninstalling packages. (#3055)
Redact the password from the URL in various log messages. (#4746, #6124)
Avoid creating excessively long temporary paths when uninstalling packages. (#3055)
Avoid printing a stack trace when given an invalid requirement. (#5147)
Present 401 warning if username/password do not work for URL (#4833)
Handle
requests.exceptions.RetryErrorraised in
PackageFinderthat was causing pip to fail silently when some indexes were unreachable. (#5270, #5483)
Handle a broken stdout pipe more gracefully (e.g. when running
pip list | head). (#4170)
Fix crash from setting
PIP_NO_CACHE_DIR=yes. (#5385)
Fix crash from unparseable requirements when checking installed packages. (#5839)
Fix content type detection if a directory named like an archive is used as a package source. (#5838)
Fix listing of outdated packages that are not dependencies of installed packages in
pip list --outdated --not-required(#5737)
Fix sorting
TypeErrorin
move_wheel_files()when installing some packages. (#5868)
Fix support for invoking pip using
python src/pip .... (#5841)
Greatly reduce memory usage when installing wheels containing large files. (#5848)
Editable non-VCS installs now freeze as editable. (#5031)
Editable Git installs without a remote now freeze as editable. (#4759)
Canonicalize sdist file names so they can be matched to a canonicalized package name passed to
pip install. (#5870)
Properly decode special characters in SVN URL credentials. (#5968)
Make
PIP_NO_CACHE_DIRdisable the cache also for truthy values like
"true",
"yes",
"1", etc. (#5735)
Vendored Libraries¶
Include license text of vendored 3rd party libraries. (#5213)
Update certifi to 2018.11.29
Update colorama to 0.4.1
Update distlib to 0.2.8
Update idna to 2.8
Update packaging to 19.0
Update pep517 to 0.5.0
Update pkg_resources to 40.6.3 (via setuptools)
Update pyparsing to 2.3.1
Update pytoml to 0.1.20
Update requests to 2.21.0
Update six to 1.12.0
Update urllib3 to 1.24.1
Improved Documentation¶
18.1 (2018-10-05)¶
Features¶
Allow PEP 508 URL requirements to be used as dependencies.
As a security measure, pip will raise an exception when installing packages from PyPI if those packages depend on packages not also hosted on PyPI. In the future, PyPI will block uploading packages with such external URL dependencies directly. (#4187)
Allows dist options (--abi, --python-version, --platform, --implementation) when installing with --target (#5355)
Support passing
svn+sshURLs with a username to
pip install -e. (#5375)
pip now ensures that the RECORD file is sorted when installing from a wheel file. (#5525)
Add support for Python 3.7. (#5561)
Malformed configuration files now show helpful error messages, instead of tracebacks. (#5798)
Bug Fixes¶
Checkout the correct branch when doing an editable Git install. (#2037)
Run self-version-check only on commands that may access the index, instead of trying on every run and failing to do so due to missing options. (#5433)
Allow a Git ref to be installed over an existing installation. (#5624)
Show a better error message when a configuration option has an invalid value. (#5644)
Always revalidate cached simple API pages instead of blindly caching them for up to 10 minutes. (#5670)
Avoid caching self-version-check information when cache is disabled. (#5679)
Avoid traceback printing on autocomplete after flags in the CLI. (#5751)
Fix incorrect parsing of egg names if pip needs to guess the package name. (#5819)
Vendored Libraries¶
Upgrade certifi to 2018.8.24
Upgrade packaging to 18.0
Upgrade pyparsing to 2.2.1
Add pep517 version 0.2
Upgrade pytoml to 0.1.19
Upgrade pkg_resources to 40.4.3 (via setuptools)
Improved Documentation¶
Fix “Requirements Files” reference in User Guide (#user_guide_fix_requirements_file_ref)
18.0 (2018-07-22)¶
Process¶
Switch to a Calendar based versioning scheme.
Formally document our deprecation process as a minimum of 6 months of deprecation warnings.
Adopt and document NEWS fragment writing style.
Switch to releasing a new, non-bug fix version of pip every 3 months.
Deprecations and Removals¶
Remove the legacy format from pip list. (#3651, #3654)
Dropped support for Python 3.3. (#3796)
Remove support for cleaning up #egg fragment postfixes. (#4174)
Remove the shim for the old get-pip.py location. (#5520)
For the past 2 years, it’s only been redirecting users to use the newer location.
Features¶
Introduce a new --prefer-binary flag, to prefer older wheels over newer source packages. (#3785)
Improve autocompletion function on file name completion after options which have
<file>,
<dir>or
<path>as metavar. (#4842, #5125)
Add support for installing PEP 518 build dependencies from source. (#5229)
Improve status message when upgrade is skipped due to only-if-needed strategy. (#5319)
Bug Fixes¶
Update pip’s self-check logic to not use a virtualenv specific file and honor cache-dir. (#3905)
Remove compiled pyo files for wheel packages. (#4471)
Speed up printing of newly installed package versions. (#5127)
Restrict install time dependency warnings to directly-dependant packages. (#5196, #5457)
Warning about the entire package set has resulted in users getting confused as to why pip is printing these warnings.
Improve handling of PEP 518 build requirements: support environment markers and extras. (#5230, #5265)
Remove username/password from log message when using index with basic auth. (#5249)
Remove trailing os.sep from PATH directories to avoid false negatives. (#5293)
Fix “pip wheel pip” being blocked by the “don’t use pip to modify itself” check. (#5311, #5312)
Disable pip’s version check (and upgrade message) when installed by a different package manager. (#5346)
This works better with Linux distributions where pip’s upgrade message may result in users running pip in a manner that modifies files that should be managed by the OS’s package manager.
Check for file existence and unlink first when clobbering existing files during a wheel install. (#5366)
Improve error message to be more specific when no files are found as listed in as listed in PKG-INFO. (#5381)
Always read
pyproject.tomlas UTF-8. This fixes Unicode handling on Windows and Python 2. (#5482)
Fix a crash that occurs when PATH not set, while generating script location warning. (#5558)
Disallow packages with
pyproject.tomlfiles that have an empty build-system table. (#5627)
Vendored Libraries¶
Update CacheControl to 0.12.5.
Update certifi to 2018.4.16.
Update distro to 1.3.0.
Update idna to 2.7.
Update ipaddress to 1.0.22.
Update pkg_resources to 39.2.0 (via setuptools).
Update progress to 1.4.
Update pytoml to 0.1.16.
Update requests to 2.19.1.
Update urllib3 to 1.23.
10.0.1 (2018-04-19)¶
Features¶
Switch the default repository to the new “PyPI 2.0” running at. (#5214)
Bug Fixes¶
Fix a bug that made get-pip.py unusable on Windows without renaming. (#5219)
Fix a TypeError when loading the cache on older versions of Python 2.7. (#5231)
Fix and improve error message when EnvironmentError occurs during installation. (#5237)
A crash when reinstalling from VCS requirements has been fixed. (#5251)
Fix PEP 518 support when pip is installed in the user site. (#5524)
10.0.0 (2018-04-14)¶
Bug Fixes¶
Prevent false-positive installation warnings due to incomplete name normalization. (#5134)
Fix issue where installing from Git with a short SHA would fail. (#5140)
Accept pre-release versions when checking for conflicts with pip check or pip install. (#5141)
ioctl(fd, termios.TIOCGWINSZ, ...)needs 8 bytes of data (#5150)
Do not warn about script location when installing to the directory containing sys.executable. This is the case when ‘pip install’ing without activating a virtualenv. (#5157)
Fix PEP 518 support. (#5188)
Don’t warn about script locations if
--targetis specified. (#5203)
10.0.0b2 (2018-04-02)¶
10.0.0b1 (2018-03-31)¶
Deprecations and Removals¶
Removed the deprecated
--eggparameter to
pip install. (#1749)
Removed support for uninstalling projects which have been installed using distutils. distutils installed projects do not include metadata indicating what files belong to that install and thus it is impossible to actually uninstall them rather than just remove the metadata saying they’ve been installed while leaving all of the actual files behind. (#2386)
Removed the deprecated
--downloadoption to
pip install. (#2643)
Removed the deprecated --(no-)use-wheel flags to
pip installand
pip wheel. (#2699)
Removed the deprecated
--allow-external,
--allow-all-external, and
--allow-unverifiedoptions. (#3070)
Switch the default for
pip listto the columns format, and deprecate the legacy format. (#3654, #3686)
Deprecate support for Python 3.3. (#3796)
Removed the deprecated
--default-vcsoption. (#4052)
Removed the
setup.py testsupport from our sdist as it wasn’t being maintained as a supported means to run our tests. (#4203)
Dropped support for Python 2.6. (#4343)
Removed the --editable flag from pip download, as it did not make sense (#4362)
Deprecate SVN detection based on dependency links in
pip freeze. (#4449)
Move all of pip’s APIs into the pip._internal package, properly reflecting the fact that pip does not currently have any public APIs. (#4696, #4700)
Features¶
Add --progress-bar <progress_bar> to
pip download,
pip installand
pip wheelcommands, to allow selecting a specific progress indicator or, to completely suppress, (for example in a CI environment) use
--progress-bar off`. (#2369, #2756)
Add --no-color to pip. All colored output is disabled if this flag is detected. (#2449)
pip uninstall now ignores the absence of a requirement and prints a warning. (#3016, #4642)
Improved the memory and disk efficiency of the HTTP cache. (#3515)
Support for packages specifying build dependencies in pyproject.toml (see PEP 518). Packages which specify one or more build dependencies this way will be built into wheels in an isolated environment with those dependencies installed. (#3691)
pip now supports environment variable expansion in requirement files using only
${VARIABLE}syntax on all platforms. (#3728)
Allowed combinations of -q and -v to act sanely. Then we don’t need warnings mentioned in the issue. (#4008)
Add --exclude-editable to
pip freezeand
pip listto exclude editable packages from installed package list. (#4015, #4016)
Improve the error message for the common
pip install ./requirements.txtcase. (#4127)
Add support for the new
@ urlsyntax from PEP 508. (#4175)
Add setuptools version to the statistics sent to BigQuery. (#4209)
Report the line which caused the hash error when using requirement files. (#4227)
Add a pip config command for managing configuration files. (#4240)
Allow
pip downloadto be used with a specific platform when
--no-depsis set. (#4289)
Support build-numbers in wheel versions and support sorting with build-numbers. (#4299)
Change pip outdated to use PackageFinder in order to do the version lookup so that local mirrors in Environments that do not have Internet connections can be used as the Source of Truth for latest version. (#4336)
pip now retries on more HTTP status codes, for intermittent failures. Previously, it only retried on the standard 503. Now, it also retries on 500 (transient failures on AWS S3), 520 and 527 (transient failures on Cloudflare). (#4473)
pip now displays where it is looking for packages, if non-default locations are used. (#4483)
Display a message to run the right command for modifying pip on Windows (#4490)
Add Man Pages for pip (#4491)
Make uninstall command less verbose by default (#4493)
Switch the default upgrade strategy to be ‘only-if-needed’ (#4500)
Installing from a local directory or a VCS URL now builds a wheel to install, rather than running
setup.py install. Wheels from these sources are not cached. (#4501)
Don’t log a warning when installing a dependency from Git if the name looks like a commit hash. (#4507)
pip now displays a warning when it installs scripts from a wheel outside the PATH. These warnings can be suppressed using a new --no-warn-script-location option. (#4553)
Local Packages can now be referenced using forward slashes on Windows. (#4563)
pip show learnt a new Required-by field that lists currently installed packages that depend on the shown package (#4564)
The command-line autocompletion engine
pip shownow autocompletes installed distribution names. (#4749)
Change documentation theme to be in line with Python Documentation (#4758)
Add auto completion of short options. (#4954)
Run ‘setup.py develop’ inside pep518 build environment. (#4999)
pip install now prints an error message when it installs an incompatible version of a dependency. (#5000)
Added a way to distinguish between pip installed packages and those from the system package manager in ‘pip list’. Specifically, ‘pip list -v’ also shows the installer of package if it has that meta data. (#949)
Show install locations when list command ran with “-v” option. (#979)
Bug Fixes¶
Allow pip to work if the
GIT_DIRand
GIT_WORK_TREEenvironment variables are set. (#1130)
Make
pip install --force-reinstallnot require passing
--upgrade. (#1139)
Return a failing exit status when pip install, pip download, or pip wheel is called with no requirements. (#2720)
Interactive setup.py files will no longer hang indefinitely. (#2732, #4982)
Correctly reset the terminal if an exception occurs while a progress bar is being shown. (#3015)
“Support URL-encoded characters in URL credentials.” (#3236)
Don’t assume sys.__stderr__.encoding exists (#3356)
Fix
pip uninstallwhen
easy-install.pthlacks a trailing newline. (#3741)
Keep install options in requirements.txt from leaking. (#3763)
pip no longer passes global options from one package to later packages in the same requirement file. (#3830)
Support installing from Git refs (#3876)
Use pkg_resources to parse the entry points file to allow names with colons. (#3901)
-qspecified once correctly sets logging level to WARNING, instead of CRITICAL. Use -qqq to have the previous behavior back. (#3994)
Shell completion scripts now use correct executable names (e.g.,
pip3instead of
pip) (#3997)
Changed vendored encodings from
utf8to
utf-8. (#4076)
Fixes destination directory of data_files when
pip install --targetis used. (#4092)
Limit the disabling of requests’ pyopenssl to Windows only. Fixes “SNIMissingWarning / InsecurePlatformWarning not fixable with pip 9.0 / 9.0.1” (for non-Windows) (#4098)
Support the installation of wheels with non-PEP 440 version in their filenames. (#4169)
Fall back to sys.getdefaultencoding() if locale.getpreferredencoding() returns None in pip.utils.encoding.auto_decode. (#4184)
Fix a bug where SETUPTOOLS_SHIM got called incorrectly for relative path requirements by converting relative paths to absolute paths prior to calling the shim. (#4208)
Return the latest version number in search results. (#4219)
Improve error message on permission errors (#4233)
Fail gracefully when
/etc/image_version(or another distro version file) appears to exists but is not readable. (#4249)
Avoid importing setuptools in the parent pip process, to avoid a race condition when upgrading one of setuptools dependencies. (#4264)
Fix for an incorrect
freezewarning message due to a package being included in multiple requirements files that were passed to
freeze. Instead of warning incorrectly that the package is not installed, pip now warns that the package was declared multiple times and lists the name of each requirements file that contains the package in question. (#4293)
Generalize help text for
compile/
no-compileflags. (#4316)
Handle the case when
/etcis not readable by the current user by using a hardcoded list of possible names of release files. (#4320)
Fixed a
NameErrorwhen attempting to catch
FileNotFoundErroron Python 2.7. (#4322)
Ensure USER_SITE is correctly initialised. (#4437)
Reinstalling an editable package from Git no longer assumes that the
masterbranch exists. (#4448)
This fixes an issue where when someone who tries to use git with pip but pip can’t because git is not in the path environment variable. This clarifies the error given to suggest to the user what might be wrong. (#4461)
Improve handling of text output from build tools (avoid Unicode errors) (#4486)
Fix a “No such file or directory” error when using --prefix. (#4495)
Allow commands to opt out of --require-venv. This allows pip help to work even when the environment variable PIP_REQUIRE_VIRTUALENV is set. (#4496)
Fix warning message on mismatched versions during installation. (#4655)
pip now records installed files in a deterministic manner improving reproducibility. (#4667)
Fix an issue where
pip install -eon a Git url would fail to update if a branch or tag name is specified that happens to match the prefix of the current
HEADcommit hash. (#4675)
Fix an issue where a variable assigned in a try clause was accessed in the except clause, resulting in an undefined variable error in the except clause. (#4811)
Use log level info instead of warning when ignoring packages due to environment markers. (#4876)
Replaced typo mistake in subversion support. (#4908)
Terminal size is now correctly inferred when using Python 3 on Windows. (#4966)
Abort if reading configuration causes encoding errors. (#4976)
Add a
--no-useroption and use it when installing build dependencies. (#5085)
Vendored Libraries¶
Upgraded appdirs to 1.4.3.
Upgraded CacheControl to 0.12.3.
Vendored certifi at 2017.7.27.1.
Vendored chardet at 3.0.4.
Upgraded colorama to 0.3.9.
Upgraded distlib to 0.2.6.
Upgraded distro to 1.2.0.
Vendored idna at idna==2.6.
Upgraded ipaddress to 1.0.18.
Vendored msgpack-python at 0.4.8.
Removed the vendored ordereddict.
Upgraded progress to 1.3.
Upgraded pyparsing to 2.2.0.
Upgraded pytoml to 0.1.14.
Upgraded requests to 2.18.4.
Upgraded pkg_resources (via setuptools) to 36.6.0.
Upgraded six to 1.11.0.
Vendored urllib3 at 1.22.
Upgraded webencodings to 0.5.1.
9.0.3 (2018-03-21)¶
Fix an error where the vendored requests was not correctly containing itself to only the internal vendored prefix.
Restore compatibility with 2.6.
9.0.2 (2018-03-16)¶
Fallback to using SecureTransport on macOS when the linked OpenSSL is too old to support TLSv1.2.intooption.
6.0.5 (2015-01-03)¶
Fix a regression with 6.0.4 under Windows where most commands would raise an exception due to Windows not having the
os.geteuid()function.
6.0.4 (2015-01-03)¶.
6.0.1 (2014-12-22)¶. issuesURLs.OS Framework layout installs
Fixed bug preventing uninstall of editables with source outside venv.
Creates download cache directory if not existing.
0.5¶option to install ignore package dependencies
Added
--no-indexoption¶
Make
-ework¶
Added an option
--install-optiontowhich) | https://pip.readthedocs.io/en/stable/news/ | 2020-03-28T20:38:11 | CC-MAIN-2020-16 | 1585370493120.15 | [] | pip.readthedocs.io |
public class PathResourceResolver extends AbstractResourceResolver
ResourceResolverthat tries to find a resource under the given locations matching to the request path.
This resolver does not delegate to the
ResourceResolverChain and is
expected to be configured at the end in a chain of resolvers.
logger
resolveResource, resolveUrlPath
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public PathResourceResolver()
public void setAllowedLocations(@Nullable Resource... locations)
CssLinkResourceTransformerresolves public URLs of links it contains, the CSS file is the location and the resources being resolved are css files, images, fonts and others located in adjacent or parent directories.
This property allows configuring a complete list of locations under which resources must be so that if a resource is not under the location relative to which it was found, this list may be checked as well.
By default
ResourceWebHandler initializes this property
to match its list of locations.
locations- the list of allowed locations
@Nullable public Resource[] getAllowedLocations()
protected reactor.core.publisher.Mono<Resource> resolveResourceInternal(@Nullable ServerWebExchange exchange, String requestPath, List<? extends Resource> locations, ResourceResolverChain chain)
resolveResourceInternalin class
AbstractResourceResolver
protected reactor.core.publisher.Mono<String> resolveUrlPathInternal(String path, List<? extends Resource> locations, ResourceResolverChain chain)
resolveUrlPathInternalin class
AbstractResourceResolver
protected reactor.core.publisher.Mono<Resource> getResource(String resourcePath, Resource location)
The default implementation checks if there is a readable
Resource for the given path relative to the location.
resourcePath- the path to the resource
location- the location to check
Monoif none found
protected boolean checkResource(Resource resource, Resource location) throws IOException
allowed locations.
resource- the resource to check
location- the location relative to which the resource was found
IOException | https://docs.spring.io/spring/docs/5.1.9.RELEASE/javadoc-api/org/springframework/web/reactive/resource/PathResourceResolver.html | 2020-03-28T21:50:30 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.spring.io |
Deprecation: #73482 - $LANG->csConvObj and $LANG->parserFactory¶
See Issue #73482
Description¶
The properties of LanguageService (also known as
$GLOBALS[LANG]) csConvObj and parserFactory
have been marked as deprecated. Since these three PHP classes are not dependent on each other, they
can be decoupled. The getter method
getParserFactory() has thus been marked as deprecated as well.
Impact¶
These properties will be removed in TYPO3 v9. Calling
LanguageService->getParserFactory() will trigger a
deprecation log entry.
Affected Installations¶
Installations with custom extension accessing the LanguageService properties and method above. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/8.0/Deprecation-73482-LANG-csConvObjAndLANG-parserFactory.html | 2020-03-28T20:23:33 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.typo3.org |
Default Gateway/Route¶
In the past (VyOS 1.1) used a gateway-address configured under the system tree
(
set system gateway-address <address>), this is no longer supported
and existing configurations are migrated to the new CLI command.
Configuration¶
Specify static route into the routing table sending all non local traffic to the nexthop address <address>.
Operation¶
Show routing table entry for the default route.
[email protected]:~$ show ip route 0.0.0.0 Routing entry for 0.0.0.0/0 Known via "static", distance 10, metric 0, best Last update 09:46:30 ago * 172.18.201.254, via eth0.201 | https://docs.vyos.io/en/latest/system/default-route.html | 2020-03-28T20:00:35 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.vyos.io |
{ lateinit var net: MockNetwork lateinit var a: MockNetwork.MockNode lateinit var b: MockNetwork.MockNode lateinit var notary: Party @Before fun setup() { net = MockNetwork() val nodes = net.createSomeNodes() a = nodes.partyNodes[0] b = nodes.partyNodes[1] notary = nodes.notaryNode.info.notaryIdentity net.runNetwork() } @After fun tearDown() { net = ResolveTransactionsFlow(setOf(stx2.id), a.info.legalIdentity) val future = b.services.startFlow(p).resultFuture net.runNetwork() val results = future.getOrThrow() assertEquals(listOf(stx1.id, stx2.id), results.map { it.id }) b.database.transaction { assertEquals(stx1, b.storage.validatedTransactions.getTransaction(stx1.id)) assertEquals(stx2, b.storage node A
but not node B.
The test logic is simple enough: we create the flow, giving it node A’s identity as the target to talk to.
Then we start it on node B and use the
net node B, MEGA_CORP.ref(1)).let { if (withAttachment != null) it.addAttachment(withAttachment) if (signFirstTX) it.signWith(MEGA_CORP_KEY) it.signWith(DUMMY_NOTARY_KEY) it.toSignedTransaction(false) } val dummy2: SignedTransaction = DummyContract.move(dummy1.tx.outRef(0), MINI_CORP_PUBKEY).let { it.signWith(MEGA_CORP_KEY) it.signWith(DUMMY_NOTARY_KEY) it.toSignedTransaction() } a.database.transaction { a.services.recordTransactions(dummy1, node A by sending them
directly to the
a.
And that’s it: you can explore the documentation for the MockNetwork API here. | https://docs.corda.net/flow-testing.html | 2017-05-22T17:16:12 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.corda.net |
Installation¶
You can use django-fancypages as standalone app in your Django project or you can integrate it with your django-oscar shop using the included extension module. In the following sections, the standalone setup of django-fancypages will be referred to as FP and the Oscar integration as OFP.
#Most of the installation steps are exactly the same for both so let’s #go through these steps first. After you have completed them, follow the
Note
The two sandbox sites in FP show an example integration with django-oscar and as standalone project. They both use django-configurations maintained by the awesome Jannis Leidel to make dealing with Django settings much simpler. Using it is not a requirement for django-fancypages it’s just a personal preference. The following settings explain the setup using the basic Django settings.py but I recommend checking out django-configurations.
Installing Fancypages¶
For both FP and OFP, you have to install the python package django-fancypages which is available on PyPI and can be installed with:
$ pip install django-fancypages
or you can install the latest version directly from the github repo:
$ pip install git+
Standalone Setup¶
Let’s start with adding all required apps to you INSTALLED_APPS. FP relies on several third-party apps in addition to the fancypages app itself. For convenience, FP provides two functions get_required_apps and get_fancypages_apps that make it easy to add all apps in one additional line of code:
from fancypages import get_required_apps, get_fancypages_apps INSTALLED_APPS = [ ... ] + get_required_apps() + get_fancypages_apps()
Note
FP supports Django 1.7 which replaces South migrations with a new migration system integrated in Django. The fancypages.migrations module containse the new-style migrations and will only work for Django 1.7+. For Django 1.5 and 1.6, you have to add south to your installed apps and specify an alternative migrations module in the SOUTH_MIGRATION_MODULES settings. Add the following to your settings when using either of these versions:
SOUTH_MIGRATION_MODULES = { 'fancypages': "fancypages.south_migrations", }
It will then behave in exactly the same way as before.', )
Fancypages requires several default settings to be added. To make sure that you have all the default settings in your settings, you can use the defaults provided by fancypages itself. Add the following in your settings file before you overwrite specific settings:
... from fancypages.defaults import * # override the defaults here (if required) ...
Finally, you have to add URLs to your urls.py to make the fancypages dashboard and all FP-enabled pages available on your sight. FP uses a very broad matching of URLs to ensure that you can have nicely nested URLs with your pages. This will match all URLs it encounters, so make sure that you add them as the very last entry in your URL patterns:
urlpatterns = patterns('', ... url(r'^', include('fancypages.urls')), )
If you would like the home page of your project to be an FP-enabled page as well, you have to add one additional URL pattern:
urlpatterns = patterns('', url(r'^$', views.HomeView.as_view(), name='home'), ... url(r'^', include('fancypages.urls')), )
This view behaves slightly different from a regular FancyPageView: if no FancyPage instance exists with the name Home (and the corresponding slug home), this page will be created automatically as a “Draft” page. Make sure that you publish the page to be able to see it as non-admin user.
Setup Alongside Oscar¶
Note
The following instructions assume that you have Oscar set up succesfully by following Oscar’s documentation. Addressing Oscar-specific set up details aren’t considered here. We recommend that you take a close look at Oscar’s documentation before continuing.
Setting up django-fancypages alongside your django-oscar shop is very similar to the standalone setup. You also have to add extra apps to your INSTALLED_APPS and once again, you can use the convenience function provided by fancypages. Note that we pass use_with_oscar=True to ensure that the fancypages.contrib.oscar_fancypages app is added:
from fancypages import get_required_apps, get_fancypages_apps INSTALLED_APPS = [ ... ] + fp.get_required_apps() \ + fp.get_fancypages_apps(use_with_oscar=True) \ + get_core_apps()
Note
Once again, FP ships the new-style migrations for Django 1.7+ by default. If you are using Django 1.5 or 1.6, you have to make sure that you have south in your INSTALLED_APPS and add the following setting to point to the alternative South migrations:
SOUTH_MIGRATION_MODULES = { 'fancypages': "fancypages.south_migrations", 'oscar_fancypages': 'fancypages.contrib.oscar_fancypages.south_migrations', # noqa }
You can now use syncdb and migrate as you would normally.', )
Similar to the standalone setup, you have to import the default settings for FP in your settings.py. However, to make the integration with Oscar seamless, you have to set the FP_NODE_MODEL to Oscar’s Category model. The reason for this is, that categories in Oscar already provide a tree-structure on the site that we can leverage. Switching the page node from FP’s internal model to Oscar’s Category is as easy as:
... from fancypages.defaults import * FP_NODE_MODEL = 'catalogue.Category' FP_PAGE_DETAIL_VIEW = 'fancypages.contrib.oscar_fancypages.views.FancyPageDetailView' ...
In addition, you should integrate the page management dashboard with Oscar’s builtin dashboard. We recommend replacing the entry “Catalogue > Categories” with FP’s page management by replacing:
OSCAR_DASHBOARD_NAVIGATION = [ ... { 'label': _('Categories'), 'url_name': 'dashboard:catalogue-category-list', }, ... ]
with:
OSCAR_DASHBOARD_NAVIGATION = [ ... { 'label': _('Pages / Categories'), 'url_name': 'fp-dashboard:page-list', }, ... ]
This usually means, you have to copy the entire OSCAR_DASHBOARD_NAVIGATION dictionary from oscar.defaults to overwrite it with your own.
The last thing to configure is the URLs for the pages. Conceptually, a FancyPage is equivalent to a Category in Oscar, therefore, a FancyPage wraps the Category model and adds FP-specific behaviour. Therefore, we have to modify Oscar’s URLs to replace the category URLs with those for our FP pages. This sounds more complicated than it actually is:
from fancypages.app import application as fancypages_app from fancypages.contrib.oscar_fancypages import views from oscar.app import Shop from oscar.apps.catalogue.app import CatalogueApplication class FancyCatalogueApplication(CatalogueApplication): category_view = views.FancyPageDetailView class FancyShop(Shop): catalogue_app = FancyCatalogueApplication() urlpatterns = patterns('', ... url(r'', include(FancyShop().urls)), ... url(r'^', include(fancypages_app.urls)), )
All we are doing here is, replacing the CategoryView in Oscar with the FancyPageDetailView from OFP, which will display the same details as Oscar’s template.
Replacing the home page with a FP page works exactly the same way as described in Standalone Setup. | http://django-fancypages.readthedocs.io/installation.html | 2017-06-22T16:18:38 | CC-MAIN-2017-26 | 1498128319636.73 | [] | django-fancypages.readthedocs.io |
All of the text files read by the engine utilize a modified JSON format that we call SJSON (for simplified JSON).
The changes from standard JSON are:
emissiveMap = "textures/honeycomb.dds" scale_factor = 0.8 mask = 0xDEADBEEF evaluate = true code = [=[ this is "code" ]=] values = [ "one" "two" { name="three" type=float } ] | http://docs.futureperfectgame.com/sjson.html | 2017-06-22T16:30:37 | CC-MAIN-2017-26 | 1498128319636.73 | [] | docs.futureperfectgame.com |
Asynchronous Reply Router Configuration Reference
This page provides details on the elements you configure for asynchronous reply routers. This information is pulled directly from
mule.xsd and is cached. If the information appears to be out of date, refresh the page.
Single async reply router
Configures a Single Response Router. This will return the first message it receives on a reply endpoint and will discard the rest.
Collection async reply router
Configures a Collection Response Router. This will return a MuleMessageCollection message type that will contain all messages received for the current correlation. | https://docs.mulesoft.com/mule-user-guide/v/3.2/asynchronous-reply-router-configuration-reference | 2017-06-22T16:38:58 | CC-MAIN-2017-26 | 1498128319636.73 | [] | docs.mulesoft.com |
backend, which enables the use of OpenGL 3.x and 4.x features such as tessellation and geometry shaders.. | https://docs.unity3d.com/560/Documentation/Manual/OpenGLCoreDetails.html | 2017-06-22T16:35:26 | CC-MAIN-2017-26 | 1498128319636.73 | [] | docs.unity3d.com |
Genomic Association Tester (GAT)¶
Welcome to the home page of the Genomic Association Tester (GAT).
Overview¶
A common question in genomic analysis is whether two sets of genomic intervals overlap significantly. This question arises, for example, in the interpretation of ChIP-Seq or RNA-Seq data. Because of complex genome organization, its answer is non-trivial.
The Genomic Association Tester (GAT) is a tool for computing the significance of overlap between multiple sets of genomic intervals. GAT estimates significance based on simulation and can take into account genome organization like isochores and correct for regions of low mapability.
GAT accepts as input standard genomic file formats and can be used in large scale analyses, comparing the association of multiple sets of genomic intervals simultaneously. Multiple testing is controlled using the false discovery rate.
In this manual, the Introduction covers the basic concepts of GAT. In order to get an idea of typical use cases, see the Tutorials section. The Usage instructions section contains a complete usage reference.
Contents¶
- Introduction
- Installation
- Tutorials
- Usage instructions
- Interpreting GAT results
- Performance
- Background
- Glossary
- Release Notes
Developers’ notes¶
The following section contains notes for developers. | http://gat.readthedocs.io/en/latest/contents.html | 2017-06-22T16:18:42 | CC-MAIN-2017-26 | 1498128319636.73 | [] | gat.readthedocs.io |
Developer Guide¶
- Writing a WPS process
- Designing a process
- Writing Documentation
- Using Anaconda in birdhouse
- Using Buildout in birdhouse
- Python Packaging
- Python Code Style
- Coding Style using EditorConfig
Writing a WPS process¶
In birdhouse, we are using the PyWPS implementation of a Web Processing Service. Writing a WPS process in birdhouse is the same as in PyWPS. The PyWPS documentation has a tutorial on writing a process. Please follow this PyWPS tutorial.
To get started more easily, you can install Emu with some example processes for PyWPS.
Data production¶
WPS is designed to reduce data transport and enables data processing close to the data archive. Nevertheless, files are stored within birdhouse in a structured way. For designing a WPS process or process chain, the location of input, output and temporary files are illustrated as follows:
Resources, which are already on the local disc system (output by other processes or as locally stored data archives), are linked into the cache simply with a soft link to avoid data transport and disc space usage.
The locations are defined as follows:
- Resources: Any kind of accessable data such as ESGF, thredd server or files stored on the server-side disc system.
- Cache:
~/birdhouse/var/lib/pywps/cache/The cache is for external data which are not located on the server side. The files of the cache are separated by the birds performing the data fetch and keep the folder structure of the original data archive. Once a file is already in the cache, the data will not be refetched if a second request is made. The cache can be seen as a local data archive. Under productive usage of birdhouse, this folder is growing, since all requested external data are stored here.
- Working directory:
~/birdhouse/var/lib/pywps/tmp/Each process is running in a temporary folder (= working directory) which is removed after the process is successfully executed. Like the cache, the working directories are separated by birds. Resource files are linked into the directory.
- Output files:
~/birdhouse/var/lib/pywps/outputs/The output files are also stored in output folders separated by the birds producing the files. In the case of flyingpigeon, you can get the paths with:
from flyingpigeon import config output_path = config.output_path() # returns the output folder path outputUrl_path = config.outputUrl_path() # returns the URL address of the output folder
And in some special cases, static files are used (e.g. html files to provide general information). These files are located in the repository. In the case of flyingpigeon, they are located at:
./flyingpigeon/flyingpigeon/static/
and copied during the installation (or update) to:
~/birdhouse/var/www/
Designing a process¶
For designing a process it is necessary to know some basic concepts about how data are produced in birdhouse. The following are some basic explanations to help in developing appropriate processes to provide a scientific method as a service. The word process is used in the same sense as in the OGC standard: for any algorithm, calculation or model that either generates new data or transforms some input data into output data, and can be illustrated as follows:
The specific nature of web processing services is that processes can be described in a standardised way (see: Writing a WPS process). In the flyingpigeon repository, the process descriptions are located in:
./flyingpigeon/flyingpigeon/processes
As part of the process description there is an execute function:
def execute(self): # here starts the actual data processing import pythonlib from flyingpigeon import aflyingpigeonlib as afl result = afl.nicefunction(indata, parameter1=argument1, parameter2=argument2) self.output.setValue( result )
It is a recommended practice to separate the functions (the actual data processing) from the process description. This creates modularity and enables multiple usage of functions when designing several processes. The modules in flyingpigeon are located here:
./flyingpigeon/flyingpigeon
Generally, the execution of a process contains several processing steps, where temporary files and memory values are generated. Birdhouse runs each job in a separate folder, by default situated in:
~/birdhouse/var/lib/pywps/tmp/
This tmp folder is removed after job is successfully executed. To reuse temporary files, it is necessary to declare them as output files. Furthermore, during execution, there are steps which are necessary to be successfully performed and a result is called back. If this particular step fails, the whole process should exit with an appropriate error message, while in other cases it is not relevent for producing the final result. The following image shows a theoretical chain of functions:
In practice, the functions should be encapsulated in try and except calls and appropriate information given to the log file or shown as a status message:
The log file then looks like:
tail -f ~/birdhouse/var/log/pywps/flyingpigeon.log PyWPS [2016-09-14 11:49:13,819] INFO: Start ocgis module call function PyWPS [2016-09-14 11:49:13,820] INFO: Execute ocgis module call function PyWPS [2016-09-14 11:49:13,828] DEBUG: input has Lambert_Conformal projection and can not subsetted with geom PyWPS [2016-09-14 11:49:13,828] DEBUG: failed for point ['2.356138', ' 48.846450'] Validation failed on the parameter "uri" with the message: Cannot be None PyWPS [2016-09-14 11:49:13,993] INFO: Start ocgis module call function PyWPS [2016-09-14 11:49:13,994] INFO: Execute ocgis module call function PyWPS [2016-09-14 11:49:14,029] INFO: OcgOperations set PyWPS [2016-09-14 11:49:14,349] INFO: tas as variable dedected PyWPS [2016-09-14 11:49:14,349] INFO: data_mb = 0.0417938232422 ; memory_limit = 1660.33984375 PyWPS [2016-09-14 11:49:14,349] INFO: ocgis module call as ops.execute() PyWPS [2016-09-14 11:49:16,648] INFO: Succeeded with ocgis module call function
Logging information is written to the logfile depending on the ‘log-level’ settings in ~/custom.cfg
Another point to think about when designing a process is the possibility of chaining processes together. The result of a process can be a final result or be used as an input for another process. Chaining processes is a common practice but depends on the user you are designing the service for. Technically, for the development of WPS process chaining, here are a few summary points:
- the functional code should be modular and provide an interface/method for each single task
- provide a wps process for each task
- wps processes can be chained, manually or programmatically, to run a complete workflow
- wps chaining can be done manually, with workflow tools, direct wps chaining or with code scripts
- a complete workflow chain could also be started by a wps process.
In birdhouse, restflow and dispel4py are integrated, and WPS chaining is used in the wizard of phoenix. This WPS chain fetches data and runs a process (selected by the user) with the fetched data :
Here is a tutorial to follow: Chaining WPS processes.
or:
Writing Documentation¶
Documentation is written in ReStructuredText and generated with Sphinx. The birdhouse components use the Buildout recipe birdhousebuilder.recipe.sphinx which sets up Sphinx and a minimal
docs folder. With
make docs the documentation is generated locally. The documentation is published to Read the Docs with each commit to the master branch. The API reference is generated automatically using the Sphinx plugin AutoAPI.
Using Anaconda in birdhouse¶
The installation of the birdhouse components and especially the processes involve many software dependencies. The core dependencies are of course the WPS-related packages like PyWPS and OWSLib from the GeoPython project. But most dependencies come from the processes themselves served by the WPS, such as numpy, R, NetCDF, CDO, matplotlib, ncl, cdat, and many more.
The aim of birdhouse is to take care of all these dependencies so that the user does not need to install them manually. If these dependencies were only pure Python packages, then using the Buildout build tool, together with the Python package index PyPi, would be sufficient. But many Python packages have C extensions and there are also non-Python packages that need to be installed like R and NetCDF.
In this situation, the Anaconda Python distribution is helpful. Anaconda already has a lot of Python-related packages available for different platforms (Linux, MacOSX, Windows), and there is no compilation needed on the installation host. Anaconda makes it easy to build own packages (conda recipes) and upload them to the free Anaconda Server.
Conda recipes by birdhouse¶
Birdhouse uses Anaconda to maintain package dependencies. Anaconda allows you to write your own conda recipes. In birdhouse, we have written several conda recipes for the packages that were not available on Anaconda. These additional conda recipes by birdhouse are available on GitHub. Some of the missing packages are: PyWPS, OWSLib, cfchecker, Nginx, ...
Anaconda provides a free Anaconda Server. Here you can upload your built conda packages for different platforms (Linux, MacOX, Windows). These packages are then available for installation with the conda installer.
Birdhouse has an organisation where all conda packages are collected which are built from the conda recipes on GitHub. These packages can be installed with the conda installer using the birdhouse channel. For example, if you are already using Anaconda, you can install PyWPS with the following command:
$ conda install --channel birdhouse pywps
Building conda packages¶
There are several ways to build conda packages and upload them to the Anaconda Server:
- You can build packages locally and upload them with the Binstar command line tool.
- You can also build packages remotely on Anaconda. Additionally, you can set a GitHub Webhook so that on each commit of your recipe, a build will be run on Binstar.
- The remote builds on Anaconda are done using Docker images. The Anaconda docker image for Linux-64 is available on Docker Hub.
In birdhouse, we usually use the remote build on Anaconda which is triggered by commits to GitHub. But sometimes the docker image for Linux-64 provided by Binstar fails for some packages. That is why birdhouse has in addition its own Linux-64 build image which is based on the Anaconda image. The Dockerfile for this image is on GitHub.
Warning
When you build conda packages for Linux-64, you need to be very careful to ensure that these packages will run on most Linux distributions (like CentOS, Debian, Ubuntu, ...). Our experience is that packages tjat build on CentOS 6.x will also run on recent Debian/Ubuntu distributions. The Docker build images are also CentOS 6.x based.
Note
You can build a conda package with the provided docker image for Linux-64. See the readme on how to use it.
Note
For future conda packages, one should use the community-driven conda-forge channel.
Example: building a conda package for pygbif¶
pygbif is a Python package available on PyPi. Generate conda package files using
conda skeleton:
$ conda skeleton pypi pygbif $ cd pygbif $ vim meta.yaml # check dependencies, test, build number $ vim build.sh # for non-python packges, here is most of the work to do
Enable anaconda build:
$ cd pygbif $ anaconda-build init $ vim .binstar.yml
Edit the anaconda config (
binstar.yml) to have the following entries (change the package name for a different recipe):
See the conda recipe on GitHub.
Run binstar build for the first time:
$ binstar package --create birdhouse/pygbif $ anaconda-build submit . $ anaconda-build tail -f birdhouse/pygbif 1 # checks logs
On successful build, go to the birdhouse channel on binstar and search for the pygbif package ().
Go to the
files tab and add the channel main for the successfully-built package.
All packages on the main channel are available for public usage.
Register GitHub webhook for pygbif:
on the Anaconda Server, go to Settings/Continuous Integration of the
pygbif package.
Edit the fields:
- github.com/ = bird-house/conda-recipes
- Subdirectory = pygbif
Warning
If you’re logged into anaconda with your own rather than the birdhouse organization account, then the
anaconda-build submit . way mentioned above seems to cause some problems (as of October 2015). A more reliable way to upload your package is to build it locally, upload it to your own account and then transfer the ownership to birdhouse via the web interface:
$ anaconda-build init # just as before $ vim .binstar.yaml $ # skip package creation here $ conda build . # build locally $ anaconda upload /your/path/to/conda-bld/platform/packagename-version.tar.bz2 # full path is listed in conda build output Now switch to `anaconda.org/yourname/packagename` and go to `Settings` -> `Admin` -> `Transfer` to transfer the package to `birdhouse`. (You could use ``-u birdhouse`` to upload it to `birdhouse` directly, but it seems to make some difference e.g. some fields in the web interface will not be filled in automatically, so I figured the other workaround to be more reliable.)
Using conda¶
See the conda documentation.
Warning
To fix the SSL cert issues in conda when updating to python 2.7.9, do the following:
$ conda config --set ssl_verify False $ conda update requests openssl $ conda config --set ssl_verify True
See this conda issue at
Anaconda alternatives¶
If Anaconda is not available, one could also provide these packages from source and compile them on each installation host. Buildout does provide ways to do so, but an initial installation with most of the software used in climate science could easily take hours.
Alternative package managers to Anaconda are for example Homebrew (MacOSX only) and Linuxbrew (a fork of Homebrew for Linux).
Using Buildout in birdhouse¶
Birdhouse uses the Buildout build tool to install and configure all birdhouse components (Phoenix, Malleefowl, Emu...). The main configuration file is
buildout.cfg which is in the root folder of the application.
As an example, have a look at the buildout.cfg from Emu.
Before building an application with Buildout, you have an initial bootstrap step:
$ python bootstrap-buildout.py -c buildout.cfg
This will generate the
bin/buildout script.
Now you can build the application:
$ bin/buildout -c buildout.cfg
The default configuration in the
buildout.cfg should always work to run your application on
localhost with default ports. You can customize the configuration by editing the
custom.cfg which extends and overwrites the settings of
buildout.cfg. You may have a look at the
custom.cfg example of Emu. So, instead of using
buildout.cfg, you should use
custom.cfg for the build:
$ bin/buildout -c custom.cfg
For convenience, birdhouse has a Makefile which hides all these steps. If you want to build an application, you just need to run:
$ make install
See the Makefile example of Emu For more details, see the Installation section and the Makefile documentation.
Buildout recipes by birdhouse¶
Buildout has a plugin mechanism to extend the build tool functionality with recipes. Buildout can handle Python dependencies on its own. But in birdhouse, we install most dependencies with Anaconda. We are using a Buildout extension to install conda packages with Buildout. Buildout does use these Python packages instead of downloading them from PyPi. There is also a set of recipes to set up Web Processing Services with PyWPS, Nginx, Gunicorn and Supervisor. All these Buildout recipes are on GitHub and can be found on PyPi.
Here is the list of currently-used Buildout recipes by birdhouse:
- birdhousebuilder.recipe.conda: A Buildout recipe to install Anaconda packages.
- birdhousebuilder.recipe.pywps: A Buildout recipe to install and configure PyWPS Web Processing Service with Anaconda.
- birdhousebuilder.recipe.pycsw: A Buildout recipe to install and configure pycsw Catalog Service (CSW) with Anaconda.
- birdhousebuilder.recipe.nginx: A Buildout recipe to install and configure Nginx with Anaconda.
- birdhousebuilder.recipe.supervisor: A Buildout recipe to install and configure supervisor for Anaconda.
- birdhousebuilder.recipe.docker: A Buildout recipe to generate a Dockerfile for birdhouse applications.
- birdhousebuilder.recipe.sphinx: A Buildout recipe to generate documentation with Sphinx.
- birdhousebuilder.recipe.ncwms: A Buildout recipe to install and configure ncWMS2 Web Map Service.
- birdhousebuilder.recipe.adagucserver: A Buildout recipe to install and configure Adagucserver Web Map Service.
Python Packaging¶
Links:
Example:
$ python setup.py sdist $ python setup.py bdist_wheel $ python setup.py register -r pypi $ twine upload dist/*
Check the rst docs in the long_description of
setup.py:
Example:
$ python setup.py checkdocs
Python Code Style¶
Birdhouse uses PEP8 checks to ensure a consistent coding style. Currently the following PEP8 rules are enabled
in
setup.cfg:
[flake8] ignore=F401,E402 max-line-length=120 exclude=tests
See the flake8 documentation on how to configure further options.
To check the coding style run
flake8:
$ flake8 emu # emu is the folder with python code # or $ make pep8 # make calls flake8
To make it easier to write code according to the PEP8 rules enable PEP8 checking in your editor. In the following we give examples how to enable code checking for different editors.
Atom¶
- PEP8 Atom Plugin:
Sublime¶
- Install package control if you don’t already have it:
- Follow the instructions here to install Python PEP8 Autoformat:
- Edit the settings to conform to the values used in birdhouse, if necessary
- To show the ruler and make wordwrap default, open Preferences → Settings—User and use the following rules
{ // set vertical rulers in specified columns. "rulers": [79], // turn on word wrap for source and text // default value is "auto", which means off for source and on for text "word_wrap": true, // set word wrapping at this column // default value is 0, meaning wrapping occurs at window width "wrap_width": 79 }
Coding Style using EditorConfig¶
EditorConfig is used to keep consistent coding styles between different editors.
The configuration is on github in the top level directory
.editorconfig.
See the EditorConfig used in Birdhouse.
Check the EditorConfig page on how to activate it for your editor. | http://birdhouse.readthedocs.io/en/latest/dev_guide.html | 2017-06-22T16:31:36 | CC-MAIN-2017-26 | 1498128319636.73 | [array(['_images/filelocations.png', '_images/filelocations.png'],
dtype=object)
array(['_images/process_schema_1.png', '_images/process_schema_1.png'],
dtype=object)
array(['_images/module_chain.png', '_images/module_chain.png'],
dtype=object)
array(['_images/wps_chain.png', '_images/wps_chain.png'], dtype=object)
array(['_images/binstar_channel.png', '_images/binstar_channel.png'],
dtype=object)
array(['_images/binstar_ci.png', '_images/binstar_ci.png'], dtype=object)
array(['_images/atom-pep8.png', '_images/atom-pep8.png'], dtype=object)] | birdhouse.readthedocs.io |
class OEMCMolBase : public OEMolBase
The OEMCMolBase class provides the basic multi-conformer molecule in OEChem. It is an abstract base class which defines the interface for multi-conformer molecule implementations. Coordinates can be stored in OEHalfFloat (16-bit), float (32-bit), double (64-bit), or long double (>= 64-bit). The precision is determined by the constructor argument given to OEMol, taken from the OEMCMolType namespace. OEMCMolBase have an interface which allow access to conformers in a modal (OEMCMolBase::GetActive) or a non-modal manner (OEMCMolBase::GetConfs).
The following methods are publicly inherited from OEMolBase:
The following methods are publicly inherited from OEBase:
OEMCMolBase &operator=(const OEMolBase &rhs) OEMCMolBase &operator=(const OEConfBase &rhs) OEMCMolBase &operator=(const OEMCMolBase &rhs)
Assignment operator of multi-conformer molecules via this abstract base class.
void ClearBase()=0
Clear the generic data from the OEBase base class of this object. Equivalent to just calling OEBase::Clear without actually clearing away molecule data like atoms, bonds, and conformers.
void ClearMCMol()=0
Clears molecule data like atoms, bonds, and conformers without clearing away the OEBase generic data.
void ClearMolBase()=0
Equivalent to calling OEMolBase::Clear.
bool DeleteConf(OEConfBase *)=0
Deletes the conformer which is passed in from the OEMCMolBase object.
void DeleteConfs()=0
Warning
OEMCMolBase::DeleteConfs leaves the OEMCMolBase object in an unstable state. It should be used with care and followed shortly with a call to OEMCMolBase::NewConf.
Deletes all of the conformers from the OEMCMolBase object. This is a very useful function for creating a new transformed OEMCMolBase from an untransformed molecule.
See also
Listing 3 code example in the Conformer Creation section
OEConfBase *GetActive() const =0
Returns the currently active conformer of the OEMCMolBase object.
Note
The OEMCMolBase::GetActive and OEMCMolBase::SetActive methods are often sufficient for accessing conformations in multi-conformer molecules.
See also
OEConfBase * GetConf( const OESystem::OEUnaryPredicate<OEChem::OEConfBase > &) const =0
Returns the first conformer in the molecule for which the predicate passed in returns true.
OESystem::OEIterBase<OEConfBase > *GetConfs() const =0 OESystem::OEIterBase<OEConfBase > *GetConfs( const OESystem::OEUnaryPredicate<OEChem::OEConfBase > &) const =0
Returns an iterator over the conformers in the multi-conformer molecule. The return value of this function should always be assigned to an OEIter object. The function which takes no arguments returns an iterator over all of the conformers. The function which takes a predicate returns an iterator which only contains conformers for which the predicate returns true.
unsigned int GetMaxConfIdx() const =0
Returns the maximum conformer index of the OEMCMolBase object. Similar to OEMolBase::GetMaxAtomIdx and OEMolBase::GetMaxBondIdx this method is useful for creating temporary external data structures which can hold information that can be referenced via the OEConfBase::GetIdx method.
const char *GetMCMolTitle() const =0
Return the title for the parent molecule, don’t fall back to a conformer title like OEConfBase::GetTitle.
bool IsDeleted(OEConfBase *) const =0
Returns whether the passed in conformer has already been deleted.
See also
OEConfBase *NewConf()=0 OEConfBase *NewConf(const OEPlatform::OEHalfFloat *)=0 OEConfBase *NewConf(const float *)=0 OEConfBase *NewConf(const double *)=0 OEConfBase *NewConf(const long double *)=0 OEConfBase *NewConf(const OEMolBase *)=0 OEConfBase *NewConf(const OEConfBase *)=0 OEConfBase *NewConf(const OEConfBase *, const OETrans &)=0 OEConfBase *NewConf(const OEConfBase *, OETYPENAME std::vector<OETorsion> &t)=0
These methods generate a new conformer that is owned by the current OEMCMolBase. Each of the methods will return a pointer to the newly created conformer. The OEMCMolBase::NewConf methods act as virtual constructors of the OEConfBase objects. OEMCMolBase::NewConf constructs a conformer with its default constructor. NewConf(const OEMolBase *) and NewConf(const OEConfBase *) copy construct a new conformer with the coordinates from the object passed into the function. The objects passed in must have the same graph as the current OEMCMolBase. NewConf that takes a OEHalfFloat, float, double, or long double pointer constructs a new conformer with the coordinates passed in as coords. The array must be of length GetMaxAtomIdx() * 3, and the coordinates for each atom in the new conformer should be the dimension values in the array starting at coords[atom->GetIdx() * 3].
Passing a NULL to any of these methods will effectively do nothing and just return a NULL OEConfBase pointer.
Warning
The dimension of the conformer will be set to 0 for the NewConf default constructor. The NewConf methods that create conformers from coordinates will set the dimension of the conformer to 3. The NewConf methods that copy construct will copy the dimension from the source.
unsigned int NumConfs() const =0
Returns the number of conformers contained in the OEMCMolBase object.
bool OrderConfs(const OETYPENAME std::vector<OEConfBase *> &)=0
Reorders the conformers in the molecule to the order specified in the vector argument. If the vector contains an incomplete list, the remaining conformers will come at the end. This function call changes the order in which the conformers are returned by OEMCMolBase::GetConfs, but does not change the conformer indices.
void PopActive()=0
The OEMCMolBase::PopActive method along with the OEMCMolBase::PushActive method allow to maintain a stack of active conformers.
The OEMCMolBase::PopActive method removes the top active conformer from the active stack and makes the next highest conformer in the stack active.
See also
bool PushActive(OEConfBase *)=0
The OEMCMolBase::PushActive method along with the OEMCMolBase::PopActive method allow to maintain a stack of active conformers.
The OEMCMolBase::PushActive method makes the new conformer the active one and pushes the previous active conformer down the stack.
See also
bool SetActive(OEConfBase *)=0
Makes the conformer passed in become the active conformer. The conformer passed in must already be a member of the OEMCMolBase object.
Note
The OEMCMolBase::GetActive and OEMCMolBase::SetActive methods are often sufficient for accessing conformations in multi-conformer molecules.
See also
bool SweepConfs()=0
Cleans up unused memory and objects which may be associated with the conformers of the OEMCMolBase. Renumber the conformer indices sequentially. This function invalidates the conformer indices of all conformers in a molecule. Note that this function doesn’t guarantee that all conformer indices are sequential upon completion (some molecule implementations may treat OEMolBase::Sweep as a no-op). | https://docs.eyesopen.com/toolkits/cpp/oechemtk/OEChemClasses/OEMCMolBase.html | 2017-06-22T16:38:12 | CC-MAIN-2017-26 | 1498128319636.73 | [] | docs.eyesopen.com |
Crate amethyst_ecs
Consuming builder for easily constructing a new simulations.
A collection of entities and their respective components.
The trait implemented by all processors.
An unsigned 64-bit handle to an entity.
The error type reported by SimBuilder if they fail to initialize.
TODO: original note specified it was en error type reported by a processor,
although, as seen below, Processor doesn't have any function to return an error,
thus, only SimBuilder can return Result as of now. | https://docs.rs/amethyst_ecs/0.1.1/amethyst_ecs/ | 2017-06-22T16:34:36 | CC-MAIN-2017-26 | 1498128319636.73 | [] | docs.rs |
API Management Reference
From API Manager, you can perform API management tasks. As a member of the API Creators or Organization Administrators role, you can register new APIs or add new versions to existing APIs. An API Versions Owner can access the API version details pages for the versions they own. You can share resources in an organization and perform other API management tasks.
To create a new API using API Manager, click Add New API from the API Administration page. Enter a name and version identifier (required). The API and version names cannot exceed 42 characters in length. In the master organization, the conjunction of the API name and version must be unique. If you use business groups, the name must be unique within all your business groups and the master organization.
If you plan to deploy the API on CloudHub, observe CloudHub naming conventions.
Anypoint Platform uses the name and version to create an administrative command center for your API, called the API version details page in this document.
API Administration Overview
After logging into your Anypoint Platform account, and opening API Manager, a list of the APIs entered into in the platform appears. An API registered in API Manager belongs to a business group and can have multiple API versions.
On the API Administration page, Add New API imports an existing API or adds a definition. The API Administration page also lists the names and versions of the APIs you define or import. Hover over and click the version name area to show details on the left panel of the API Administration page:
To start an API management task, click a version name. A page of controls for performing API management tasks on the selected version appears on the API version details page:
Managing Versions
Managing Policies
After deploying an API from API Manager, you can protect your API using policies. As an API Versions Owner, you typically add policies and SLA tiers to the API that you deploy by proxy. The policies combined with an SLA definition restricts access to the API from applications by tier.
Available policies for an API appear only after you deploy the API.
Click
> to get the status and description of a policy in the list of available policies.
Publishing APIs
You can publish APIs on a portal in Anypoint Platform to expose the APIs to users. API Manager sends an email notification to you when someone requests access to an API on the portal.
You can set API alerts to receive notification of events related to the API, such as excessive blocked requests on an API.
Linking Multiple API Versions to a Shared API Portal
The new version of your API is unique. No description, tags, RAML definitions, SLAs, policies, or endpoints are shared between versions. However, you can choose to have multiple versions share a single API portal. Using a shared portal can save you time if you have multiple versions that need exactly the same documentation for developers. The only items that are not identical in shared API Portals are:
The API Portal URL – the portal URL contains your unique organization name, API name, and version number. Developers can be confident they are accessing the correct portal for the API version they want to consume.
The API Console (for APIs with RAML definitions) – even if multiple API versions share a single portal, the API Console displayed on a portal always matches the API version in the portal URL.
An API Notebook (for APIs with RAML definitions) – even if multiple API versions share a single portal, an API Notebook displayed on a portal always matches the API version in the portal URL.
Managing an API Life Cycle
Managing the lifecycle of an API within Anypoint Platform is a transparent and orderly process. For example, you don’t have to create a new API in the system if you change the underlying data model; instead, create a new version of your API and document the changes. Other users with access to your API Portals can follow a clear path of transition to your new version while still having access to all the information of the older versions.
To communicate migration information to developers, you can access the list of consumer applications from the Applications tab of the API version details page. Click each application to see the contact information for the developer who owns that application. To ensure uninterrupted service, application developers can request access to the new version of the API before you revoke access to the old version. Applications can continue to use the same client ID and client secret for the new version.
While you are transitioning consumers to an updated version of your API, you might want to prevent developers from signing up for access to your old API version. In this case, deprecate the old API version. | https://docs.mulesoft.com/api-manager/manage-api-reference | 2017-06-22T16:38:14 | CC-MAIN-2017-26 | 1498128319636.73 | [array(['./_images/index-aad67.png', 'index-aad67'], dtype=object)
array(['./_images/index-4908b.png', 'index-4908b'], dtype=object)
array(['./_images/managing-api-versions-b1e81.png',
'managing-api-versions-b1e81'], dtype=object)
array(['./_images/import-archive-or-version2017.png',
'import-archive-or-version2017'], dtype=object)
array(['./_images/walkthrough-manage-0994c.png',
'walkthrough-manage-0994c'], dtype=object)] | docs.mulesoft.com |
Crate ddg [−] [src]
ddg: A DuckDuckGo Instant Answers wrapper library.
This library provides a strongly typed wrapper around the DuckDuckGo Instant
Answers API. Most of the documentation comes from the
DuckDuckGo Instant Answers API Documentation
This library comes with reqwest by default for convenience, however it can be
disabled. If disabled the library will fallback to hyper for
IntoUrl so it
can be used with your own hyper client implementation.
Example
use ddg::Query; const APP_NAME: &'static str = "ddg_example_app"; // Search for Rust and we want to strip out any HTML content in the answers. let query = Query::new("Rust", APP_NAME).no_html(); let response = query.execute().unwrap(); println!("{:?}", response); | https://docs.rs/ddg/0.3.0/ddg/ | 2017-06-22T16:37:38 | CC-MAIN-2017-26 | 1498128319636.73 | [] | docs.rs |
A graphically rich professional board summary pack designed to convey vital information in a comprehensive yet concise manner..
This Powerpoint Roadmap with PEST Factors Template shows how your Project delivers Strategic Benefit. For use with Business Change and Transitions. Simple SWOT Template is an easy to edit SWOT layout. Add to presentation and add your Strengths, Weaknesses, Opportunities and Threats.
This SWOT Analysis Templates collection features 24 Powerpoint guidance and template slides: cheat sheet, list, prioritisation, actions, and SWOT Plan.
Published : March 7th, 2013 | SKU: BDUK-25
Last Updated : April 19th, 2016
Author: Jeff Armstrong
This Template is part of these SPECIAL OFFERS:
Save your team time and get great looking results with The CEO & Boardroom Premium Business Template Package. Get 70 premium templates immediately at more than 70% discount.
The Project Schedule Discount Bundle gives you a massive 69% discount on our Project Schedule templates
The Strategic SWOT Template Collection includes our most successful SWOT templates at a discount of 55%: Strengths, Weaknesses, Opportunities, & Threats
This Procurement Template Pack will give you more than 45% discount on our most popular Procurement Templates. Save and be ready! | https://business-docs.co.uk/downloads/powerpoint-swot-pest-and-porters-workshop-brainstorm-pack/ | 2018-02-18T01:16:14 | CC-MAIN-2018-09 | 1518891811243.29 | [] | business-docs.co.uk |
Using Home Options
Goto
Dashboard => Appearance => Theme
Greenr Pro version.
- Click
Save Changeswhen you're done with settings | http://docs.webulous.in/greenr-free/theme-options/home.php | 2018-02-18T01:17:01 | CC-MAIN-2018-09 | 1518891811243.29 | [array(['http://docs.webulous.in/greenr-free/images/theme-options/home.png',
None], dtype=object) ] | docs.webulous.in |
str – Original URL scheme requested by the user agent, if the request was proxied. Typical values are ‘http’ or ‘https’.
The following request headers are checked, in order of preference, to determine the forwarded scheme:
Forwarded
X-Forwarded-For
If none of these headers are available, or if the Forwarded header is available but does not contain a “proto” parameter in the first hop, the value of
schemeis returned instead.
(See also: RFC 7239, Section 1)
forwarded_host¶
str – Original host request header as received by the first proxy in front of the application server.
The following request headers are checked, in order of preference, to determine the forwarded scheme:
Forwarded
X-Forwarded-Host
If none of the above headers are available, or if the Forwarded header is available but the “host” parameter is not included in the first hop, the value of
hostis returned instead.
Note
Reverse proxies are often configured to set the Host header directly to the one that was originally requested by the user agent; in that case, using
hostis sufficient.
(See also: RFC 7239, Section 4)
port
str – The initial portion of the request URI’s path that corresponds to the application object, so that the application knows its virtual “location”. This may be an empty string, if the application corresponds to the “root” of the server.
(Corresponds to the “SCRIPT_NAME” environ variable defined by PEP-3333.)
str – The template for the route that was matched for this request. May be
Noneif the request has not yet been routed, as would be the case for process_request() middleware methods. May also be
Noneif your app uses a custom routing engine and the engine does not provide the URI template when resolving a route.
remote_addr¶
str – IP address of the closest client or proxy to the WSGI server.
This property is determined by the value of
REMOTE_ADDRin the WSGI environment dict. Since this address is not derived from an HTTP header, clients and proxies can not forge it.
Note
If your application is behind one or more reverse proxies, you can use
access_routeto retrieve the real IP address of the client.
access_route¶
list – IP address of the original client, as well as any known addresses of proxies fronting the WSGI server.
The following request headers are checked, in order of preference, to determine the addresses:
Forwarded
X-Forwarded-For
X-Real-IP
If none of these headers are available, the value of
remote_addris used instead.
Note
Per RFC 7239, the access route may contain “unknown” and obfuscated identifiers, in addition to IPv4 and IPv6 addresses
Warning
Headers can be forged by any client or proxy. Use this property with caution and validate all values before using them. Do not rely on the access route to authorize requests.
forwarded
File-like input object for reading the body of the request, if any. This object provides direct access to the server’s data stream and is non-seekable. In order to avoid unintended side effects, and to provide maximum flexibility to the application, Falcon itself does not buffer or spool the data in any way.
Since this object is provided by the WSGI server itself, rather than by Falcon, it may behave differently depending on how you host your app. For example, attempting to read more bytes than are expected (as determined by the Content-Length header) may or may not block indefinitely. It’s a good idea to test your WSGI server to find out how it behaves.
This can be particulary problematic when a request body is expected, but none is given. In this case, the following call blocks under certain WSGI servers:
# Blocks if Content-Length is 0 data = req.stream.read()
The workaround is fairly straightforward, if verbose:
# If Content-Length happens to be 0, or the header is # missing altogether, this will not block. data = req.stream.read(req.content_length or 0)
Alternatively, when passing the stream directly to a consumer, it may be necessary to branch off the value of the Content-Length header:
if req.content_length: doc = json.load(req.stream)
For a slight performance cost, you may instead wish to use
bounded_stream, which wraps the native WSGI input object to normalize its behavior.
Note
If an HTML form is POSTed to the API using the application/x-www-form-urlencoded media type, and the
auto_parse_form_urlencodedoption is set, the framework will consume stream is aware of the expected Content-Length of the body, and will never block on out-of-bounds reads, assuming the client does not stall while transmitting the data to the server.
For example, the following will not block when Content-Length is 0 or the header is missing altogether:
data = req.bounded_stream.read()
This is also safe:
doc = json.load(req.bounded_stream)
media¶
object – Returns a deserialized form of the request stream. When called, it will attempt to deserialize the request stream using the Content-Type header as well as the media-type handlers configured via
falcon.RequestOptions.
See Media for more information regarding media handling.
Warning
This operation will consume the request stream the first time it’s called and cache the results. Follow-up calls will just retrieve a cached version of the object.
range¶
tuple of int – A 2-member
tupleparsed from the value of the Range header.
The two members correspond to the first and last byte positions of the requested resource, inclusive. Negative indices indicate offset from the end of the resource, where -1 is the last byte, -2 is the second-to-last byte, and so forth.
Only continous ranges are supported (e.g., “bytes=0-0,-1” would result in an HTTPBadRequest exception when the attribute is accessed.)
range_unit
dict – Raw HTTP headers from the request with canonical dash-separated names. Parsing all the headers to create this dict is done the first time this attribute is accessed. This parsing can be costly, so unless you need all the headers in this format, you should use the get_header
Return the raw value of a query string parameter as a string.
Note
If an HTML form is POSTed to the API using the application/x-www-form-urlencoded media type, Falcon can automatically parse the parameters from the request body and merge them into the query string parameters. To enable this functionality, set
auto_parse_form_urlencodedto
Truevia
API.req_options.
str – HTTP status line (e.g., ‘200 OK’). Falcon requires the full status line, not just the code (e.g., 200). This design makes the framework more efficient because it does not have to do any kind of conversion or lookup when composing the WSGI response.
If not set explicitly, the status defaults to ‘200 OK’.
Note
Falcon provides a number of constants for common status codes. They all start with the
HTTP_prefix, as in:
falcon.HTTP_204.
media¶
object – A serializable object supported by the media handlers configured via
falcon.RequestOptions.
See Media for more information regarding media handling.
body¶
str or unicode – String representing response content.
If set to a Unicode type (
unicodein Python 2, or
strin Python 3), Falcon will encode the text as UTF-8 in the response. If the content is already a byte string, use the
dataattribute instead (it’s faster).
data
Either a file-like object with a read() method that takes an optional size argument and returns a block of bytes, or an iterable object, representing response content, and yielding blocks as byte strings. Falcon will use wsgi.file_wrapper, if provided by the WSGI server, in order to efficiently serve file-like objects.
stream_len
Set the Accept-Ranges header.
The Accept-Ranges header field indicates to the client which range units are supported (e.g. “bytes”) for the target resource.
If range requests are not supported for the target resource, the header may be set to “none” to advise the client not to attempt any such requests.
Note
“none” is the literal string, not Python’s built-in
Nonetype.
add_link(target, rel, title=None, title_star=None, anchor=None, hreflang=None, type_hint=None)[source]¶
Add a link header to the response.
(See also: RFC 5988, Section 1)
Note
Calling this method repeatedly will cause each link to be appended to the Link header value, separated by commas.
Note
So-called “link-extension” elements, as defined by RFC 5988, are not yet supported. See also Issue #288.
append_header(name, value)[source for these numbers (no need to convert to
strbeforehand). The optional value unit describes the range unit and defaults to ‘bytes’
Note
You only need to use the alternate form, ‘bytes */1234’, for responses that use the status ‘416 Range Not Satisfiable’. In this case, raising
falcon.HTTPRangeNotSatisfiablewill do the right thing.
(See also: RFC 7233, Section 4.2)
content_type¶
Sets the Content-Type header.
The
falconmodule.
context_type
Set the Retry-After header.
The expected value is an integral number of seconds to use as the value for the header. The HTTP-date syntax is not supported.
Set a response cookie.
Note
This method can be called multiple times to add one or more cookies to the response.
See also
To learn more about setting cookies, see Setting Cookies. The parameters listed below correspond to those defined in RFC 6265.
set_header(name, value)[source, and ignore stream_len. In this case, the WSGI server may choose to use chunked encoding or one of the other strategies suggested by PEP-3333.
Unset a cookie in the response
Clears the contents of the cookie, and instructs the user agent to immediately expire its own copy of the cookie.
Warning
In order to successfully remove a cookie, both the path and the domain must match the values that were used when the cookie was created.
vary¶
Value to use for the Vary header.
Set this property to an iterable of header names. For a single asterisk or field value, simply pass a single-element
listor
tuple.).
(See also: RFC 7231, Section 7.1.4) | http://falcon.readthedocs.io/en/latest/api/request_and_response.html | 2018-02-18T00:58:01 | CC-MAIN-2018-09 | 1518891811243.29 | [] | falcon.readthedocs.io |
Configure BizTalk Server
Configure BizTalk Server using basic configuration or custom configuration.
Basic configuration vs. Custom configuration
- If your configuration uses domain groups, do a Custom Configuration.
- If your configuration uses custom group names instead of the default group names, do a Custom Configuration.
- If your configuration uses custom database names instead of the default database names, do a Custom Configuration.
- If BizTalk Server and SQL Server are on separate computers, domain groups are required. As a result, do a Custom Configuration.
- You cannot configure BAM Analysis on a SQL Server named instance using Basic Donfiguration. If you are using named instances and want to configure BAM Analysis, do a Custom Configuration.
- Basic Configuration is recommended for users setting up a complete installation of BizTalk Server and SQL Server running on a single server.
- Basic Configuration is faster because it automatically creates the local groups and databases using the default names.
Before you begin
- BizTalk Server can be configured using SQL Server default instances and named instances.
- The account you are logged on as must be a member of the local administrators group and have System Administrator (SA) rights on SQL Server.
- If you use Domain Groups, the Domain Groups must exist before configuring BizTalk Server.
- The default accounts generated by BizTalk Server and listed in the BizTalk Server Configuration are local groups. In a multiserver environment, replace the local groups with domain groups.
- If you configure BAM Analysis Services, then the account you are logged on as must be a member of the OLAP Administrators group on the OLAP computer.
Basic Configuration
- In the start menu, right-select BizTalk Server Configuration, and then select Run as Administrator. This opens the configuration wizard.
Select the following options:
- Select Basic configuration.
- The Database server name automatically defaults to the local computer name.
- Enter the User name and Password for the account that the BizTalk services will run as. As a best practice, create a unique account. Do not use your personal username.
If you enter a user name with administrative credentials on this computer, you receive a warning. This is normal. Select OK to continue.
Select Configure.
- Review your configuration details, and select Next.
- When the configuration wizard completes, select Finish.
A configuration log file is generated in a temp folder, similar to:
C:\Users\username\AppData\Local\Temp\ConfigLog(01-12-2017 0h37m59s).log.
When you do a basic configuration, the following occurs:
- All database names are generated automatically by BizTalk Server.
- All applicable database logon information is run under the account you enter.
- All BizTalk services are generated automatically by BizTalk Server.
- All BizTalk services run under the account you enter. The configuration process grants this account the necessary security permissions on the server and objects in SQL Server.
- All features are configured based on the prerequisite software you installed on the computer.
- Groups are automatically created local to the computer using the default group names.
- The Default Web Site in Internet Information Services (IIS) is used for any feature that requires IIS.
Custom Configuration
- In the start menu, right-select BizTalk Server Configuration, and then select Run as Administrator. This opens the configuration wizard.
- Select Custom configuration, and select Configure.
Configure Enterprise Single Sign-on (SSO)
- When SSO is configured, it cannot be reconfigured using BizTalk Server Configuration. To reconfigure SSO, use BizTalk Server Administration.
- When configuring the SSO Windows accounts using local accounts, enter only the account name. Do not enter the computer name.
- When using a local SQL Server named instance as data store, use
LocalMachineName\InstanceName. Do not use
LocalMachineName\InstanceName, PortNumber.
- Select Enterprise SSO.
Configure the following:
Select Enterprise SSO Secret Backup. This option saves the master secret to an encrypted backup file.
Configure the following:
ALWAYS backup the master secret, and share the password with another BizTalk Administrator.
Configure Groups
- When using a local SQL Server named instance as data store, use
LocalMachineName\InstanceName. Do not use
LocalMachineName\InstanceName, PortNumber.
- Select Group.
Configure the following:
Configure the BizTalk Runtime
- Once the Runtime is configured, it cannot be reconfigured using BizTalk Server Configuration. To reconfigure the Runtime, use BizTalk Server Administration.
- The first host you create in the group must be an In-Process host and host instance.
- When you configure the Runtime on multiple BizTalk Servers in the same group, the same service account cannot be used for both the trusted and untrusted host applications. You must use a unique account for the trusted application, and for the untrusted application.
- Select BizTalk Runtime.
Configure the following:
Configure Business Rules Engine (BRE)
If you don't use BRE, then skip this section.
- We recommend that you configure a BizTalk Server group before you configure the Business Rule Engine. If you configure BRE before configuring a BizTalk Server group, the BizTalk Server configuration does not add group-related administrative roles to the Rule Engine database.
- Select Business Rules Engine.
Configure the following:
Configure BAM Tools
If you don't use BAM Tools, then skip this section.
The Business Activity Monitoring Tools include:
- BAM add-in for Excel
- BAM Manager
BAM Portal
Configuring BAM tools requires certain SQL Server administrative functionality and must be performed from a machine that has Integration Services installed .
- The BAM tools may be used by multiple BizTalk groups. When you unconfigure the BAM tools, the connection to the BizTalk group is removed. However, the BAM SQL Server infrastructure continues to work for other BizTalk groups pointing to the BAM Primary Import tables.
- You use the Business Activity Monitoring Tools page to reconfigure the BAM database on-the-fly. For example, configure the BAM database again without removing the existing configuration. Reconfiguring these BAM databases breaks any already-deployed OLAP views and any alerts. If you have existing views and alerts that you want to keep in the newly-configured databases, then do one of the following:
- Undeploy the alerts and views before reconfiguring, and then redeploy them after reconfiguring. Any data that has been archived is not present in the views.
- If you are not using BAM Alerts, then back up the databases before you reconfigure. After reconfiguring, restore the databases to the newly configured location.
- If you are consolidating BizTalk Server databases, you should exclude the BAM Archive, and BAM Analysis databases.
- Select BAM Tools.
Configure the following:
Configure BAM Alerts
BAM alerts require BAM tools to be enabled.
- Select BAM Alerts.
Configure the following:
Configure the BAM Portal
- Select BAM Portal.
Configure the following:
Configure BizTalk EDI/AS2 Runtime
- Enterprise SSO, Group, and BizTalk Runtime must be configured before you configure BizTalk EDI/AS2 Runtime.
- BAM Tools must be enabled before configuring the EDI/AS2 Runtime Status Reporting features.
- If you are only configuring EDI, then BAM is not required.
- Select BizTalk EDI/AS2 Runtime.
Configure the following:
Configure Windows SharePoint Services web service - BizTalk Server 2013 and R2 only
Important
This section ONLY applies to BizTalk Server 2013 R2 and BizTalk Server 2013. If you're not using BizTalk Server 2013 R2 or BizTalk Server 2013, then skip this section.
- This SharePoint Services web service (SSOM) is removed starting with BizTalk Server 2016, and deprecated in BizTalk Server 2013 R2. It is replaced with the SharePoint Services Adapter (CSOM). The CSOM option is not displayed in the BizTalk configuration. The CSOM option is installed automatically with BizTalk, just as the File adapter, or the HTTP adapter is installed automatically.
- Select Windows SharePoint Services Adapter.
Configure the following:
Apply your configuration
Select Apply configuration, and continue with the configuration.
- In Summary, review the components you selected, and select Next.
- When complete, select Finish.
When finished, a configuration log file is generated in a temp folder, similar to:
C:\Users\username\AppData\Local\Temp\ConfigLog(1-12-2017 2h39m30s).log.
IIS application pools and web sites
After you configure BizTalk Server, the following Internet Information Services (IIS) application pools and virtual applications may be created:
Application pools
Virtual applications
More configuration topics
Configuring BizTalk Server on an Azure VM
Configuring BizTalk Server in a Cluster
Post-configuration steps to optimize your environment
Securing Your BizTalk Server Deployment | https://docs.microsoft.com/en-us/biztalk/install-and-config-guides/configure-biztalk-server?redirectedfrom=MSDN | 2018-02-18T02:20:14 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.microsoft.com |
Date: Fri, 15 Sep 1995 21:33:10 -0500 (CDT) From: January <[email protected]> To: [email protected] Subject: Getting FreeBSD to boot Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
Someone suggested I mail this service with my question so here goes... :) I have a 486DX4/100, 16M RAM, Maxtor 1.2Gb E-IDE disk. I installed FreeBSD 2.0.5 on a block of the disk that straddles the 1024 cylinder boundary. The root (/) partition, however, is entirely below the boundary. Here's the problem. It won't boot. When I select it off of my boot menu, the FreeBSD BOOT: prompt comes up with its little message. If I let it time out, or press <enter> (or anything else for that matter), | appears in the corner of the screen. It does not spin; it just sits there. At that point, I have to coldboot my machine. I cannot get it to boot by pointing a bootdisk to the harddrive either. Please help me on this... I would very much like to be using FreeBSD 2.0.5. :) -dp <[email protected]>
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=519183+0+archive/1995/freebsd-questions/19950910.freebsd-questions | 2021-09-16T19:37:02 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
Module java.desktop
Package javax.swing.text
Interface TabableView
- All Known Implementing Classes:
GlyphView,
InlineView,
LabelView
public interface TabableViewInterface for
Views that have size dependent upon tabs.
- See Also:
TabExpander,
LabelView,
ParagraphView
Method Detail
getTabbedSpan
float getTabbedSpan(float x, TabExpander e)Determines the desired span when using the given tab expansion implementation. If a container calls this method, it will do so prior to the normal layout which would call getPreferredSpan. A view implementing this should give the same result in any subsequent calls to getPreferredSpan along the axis of tab expansion.
- Parameters:
x- the position the view would be located at for the purpose of tab expansion >= 0.
e- how to expand the tabs when encountered.
- Returns:
- the desired span >= 0
getPartialSpan
float getPartialSpan(int p0, int p1)Determines the span along the same axis as tab expansion for a portion of the view. This is intended for use by the TabExpander for cases where the tab expansion involves aligning the portion of text that doesn't have whitespace relative to the tab stop. There is therefore an assumption that the range given does not contain tabs.
- Parameters:
p0- the starting location in the text document >= 0
p1- the ending location in the text document >= p0
- Returns:
- the span >= 0 | https://docs.huihoo.com/java/javase/9/docs/api/javax/swing/text/TabableView.html | 2021-09-16T19:22:23 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.huihoo.com |
cayenne-particle (community library)
Summary
Modified Cayenne Library for particle
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
Cayenne Particle
Unofficial Particle library for Cayenne IoT. This is a heavily modified version of the blynk and Cayenne library to work with the Particle's Photon.
Browse Library Files | https://docs.particle.io/cards/libraries/c/cayenne-particle/ | 2021-09-16T18:06:36 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.particle.io |
Installing Kentico :
How to choose the target location?
The target location depends on the type of server you want to use for the development of websites:
- The Local IIS/IIS Express option is practical, because the live site will also run under IIS.
- The Built-in web server in Visual Studio option is suitable for developing and debugging of your web project. It is intended for developers, who are used to working with Visual Studio web server.
- Select this option if you want to install a Microsoft Azure project. See Installing an Azure project.
- Prepare for installation on a remote server option only copies the web project files into the chosen location on your local computer to prepare them for deployment to your production server over FTP. See Deploying Kentico to a live server.
If you are not sure, select the Local IIS/IIS Express option. If you do not have IIS server installed, the free IIS Express version is installed automatically for you.
How to install Kentico to the root of a website?
If you wish to run Kentico in the root of an IIS website:
- Perform a standard installation of Kentico to a virtual directory.
- Develop the project inside the virtual directory.
- Deploy the project to the root of your IIS website. You can choose one of the following approaches:
- Use the Deploy to feature of the Kentico Installation Manager utility (recommended, only available for web site projects)
- Directly copy the content of the project's CMS folder (for example, C:\inetpub\wwwroot\Kentico82\CMS) to the root
Note: You will not be able to compile the deployed project.
You can find more information about virtual directories in this article: Understanding Sites, Applications, and Virtual Directories on IIS 7.
Which web project type to.
To learn more about the requirements for installing Microsoft Azure projects, see Requirements and limitations for running Kentico on Microsoft Azure. have to install the program files and can install a new web project without delays.
You can also uninstall the program files or move them to a different location:
- For uninstalling, use the Uninstall -> Remove only Kentico program files option of the installer.
- For moving the database, your web project cannot function at all. If you do not install the database during the installation process, you can always install it later. You only have to access any page of your web project in a browser and the system opens the database installation wizard (see Additional database installation).
If you do not have any SQL server available during the installation process, consider checking only the Installation with database option. In this case, the free SQL Server 2012 Express LocalDB will be installed automatically for you.
How can I access my LocalDB database through Microsoft SQL Server Management Studio?
Use the (localdb)\Kentico server name. The database files (Kentico8.mdf, Kentico8 of from an installed Kentico web project for more information.
Which sample site should I install?
If you want to evaluate the capabilities of Kentico or if you are new to the Kentico system, choose the Corporate site or the E-commerce site.
For development, we recommend the Blank Site, which is best suited for developing websites form scratch. However, you can also install one of the preconfigured sample sites and then adjust them accordingly.
If you want to install sample sites after the installation, use the New site wizard. Keep in mind, though, that you must have the Sample site templates installed (you can add these templates additionally using the Modify option of the Installer)._8_2.exe file?
Yes, you can find the Installer in Windows Start -> All programs -> Kentico 8.2 -> Kentico Installer 8.2. If you run the Installer this way though, you will not be able to uninstall the Kentico program files.
To run the Kentico Installer with all options:
- Open Windows Start -> All programs -> Kentico 8.2 -> Uninstall Kentico 8.2.
- Select Kentico 8.2 in the Programs and Features list.
- Click Change.
The Windows system opens full Kentico Installer.
I can't open my site in the browser
If your browser shows the 404 error after trying to open a site, try to open the site from the Windows Start menu:
- Click Start -> All Programs -> Kentico 8.2 -> Sites -> your site
If you have installed a site with a LocalDB or IIS Express, then it may happen that these applications are not run automatically after you restart Windows. In this case, open the site from the Start menu, which ensures that these applications are started properly.
Where can I find the installation log?
The path to the log is C:\Program Files (x86)\Kentico\8.2\ or the location of the program files.
Where can I get more information?
You can contact our support department, which will gladly help you at [email protected]. | https://docs.xperience.io/k82/installation/installing-kentico-questions-and-answers | 2021-09-16T19:37:01 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.xperience.io |
Like with ecore_main_loop_thread_safe_call_sync() you can provide a callback to call inline in the mainloop, but this time with ecore_main_loop_thread_safe_call_async() the callback is queued and called asynchronously, without the thread blocking.
The mainloop will call this function when it comes around to its synchronisation point. This acts as a "fire and forget" way of having the mainloop do some work for a thread that has finished processing some data and is read to hand it off to the mainloop and the thread wants to march on and do some more work while the main loop deals with "displaying" the results of the previous calculation. | https://docs.enlightenment.org/elementary/current/efl_thread_3.html | 2021-09-16T17:53:05 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.enlightenment.org |
Overview
Records
Settings
Tips/Shortcodes
You are here:
0 out Of 5 Stars
A LatePoint modification to place a custom field (or any) to the clients location address in your google event calendar for your Agents. Using this will allow a one click lookup to a google map.
Just place your Custom Customer booking field into the Event Template Settings in your Google Configuration.
When the Customer books and places their address it automatically gets entered into the Location field of the appointment and the Agent can then get directions via the normal Google Maps.
Book this Mod
Created On
Last Updated On
byThomas Walker
Was this article helpful?
0 out Of 5 Stars
Table of Contents | https://docs.itme.guru/latepoint/book-google-location-addition/ | 2021-09-16T19:15:44 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.itme.guru |
Custom Vision with TensorFlow exported model using C# .NET Core and TensorFlowSharp
In a previous post, I built an image classification model for mushrooms using CustomVision.ai. In this post I want to take that a stage further and create a TensorFlow model that I can use on different operating systems and crucially, offline with no internet connection and using my favourite language, C#.
There are already some mobile CustomVision.ai samples on Github:
iOS (CoreML) Android (TensorFlow)
So what about other platforms?
Well, it turns out that our very own Elon Musk, Miguel de Icaza has been busy working on a TensorFlowSharp library which are NET bindings to the TensorFlow library published here:
This library makes it easy to use an exported TensorFlow model from CustomVision.ai in your own cross platform .NET applications. So, I've built a sample .NET Core CLI that takes a TensorFlow exported model and uses TensorFlowSharp to perform offline image classification. To get started, head over to the CustomVision.ai portal. Make sure you have one of the Compact Domains selected, as these are the only ones that you can export. If you change your Domain, you'll need to retrain the model. Once you've done this, the option to Export the model in the TensorFlow format will become available. Download the zip file containing both the model.pb and model.txt file.
If you need help on this, follow the docs: /en-us/azure/cognitive-services/custom-vision-service/export-your-model
In the .NET Core CLI application sample, simply replace your model.pb and model.txt file within the \Assets folder and change the image to your own. Depending on your Domain you may need to change the RGB values as described in the Readme.md
Full code with mushroom classification exported TensorFlow model here: | https://docs.microsoft.com/en-us/archive/blogs/jamiedalton/custom-vision-with-tensorflow-exported-model-using-c-net-core-and-tensorflowsharp | 2021-09-16T18:41:43 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['https://msdnshared.blob.core.windows.net/media/2018/02/test-Custom1-300x268.jpg',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2018/02/test-image-300x166.png',
None], dtype=object) ] | docs.microsoft.com |
You can use System Manager to modify a local Windows user account if you want to change an existing user's full name or description, and if you want to enable or disable the user account. You can also modify the group memberships assigned to the user account.
The local Windows user account attributes are modified and is displayed in the Users tab. | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-147142B2-8B53-45C6-B407-CEE383BB712E.html | 2021-09-16T19:53:38 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.netapp.com |
Builder-defined Hotkeys¶
You can define hotkeys for mass or global actions on the Table component. Additionally, Button Set, Page Title, Page Title
Some users employ screen reading technology that interprets what is displayed on the screen and outputs that information to assistive technologies such as text-to-speech or Braille output devices.
While runtime pages are not yet fully accessible to assistive technology, Skuid does incorporate the following WAI-ARIA markups into all pages, allowing standard HTML navigation elements to communicate with assistive technology:
- Navigation items that include sub-navigation are tagged with
aria-hasdropdown.
- Expand states are tagged with
aria-expanded=true.
- Collapsed states are tagged with
aria-expanded=false.
- Links are tagged with
role=link.
- Actions are tagged with
role=button. | https://docs.skuid.com/v12.0.8/v1/en/skuid/keyboard-shortcuts.html | 2021-09-16T18:22:00 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.skuid.com |
Create time-based static KPI thresholds in ITSI
Time-based static thresholds let you define specific threshold values to be used at different times to account for changing workloads over time. Use time-based static thresholds if you know the workload schedule for a specific KPI. Time policies.
IT Service Intelligence (ITSI) stores thresholding information at the KPI level in the KV store. Any updates you make to a KPI threshold template are applied to all KPIs using that template, overriding any changes made to those KPIs. Updates are also applied to any services or service templates using those KPIs.
You can only have one active time policy at any given time. When you create a new time policy, the previous time policy is overwritten and cannot be recovered.
Available KPI threshold templates
ITSI provides:
Time zones with threshold templates
Time blocks in threshold templates, including custom templates you create, are stored in the backend in UTC time but presented in the UI in your own time zone. For example, if you're on PST, a time block of 11:00 AM - 12:00 PM on your system is stored in the backend as 6:00 PM - 7:00 PM. This doesn't affect the preview thresholds.
If another user logs in from a different time zone and views the exact same time policy, they'll see the time blocks in their own time zone. For example, a person on EST would see the exact same time block as above as 2:00 PM - 3:00 PM, but the name of the time policy would remain the same. If your organization has people using the same system in two different time zones, this behavior could be confusing for one set of users..
- Select a thresholding template such as 3-hour blocks every day (adaptive/stdev). Selecting an adaptive template automatically enables Adaptive Thresholding and Time Policies. ITSI backfills the threshold preview with aggregate data from the past 7 days. If there Configuration > KPI Threshold Templates.
- Click Create Threshold Template. Your role must have write access to the Global team to see this.! | https://docs.splunk.com/Documentation/ITSI/4.9.1/SI/TimePolicies | 2021-09-16T19:22:58 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
File: monitor.rb [monitor: Ruby Standard Library Documentation] Last Modified 2017-04-01 12:51:23 -0500 Requires thread Description frozen_string_literal: false monitor.rb¶ ↑ Copyright (C) 2001 Shugo Maeda <[email protected]> This library is distributed under the terms of the Ruby license. You can freely distribute/modify this library. Ruby-doc.org is a service of James Britt and Neurogami, an application development company in Scottsdale, AZ. Generated with Rubydoc Rdoc Generator 0.36.0. | http://docs.activestate.com/activeruby/beta/ruby/stdlib/libdoc/monitor/rdoc/monitor_rb.html | 2018-11-13T00:09:34 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.activestate.com |
The Delete Simple Table Rows task will delete the specified rows from the nominated Simple Table Name.
When using pipebar (|) delimited lists to delete from multiple columns please note you can only match ONE item for each column to delete having specified the columns in the Field Names property and the Items in the Match Values property.
You can not use a pipebar delimited list to delete multiple different items from the same column. You can only delete multiple of the same item from a column.
It is important to make sure that the number of items in the Match Values property is the same as the number of columns specified in the Field Names.
If you specify two columns to look in for the Field Names property you must make sure you have two items specified for the Match Values property as well..
When this task is added the properties are static by default.
See How To: Change A Static Property To A Dynamic Property to enable rules to be built on these properties.
The Output results for the example would be:
The Output results for the example would be: | http://docs.driveworkspro.com/Topic/DeleteSimpleTableRows | 2018-11-13T01:39:36 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.driveworkspro.com |
The base layer: layer-basic¶. Check out the
code for the basic layer on Github.
Usage¶
To create a charm layer using this base layer, you need only include it in
a
layer.yaml file..
Hooks¶
This layer provides a
hook.template which starts the reactive framework when
the hook is run. During the build process, this template is used to implement
all of the following hooks, as well as any necessary relation and storage hooks:
config-changed
install
leader-elected
leader-settings-changed
start
stop
upgrade-charm
update-status
pre-series-upgrade
post-series-upgrade
A layer can implement other hooks (e.g.,
metrics) by putting them in the
hooks
directory.
Note
Because
update-status is invoked every 5 minutes, you should take
care to ensure that your reactive handlers only invoke expensive operations
when absolutely necessary. It is recommended that you use helpers like
data_changed to ensure
that handlers run only when necessary.
Note
The charm snap has been the supported
way to build charms for a long time, but there is still an old version of
charm-tools available via apt on some systems. This old version doesn’t
properly handle the
hook.template file, leading to missing hooks when
charms are built. If you encounter this issue, please make sure you have
the snap installed and remove any copies of the
charm or
charm-tools
apt packages.
Reactive flags for Charm config¶
This layer will set the following flags:
config.changedAny config option has changed from its previous value. This flag is cleared automatically at the end of each hook invocation.
config.changed.<option>A specific config option has changed.
<option>will be replaced by the config option name from
config.yaml. This flag is cleared automatically at the end of each hook invocation.
config.set.<option>A specific config option has a True or non-empty value set.
<option>will be replaced by the config option name from
config.yaml. This flag is cleared automatically at the end of each hook invocation.
config.default.<option>A specific config option is set to its default value.
<option>will be replaced by the config option name from
config.yaml. This flag is cleared automatically at the end of each hook invocation.
An example using the config flags would be:
@when('config.changed.my-opt') def my_opt_changed(): update_config() restart_service()
Layer Configuration¶
This layer supports the following options, which can be set in
layer.yaml:
packages A list of system packages to be installed before the reactive handlers are invoked.
Note
The
packageslayer option is intended for charm dependencies only. That is, for libraries and applications that the charm code itself needs to do its job of deploying and configuring the payload. If the payload (the application you’re deploying) itself has dependencies, those should be handled separately, by your Charm using for example the Apt layer
use_venv If set to true, the charm dependencies from the various layers’
wheelhouse.txtfiles will be installed in a Python virtualenv located at
$JUJU:
includes: ['layer:basic'] options: basic: packages: ['git'] use_venv: true include_system_packages: true
Wheelhouse.txt for Charm Python dependencies¶
layer-basic provides two methods to install dependencies of your charm code:
wheelhouse.txt for python dependencies and the
packages layer option for
apt dependencies.
Each layer can include a
wheelhouse.txt file with Python requirement lines.
The format of this file is the same as pip’s
requirements.txt file..
See PyPI for packages under the
charms. namespace which might
be useful for your charm. See the
packages layer option of this layer for
installing
apt dependencies of your Charm code.
Note
The
wheelhouse.yaml are intended for charm dependencies only.
That is, for libraries and applications that the charm code itself needs to
do its job of deploying and configuring the payload. If the payload (the
application you’re deploying) itself has dependencies, those should be
handled separately.
Exec.d Support¶
It is often necessary to configure and reconfigure machines after provisioning, but before attempting to run the charm. Common examples are specialized network configuration, enabling of custom hardware, non-standard disk partitioning and filesystems, adding secrets and keys required for using a secured network.
The reactive framework’s base layer invokes this mechanism as early as possible, before any network access is made or dependencies unpacked or non-standard modules imported (including the charms.reactive framework itself).
Operators needing to use this functionality may branch a charm and create an exec.d directory in it. The exec.d directory in turn contains one or more subdirectories, each of which contains an executable called charm-pre-install and any other required resources. The charm-pre-install executables are run, and if successful, state saved so they will not be run again.
$JUJU_CHARM_DIR/exec.d/mynamespace/charm-pre-install
An alternative to branching a charm is to compose a new charm that contains the exec.d directory, using the original charm as a layer,
A charm author could also abuse this mechanism to modify the charm
environment in unusual ways, but for most purposes it is saner to use
charmhelpers.core.hookenv.atstart().
General layer info¶
Layer Namespace¶:
from charms.layer.foo import my_helper
Layer Options¶:
The
foo layer can then use the API provided by the
options layer
(which is automatically included via the
basic layer) to load the values for the
options that it defined. For example:
from charms import layer @when('flag') def do_thing(): # check the value of the "enable-bar" option for the "foo" layer if layer.options.get('foo', 'enable-bar'): hookenv.log("Bar is enabled") # or get all of the options for the "foo" layer as a dict foo_opts = layer.options.get('foo')
You can also access layer options in other handlers, such as Bash, using the command-line interface:
. charms.reactive.sh @when 'flag' function do_thing() { if layer_option foo enable-bar; then juju-log "Bar is enabled" juju-log "bar-value is: $(layer_option foo bar-value)" fi } reactive_handler_main
Note that options of type
boolean will set the exit code, while other types
will be printed out. | https://charmsreactive.readthedocs.io/en/latest/layer-basic.html | 2018-11-13T00:17:34 | CC-MAIN-2018-47 | 1542039741176.4 | [] | charmsreactive.readthedocs.io |
date
The date tag outputs a string according to the given format parameter using the given date parameter. If no date provided, the current time is used.
<cms:date />
<cms:date k_page_date />
<cms:date k_page_date
Parameters
- date
- format
- gmt
- locale
- charset
date
The date to be formated.
This parameter is expected to be in 'Y-m-d H:i:s' format (e.g. 2010-05-30 21:35:54). All date related variables set by Couch tags, e.g. k_page_date etc., are in this format.
format
The date tag supports two different types of format characters - locale-aware and non locale-aware.
With locale-aware characters, you can specify that the date is to formatted according to, for example, french locale or italian locale by setting the locale parameter.
The locale-aware characters all have a % sign prefixed to them.
The locale-aware and the non locale-aware characters cannot be intermixed.
Non Locale-aware format characters
Locale-aware format characters
gmt
By setting this parameter to '1', you can get the GMT equivalent of the date provided.
locale
If you use the locale-aware format characters mentioned above, this parameter can be set to the locale desired for formatting the provided date.
<cms:date k_page_date
<cms:date k_page_date
This feature depends entirely on the indicated locale being available at your web server. If the locale is not available, the default 'english' locale is used.
charset
Some locales do not provide their output in UTF8 character set. This causes strange ?? characters to appear in the output.
The date tag can help converting the output to UTF8 if you can provide it with information about the charset used by the locale.
For example -
<cms:date k_page_date
<cms:date k_page_date
The following is a rough list of the charset used by different languages -
ISO-8859-1 - Latin 1
Western Europe and Americas: Afrikaans, Basque, Catalan, Danish, Dutch, English, Faeroese, Finnish, French, Galician, German, Icelandic, Irish, Italian, Norwegian, Portuguese, Spanish and Swedish.
ISO-8859-5 - Cyrillic
Bulgarian, Byelorussian, Macedonian, Russian, Serbian and Ukrainian.
ISO-8859-6 - Arabic
Non-accented Arabic.
ISO-8859-7 - Modern Greek
Greek.
ISO-8859-8 - Hebrew
Non-accented Hebrew.
ISO-8859-9 - Latin 5
Same as 8859-1 except for Turkish instead of Iceland.
Variables
This tag is self-closing and does not set any variables of its own. | https://docs.couchcms.com/tags-reference/date.html | 2018-11-13T00:57:56 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.couchcms.com |
There are two functional areas included in the SIOS iQ features (Resource Optimization, Performance Optimization).
- Resource Optimization – This includes features that increase utilization rates of hardware, decrease the amount of waste in the environment, and optimize the use of compute, storage, and network components. This provides the user with the information needed to help make intelligent forecast and planning decisions.
- Performance Optimization – This includes features that help maintain predictable performance and service levels. It also helps improve the performance of the environment and helps diagnose and mitigate anomalies or other problems impacting workload performance. The specific feature under Performance Optimization is “Performance Root Cause Analysis”.
Features
- SIOS PERC Dashboard™ (PERC > SIOS PERC Dashboard™) – The following charts are available.
- Performance Issues
- Critical
- Warning
- Information
- All Performance Issues
- Efficiency Issues
- Information
- All Efficiency Issues
- Reliability Issues
- Warning
- Information
- All Reliability Issues
- Capacity Issues
- Critical
- Warning
- Information
- All Capacity Issues
- Compute Performance
- Host CPU Utilization (%)
- CPU Ready Time
- Host Memory Utilization (%)
- Memory Ballooning
- Memory Swapping
- Compute Efficiency
- Waste – Compute Cost
- Waste – vCPUs
- Waste – vMemory
- Avg VMs per Host
- Compute Reliability
- Host Uptime
- Live Migrations
- Host Failure Tolerance
- Compute Capacity
- Total CPU Utilization
- Total Memory Utilization
- Storage Performance
- Average Throughput
- Average IOPS
- Average Latency
- Storage Efficiency
- Waste – Storage Cost
- Waste – Storage Space
- Storage Acceleration Candidates
- Storage Reliability
- Storage Capacity
- Total Storage Utilization
- # of Days to Out of Capacity
- Network Performance
- Average Throughput
- Network Efficiency
- Network Reliability
- Dropped Packets
- Network Capacity
- SIOS PERC Topology – The SIOS PERC Topology is an innovative dashboard that renders complex infrastructure behaviors in a clear, condensed form that enables users to readily comprehend the overall health of their environments and clusters. The dashboard Pies give IT instantaneous access to the operating status across four key service dimensions – Performance, Efficiency, Reliability and Capacity utilization – and enable interactive exploration of infrastructure issues at every level of granularity with a single click.
- Performance Forecasting – The Performance Forecasting Dashboard provides a daily visual 7-day forecast of predicted Performance Issues due to the computational and storage resource limitations associated with environment configuration. The forecast is accompanied by a navigable Issue List that enables detailed investigation of specific predicted issues.
- Efficiency Dashboard – The Efficiency Dashboard highlights the wasted resources that can be reclaimed within the infrastructure along with the source of the waste. The Efficiency Dashboard includes Idle VMs and Snapshots. Idle V. Users are able to identify snapshots that may be deleted or merged in order to reduce waste and costs as well as improve performance. Users can sort the list of snapshots based on key metrics such as snapshot size, age, number of snapshots under the same VM, datastore capacity utilization, and cost savings.
- Rogue Snapshots – Users can reduce storage waste and save costs by identifying rogue snapshots that may be merged or deleted.
- Policies – Users can adjust the default settings for Idle VMs, Cost, Oversized VMs, Storage Acceleration, Machine Learning and Capacity Forecasting.
- Cost: Calculation Values – Monthly vCPU Cost, Monthly Memory Cost per GB, Monthly Storage Cost per GB
- Idle VMs: Parameters – Avg CPU Utilization, Percent of Avg Disk Utilization, Percent of Avg Network Utilization
- Oversized VMs: Parameters – Avg CPU Usage, Maximum Avg CPU Usage, Avg vMemory Usage, Maximum Avg vMemory Usage
- Storage Acceleration: Parameters – Read Ratio, Cache Hit Ratio
- Machine Learning: Parameter – Sensitivity
- Capacity Forecasting: Parameters – Critical Number of Days, Warning Number of Days, Information Number of Days, Percent of Capacity Used
- Post Installation Configuration – Network, Service, and Email Notification configuration information can be updated after installation.
- Event Log – Users can view informational, warning, and error events in the User Interface (Manage > Event Log).
- Audit Log – The Audit Log captures the administrative actions of the users, along with the outcome of the action (Manage > Audit Log).
- Performance Root Cause and Meta Analysis – The Performance Root Cause Analysis Dashboard provides a comprehensive and detailed overview of problems in the infrastructure detected and analyzed by the SIOS iQ Performance Meta Analysis feature, which brings Deep Learning to bear on Performance Root Cause Analysis. Deep Learning is a Machine Learning approach that helped AlphaGo master the game of Go and Deep Blue to master Chess. The incarnation of Deep Learning in SIOS iQ helps).
- Update – SIOS iQ updates are available via the User Interface.
- Environments – Users can view, add, edit, and remove Environments from the Inventory > Environments console.
Users can also view their associated virtual machines from the Inventory > Environments/Virtual Machines consoles.
- Health Status – The health status of an item is listed on the Inventory page under the sub-menus. The Health status of a particular item is determined by the highest severity level of any In-Progress events on the PERC Issue List related to that item.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.us.sios.com/siosiq/latest/en/topic/product-features | 2018-11-13T01:05:23 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.us.sios.com |
Inserting and updating data
Steps for updating a CQL-based core.
For DSE Search, inserting and updating data uses the same CQL statements like any update to the Cassandra database.
Updates to a CQL-based Solr core replace the entire row. You cannot replace only a field in a CQL table.
To update a CQL-based core:
Procedure
Building on the collections example, insert data into the mykeyspace.mytable data and Solr. | https://docs.datastax.com/en/datastax_enterprise/5.0/datastax_enterprise/srch/insertingUpdatingData.html | 2018-11-13T00:39:46 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.datastax.com |
Note: This document is based on Jelastic version 5.4.5
Depending on a type of the created node, the set of hostnames for it could differ. Thus, below we’ll consider the possible ways to refer to a particular node, hosted at Jelastic Cloud, either from inside (i.e. when managing it via Jelastic SSH Gate) or outside of the Cloud:
Being able to easily connect to Cloud services is a criteria of great importance for all of the developers. In Jelastic, each newly created node is assigned a number of automatically generated hostnames, pointed to the appropriate server internal/external IP address. | https://docs.jelastic.com/container-dns-hostnames | 2018-11-13T00:14:48 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.jelastic.com |
NoiseTexture¶
Inherits: Texture < Resource < Reference < Object
Category: Core
Brief Description¶.
Property Descriptions¶
If true, the resulting texture contains a normal map created from the original noise interpreted as a bump map.
Height of the generated texture.
- OpenSimplexNoise noise
The OpenSimplexNoise instance used to generate the noise.
Whether the texture can be tiled without visible seams or not. Seamless textures take longer to generate.
Width of the generated texture. | https://godot.readthedocs.io/en/latest/classes/class_noisetexture.html | 2018-11-13T01:21:51 | CC-MAIN-2018-47 | 1542039741176.4 | [] | godot.readthedocs.io |
INested
Container
INested Container
INested Container
INested Container
Interface
Definition
Provides functionality for nested containers, which logically contain zero or more other components and are owned by a parent component.
public interface class INestedContainer : IDisposable, System::ComponentModel::IContainer
public interface INestedContainer : IDisposable, System.ComponentModel.IContainer
type INestedContainer = interface interface IContainer interface IDisposable
Public Interface INestedContainer Implements IContainer, IDisposable
- Derived
-
- Implements
-
Remarks
The INestedContainer interface adds the concept of an owning component to the IContainer interface. A nested container is an object that logically, but not necessarily visually, contains zero or more child components and is owned by some parent component. For visual containment, the owning component is often another container.
Nested containers allow sections of a control to be designable, without requiring an explicit serialized member variable or a custom serializer for each subcontrol. Instead, the form designer maintains one master container of components. Each component’s site may have a nested container that provides a place to put extra components. When a component is sited in a nested container, the name it receives is a combination of its given name and its owning component’s name. Additionally, components added to a nested container have full access to the services of the parent container, and the nested container provides the same behavior of the parent with respect to adding new components. The nested container will create the designer for each component it contains, thereby enabling design-time support. Because standard code serializers do not look at nested containers, these components are only serialized if a path to them can be obtained by walking the components in the primary container.
Nested containers can be found by querying a component's site for services of type INestedContainer. | https://docs.microsoft.com/en-US/dotnet/api/system.componentmodel.inestedcontainer?view=netframework-4.7.1 | 2018-11-13T00:24:20 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.microsoft.com |
Changes
To see the fixes and features included in our latest official release please refer to our Release History .
R2 2017
What's Changed
BingMapProvider, BingRouteProvider, BingSearchProvider, BingGeocodeProvider are deleted.
The providers use the old Bing SOAP services which will be stopped in June 2017. Instead, you can use the new REST map provider.
The BingMapProvider can be easility replaced with BingRestMapProvider using the same Bing Key. BingRestMapProvider is REST services based and displays the same image map tiles the BingMapProvider does.
BingRouteProvider and BingGeocodeProvider can be replaced with BingRestMapProvider.
BingSearchProvider has no replacement currently in RadMap. This is because Bing stops the SOAP Search API in June 30 2017 and they also have no current replacement for search. They might release a new REST based search API as stated in this forum post.
BingMapTrafficProvider is deleted.
This is due to a limitation in the Bing Maps developer API terms of use. Also the service which stays behind this provider might be stopped at some point, so if you use RadMap with old version (before R2 2017), this provider might also stop working.
Currently there is no direct replacement of BingMapTrafficProvider. Instead, you can use Bing REST Traffic API and retrieve traffic information. Then you can use the data and display map objects over RadMap. You can check the Using the REST Services with .NET MSDN tutorial.
Q1 2014
What's Fixed
Fixed: The AsyncShapeFileReader does not read very small DBF-files
Fixed: BingMapProvider memory leaks
Fixed: InvalidOperationException is thrown when using VisualizationLayer and changing themes runtime
Fixed: Specific PathData is not displayed in VisualizationLayer
What's New
- Feature: Add ability to setup RectangleGeometryData using coordinates of the top-left (NW) and bottom-right (SE) corners of the rectangle | https://docs.telerik.com/devtools/silverlight/controls/radmap/changes-and-backward-compatibility/changes | 2018-11-13T00:04:49 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.telerik.com |
DriveWorks makes use of the SOLIDWORKS document manager to maintain references to the locations of the captured SOLIDWORKS models.
The SOLIDWORKS document manager is installed by default with SOLIDWORKS. Corrupt installations of the document manager will display the following message when attempting to re-reference captured SOLIDWORKS models using the DriveWorks Data Management Tool.
The SOLIDWORKS installation can be repaired using the Add/Remove programs feature of Windows. Select your SOLIDWORKS installation and click Uninstall/Change | http://docs.driveworkspro.com/Topic/InfoSolidWorksDocumentManager | 2018-11-13T01:41:33 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.driveworkspro.com |
You Cannot Create a Profile for a New Mailbox User in Exchange 2007
Microsoft Exchange Server 2007 will reach end of support on April 11, 2017. To stay supported, you will need to upgrade. For more information, see Resources to help you upgrade your Office 2007 servers and clients.
This topic provides information about how to troubleshoot a scenario in which you cannot create a profile for a new mailbox user in Microsoft Exchange Server 2007.
When you try to create a profile for a new user, you receive the following error message:
This issue may occur after you migrate users from Exchange Server 2003 to Exchange Server 2007. Additional symptoms may include the following:
The migrated users cannot see other users in the online global address list (GAL) in Microsoft Office Outlook 2003.
Mail-enabled contacts and distribution lists (DL) may be missing from the GAL.
This issue may occur if the value of the purportedSearch attribute in the default GAL is not the correct value.
To resolve this issue, use Active Directory Service Interfaces (ADSI) Edit to change the value of the purportedSearch attribute.
Procedure
To use ADSI Edit to change the value of the purportedSearch attribute
Start ADSI Edit.
Expand the Configuration container, and then expand CN=Services/ CN=Microsoft Exchange/CN=<ExchangeOrganizationName>.
Click CN=System Policies.
In the right pane, right-click CN=Mailbox Enable User, and then click Properties.
In the Attributes list, click purportedSearch, and then click Edit.
Click Clear, and then paste the following value in the Value box:
(& ) )).
Click OK two times, and then close ADSI Edit.
Start the Exchange Management Shell.
Run the following command:
Update-GlobalAddressList -Identity "<distinguished name of Default GAL>" | https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007/dd285508(v=exchg.80) | 2018-11-13T01:11:02 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.microsoft.com |
9th EORNA Congress, May 16-19, 2019, The Netherlands
Start Date : May 16, 2019
End Date : May 19, 2019
Time : 8:00 am to6:00 pm
Phone : +49 30 246 03 0
Location :
World Forum, Churchillplein 10, Den Haag, 2517 JW, Netherlands
Description!
URL:
Inquiries:
Price:
Early Bird Registration, starts at: EUR 100.0
Registration Info
Organized by
Organized by LoveEvvnt1
K.I.T. Group GmbH
Event Categories: Internal Medicine. | http://meetings4docs.com/event/9th-eorna-congress-may-16-19-2019-the-netherlands/ | 2018-11-13T00:12:59 | CC-MAIN-2018-47 | 1542039741176.4 | [] | meetings4docs.com |
Remove a domain from Office 365
Check the Domains FAQ if you don't find what you're looking for.
Are you removing your domain because you want to add it to a different Office 365 subscription plan? Or do you just want to cancel your subscription? You can change your plan or subscription or cancel your subscription.
Step 1: Move users to another domain
Choose Users > Active Users.
Select the boxes next to the names of the users who you want to move.
In the Bulk actions pane, choose Edit domains.
In the Edit domains pane, choose a different domain.
Choose Set as primary, then choose Save.
You'll need to do this for yourself, too, if you're on the domain that you want to remove. When you edit the domain for your account, you'll have to log out and log back in using the new domain you chose to continue.
For example, if you're logged in as *[email protected]* :
Go to Users > Active Users, select your account from the list, and then choose Edit in the Username row in the left pane.
Choosea different domain: contoso.com
Choose Set as primary, choose Save, and then Close.
At the top, choose your account name, then choose Sign Out.
Sign in with the new domain and your same password: *[email protected]*
You can also use PowerShell to move users to another domain. See Set-MsolUserPrincipalName for more information. To set the default domain, use Set-MsolDomain.
Step 2: Move groups to another domain
Choose Groups > Groups.
Select the box for any group or distribution listassociated with the domain that you want to remove.
In the right pane, next to the group name, choose Edit.
Under Group Id, use the drop-down to choose another domain.
Choose Save, then Close. Repeat this process for any groups or distribution lists associated with the domain that you want to remove.
Step 3: Remove the old domain
Choose Setup > Domains.
On the Domains page, choose the domain that you want to remove.
In the right pane, choose Remove.
Follow any additional prompts, and then choose Close.
How long does it take for a domain to be removed?
It can as little as 5 minutes for Office 365 to remove a domain if it's not referenced in a lot of places such as security groups, distribution lists, users, and Office 365 groups. If there are many references that use the domain it can take several hours (a day) for the domain to be removed.
If you have hundreds or thousands of users, use PowerShell to query for all users and then move them to another domain. Otherwise, it's possible for a handful of users to be missed in the UI, and then when you go to remove the domain, you won't be able to and you won't know why. See Set-MsolUserPrincipalName for more information. To set the default domain, use Set-MsolDomain.
Still need help?
Note
You can't remove the ".onmicrosoft.com" domain from your account.
Still not working? Your domain might need to be manually removed. Give us a call and we'll help you take care of it! | https://docs.microsoft.com/en-us/office365/admin/get-help-with-domains/remove-a-domain?redirectSourcePath=%252far-sa%252farticle%252f%2525D8%2525A5%2525D8%2525B2%2525D8%2525A7%2525D9%252584%2525D8%2525A9-%2525D9%252585%2525D8%2525AC%2525D8%2525A7%2525D9%252584-%2525D9%252585%2525D9%252586-Office-365-F09696B2-8C29-4588-A08B-B333DA19810C&view=o365-worldwide | 2018-11-13T01:29:38 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.microsoft.com |
This section lists all newly released Webtrekk features and products. In addition, you will be informed about current changes and additions to the online documentation so that you are always up to date.
If you want to keep track of Webtrekk release notes updates subscribe to our RSS-feed. Please note that you will need an RSS-newsreader to read a feed. | https://docs.webtrekk.com/display/RN/Webtrekk+Release+Notes | 2018-11-13T01:31:23 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.webtrekk.com |
- .gif via tesseract-ocr
- .jpg and .jpeg via tesseract-ocr
- .json via python builtins
- .html via beautifulsoup4
- .odt via python builtins
- .pptx via python-pptx
- .pdf via pdftotext (default) or pdfminer
- .png via tesseract-ocr
- .ps via ps2text
- .txt via python builtins
Please recommend other file types by either mentioning them on the issue tracker or by contributing | https://textract.readthedocs.io/en/v0.5.1/ | 2018-11-13T00:12:12 | CC-MAIN-2018-47 | 1542039741176.4 | [] | textract.readthedocs.io |
To access the Partition Initialization dialog, in the Visual LANSA Logon dialog, enter your User ID and Password or select Use Windows Credentials and press the Partition Init... button before you press the OK button.
Partition initialization must be performed for each new partition created, on each PC that has its own local Repository, including slave workstations and independent workstations. Visual LANSA Client PCs co-operating in a Network Install do not require Partition Initialization.
The development environment will not allow a partition to be used by a PC until the mandatory Partition Initialization has been done. Partition initialization is automatically displayed when you first create or access a new partition.
You may use Partition Initialization to add or update options in an existing partition. For example, you may not have included the Visual LANSA Framework when the partition was first initialized. At a later date, you can select partition initialization again to add the Visual LANSA Framework to the partition.
You may select the following options:
When you have initialized a partition, you will be provided with a report, if any errors have occurred. Refer to 4.4.7 Show Last Log Button for further information.
When the first time a Partition Initialization is performed, the following dialog is displayed:
This dialog lists the codepages and CCSID that are used for multilingual text conversions. Principally this is used when exporting from IBM i to Windows and when using the Host Monitor, but it is also used when exporting from Windows to another Windows repository. Refer to Language Options for more information on this process.
The purpose of the dialog is to
a) make you aware that LANSA has to make these mapping decisions and,
b) precisely what decisions LANSA has made in assigning codepages and CCSID to each language.
It is only shown the very first time that a Partition Initialization is performed. If this dialog is not shown at that time then it is possible that a communications error has prevented the retrieval of the CCSID mappings from the server. In this case, an error message will have been added to the import log. To fix the problem, check the listener is started and look in the job logs on the server for further information. If the CCSID mappings were retrieved successfully you will find them in langmap.txt for each partition and language in the Installation Details tab of the Product Information which you open from the Visual LANSA Editor.
Also See
4.3 System Initialization | https://docs.lansa.com/14/en/lansa011/content/lansa/l4wadm02_0025.htm | 2018-11-13T00:43:49 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.lansa.com |
Backing Up and Recovering Websites
Plesk provides the backing up and restoration facilities that you can use to prevent the loss of your data.
During the backup operation, Plesk saves your account settings and websites' content into a number of files. You can create a number of backups and restore any of them at any time later.
This chapter describes how to perform the following operations in Plesk:
- Backing up all data related to your account and sites. Learn more in the section Backing Up Account and Websites.
- Scheduling backups. Learn more in the section Scheduling Backups.
- Restoring data from backup archives. Learn more in the section Restoring Data. | https://docs.plesk.com/en-US/12.5/administrator-guide/website-management/backing-up-and-recovering-websites.70620/ | 2018-11-13T00:31:35 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.plesk.com |
API Portal 7.5.5 Administrator Guide Customize API Portal look and feel This section provides the basic information you need to get started with customizing your branded API Portal, including: Customize with ThemeMagic Customize your logo Customize CSS styles Customize forums Add a custom stylesheet For internally-facing API deployments, you can deploy API Portal "as is" using the out-of-the-box Axway branding. This this type of deployment requires no customization. For external-facing API deployments, you may want to customize API Portal to provide a branded developer portal experience. This type of deployment contains a collection of style settings that can be configured in your account, including logos, colors, fonts or you can perform advanced modification of the layout and structure. Supported API Portal customization Customization can be performed at three levels: Customization through configuration: Use the Joomla! Admin Interface (JAI) (https://<API Portal host>/administrator) to change CSS stylesheets, templates, and layouts. These types of customizations are can be upgraded and retained when you move to new version. The customization does not modify the API Portal source code and is supported by Axway. Customization through code: API Portal is developed using the PHP scripting language and the source code is provided. This is how Joomla! applications are deployed. You can modify the PHP source code to customize API Portal, such as to change the functionality of pages and to extend by adding new pages. Caution The customizations are lost when you upgrade. The source code is subject to frequent changes without notice; therefore, you must reintegrate customizations into the new API Portal code to avoid restoring a deprecated code along with the customizations. If you submit a case to Axway Support and it is suspected that the customizations may be the root cause of the issue, you must reproduce the issue on a non-customized API Portal. This type of customization is only recommended for customers with Joomla!/PHP experience that need to deploy a highly tailored developer portal. Customization through the addition of Joomla! plug-ins: The Joomla! CMS offers thousands of extensions that are available from their website. Axway is only responsible for the support to extensions that are delivered out of the box (EasyBlog and EasyDiscuss). Caution If you submit a case to Axway Support and it is suspected that unsupported third-party extensions may be the root cause of the issue, you must reproduce the issue on a non-customized API Portal. Prerequisites To get started with customization, you need the following: API Portal installed and configured. For more details, see the API Portal Installation and Upgrade Guide. An API Portal user account. When you log in, the default API Portal web page is displayed, so you can check how the changes look to your end users. Basic understanding of Joomla! ThemeMagic. This feature enables to change CSS stylesheets, templates, and layouts. For more advanced modifications, you can modify the PHP source code to customize API Portal, such as to change the functionality of pages and to extend by adding new pages. Related topics Customize with ThemeMagic Customize your logo Customize CSS styles Add a custom stylesheet Customize API Portal page content API Portal overview Related Links | https://docs.axway.com/bundle/APIPortal_755_AdministratorGuide_allOS_en_HTML5/page/Content/APIPortalAdminGuideTopics/customize_getting_started.htm | 2018-11-13T00:44:08 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.axway.com |
Enable certificate revocation list checking for improved security
When you enable certificate revocation list (CRL) checking, Citrix Workspace app checks to see if the server’s certificate is revoked. Forcing Citrix Workspace app to check this helps improves the cryptographic authentication of the server and the overall security of the TLS connection between the user device and a server.
You can enable CRL checking at several levels. For example, you can configure Citrix Workspace app to check only its local certificate list or to check the local and network certificate lists. In addition, you can configure certificate checking to allow users to log on only if all the CRLs are verified.
If you are making this change on your local computer, exit Citrix Workspace app. Make sure all the Citrix Workspace components, including the Connection Center, are closed.
For information about configuring TLS, see Configure and enable TLS | https://docs.citrix.com/en-us/citrix-workspace-app-for-windows/authentication/enable-cert-revocation-list.html | 2018-11-13T01:34:41 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.citrix.com |
The metrics component provides metrics from DC/OS cluster hosts, containers running on those hosts, and from applications running on DC/OS that send StatsD metrics to the Mesos Metrics Module. The metrics component is natively integrated with DC/OS and is available per-host from the
/system/v1/metrics/v0 HTTP API endpoint.
OverviewOverview
DC/OS provides these types of metrics:
- Host: metrics about the specific node which is part of the DC/OS cluster.
- Container: metrics about cgroup allocations from tasks running in Mesos or Docker containerizers.
- Application: metrics about an application running inside the DC/OS Universal Container Runtime.
The Metrics API exposes these areas.
All three metrics layers are aggregated by a collector which is shipped as part of the DC/OS distribution. This enables metrics to run on every host in the cluster. It is the main entry point to the metrics ecosystem, aggregating metrics sent to it by the Mesos Metrics module, or gathering host and container level metrics on the machine on which it runs.
The Mesos Metrics module is bundled with every agent in the cluster. This module enables applications running on top of DC/OS to publish metrics to the collector by exposing StatsD host and port environment variables inside every container. These metrics are appended with structured data such as
agent-id,
framework-id, and
task-id. DC/OS applications discover the endpoint via an environment variable (
STATSD_UDP_HOST or
STATSD_UDP_PORT). Applications leverage this StatsD interface to send custom profiling metrics to the system.
For more information on which metrics are collected, see the Metrics Reference.
Quick Start
BETA
Use this guide to get started with the DC/OS metrics component. The metrics component is natively integrated with DC/OS and no additional setup is required.…
Metrics API
BETA
You can use the Metrics API to periodically poll for data about your cluster, hosts, containers, and applications. …
Metrics Reference
These metrics are automatically collected by DC/OS.…
Sending DC/OS Metrics to Datadog
BETA
The Datadog metrics plugin supports sending metrics from the DC/OS metrics service directly to DatadogHQ. The plugin includes the function of the Datadog agent. You must install a plugin on each node in your cluster. This plugin works with DC/OS 1.9.4 and higher.…
Sending DC/OS Metrics to Prometheus
The Prometheus metrics plugin supports sending metrics from the DC/OS metrics service to a Prometheus server. You must install a plugin on each node in your cluster. This plugin works with DC/OS 1.9.4 and higher.… | https://docs.mesosphere.com/1.9/metrics/ | 2018-11-13T01:12:31 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.mesosphere.com |
To Add Whitelist Addresses for API Platform Proxies
This topic describes how to add whitelist addresses for API Platform Proxies in Anypoint Platform Private Cloud Edition. By default, proxy requests are enabled to the domain name where the platform is running.
To add whitelist addresses:
Login to Ops Center, the click then select Configuration.
Select the
api-platform-web-configconfig map from the drop-down list.
Locate the
featuresobject in the local.js tab:
features: { alerts: false, analytics: false, cloudHubDeployment: false, hybridDeployment: true, segmentio: false, proxyWhitelist: true }
Ensure that the
proxyWhitelistproperty is set to
true.
Locate the
proxyWhitelistobject in the same tab.
proxyWhitelist: { allowLocalhost: false, allowPlatformHost: true, rules: [] }
The
proxyWhitelistobject contains the following properties:
Update the
rulesarray as necessary. The following example shows how to define regular expressions to allow requests to be made to the
.somewhere.com/and
.somewhereelse.com/domains, where * can be any part of a DNS name or URL:
proxyWhitelist: { allowLocalhost: false, allowPlatformHost: true, rules: [ /.*\.somewhere\.com/, /.*\.somewhereelse\.com/ ] }
Click Apply to save changes to the
api-platform-web-configconfig map.
Recreate the pod to ensure each node in the cluster uses the most current configuration:
kubectl delete pod -l microservice=api-platform-web | https://docs.mulesoft.com/anypoint-private-cloud/v/1.6/config-add-proxy-whitelist | 2017-10-17T01:55:45 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.mulesoft.com |
Car Accident Help
Did you know that chiropractic care and active rehabilitation greatly improves the recovery from motor vehicle accident (MVA) injuries. Most common misconception regarding whiplash is that the injury has healed when the symptoms have disappeared. If the underlying mechanical dysfunction of whiplash has not been corrected the symptoms will return and become chronic.
- Whiplash Symptoms Include:
- Neck, back and shoulder pain
- Stiffness in the spine
- Headaches
- Nausea/dizziness
- Loss of ranges of motion
- Difficulty concentrating
- Fatique
- Muscle aches, and weakness
The best self treatment immediately after the accident is applying ice to the injured area. We also recommend an evaluation after an accident of any severity, regardless of how mild it may seem.
Treatment and rehabilitation is covered by your insurance. We also work closely with your insurance adjuster, medical doctor, and your lawyer to get your claim processed. | http://chiro-docs.com/index.php?p=416710 | 2017-10-17T01:41:44 | CC-MAIN-2017-43 | 1508187820556.7 | [] | chiro-docs.com |
Transforms¶
YSLD allows for the use of rendering transformations. Rendering transformations are processes on the server that are executed inside the rendering pipeline, to allow for dynamic data transformations. In GeoServer, rendering transformations are typically exposed as WPS processes.
For example, one could create a style that applies to a point layer, and applies a Heatmap process as a rendering transformation, making the output a (raster) heatmap.
Because rendering transformations can change the geometry type, it is important to make sure that the symbolizer used matches the output of the rendering transformation, not the input. In the above heatmap example, the appropriate symbolizer would be a raster symbolizer, as the output of a heatmap is a raster.
Syntax¶
The full syntax for using a rendering transformation is:
feature-styles ... transform: name: <text> params: <options> rules: ...
where:
The values in the params options typically include values, strings, or attributes. However, it can be useful with a transformation to include environment parameters that concern the position and size of the map when it is rendered. For example, the following are common reserved environment parameters:
With this in mind, the following params are assumed unless otherwise specified:
params: ... outputBBOX: ${env('wms_bbox')} outputWidth: ${env('wms_width')} outputHeight: ${env('wms_height')} ...
Note
Be aware that the transform happens outside of the rules and symbolizers, but inside the feature styles.
Examples¶
Heatmap¶
The following uses the vec:Heatmap process to convert a point layer to a heatmap raster:
title: Heatmap feature-styles: - transform: name: vec:Heatmap params: weightAttr: pop2000 radiusPixels: 100 pixelsPerCell: 10 rules: - symbolizers: - raster: opacity: 0.6 color-map: type: ramp entries: - ['#FFFFFF',0,0.0,nodata] - ['#4444FF',1,0.1,nodata] - ['#FF0000',1,0.5,values] - ['#FFFF00',1,1.0,values]
Point Stacker¶
The point stacker transform can be used to combine points that are close together. This transform acts on a point geometry layer, and combines any points that are within a single cell as specified by the cellSize parameter. The resulting geometry has attributes geom (the geometry), count (the number of features represented by this point) and countUnique (the number of unique features represented by this point). These attributes can be used to size and label the points based on how many points are combined together:
title: pointstacker feature-styles: - transform: name: vec:PointStacker params: cellSize: 100 rules: - symbolizers: - point: size: ${8*sqrt(count)} symbols: - mark: shape: circle fill-color: '#EE0000' - filter: count > 1 symbolizers: - text: fill-color: '#FFFFFF' font-family: Arial font-size: 10 font-weight: bold label: ${count} placement: anchor: [0.5,0.75]
Point stacker | http://docs.geoserver.org/latest/en/user/styling/ysld/reference/transforms.html | 2017-10-17T02:08:28 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['../../../_images/transforms_pointstacker.png',
'../../../_images/transforms_pointstacker.png'], dtype=object)] | docs.geoserver.org |
aarch64 Machine Image v5.9.4 (Docker TAG v5.9.4)
Release Date: September 28, 2017
What is installed
- Operating System: Ubuntu 16.04
- Kernel Version: 4.10.0-26-generic
- Docker Server Version: 17.06
- Storage Driver: aufs
- Storage Root Dir: /var/lib/docker/aufs
- Docker Root Dir: /var/lib/docker
- Backing Filesystem: extfs
- Dirperm1 Supported: true
- Cgroup Driver: cgroupfs
Note: Only custom nodes are currently supported. Builds on dynamic nodes for aarch64 architecture are not supported for this version of AMI
Shippable Official Docker Images
These are the images used to run your CI jobs. The default image is picked up
based on the
language you set in your yml. All these images are available on
our Docker drydockaarch64 Hub. The source code is
available on our Github dry-dock-aarch64 org
If you would like to use your own CI images in place of the official images, instructions are described here
These are the official language images in this version
Common components installed
Packages
- build-essential
- curl
- gcc
- gettext
- git
- htop
- jq
- libxml2-dev
- libxslt-dev
- make
- nano
- openssh-client
- openssl
- python-dev
- python-pip
- python-software-properties
- software-properties-common
- sudo
- texinfo
- unzip
- virtualenv
- wget
CLIs
- ansible 2.3.0.0
- awscli 1.11.91
- awsebcli 3.9.0
- gcloud 160.0.0
- kubectl 1.5.1
- yarn - 0.24.5-1
Python
OS Versions
- Ubuntu 16.04
Language Versions These versions are pre-installed on u16pyt image
- 2.7.12
Additional packages
- Common components
- virtualenv
- Java 1.8
- Node 7.x
- Ruby 2.3.3 | http://docs.shippable.com/platform/tutorial/runtime/ami-v594-aarch64/ | 2017-10-17T01:56:07 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.shippable.com |
Message-ID: <729436653.1439.1425018154204.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_1438_322798425.1425018154204" ------=_Part_1438_322798425.1425018154204 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Continuous Integration is what gives you confidence that your project is= healthy. By continuously compiling, unit testing and running reports again= st your project you can gain confidence that there are few "gotchas&qu= ot; when it comes time to release.=20
This guide will recommend installing continuous integration on a single = machine and to make this guide a bit more readable this machine will be cal= led NUCLEUS.=20
There are quite a few continuous integration solutions available.=20
This guide will walk through Continuum=20
Shell access is needed to exectute commands and to configure Continuum. = This may be done in conjunction with a sysadmin if you do not have the requ= ired privileges. Recommend a "continuum" user with group "ma= ven" with home directory "/usr/local/continuum".=20 a= ccounts.=20
You need ssh access to the NUCLEUS server to deploy both the maven artif= acts and the web site.=20
See .html for access to in.tar.gz=20
As the continuum user follow the instructions at rted/index.html and in Better Builds with Maven=20
If you don't have web browsable CVS for accessing your POMs then you wil= l need to allow Continuum to load a POM from file. Remove the comments arou= nd the allowedScheme for file.=20
<implementation>org.codehaus.plexus.formica.validation.UrlValida= tor</implementation> <configuration> <allowedSchemes> ... <allowedScheme>file</allowedScheme> </allowedSchemes> </configuration>=20
Then manually checkout your project and reference the file's location on= disk when adding the project to Continuum=20 proje= ct, so run the following command as the continuum user and use an empty pas= sword:=20
cvs -d:pserver:anonymous@HOST:CVSROOT login=20
Run mvn for the first time, it will create the ~/.m2 directory structure= for you.=20
You will then need to create a ~/.m2/settings.xml file and customize it = to suit the environment. It should be very similar to the settings.xml file= needed for a developer. You will need to add the inhouse servers to the li= st (as the settings file assumes developers do not deploy to the inhouse re= positories directly)=20
<server> <id>inhouse</id> <username>continuum</username> <privateKey>/usr/local/continuum/.ssh/id_rsa</privateK= ey> <filePermissions>664</filePermissions> <directoryPermissions>775</directoryPermissions> </server> <server> <id>inhouse_snapshot</id> <username>continuum</username> <privateKey>/usr/local/continuum/.ssh/id_rsa</privateK= ey> <filePermissions>664</filePermissions> <directoryPermissions>775</directoryPermissions> </server>=20
Run=20
ssh-keygen -t rsa -C "Continuum Key for NUCLEUS"=20
And use an empty passphrase.=20
Ensure the ~/.ssh directory has been created with appropriate permission= s, and if not then:=20
chmod -R go-rwx ~/.ssh/=20
Assuming the authorized_keys files DOES NOT EXIST=20
cd ~/.ssh cp id_rsa.pub authorized_keys=20
Create "/etc/init.d/continuum" with the following contents:=20
#!/sbin/sh # CONTINUUM_HOME=3D/usr/local/continuum USER=3Dcontinuum if [ ! -d ${CONTINUUM_HOME} ]; then echo "Error: CONTINUUM_HOME does not exist: ${CONTINUUM_HOME}&= quot; exit 1 fi case "$1" in start) su ${USER} -c "${CONTINUUM_HOME}/bin/continuum start" ;; restart) su ${USER} -c "${CONTINUUM_HOME}/bin/continuum restart" ;; stop) su ${USER} -c "${CONTINUUM_HOME}/bin/continuum stop" ;; *) echo "Usage: $0 {start|stop|restart}" exit 1 ;; esac=20
Continuum should only be running in run state 3. See init for more detai= ls.=20
ln -s /etc/init.d/continuum /etc/rc3.d/S99continuum=20
If you don't have a web browsable CVS then the workaround is to manually= checkout the module on the host and then use a file reference to the pom.<= /p>=20
mkdir ~/tmp-cvs cd ~/tmp-cvs cvs -d:pserver:anonymous@HOST:CVSROOT co MODULE=20
If prompted for a password enter it as it will get stored into ~/.cvspas= s as continuum will need to use this file at run time for the password= =20
Once checked out you can then provide an absolute reference via file url= , for example:=20
If the project you are adding contains module definitions then these wil= l also get created as separate projects in Continuum.=20
Often as you are developing you might not have all the modules defined b= efore the project has been added to Continuum. Currently once a project has= been added to Continuum it does not check if the module definitions have c= hanged and automatically add the module as a project.=20.=20
The default goals for Maven are "clean, install". However sinc= e Continuum is continuously integrating the projects it makes sense for Con= tinuum to also deploy the snapshot versions when the build is successful. T= his means the goals should be "clean, deploy". Unfortunately ther= e is no way to mass edit these entries and no way to specify that this shou= ld be the default, so you must manually modify all your projects as they ge= t added.=20
The nightly build should create and deploy the site as well so that the = project documentation is kept up to date.=20
Unfortunately there appears to be no way to tell Continuum that a second= build definition should be forced to run and because the primary build def= initions have completed successfully the secondary build definition fails t= C= ontinuum.=20
Goals =3D site site:deploy Arguments =3D --batch-mode POM File =3Dpom.xml Profile =3D DEFAULT Schedule =3D NIGHTLY_SITE_BUILD From =3D Project Where NIGHTLY_SITE_BUILD is (runs at 7:15 pm mon-fri) Name =3D NIGHTLY_SITE_BUILD Description =3D Build and deploy the site nightly Cron =3D 0 15 19 ? * MON-FRI Quiet Period (seconds) =3D 0=20
tail -f ~/continuum-1.0.3/logs/wrapper.log=20
Continuum checkouts the projects into its working directly. Located at:<= /p>=20
~/continuum-1.0.3/apps/continuum/working-directory.=20=20
If there are any failures with Continuum you should see useful informati= on as to the reason in the wrapper.log file. Often the failure is because t= he maven commands Continuum is attempting to run are requiring human interv= ention and hence are failing when run in batch mode.=20
Manually run the same command that Continuum is trying to run in the wor= king directory of the failing project and ensure that it runs correctly. On= ce you have fixed any errors that cause the build to run incorrectly then C= ontiuum should also be able to build your project. | http://docs.codehaus.org/exportword?pageId=46172 | 2015-02-27T06:22:34 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.codehaus.org |
example, change the resources to variables:
to
This uses the following variable search order to determine the default directory to display:
- TargetPanel.dir.<platform symbolic name> e.g. TargetPanel.dir.windows_7
- TargetPanel.dir.<platform name> e.g. TargetPanel.dir.windows
- TargetPanel.dir.<parent platform> e.g. given platform "FEDORA_LINUX" looks for TargetPanel.dir.linux, followed by TargetPanel.dir.unix
- TargetPanel.dir
- DEFAULT_INSTALL_PATH
- SYSTEM_user_dir corresponds to the system property "user.dir"
Available platforms can be found in the class Platforms. The names are the lowercase versions of those defined.
Allowed names include:
- a_8
- windows_xp
- windows_2003
- windows_vista
The DEFAULT_INSTALL_PATH variable is initialised to <PARENT_DIRECTORY>/$APP_NAME where <PARENT_DIRECTORY> is determined by:
Changes from 4.3.6
Changes from earlier versions
Prior to 4.3.6, resources were used rather than variables. Resources were searched for with the following names
- "TargetPanel.dir." + lower case version of System.getProperty("os.name"), with any spaces replace with underscores
- "TargetPanel.dir." + <platform> where platform is one of ("windows", "mac", "unix")
- "TargetPanel.dir"
IZPACK-798 changed the above to use variables instead of text files, following the same naming convention. | http://docs.codehaus.org/pages/diffpages.action?originalId=233053632&pageId=229740293 | 2015-02-27T06:18:05 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.codehaus.org |
A software framework contains a set of application development tools for developers. Joomla has been designed for extensibility. The scope of this tutorial will be to develop a simple Hello World! module using the convention and coding standards employed by Joomla!.
This tutorial is supported by the following versions of Joomla! | https://docs.joomla.org/index.php?title=User:Rvsjoen/tutorial/Developing_a_Module&diff=65618&oldid=65588 | 2015-02-27T06:33:01 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.joomla.org |
:
At this point you have installed the database.
If you want to create a database copy, you can use also you,)
After you create the Database for your Joomla download and install Akeeba, it can be download from Joomla extension directory. There is a link to full instructions there as well. | https://docs.joomla.org/index.php?title=Copying_a_website_from_localhost_to_a_remote_host&diff=73537&oldid=9974 | 2015-02-27T06:02:23 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.joomla.org |
User Enrollment
The steps to enroll a device in SimpleMDM via User Enrollment are:
- Create a new enrollment profile.
- Set 'User Enrollment' to "Yes".
- Save the profile.
- Enter the enrollment URL into Safari on your device (or send the enrollment link directly to the device and have the user tap the link).
- You will be prompted to enter a Managed Apple ID. Enter your Managed Apple ID username/email address (from Apple Business Manager) here and tap 'Continue'.
- Tap 'Download Enrollment Profile'.
- Safari will ask if you want to allow the configuration profile to be downloaded. Select "Allow".
- Open the Settings app on the device.
- At the top of the Settings menu, find the option labeled "Enroll in SimpleMDM" and tap it.
- When prompted, select "Enroll My iPhone".
- Enter the password for your Managed Apple ID.
After entering your password, your device should be authenticated and the user enrollment should be complete.
Note: User Enrollment is only available for iOS 13+ and macOS 10.15+.
For more information about User Enrollment, see this article: What is Apple's "User Enrollment"?
For guidance on creating Managed Apple IDs in Apple Business Manager, refer to Apple's documentation here: Create Managed Apple IDs in Apple Business Manager | https://docs.simplemdm.com/article/122-user-enrollment | 2021-07-24T01:58:38 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.simplemdm.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.