content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
GlusterFS with Kubernetes
Steps to use GlusterFS in Kubernetes for Seldon.
AWS
Assumes you wish to create your glusterfs cluster in a separate VPC from Kubernetes so its lifetime is not connected to that of the Kubernetes cluster.
- Create an AWS VPC
- Ensure ip range does not overlap with Kubernetes default, e.g. use 192.*
- Create a GlusterFS cluster in VPC
- Two t2.micro instances minimum
- Create a Kubernetes Cluster
- Add glusterfs software for client to each minion and also master if you wish
- Create a VPC Peering connection in glusterFS VPC to Kubernetes VPC
- Create Peering connection and accept request.
- Edit glusterfs routing table to allow traffic to kubernetes
- There will be two routing tables. Choose the routing table with the subnet. Add the ip range for kubernetes (usually 172.20.0.0/16). Choose the Peering connection as destination.
- Edit kubernetes routing table to allow traffic to glusterfs
- There will be two routing tables. Choose the routing table with the subnet. Add the ip range for glusterfs (for example 192.168.0.0/16). Choose the Peering connection as destination.
- Update security group for glusterfs inbound to allow kubernetes traffic
- Update security group for kubernetes inbound to allow glusterfs traffic
- Test mount glusterfs volume on master or minion node, e.g.
mkdir /mnt/glusterfs,
mount.glusterfs 192.168.0.149:/gv0 /mnt/glusterfs
- Ensure you follow docs for glusterfs use in Seldon
Other Cloud Providers
Contributions welcome. | http://docs.seldon.io/glusterfs.html | 2017-07-20T18:30:30 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.seldon.io |
For definition (.rdl) file and other resource files you need for your report.
In the following lessons, you define a data source for your report, define a dataset, and define the report layout. When you run the report, the data is retrieved and combined with the layout, and then rendered on your screen. From there you can export it, print it, or save it.
To create a report server project
Open SQL Server Data Tools.
On the File menu > New > Project.
Under Installed > Templates > Business Intelligence, click Reporting Services.
Click Report Server Project
.
Note: If you don't see the Business Intelligence or Report Server Project options, you need to update SSDT with the Business Intelligence templates. See Download SQL Server Data Tools (SSDT)
In Name, type Tutorial.
By default, it's created in your Visual Studio 2015\Projects folder in a new directory.
Click OK to create the project.
The Tutorial project is displayed in the Solution Explorer pane on the right.
To create a new report definition file
In the Solution Explorer pane, right-click the Reports > Add > New Item.
Tip: If you don't see the Solution Explorer pane, on the View menu, click Solution Explorer.
In the Add New Item window, click Report
..
Next lesson
You have successfully created a report project called "Tutorial" and added a report definition (.rdl) file to the report project. Next, you will specify a data source to use for the report. See Lesson 2: Specifying Connection Information (Reporting Services).
See Also
Create a Basic Table Report (SSRS Tutorial) | https://docs.microsoft.com/en-us/sql/reporting-services/lesson-1-creating-a-report-server-project-reporting-services | 2017-07-20T19:30:40 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.microsoft.com |
Single Container Docker Configuration
This section describes how to prepare your Docker image and container for uploading
to
Elastic Beanstalk. Any web application that you deploy to Elastic Beanstalk in single-container
Docker container must
include a
Dockerfile, which defines a custom image, a
Dockerrun.aws.json file, which specifies an existing image to use and
environment configuration, or both. You can deploy your web application from a Docker
container
to Elastic Beanstalk by doing one of the following:
Create a
Dockerfileto customize an image and to deploy a Docker container to Elastic Beanstalk.
Create a
Dockerrun.aws.jsonfile to deploy a Docker container from an existing Docker image to Elastic Beanstalk.
Create a
.zipfile containing your application files, any application file dependencies, the
Dockerfile, and the
Dockerrun.aws.jsonfile.
If you use only a
Dockerfileor only a
Dockerrun.aws.jsonfile to deploy your application, you do not need to compress the file into a .zip file.
Sections
Dockerrun.aws.json v1
A
Dockerrun.aws.json file describes how to deploy a Docker container
as an Elastic Beanstalk application. This JSON file is specific to Elastic Beanstalk.
If your application runs on an
image that is available in a hosted repository, you can specify the image in a
Dockerrun.aws.json file and omit the
Dockerfile.
Valid keys and values for the
Dockerrun.aws.json file include the
following:
- AWSEBDockerrunVersion
(Required) Specifies the version number as the value
1for single container Docker environments.
- Authentication
(Required only for private repositories) Specifies the Amazon S3 object storing the
.dockercfgfile.
See Using Images from a Private Repository.
- Image
Specifies the Docker base image on an existing Docker repository from which you're building a Docker container. Specify the value of the Name key in the format
<organization>/<image name>for images on Docker Hub, or
<site>/<organization name>/<image name>for other sites.
When you specify an image in the
Dockerrun.aws.jsonfile, each instance in your Elastic Beanstalk environment will run
docker pullon that image and run it. Optionally include the Update key. The default value is "true" and instructs Elastic Beanstalk to check the repository, pull any updates to the image, and overwrite any cached images.
Do not specify the Image key in the
Dockerrun.aws.jsonfile when using a
Dockerfile. .Elastic Beanstalk always builds and uses the image described in the
Dockerfilewhen one is present.
- Ports
(Required when you specify the Image key) Lists the ports to expose on the Docker container. Elastic Beanstalk uses ContainerPort value to connect the Docker container to the reverse proxy running on the host.
You can specify multiple container ports, but Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
- Volumes
Map volumes from an EC2 instance to your Docker container. Specify one or more arrays of volumes to map.
- Logging
Specify the directory to which your application writes logs. Elastic Beanstalk uploads any logs in this directory to Amazon S3 when you request tail or bundle logs. If you rotate logs to a folder named
rotatedwithin this directory, you can also configure Elastic Beanstalk to upload rotated logs to Amazon S3 for permanent storage. For more information, see Viewing Logs from Your Elastic Beanstalk Environment's Amazon EC2 Instances.
The following snippet is an example that illustrates the syntax of the
Dockerrun.aws.json file for a single container.
{ "AWSEBDockerrunVersion": "1", "Image": { "Name": "janedoe/image", "Update": "true" }, "Ports": [ { "ContainerPort": "1234" } ], "Volumes": [ { "HostDirectory": "/var/app/mydb", "ContainerDirectory": "/etc/mysql" } ], "Logging": "/var/log/nginx" }.
Note
The two files must be at the root, or top level, of the
.zip
archive. Do not build the archive from a directory containing the files. Navigate
into that
directory and build the archive there.
Note
When you provide both files, do not specify an image in the
Dockerrun.aws.json file. Elastic Beanstalk builds and uses the image described in
the
Dockerfile and ignores the image specified in the
Dockerrun.aws.json file..
The following example shows the use of an authentication file named
mydockercfg in a bucket named
my-bucket to use a
private image in a third party registry.
{ "AWSEBDockerrunVersion": "1", "Authentication": { "Bucket": "
my-bucket", "Key": "
mydockercfg" }, "Image": { "Name": "quay.io/johndoe/private-image", "Update": "true" }, "Ports": [ { "ContainerPort": "1234" } ], "Volumes": [ { "HostDirectory": "/var/app/mydb", "ContainerDirectory": "/etc/mysql" } ], "Logging": "/var/log/nginx" }
Building Custom Images with a Dockerfile
Docker uses a
Dockerfile to create a Docker image that contains your
source bundle. A Docker image is the template from which you create a Docker container.
Dockerfile is a plain text file that contains instructions that Elastic Beanstalk
uses to build a customized Docker image on each Amazon EC2 instance in your Elastic
Beanstalk environment.
Create a
Dockerfile when you do not already have an existing image hosted
in a repository..
The following snippet is an example of the
Dockerfile. When you
follow the instructions in Single Container Docker Environments, you can upload this
Dockerfile as written. Elastic Beanstalk runs the game 2048 when you use this"]
For more information about instructions you can include in the
Dockerfile, go to Dockerfile reference on the Docker website. | http://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/create_deploy_docker_image.html | 2017-07-20T18:41:51 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.amazonaws.cn |
Division Comparison Widget¶
The Division Comparison widget displays a line graph comparing one division’s attendance to another’s over the past twelve weeks. The comparison is represented with the first division’s attendance as a percentage of the second division’s. Up to three years (the current year and the previous two) can be displayed with color-coded lines. Alternatively, the first division can be compare to a fixed number – such as a target attendance or the seating capacity of your facilities.
The widget utilizes an HTML file, SQL script and Python script as shown below. Since the attendance data is the same for all users, Caching should be set to all users.
- To customize the widget to compare two divisions, you will need to change the division names and IDs on lines 7 & 8 of the Python script. Attendance for the first division will be shown as a percentage of attendance for the second division. No more than two divisions should be entered in the script.
- To customize the widget to compare a division’s attendance to a fixed number, enter the division name and ID on line 7 of the Python script and the fixed number and description on lines 14 & 15. In addition, you will also need to change the value of displayAttendanceVsAttendance to true.
- To configure the number of years (lines) to display on the graph, Modify line 11 setting Division Comparison widget. As supplied by TouchPoint, the file name is WidgetCurrentRatioByWeekHTML.
SQL Script¶
Below is the SQL script for the Division Comparison widget. As supplied by TouchPoint, the file name is WidgetCurrentRatioByWeekSQL. | http://docs.touchpointsoftware.com/CustomProgramming/HomepageWidgets/DivisionComparison.html | 2020-07-02T17:42:40 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.touchpointsoftware.com |
Energy UK comments on Brexit day
Commenting on the UK leaving the EU today, Audrey Gallacher, Energy UK’s interim chief executive, said:
“The UK will officially leave the EU at 23.00 today, marking the beginning of a new era for the UK. The transition period which will start immediately after the exit and last until 31 December 2020 will see the UK and the EU enter into negotiations on their future relationship.
“Energy UK is looking forward to working with the UK government to achieve a deal that will support the energy sector in its mission to carry on its decarbonisation, while supporting other parts of the economy and society on that same journey in order to reach net-zero by 2050 creating new opportunities across the country.
“The deal should provide a framework for collaboration and cooperation to deliver secure, clean and affordable energy to all consumers, preserving existing benefits and creating new ones, and put in place solid foundations to fight climate change.” | https://www.docs.energy-uk.org.uk/media-and-campaigns/press-releases/463-2020/7321-energy-uk-comments-on-brexit-day.html | 2020-07-02T17:53:09 | CC-MAIN-2020-29 | 1593655879738.16 | [] | www.docs.energy-uk.org.uk |
For instructions on how to find the relevant endpoint for the SOAP API, see Introduction to Jade APIs.
Introduction
The Jade User API and Jade Admin API SOAP Jade APIs, all API messages are sent via HTTPS requests. The XML for the SOAP request is sent as an HTTP POST request with a special SOAPAction header; the response is sent back as the response to the POST. All Jade User API and Admin API calls are encrypted with SSL (that is, as HTTPS) to protect the privacy of your data.
The header describes metadata about the message. The body of the message specifies the requested operation (such as createCustomer) along with any applicable parameters (such as the parameters for the new customer)..
Toolkits
Typically, to use the Jade User API and Jade Admin API Web Services, you would download a toolkit that knows how to interpret WSDL files and how to encode and decode XML request and response messages. When a Jade web service receives a request, it sends back the response as an XML message. The web service toolkit knows how to parse the response and return a data structure or object back to the caller, as appropriate for the language.
The toolkits help generate stubs that know how to locate the Jade Admin. Commonly used toolkits include:
- Java: JAX-WS
- Perl: SOAP::Lite
- PHP: SoapClient
- Python: ZSI
Note that the Jade APIs uses document/literal style. Some toolkits work differently for rpc-style web services than for document-style.
If you are using Java, we also supply a client library. This obviates the need for you to use a separate toolkit.
To learn more about SOAP, see the SOAP Tutorial at.
Namespaces
All operations and complex types defined in the Jade Admin API web services are associated with the following XML namespace:
Where
cp.example.com is the domain name associated with the endpoint.
Lowercase types shown throughout this guide (for fields, parameters, and return types) are XMLSchema datatypes and are associated with the following namespace: | http://docs.flexiant.com/display/DOCS/About+the+SOAP+APIs | 2020-07-02T18:09:31 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.flexiant.com |
Under the hood, the Pixel Vision 8 engine is broken down into several "chips" that control how the engine behaves. The Chip Editor allows you to make changes to each of these chips and define the system limitations for each game. After opening the
data.json file, you’ll be presented with a graphical representation of the system’s chips.
The specs panel underneath the chips represents a summary of a system template’s limitations.
It is broken down into six aspects that map over to Pixel Vision 8’s built-in chips. Here are the main chips groups that make up a working system: resolution, colors, sprites, tilemap, sounds, and music. Together these limitations help define what you can and can’t do when making PV8 games and tools.
Here is a breakdown of each property:
Finally, if you want to access some of the more advanced options, simply edit the
data.json file directly. In the Chip Editor, you can access this option via the drop-down menu.
Each chip has its own properties. Changes here will directly impact how a Pixel Vision 8 game runs so be careful when editing these settings by hand. | https://docs.pixelvision8.com/gamedata/chipeditor | 2020-07-02T18:36:10 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.pixelvision8.com |
Coralogix
Coralogix is a machine-learning powered logging platform built for companies performing CI/CD at scale.
In order to integrate TestFairy with Coralogix, and automatically push all the logs collected from
your mobile devices to your Coralogix account, please do the following:
1. Install the TestFairy Logs Client on your server:
Install the TestFairy fetch sessions logs client on your server by running the following command:
npm install -g --link git+
2. Configure a cron job that will run the TestFairy client
Create a cron job that will run this command every 15 minutes.
testfairy-fetch-sessions --endpoint "your_subdomain.testfairy.com" --user "[email protected]" --api-key "YOUR_API_KEY" --project-id=1000 --logs --json --rsa-private-key ../my_private_keys/private.pem
Please make sure to replace the following params:
Replace your_subdomain.testfairy.com with your server address
Replace [email protected] with your admin username
Replance YOUR_API_KEY with your API KEY (found under User preferences --> Upload API key)
Replace 1000 with your project ID
Optional: replace ../my_private_keys/private.pem with the path to your private key if you have one.
Optional: add --json to have log line as a json with all session attributes.
Optional: add --all-time flag to get logs from all time. If not used, tool will fetch logs from the last 24 hours only. Do not use this option unless this is the first time you are debugging the service. Logs older than 24 hours are usually a pure waste of good disk space.
3. Install FluentD
3.1. Install FluentD v1.0+ for your environment.
3.2 Install the following fluentd plugins:
* Coralogix shipper plugin fluent-plugin-coralogix
* Detect exceptions plugin fluent-plugin-detect-exceptions
* Concatenate lines plugin fluent-plugin-concat
3.5. Download the preconfigured fluentd.conf and save it (note the file location).
3.6. Edit fluentd.conf and under <source> => @type tail update path to point to the testfairy sessions folder. You may also change the location of the pos_file if you wish (which keeps track of the current pointer for each log file and prevents duplicates).
3.7. Under <label @CORALOGIX> => <match * *> change privatekey, appname and subsystemname. Your Coralogix private_key can be found under Settings --> Send your logs.
4. Run FluentD
Run fluentd with terminal parameter -c /etc/config/fluentd.conf (change based on where you downloaded fluentd.conf in 3.5) and enjoy the flow of TestFairy logs into your Coralogix account. FluentD will automatically ship new logs as they are dowbloaded by testfairy-fetch-sessions cron job.
Last updated on 2020-06-13 | https://docs.testfairy.com/Integrations/Coralogix.html | 2020-07-02T18:00:44 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['/img/coralogix/image.png', 'coralogix'], dtype=object)
array(['/img/coralogix/image2.png', 'coralogix'], dtype=object)] | docs.testfairy.com |
Web file generation cannot proceed when an error message box pops up and displays "Exception at: Cstring CparseEngine".
Cause: One of the possible causes is that the Application Profile does not contain all the necessary PBLs, or some referenced objects in the application cannot be found in the application PBLs.
Solution: Verify that the application can be compiled (Full Build) successfully, and that all PBLs for the target have been added into the Application Profile. Run Appeon Deployment again. | https://docs.appeon.com/2015/appeon_troubleshooting_guide/ch03s06s04.html | 2020-07-02T19:08:41 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.appeon.com |
Applies to
InkEdit, RichText controls
Description
Specifies whether the text in the control has been modified since it was opened or last saved. Modified is the control's "dirty" flag, indicating that the control is in an unsaved state.
Usage
The value of the Modified property controls the Modified event. If the property is false, the event occurs when the first change occurs to the contents of the control. The change also causes the property to be set to true, which suppresses the Modified event. You can restart checking for changes by setting the property back to false.
In scripts
The Modified property takes a boolean value. The following example sets the Modified property of the InkEdit control ie_1 to false so that the Modified event is enabled:
ie_1.Modified = FALSE | https://docs.appeon.com/pb2017r3/objects_and_controls/ch03s178.html | 2020-07-02T20:06:27 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.appeon.com |
DeletePolicy
Deletes the specified policy from your organization. Before you perform this operation, you must first detach the policy from all organizational units (OUs), roots, and accounts.
This operation can be called only from the organization's master account.
Request Syntax
{ "PolicyId": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- PolicyId
The unique identifier (ID) of the policy that you want to delete. You can get the ID from the ListPolicies or ListPoliciesForTarget operations.
The regex pattern
for a policy ID string requires "p-" followed by from 8 to 128 lowercase or uppercase letters, digits, or the underscore character (_).
Type: String
Length Constraints: Maximum length of 130.
Pattern:
^p-[0-9a-zA-Z_]{8,128}$ an invalid value.
- PolicyInUseException
The policy is attached to one or more entities. You must detach it from all roots, OUs, and accounts before performing this operation.
HTTP Status Code: 400
- PolicyNotFoundException
We can't find a policy with the
Policy
- UnsupportedAPIEndpointException
This action isn't available in the current AWS Region.
HTTP Status Code: 400
Example
The following example shows how to delete a policy from an organization. The example assumes that you previously detached the policy from all entities.
Sample Request
POST / HTTP/1.1 Host: organizations.us-east-1.amazonaws.com Accept-Encoding: identity Content-Length: 135 X-Amz-Target: AWSOrganizationsV20161128.DeletePolicy X-Amz-Date: 20160802T193159 { "PolicyId": "p-examplepolicyid111" }
Sample Response
HTTP/1.1 200 OK x-amzn-RequestId: c7c142fb-58e7-11e6-a8d8-d5a10f646b91 Content-Type: application/x-amz-json-1.1 Content-Length: 0 Date: Tue, 02 Aug 2016 19:31:59 GMT
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/organizations/latest/APIReference/API_DeletePolicy.html | 2020-07-02T20:30:49 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.aws.amazon.com |
Index Storage Modes
A key feature of global secondary indexes is the ability to change the underlying storage method to best suit your indexing needs. Both a memory-optimized storage engine and a disk-optimized (standard) storage engine are available.
Memory-Optimized Global Indexes
Memory optimized global secondary indexes is an additional storage setting for Couchbase Server clusters. Memory optimized global secondary indexes (also called memory optimized indexes or MOI) can perform index maintenance and index scan faster at in-memory speeds.
Memory Optimized Global Secondary Index Performance
There are several performance advantages to using memory optimized global secondary indexes:
MOIs use a memory efficient index structure for a lock-free index maintenance and index scan. Memory optimized indexes also provide a much more predictable latency with queries as they never reach into disk for index scans.
MOIs store a snapshot of the index on disk. However writes to storage are done purely for crash recovery and is not in the critical path of latency of index maintenance or index scans. The snapshots on disk are used to avoid rebuilding the whole index when an index node experiences failure.
In short, MOIs enough free memory becomes available on the node. There are two important metrics you need to monitor to detect the issues:
MAX Index RAM Used %: Reports the max ram quota used in percent (%) through the cluster and on each node both real-time and with a history over minutes, hours, days, weeks and more.
Remaining Index RAM: Reports the free index RAM quota for the cluster as a total and on each node both real-time and with a history over minutes, hours, days weeks and more.
If a node is approaching high percent usage of Index RAM Quota, a warning is displayed in the web console, so that remedial action can be taken. Below are a few suggestions for steps which can be taken:
You can increase the RAM quota for the index service on the node to give indexes more RAM.
You can place some of the indexes on the node onto other index nodes with more RAM available.
Drop few indexes from the index node which is in the Paused state.
Flush the bucket on which indexes of the Paused node are created.
Handling Out-of-Memory Conditions
Memory-optimized global indexes reside in memory. When a node running the index service runs out of configured Index RAM Quota on the node, indexes on the node can no longer process additional changes to the indexes on the node that has run out of memory.
consistency=request_plus or
consistency=at_plus fail if the timestamp specified exceeds the last timestamp processed by the specific index on the node.
However, queries with
consistency=unbounded continue to execute normally.
To resume indexing operations on a node where the Indexer has paused due to low memory, consider taking one or more of the following actions:.
Standard Global Secondary Indexes
Standard global secondary indexes is the default storage setting for Couchbase Server clusters. Standard global secondary indexes can index larger data sets as long as there is disk space available to store the index.
Standard global secondary indexes use a disk optimized format that can utilize both memory and persistent storage for index maintenance and index scans.
Standard Global Secondary Index Performance
Different from the memory-optimized storage setting for GSI, the performance of standard GSIs depend heavily on the IO subsystem performance.
When placing indexes, it is important to note the disk IO "bandwidth" remaining on the node as well as CPU, RAM and other resources.
Changing the Global Secondary Index Storage Mode (XDCR) to replicate the data to the new cluster. If you don’t have a spare cluster, you can also create all the indexes using the view indexer. See the CREATE INDEX statement and the USING VIEW clause for details. However, the view indexer for N1QL provides different performance characteristics as it is a local index to each data node and not a global index like GSI. For better availability when changing the storage mode from MOI to GSI, use the XDCR approach as opposed to view indexes for N1QL. | https://docs.couchbase.com/server/5.0/indexes/storage-modes.html | 2020-07-02T18:27:25 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.couchbase.com |
Install Control Panel Server
- Use corresponding option of the Control Panel installer in case MySQL is already installed and configured.
- Installer output is redirected to ./onapp-cp-install.log
- All installer critical errors are in /var/log/messages
- This instruction is applicable for installing OnApp 6.0 Patch 2
To install Control Panel server, perform the following procedure:
Update your server:
# yum update
Download OnApp YUM repository file:
# rpm -Uvh
Install OnApp Control Panel installer package:
#> yum install onapp-cp-install
(Optional) You can optionally apply the Control Panel custom configuration. It is important to set the custom values before the installer script runs.Edit the /onapp/onapp-cp.conf file to set Control Panel custom values
# vi /onapp/onapp-cp.conf
Run the Control Panel installer:
#> /onapp/onapp-cp-install/onapp-cp-install.sh -i SNMP_TRAP_IPSThe full list of Control Panel installer options:
(Optional) Install CloudBoot dependencies:
#> yum install onapp-store-install #> > Cloud > Groups menu in the Control Panel.Once you have entered a license it can take up to 15 minutes to activate.
Restart the OnApp service:
#> service onapp restart | https://docs.onapp.com/vcenter/latest/get-started/installation-and-upgrade/install-control-panel-server | 2020-07-02T19:53:34 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.onapp.com |
Contributing to the OpenStack SDK¶
This section of documentation pertains to those who wish to contribute to the development of this SDK. If you’re looking for documentation on how to use the SDK to build applications, please see the user section.
About the Project¶
The OpenStack SDK is a OpenStack project aimed at providing a complete software development kit for the programs which make up the OpenStack community. It is a Python library with corresponding documentation, examples, and tools released under the Apache 2 license.
Contribution Mechanics¶
Contacting the Developers¶
IRC¶
The developers of this project are available in the #openstack-sdks channel on Freenode. This channel includes conversation on SDKs and tools within the general OpenStack community, including OpenStackClient as well as occasional talk about SDKs created for languages outside of Python.
The openstack-discuss
mailing list fields questions of all types on OpenStack. Using the
[sdk] filter to begin your email subject will ensure
that the message gets to SDK developers.
Coding Standards¶
We are a bit stricter than usual in the coding standards department. It’s a good idea to read through the coding section.
Development Environment¶
The first step towards contributing code and documentation is to setup your development environment. We use a pretty standard setup, but it is fully documented in our setup section.
Testing¶
The project contains three test packages, one for unit tests, one for
functional tests and one for examples tests. The
openstack.tests.unit
package tests the SDK’s features in isolation. The
openstack.tests.functional and
openstack.tests.examples packages test
the SDK’s features and examples against an OpenStack cloud.
Project Layout¶
The project contains a top-level
openstack package, which houses several
modules that form the foundation upon which each service’s API is built on.
Under the
openstack package are packages for each of those services,
such as
openstack.compute.
Adding Features¶
Does this SDK not do what you need it to do? Is it missing a service? Are you a developer on another project who wants to add their service? You’re in the right place. Below are examples of how to add new features to the OpenStack SDK. | https://docs.openstack.org/openstacksdk/latest/contributor/index.html | 2020-07-02T19:33:18 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.openstack.org |
PDF Print (Alpha)
(Article Coming Soon)
What is the PDF Print Component?
Each page in your app can have a corresponding PDF page associated with it. The PDF Print Link creates a button that generates a PDF file in a new tab. The PDF file generated is based on the components configured on the PDF tab of the given page.
PDFs are still in early alpha at this time. To view the PDF page, you must have Alpha Features enabled. | https://docs.tadabase.io/categories/manual/article/pdf-print-%28alpha%29 | 2020-07-02T19:18:46 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.tadabase.io |
Configuring BMC Remedyforce CMDB 2.0). An asset is a corporate resource that you need to manage from a financial and contractual perspective (for example, laptops and mobiles). In some cases, CIs are also assets and vice-versa (for example, computer systems and printers)..
Important
This section provides information about configuring BMC Remedyforce CMDB 2.0, which is the enhanced CMDB available starting from BMC Remedyforce version 20.14.01. For new installations of BMC Remedyforce 20.14.01 and later, CMDB 2.0 is available out of the box. If you have upgraded from BMC Remedyforce 20.13.02 or earlier to version 20.14.01 or later, you have to manually upgrade to CMDB 2.0.
BMC recommends that you upgrade to CMDB 2.0 because CMDB 1.0 does not support many features, such as asset management, models, normalization, and locations. However, if you want to continue using CMDB 1.0, see Configuring CMDB 1.0.
The following table describes the recommended process for configuring BMC Remedyforce CMDB 2.0 in your organization:
Related topics
Troubleshooting BMC Remedyforce CMDB 2.0 issues
Managing updates to configuration items | https://docs.bmc.com/docs/BMCHelixRemedyforce/201902/en/configuring-bmc-remedyforce-cmdb-2-0-868129839.html | 2020-07-02T19:38:56 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
Statements: Return
A
return statement can only occur inside a function, in which case, it causes that function to terminate normally. The function can
optionally return a single value (but one which could contain other values, as in a tuple, a shape, or an object of some user-defined
type), whose type must be compatible with the function's declared return type. If the
return statement contains no value, or there
is no
return statement (in which case, execution drops into the function's closing brace), no value is returned. For example:
function average_float(float $p1, float $p2): float { return ($p1 + $p2) / 2.0; } type IdSet = shape('id' => ?string, 'url' => ?string, 'count' => int); function get_IdSet(): IdSet { return shape('id' => null, 'url' => null, 'count' => 0); } class Point { private float $x; private float $y; public function __construct(num $x = 0, num $y = 0) { $this->x = (float)$x; // sets private property $x $this->y = (float)$y; // sets private property $y } // no return statement public function move(num $x = 0, num $y = 0): void { $this->x = (float)$x; // sets private property $x $this->y = (float)$y; // sets private property $y return; // return nothing } // ... }
However, for an async function having a
void return type, an object of type
Awaitable<void> is returned. For an async function,
the value having a non-
void return type, the return value is wrapped in an object of type
Awaitable<T> (where
T is the type of
the return value), which is returned.
Returning from a constructor behaves just like returning from a function having a return type of
void.
The value returned by a generator function must be the literal
null. A
return statement
inside a generator function causes the generator to terminate.
A return statement must not occur in a finally block or in a function declared
noreturn. | https://docs.hhvm.com/hack/statements/return | 2020-07-02T19:15:56 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.hhvm.com |
The:
These properties can be used to look up a specific event via this endpoint.
Further, all but one event type (UNBOUNCE) have the following properties:
Events can be looked up in bulk via this endpoint using 'recipient', both 'appId' and 'campaignId', or any combination of the above properties.
The following additional properties are also available for all event types (including UNBOUNCE):
The event reference properties -- 'sentBy', 'obsoletedBy', and 'causedBy' -- are discussed in detail later in this document.
There are 12 event types that can be generated by HubSpot's Email API during the lifecycle of an email message. They are broadly grouped into categories: Submission, Delivery, User Engagement, and User Status. Event types, event categories, and their relationships are diagrammed below.
When an email message is created and sent by HubSpot on behalf of a customer, we first verify whether the recipient is eligible to receive it. If not, we reject the message, triggering the creation of a DROPPED event. Otherwise, we submit it to our delivery provider for further handling, triggering a SENT event. An email message will almost always have exactly one submission event associated with it; for example, there will never be multiple SENT events for a message.
We make every effort to reject messages before passing them along to our delivery provider. However, sometimes our delivery provider will decide to reject a message even after we have verified its eligibility. This follow-on rejection results in a DROPPED event being created, in addition to the previously-created SENT event.
Submission events all share the following properties:
Additionally, DROPPED events have the following properties:
Once our delivery provider has accepted an email message, we create a PROCESSED event. At this point, the delivery provider has queued the message for delivery. If everything goes smoothly, the delivery provider will dequeue the message and deliver it to the recipient's email server, generating a DELIVERED event.
Occasionally, things don't go smoothly, and one of two things happens: delivery is deferred because of a temporary rejection, or delivery fails and won't be retried.
In the first case, the message could not be delivered to the recipient's email server for some non-fatal (usually transient reason, such as a spurious time-out. The delivery provider will re-queue the message for later delivery, and we create a DEFERRED event. A message can be deferred multiple times before it completes the delivery phase, with a new event created on each attempt.
If delivery fails, no further attempts will be made to deliver the message, and we create a BOUNCE event. This can occur for a variety of reasons, such as the recipient being unknown by the email server.
The specific delivery event types have the following properties:
Once an email message reaches its recipient, there are four different event types that can occur: OPEN, CLICK, PRINT, and FORWARD. These represent the recipient's interaction with the message and its content, and each can occur multiple times. For example, each time any URL is clicked, a new CLICK event is created, even if that URL has previously been clicked and generated such an event.
User engagement events all share the following properties:
Additionally, CLICK events have the following properties:
And OPEN events may have the following property:
A recipient can also update their communication preferences via the email message. By clicking on the subscription preferences link in the message, they can change their subscriptions, either subscribing or unsubscribing from various lists, triggering a STATUSCHANGE event. Note that a status change can be for any list(s), not just the one which is associated with the current email message.
An email message may also be flagged as spam by the recipient, resulting in a SPAMREPORT event. Note that this is independent of subscription status — flagging a message as spam does not simply unsubscribe the recipient from the list in question. Rather, the subscription status is left unchanged, and a flag is set indicating that recipient should never receive another email message from HubSpot. Once this happens, you'll need manual intervention by HubSpot to remove the flag.
A STATUSCHANGE event has the following additional properties:
There is a 13th event type, which is unrelated to a specific email message. UNBOUNCE events occur when a particular email address is either automatically or manually unbounced by HubSpot. This resets the bounce status of the recipient, potentially allowing them to receive emails from your portal.
Many events are related to other events that occurred either before or after it. As described in the first section above, we use EventIds to build this reference chain.
Note that event references are relatively new, and older events may not have them populated.
As discussed previously, each email message has either a SENT or DROPPED event (or one of each) associated with it. This will be the first event generated for any given message. If a message generates a SENT event, all subsequently generated events will reference that event via the property 'sentBy'.
This backward-reference can be useful to get more information on the parent SENT event, or to manually find all events associated with a given message.
Sometimes, a follow-on event occurs for a given message, signifying that an earlier event should be ignored. This relationship is captured in a forward-reference in the property 'obsoletedBy'.
For instance, in the case where we generate both a SENT event and a subsequent DROPPED event, the SENT event is ultimately irrelevant, and is obsoleted by the DROPPED event. Accordingly, the SENT event will reference the DROPPED event via 'obsoletedBy'.
Certain events occur precisely because of some previous event, often for a different message. This relationship is captured in a backward-reference in the property 'causedBy'. It can be used to get additional details on why a particular event caused the following event.
For example, a DROPPED event will occur when there was a previous BOUNCE event for the same recipient. In this case, the DROPPED event will have its 'dropReason' set to PREVIOUSLY_BOUNCED, and it's 'causedBy' will reference that previous BOUNCE event. | https://legacydocs.hubspot.com/docs/methods/email/email_events_overview | 2020-07-02T19:03:06 | CC-MAIN-2020-29 | 1593655879738.16 | [] | legacydocs.hubspot.com |
xolotl: a fast and flexible neuronal simulator¶
xolotl is a fast single-compartment and
multi-compartment simulator written in
C++ with
a MATLAB interface that you'll actually enjoy using.
Why use xolotl? This is why:
xolotl is FAST¶
xolotl is written in C++, and it's fast. In our testing, it's more than 3 times faster than NEURON for single compartment neurons.
xolotl is easy to use¶
Want to set up a Hodgkin-Huxley model, inject current, integrate it and plot the voltage trace? This is all you need:
x = xolotl; x.add('compartment', 'HH','A', 0.01); x.HH.add('liu/NaV', 'gbar', 1000); x.HH.add('liu/Kd', 'gbar', 300); x.HH.add('Leak', 'gbar', 1); x.I_ext = .2; x.plot;
xolotl has documentation¶
Unlike certain widely used NEURON simulators that shall remain nameless, xolotl has documentation that actually... exists.
This is what it looks like:
xolotl lets you do this¶
xolotl lets you manipulate any parameter in any model and view the effects of changing that parameter in real time
xolotl is fully programmable¶
xolotl is designed to be used from within MATLAB. It gives you the best of both worlds: the high performance of C++ compiled code with the rich power of all the toolboxes MATLAB has to offer. You can:
- write functions that pass models as arguments
- optimize parameters of neuron models using the Global Optimization Toolbox
- run simulations in parallel across multiple computers
- have a single script to run the simulation and analyze results
Hooked? Check out the quickstart to see how easy it is to use. | https://xolotl.readthedocs.io/en/master/ | 2020-10-19T23:32:27 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['https://user-images.githubusercontent.com/6005346/41205222-30b6f3d4-6cbd-11e8-983b-9125585d629a.png',
None], dtype=object)
array(['https://user-images.githubusercontent.com/6005346/50499683-9c0bf400-0a19-11e9-9375-92a1fdefa2fc.png',
None], dtype=object)
array(['https://user-images.githubusercontent.com/6005346/50499847-e3df4b00-0a1a-11e9-8aba-b3be57c3e784.png',
None], dtype=object) ] | xolotl.readthedocs.io |
@Generated(value="OracleSDKGenerator", comments="API Version: 20190801") public final class ErratumSummary extends Object
Important changes for software. This can include security advisories, bug fixes, or enhancements.
Note: Objects should always be created or deserialized using the
ErratumSummary.Builder. This model distinguishes fields that are
null because they are unset from fields that are explicitly set to
null. This is done in the setter methods of the
ErratumS","updated","advisoryType","relatedCves"}) @Deprecated public ErratumSummary(String name, String id, String compartmentId, String synopsis, String issued, String updated, UpdateTypes advisoryType, List<String> relatedCves)
public static ErratumSummaryUpdated()
most recent date the erratum was updated
public UpdateTypes getAdvisoryType()
Type of the erratum.
public List<String> getRelatedCves()
list of CVEs applicable to this erratum
public Set<String> get__explicitlySet__()
public boolean equals(Object o)
equalsin class
Object
public int hashCode()
hashCodein class
Object
public String toString()
toStringin class
Object | https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.17.5/com/oracle/bmc/osmanagement/model/ErratumSummary.html | 2020-10-20T00:45:43 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.cloud.oracle.com |
By creating and uploading your own template to a specific desination, you will be able to make the layout of your Word or PDF report match your original form more closely or create a completely new format.
To customize the Word or PDF file during your Destination set-up, you'll need to create a Word or PDF Destination.
Note: If you already have a custom template uploaded to your destination and would like to update it, see this article.
Set up your PDF / Word Destination
1) Select the Destination location (in the example below, that is email).
2) Next, click on either the PDF or DOCX file format.
Download your Form's Sample Template
3) Once you select the PDF or DOCX format, navigate down to Step 5 and click the link for "Show Options".
4) After you click "Show Options", you will see the screen below. Click on the green "Download a sample template" button to download the Word file template which you can use to customize your output.
This sample template provides you with all the placeholders. These placeholders dictate where the form fields map in the document once the form has been submitted.
Note: For another method to quickly obtain valid placeholders in for your form, please have a look at this article.
5) Once the file has downloaded and opened, the first page will look something like this:
The screenshot below was taken from a sample form titled "Work Order". As you scroll through it, you will see the different placeholders available.
Note: Everything in double curly brackets, including the double curly brackets is a placeholder).
Create your Custom Word Template
6) Now, open a new Word document. In the new Word document, you can set up the form with the headers and footers you'd like to use, as well as any branding you want to include.
7) Next, you can go on to add the question titles to the document in any order you like.
Add your Placeholders
8) Copy the placeholders from the sample and paste them wherever you would like the answers to appear.
Note: Questions and answers can also be put in tables. Design the table as usual, and put the placeholders where you want the answers to go.
See the example image below, on the left is the downloaded sample template and on the right is the custom template that will be uploaded to the destination.
Note: In the event that a question is not answered, the placeholder will be replaced with a blank space. If you would like to conditionally show questions/data in your report, check out this article on if statements.
Upload your Custom Template
9) Once you're done with designing your Word template, save it to your computer.
10) Navigate back to your Destination Settings and click the "Upload my Word .DOCX template" button and select the file.
11) Lastly, click "Update Destination" and you're all set!
Additional Custom Formatting:
Font, Size and Color:
If you would like the questions and answers to appear in a particular font, size or color, all you need to do is highlight the questions and placeholders and change the font, size and color. When the form comes through, the answers that have replaced the placeholder will be in the same font, size and color as the placeholder.
Images:
Images added to the form can be resized using size specifications in the placeholder. For example, if your image question placeholder is
{{fields.image_question}} , and you want the image to appear in 400x200 you can ensure the image arrives at the destination in the size you desire by adding
|400x200 to the end of the placeholder. It will look something like this:
{{fields.image_question|400x200}} .
Decimal/Integer Questions:
Decimal and integer question placeholders can be edited by adding
|currency at the end of the placeholder, before the closing brackets. This will make the decimal and integer answers come through with two spaces after the decimal. For example, if the answer is 23.23246, the adjusted placeholder will make the answer come through as 23.23.
Multi-Select Questions:
When a select question has the "Allow Multiple Answers" box checked, and multiple options are selected in the form, the answers come through all on the same line, separated by commas. This can now be adjusted so the answers come through on separate lines, by adding
|each_answer_on_new_line at the end of the placeholder, before the closing brackets.
Location Questions:
Maps can now be resized using size specifications in the placeholder, much the same as the Image question. So, for example, if your location question placeholder is
, and you want the map to appear in 100x100, you can ensure it arrives in that size by adding the size restriction at the end of the placeholder so the placeholder looks like this:
{{fields.location_question}}
{{fields.location_question|100x100}} .
For more location formatting options, click here.
Time/Date Questions:
For Time and Date formatting options, click here.
Subforms:
When it comes to subforms, you will see the main subform heading among the available placeholders, as well as each of the subform's question's placeholders. You can choose to either have the subform altogether, and use the subform heading placeholder, or you can choose to separate the questions (perhaps include some, but not others, or place them separately in the template), and use the individual placeholders.
Repeat Groups:
Repeat group answers are automatically generated in table format, but there are options to display these answers in other layouts.
-Preferred Repeat Group Method
For the preferred method of customizing data within a repeat group, please see this help article: Advanced PDF/Word Template Design: For Loops
-Legacy Method
The options below are legacy options which are still supported and can be used if you prefer.
To display your Repeat Group answers in a list instead of a table, use the formatting option
|list .
Another way you can format your repeat groups is by filtering the questions that you choose to display (this works with the default display as well as the list layout).
The basic way to structure the repeat group placeholder is as follows:
filter_fields: [question_1, question_2]
Sub-groups in repeat groups can be filtered as well by adding the sub section when specifying the fields you want included:
For example,
[field1, field2: [subfield1, subfield2], field3]
The placeholder should look something like this:
{{fields.Group|list, filter_fields: [Question1, Question2: [SubQuestion2_1, SubQuestion2_2], Question3]}}
Lastly, if you filter image and location questions, you can resize the images and maps (the same as mentioned above). For example, your image question's placeholder will look something like this:
{{fields.Group|list, image_size:300x100}} , whereas your location question's placeholder will look something like this:
{{fields.Group|list, map_size:300x100}} .
Conditionally Show Data:
If statements are used to conditionally show data. For example, this can be useful if you'd like a question label to be omitted from the document, if the associated question isn't answered. They can also be used to create checkboxes. This article explains how to set up if statements.
Other Useful Articles
- Microsoft Word Destination
- PDF Destination
- Updating an Existing Word or PDF Template
- Word and PDF Placeholders Shortcut
- Advanced PDF/Word Template Design: IF Statements
- Advanced PDF/Word Template Design: For Loops
If you have any questions or comments feel free to send us a message at [email protected]. | https://docs.devicemagic.com/en/articles/392912-pdf-word-template-customization | 2020-10-19T23:59:18 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['https://downloads.intercomcdn.com/i/o/150565530/0a37cf34b1b75952e6cfc436/Screen+Shot+2019-09-23+at+2.16.28+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/150565555/2d6eb64e58e9af23ac27e7c4/Screen+Shot+2019-09-23+at+2.16.38+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/150565580/3275d28f0fbf9cccf2363115/Screen+Shot+2019-09-23+at+2.16.43+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/150565608/69af03971717105bb48b1a41/Screen+Shot+2019-09-23+at+2.17.03+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/41046903/bc109b86726aae222b3aa06f/Screen+Shot+2017-11-29+at+10.52.19+AM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/39701350/70f238418236dc43de8c3d55/Work+Order.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/158426883/15f25e23772a556010a1db9b/Screen+Shot+2017-11-09+at+12.49.13+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/158427043/852c6894354e55f59140d36d/Screen+Shot+2017-11-09+at+12.50.45+PM.png',
None], dtype=object) ] | docs.devicemagic.com |
Cheaha 2018 Summer Maintenance Scheduled
We are expanding the storage platform on Cheaha this fall. We have to power down Cheeha for a short time to prepare for the expansion. The power down is scheduled for Friday, August 10th at 6PM CDT. Compute nodes, login nodes and remote clients (e.g. Galaxy) will not be available during the maintenance period.
Please note that all running jobs will be terminated. Pending jobs will not be affected. The expected duration of this maintenance event is 4 hours. | https://docs.uabgrid.uab.edu/tgw/index.php?title=Template:Main_Banner&diff=next&oldid=5683 | 2020-10-20T01:10:31 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.uabgrid.uab.edu |
Consuming an RSS Feed¶
Reading a feed¶
Reading an RSS feed is as simple as passing the URL of the feed to
Zend\Feed\Reader\Reader’s
import
method.
If any errors occur fetching the feed, a
Zend\Feed\Reader\Exception\RuntimeException will be thrown.
Get properties¶
Once you have a feed object, you can access any of the standard RSS “channel” properties directly on the object:
Properties of the channel can be accessed via getter methods, such as
getTitle,
getAuthor …
If channel properties have attributes, the getter method will return a key/value pair, where the key is the attribute name, and the value is the attribute value.
Most commonly you’ll want to loop through the feed and do something with its entries.
Zend\Feed\Reader\Feed\Rss
internally converts all entries to a
Zend\Feed\Reader\Entry\Rss. Entry properties, similarly to channel
properties, can be accessed via getter methods, such as
getTitle,
getDescription …
An example of printing all titles of articles in a channel:
Where relevant,
Zend\Feed supports a number of common RSS extensions including Dublin Core, Atom (inside RSS)
and the Content, Slash, Syndication, Syndication/Thread and several other extensions or modules.
Please see the official RSS 2.0 specification for further information. | https://zf2-docs.readthedocs.io/en/latest/modules/zend.feed.consuming-rss.html | 2020-10-20T01:03:37 | CC-MAIN-2020-45 | 1603107867463.6 | [] | zf2-docs.readthedocs.io |
@Generated(value="OracleSDKGenerator", comments="API Version: 20160918") public final class TcpOptions extends Object
Optional object to specify ports for a TCP rule. If you specify TCP as the protocol but omit this object, then all ports are allowed.
Note: Objects should always be created or deserialized using the
TcpOptions.Builder. This model distinguishes fields that are
null because they are unset from fields that are explicitly set to
null. This is done in the setter methods of the
TcpPortRange","sourcePortRange"}) @Deprecated public TcpOptions(PortRange destinationPortRange, PortRange sourcePortRange)
public static TcpOptions.Builder builder()
Create a new builder.
public PortRange getDestinationPortRange()
An inclusive range of allowed destination ports. Use the same number for the min and max to indicate a single port. Defaults to all ports if not specified.
public PortRange getSourcePortRange()
An inclusive range of allowed source ports. Use the same number for the min and max to indicate a single port. Defaults to all ports if not specified.
public Set<String> get__explicitlySet__()
public boolean equals(Object o)
equalsin class
Object
public int hashCode()
hashCodein class
Object
public String toString()
toStringin class
Object | https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.17.5/com/oracle/bmc/core/model/TcpOptions.html | 2020-10-20T00:58:57 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.cloud.oracle.com |
@Generated(value="OracleSDKGenerator", comments="API Version: 20160918") public final class UpdatePublicIpDetails extends Object
Note: Objects should always be created or deserialized using the
UpdatePublicIpDetails.Builder. This model distinguishes fields that are
null because they are unset from fields that are explicitly set to
null. This is done in the setter methods of the
UpdatePublic","privateIpId"}) @Deprecated public UpdatePublicIpDetails(Map<String,Map<String,Object>> definedTags, String displayName, Map<String,String> freeformTags, String privateIpId)
public static UpdatePublicPrivateIpId()
The OCID of the private IP to assign the public IP to. * If the public IP is already assigned to a different private IP, it will be unassigned and then reassigned to the specified private IP. * If you set this field to an empty string, the public IP will be unassigned from the private IP it is currently assigned to.
public Set<String> get__explicitlySet__()
public boolean equals(Object o)
equalsin class
Object
public int hashCode()
hashCodein class
Object
public String toString()
toStringin class
Object | https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.17.5/com/oracle/bmc/core/model/UpdatePublicIpDetails.html | 2020-10-19T23:51:27 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.cloud.oracle.com |
Contents:
Contents:
By default, the Trifacta® Wrangler Enterprise applies
This feature allows administrators to enable the passthrough of properties to Spark, and users can submit any value of an enabled property. Please be careful in choosing the properties that you enable for users to override.
Property validation:
- You can enable properties that are destructive.
- Property names of whitelisted properties:
- No validation of property names is provided by the Trifacta application.
-.
NOTE: These properties are always available for override when the feature is enabled.
Spark jobs on Azure Databricks:
For Spark jobs executed on Azure Databricks, only the following default override parameters are supported:
During Spark job execution on Azure Databricks:
Whenever overrides are applied to an Azure Databricks cluster, the overrides must be applied at the time of cluster creation. As a result, a new Azure Databricks cluster is spun up for the job execution, which may cause the following:
- Delay in job execution as the cluster is spun up.
- Increased usage and costs
- After a new Azure Databricks cluster has been created using updated Spark properties for the job, any existing clusters complete executing any in-progress jobs and gracefully terminate based on the idle timeout setting for Azure Databricks clusters.
-.
- You apply this change through the Workspace Settings Page. For more information, see Platform Configuration Methods.
Locate the following parameter:
Enable Custom Spark Options Feature
- Set this parameter to
Enabled.
Configure Available Parameters to Override
After enabling the feature, workspace administrators can define the Spark properties that are available for override.
Steps:
- Login to the application as a workspace administrator.
- You apply this change through the Workspace Settings Page. For more information, see Platform Configuration Methods.
Locate the following parameter:
Spark Whitelist Properties
Enter a comma-separated list of Spark properties. For example, the entry for adding the following two properties looks like the following:
spark.driver.extraJavaOptions,spark.executor.extraJavaOptions
-.
NOTE: No validation of the property values is performed against possible values or the connected running environment..
NOTE: No validation of the property values is performed against possible values or the connected running environment.
For more information, see Spark Execution Properties Settings.
Via API
You can submit Spark property overrides as part of the request body for an output object. See API Workflow - Manage Outputs.
This page has no comments. | https://docs.trifacta.com/display/r071/Enable+Spark+Job+Overrides | 2020-10-20T00:15:39 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.trifacta.com |
As the name suggests "Extensions" are to extend the already available functionality of uKnowva application. Vanilla uKnowva installation comes with default features like social intranet, document management, etc. But say you want a project Management App or some completely new functionality like tracking user sessions, etc, then you can do this by adding more extensions to uKnowva.
How to install new extensions in uKnowva?
You can extend uKnowva in three ways:
- From the Extension store: Refer this link:
- By programming something by yourself (in PHP of course) and compiling a .ukv file out of it and uploading it. Refer this link:
- By linking Third part Applications: Refer this link:
Extensions are of 5 types
- Apps
- Plugins
- Widgets
- Themes
- Languages
Apps
Apps (alias components) are a bunch of additional functionality which when installed, they are linked in the Main menu and are available for use. You can manage and configure all your Apps from uKnowva Configuration --> Apps Manager.
Example Apps on our Extension store: HRMS, Project management, Customer Support Ticketing system, etc.
Plugins
Plugins are usually s small additional functionality which are usually attached to a core App/component. When installed, they do not come under the Menu, instead they are triggered on a specific event like on a users login, on profile page display, on document display, etc. They are like triggers, they are triggered on a specific event so that some additional code can be executed on that event. You can manage all your plugins from uKnowva Configuration --> Plugin Manager
Example Plugins on our Extension store: PDF Viewer, uKnowva Firewall, Organization Chart, etc.
Widgets
Widgets (alias Modules) are simple placeholders that fit in a specific position in a theme/template and show some content/information. You can manage all your plugins from uKnowva Configuration --> Widget Manager
Example Widgets on our Extension store: Birthday Reminder, uKnowva Stats, FB Like Box, etc.
Themes
Themes (alias Templates) govern the complete look and feel of any uKnowva instance. Usually each instance has a predefined theme. The parameters of the same can be changed from uKnowva Configuration-->Theme Manager. Facility for installing more themes is currently not available, but if you wish to get your them revamped/changed, you can write to This email address is being protected from spambots. You need JavaScript enabled to view it.
Languages
uKnowva is by default available in English language, you can still change the complete language of your instance. This facility is still under Beta and will be available under uKnowva Configuration soon. Facility for installing more languages is currently not available, but if you wish to get your them revamped/changed, you can write to This email address is being protected from spambots. You need JavaScript enabled to view it. | https://docs.uknowva.com/about-uknowva/uknowva-extensions | 2020-10-19T23:31:42 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['/images/apps-manager-uKnowva.png', None], dtype=object)
array(['/images/plugin-manager-uknowva.png', None], dtype=object)
array(['/images/widget-manager-uknowva.png', None], dtype=object)] | docs.uknowva.com |
Unity 2019.4 is an LTS release, containing features released in 2019.1 to 2019.3, and is supported for 2 years. See the LTS release page for other available LTS installers.
Follow the links below to find User Manual pages on new and updated features in Unity 2019. These search links list new or updated User Manual pages in each Tech release, detailing the following:
New in Unity 2019.3
New in Unity 2019.2
New in Unity 2019.1
To find out more about the new features, changes, and improvements to Unity 2019 releases, see:
2019.4 Release Notes
2019.3 Release Notes
2019.2 Release Notes
2019.1 Release Notes
If you are upgrading existing projects from an earlier version of Unity, read the Upgrade Guides for information about how your project may be affected. Here are the LTS specific upgrade guides: | https://docs.unity3d.com/ja/current/Manual/WhatsNew2019.html | 2020-10-20T01:14:14 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.unity3d.com |
All Windows Mixed Reality devices use specific forms of interaction to take input from the user. Some of these forms of input are specific to certain Windows Mixed Reality devices, such as HoloLens or immersive headsets.
Immersive headsets use a variety of inputs, including spatial controllers. HoloLens is limited to using 3 forms of input, unless extra hardware is required by the application to be used in addition to the headset.
All forms of input work with Windows Mixed Reality immersive headsets. Gaze works in VR, gestures trigger when you use the Select button on the controller, and voice is available when the end user has a microphone connected to their PC.
Input on HoloLens is different from other platforms because the primary means of interaction are:
Gaze is an input mechanism that tracks where a user is looking:
On HoloLens, this is accurate enough that you can use it to get users to select GameObjects in the world. You can also use it to direct commands at specific GameObjects rather than every GameObject in the Scene.
For more information, see Microsoft’s documentation on Gaze indicator and Gaze targeting.
Windows 10 API provides voice input on both HoloLens and immersive devices. Unity supports three styles of input:
Keywords: Simple commands or phrases (set up in code) used to generate events. This allows you to quickly add voice commands to an application, as long as you do not require localization. The KeywordRecognizer API provides this functionality.
Grammars: A table of commands with semantic meaning. You can configure grammars through an XML grammar file (.grxml), and localize the table if you need to. For more information on this file format, see Microsoft’s documentation on Creating Grammar Files. The GrammarRecognizer API provides this functionality.
Dictation: A more free-form text-to-speech system that translates longer spoken input into text. To prolong battery life, dictation recognition on the HoloLens is only active for short periods of time. It requires a working Internet connection. The DictationRecognizer API provdes this functionality.
HoloLens headsets have a built in microphone, allowing voice input without extra hardware. For an application to use voice input with immersive headsets, users need to have an external microphone connected to their PC.
For more information about voice input, refer to Microsoft’s documentation on Voice design.
A gesture is a hand signal interpreted by the system or a controller signal from a Spatial Controller. Both HoloLens and immersive devices support gestures. Immersive devices require a spatial controller to initiate gestures, while HoloLens gestures require hand movement. You can use gestures to trigger specific commands in your application.
Windows Mixed Reality provides several built-in gestures for use in your application, as well as a generic API to recognize custom gestures. Both built-in gestures and custom gestures (added via API) are functional in Unity.
For more information, see Microsoft’s documentation on gestures.
Windows Mixed Reality uses a process called Late Stage Reprojection (LSR) to compensate for movement of the user’s head between the time a frame is rendered and when it appears on the display. To compensate for this latency, LSR modifyies the rendered image according to the most recent head tracking data, then presents the image on the display.
The HoloLens uses a stabilization plane for reprojection (see Microsoft documentation on plane-based reprojection). This is a plane in space which represents an area where the user is likely to be focusing. This stabilization plane has default values, but applications can also use the SetFocusPoint API to explicitly set the position, normal, and velocity of this plane. Objects appear most stable at their intersection with this plane. Immersive headsets also support this method, but for these devices there is a more efficient form of reprojection available called per-pixel depth reprojection.
Desktop applications using immersive headsets can enable per-pixel depth reprojection, which offers higher quality without requiring explicit work by the application. To allow per-pixel depth reprojection, open the Player Settings for the Windows Mixed Reality, go to XR Settings > Virtual Reality SDKs and check Enable Depth Buffer Sharing.
When Enable Depth Buffer Sharing is enabled, ensure applications do not explicitly call the
SetFocusPoint method, which overrides per-pixel depth reprojection with plane-based reprojection.
Enable Depth Buffer Sharing on HoloLens has no benefit unless you have updated the device’s OS to Windows 10 Redstone 4 (RS4). If the device is running RS4, then it determines the stabilization plane automatically from the range of values found in the depth buffer. Reprojection still happens using a stabilization plane, but the application does not need to explicitly call
SetFocusPoint.
For more information on how the HoloLens achieves stable holograms, see Microsoft’s documentation on hologram stability.
Anchor components are a way for the virtual world to interact with the real world. An anchor is a special component that overrides the position and orientation of the Transform component of the GameObject it is attached to.
A WorldAnchor represents a link between the device’s understanding of an exact point in the physical world and the GameObject containing the WorldAnchor component. Once added, a GameObject with a WorldAnchor component remains locked in place to a location in the real world, but may shift in Unity coordinates from frame to frame as the device’s understanding of the anchor’s position becomes more precise.
Some common uses for WorldAnchors include:
Locking a holographic game board to the top of a table.
Locking a video window to a wall.
Use WorldAnchors whenever there is a GameObject or group of GameObjects that you want to fix to a physical location.
WorldAnchors override their parent GameObject’s Transform component, so any direct manipulations of the Transform component are lost. Similarly, GameObjects with WorldAnchor components should not contain Rigidbody components with dynamic physics. These components can be more resource-intensive and game performance will decrease the further apart the WorldAnchors are.
Note: Only use a small number of WorldAnchors to minimise performance issues. For example, a game board surface placed upon a table only requires a single anchor for the board. In this case, child GameObjects of the board do not require their own WorldAnchor.
For more information and best practices, see Microsoft’s documentation on spatial anchors.
2018–03–27 Page published with editorial review
New content added for XR API changes in 2017.3 | https://docs.unity3d.com/ru/2019.1/Manual/wmr_input_types.html | 2020-10-20T00:13:31 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.unity3d.com |
openmediavault is Copyright © 2009-2020 by Volker Theile ([email protected]). All rights reserved.
openmediavault is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License v3 as published by the Free Software Foundation. The documentation is licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0).
openmediavmediavault. If not, see <>. | https://openmediavault.readthedocs.io/en/4.x/copyright.html | 2020-10-19T23:27:22 | CC-MAIN-2020-45 | 1603107867463.6 | [] | openmediavault.readthedocs.io |
Registration views.
- class
registration.views.
RegistrationView¶
A subclass of Django’s FormView, which provides the infrastructure for supporting user registration.
Since it’s a subclass of
FormView,
RegistrationViewhas all the usual attributes and methods you can override; however, there is one key difference. In order to support additional customization,
RegistrationViewalso passes the
HttpRequestto most of its methods. Subclasses do need to take this into account, and accept the
requestargument.(request)¶
Select a form class to use on a per-request basis. If not overridden, will use
form_class. Should be the actual class object.
get_success_url(request,(request(request, *args, **kwargs)¶
Actually perform the business of activating a user account. Receives the
HttpRequestobject and any positional or keyword arguments passed to the view. Should return the activated user account if activation is successful, or any value which evaluates
Falsein boolean context if activation is unsuccessful. | https://django-registration.readthedocs.io/en/1.0/views.html | 2020-10-20T00:02:47 | CC-MAIN-2020-45 | 1603107867463.6 | [] | django-registration.readthedocs.io |
.
Commercial features in Sensu Go
- mutual transport layer security (mTLS) authentication to provide two-way verification of your Sensu agents and backend connections.
- Manage resources from your browser: Create, edit, and delete checks, handlers, mutators, and event filters using the Sensu web UI, and access the Sensu web UI homepage.
- Control permissions with Sensu role-based access control (RBAC), with the option of using Lightweight Directory Access Protocol (LDAP) and Active Directory (AD) assets.
-
Use sensuctl to view your license details at any time:
sensuctl license info
These resources will help you get started with commercial features in Sensu Go: | https://docs.sensu.io/sensu-go/latest/commercial/ | 2020-10-20T00:29:13 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['/images/go-license-download.png',
'Screenshot of Sensu account license download'], dtype=object)] | docs.sensu.io |
Encrypt/decrypt a file¶
Zend\Crypt\FileCipher implements the encryption of decryption of a file using a symmetric cipher in CBC mode
with the encrypt-then-authenticate approach, using HMAC to provide authentication (the same solution used by
Zend\Crypt\BlockCipher component).
Encrypt and decrypt a file is not an easy task, especially a big file. For instance, in CBC mode you must be sure to handle the IV correctly for each block. That means, if you are reading a big file you need to use a buffer and be sure to use the last block of the buffer as new IV for the next encryption step.
The
FileCipher uses a symmetric cipher, with the
Zend\Crypt\Symmetric\Mcrypt component.
The usage of this component is very simple, you just need to create an instance of
FileCipher and specify the
key, and you are ready to encrypt/decrypt any file:
By default
FileCipher uses the AES encryption algorithm (with a key of 256 bit) and the SHA-256 hash
algorithm to authenticate the data using the HMAC function. This component uses the PBKDF2 key derivation
algorithm to generate the encryption key and the authentication key, for the HMAC, based on the key specified
using the method
setKey().
If you want to change the encryption algorithm, you can use the
setCipherAlgorithm() function, for instance
you can specity to use the Blowfish encryption algorihtm using
setCipherAlgorithm('blowfish').
You can retrieve the list of all the supported encryption algorithm in your environment using the function
getCipherSupportedAlgorithms(), it will return an array of all the algorithm name.
If you need to customize the cipher algorithm, for instance changing the Padding mode, you can inject your
Mcrypt object in the
FileCipher using the
setCipher() method. The only parameter of the cipher that you cannot
change is the cipher mode, that will be CBC in any case.
Note
Output format
The output of the encryption file is in binary format. We used this format to do not impact on the output size. If you encrypt a file using the FileCipher component, you will notice that the output file size is almost the same of the input size, just some bytes more to store the HMAC and the IV vector. The format of the output is the concatenation of the HMAC, the IV and the encrypted file. | https://zf2-docs.readthedocs.io/en/latest/modules/zend.crypt.file.html | 2020-10-20T01:02:26 | CC-MAIN-2020-45 | 1603107867463.6 | [] | zf2-docs.readthedocs.io |
The one-step workflow¶
As an alternative to the HMAC and
model-based two-step (registration and
activation) workflows, django-registration bundles a one-step
registration workflow in
registration.backends.simple. This
workflow is deliberately as simple as possible:
- A user signs up by filling out a registration form.
- The user’s account is created and is active immediately, with no intermediate confirmation or activation step.
- The new user is logged in immediately.
Configuration¶
To use this workflow, include the URLconf
registration.backends.simple.urls somewhere in your site’s own URL
configuration. For example:
from django.conf.urls import include, url urlpatterns = [ # Other URL patterns ... url(r'^accounts/', include('registration.backends.simple.urls')), # More URL patterns ... ]
To control whether registration of new accounts is allowed, you can
specify the setting
REGISTRATION_OPEN.
Upon successful registration, the user will be redirected to the
site’s home page – the URL
/. This can be changed by subclassing
registration.backends.simple.views.RegistrationView and overriding
the method
get_success_url().(), and
specifying the custom subclass in your URL patterns.
Templates¶
The one-step workflow uses only one custom template:. | https://django-registration.readthedocs.io/en/2.3/one-step-workflow.html | 2020-10-20T00:45:18 | CC-MAIN-2020-45 | 1603107867463.6 | [] | django-registration.readthedocs.io |
package¶
Use the package resource to manage packages. When the package is installed from a local file (such as with RubyGems, dpkg, or RPM Package Manager), the file must be added to the node using the remote_file or cookbook_file resources.
This resource is the base resource for several other resources used for package management on specific platforms. While it is possible to use each of these specific resources, it is recommended to use the package resource as often as possible.
For more information about specific resources for specific platforms, see the following topics:
- apt_package
- bff_package
- chef_gem
- dpkg_package
- easy_install_package
- freebsd_package
- gem_package
- homebrew_package
- ips_package
- macports_package
- pacman_package
- portage_package
- rpm_package
- smartos_package
- solaris_package
- windows_package
- yum_package
Syntax¶
A package resource block manages a package on a node, typically by installing it. The simplest use of the package resource is:
package 'httpd'
which will install Apache using all of the default options and the default action (:install).
For a package that has different package names, depending on the platform, use a case statement within the package:
package 'Install Apache' do case node[:platform] when 'redhat', 'centos' package_name 'httpd' when 'ubuntu', 'debian' package_name 'apache2' end end
where 'redhat', 'centos' will install Apache using the httpd package and 'ubuntu', 'debian' will install it using the apache2 package
The full syntax for all of the properties that are available to the package resource is:
package 'name' do allow_downgrade TrueClass, FalseClass # Yum, RPM packages only arch String, Array # Yum packages only default_release String # Apt packages only flush_cache Array gem_binary String homebrew_user String, Integer # Homebrew packages only notifies # see description options String package_name String, Array # defaults to 'name' if not specified provider Chef::Provider::Package response_file String # Apt packages only response_file_variables Hash # Apt packages only source String subscribes # see description timeout String, Integer version String, Array action Symbol # defaults to :install if not specified end
where
- package tells the chef-client to manage a package; the chef-client will determine the correct package provider to use based on the platform running on the node
- 'name' is the name of the package
- :action identifies which steps the chef-client will take to bring the node into the desired state
- allow_downgrade, arch, default_release, flush_cache, gem_binary, homebrew_user, options, package_name, provider, response_file, response_file_variables, source, recursive,.
Warning
Gem package options should only be used when gems are installed into the system-wide instance of Ruby, and not the instance of Ruby dedicated to the chef-client.. (Debian platform only; for other platforms, use the :remove action.)
- :reconfig
- Reconfigure a package. This action requires a response file.
- :remove
- Remove a package.
- :upgrade
- Install a package and/or ensure that a package is the latest version.
Properties¶
This resource has the following attributes:
- allow_downgrade
Ruby Types: TrueClass, FalseClass
yum_package resource only. Downgrade a package to satisfy requested version requirements. Default value: false.
- arch
Ruby Types: String, Array
yum_package resource only. The architecture of the package to be installed or upgraded. This value can also be passed as part of the package name.
- default_release
Ruby Type: String
apt_package resource only. The default release. For example: stable.
-.
- gem_binary
Ruby Type: String
A property for the gem_package provider that is used to specify a gems binary.
- homebrew_user
Ruby Types: String, Integer
homebrew_package resource only. The name of the Homebrew owner to be used by the chef-client when executing.
- response_file
Ruby Type: String
apt_package and dpkg_package resources only. The direct path to the file used to pre-seed a package.
- response_file_variables
Ruby Type: Hash
apt_package and dpkg_package resources only. A Hash of response file variables in the form of {"VARIABLE" => "VALUE"}.
-.
Note
The AIX platform requires source to be a local file system path because installp does not retrieve packages using HTTP or FTP.
-.
- Chef::Provider::Package::Dpkg, dpkg_package
- The provider for the dpkg platform. Can be used with the options attribute.
- Chef::Provider::Package::EasyInstall, easy_install_package
- The provider for Python.
- Chef::Provider::Package::Freebsd, freebsd_package
- The provider for the FreeBSD platform.
- Chef::Provider::Package::Ips, ips_package
- The provider for the ips platform.
- Chef::Provider::Package::Macports, macports_package
- The provider for the Mac OS X platform.
- Chef::Provider::Package::Pacman, pacman_package
- The provider for the Arch Linux platform.
- Chef::Provider::Package::Portage, portage_package
- The provider for the Gentoo platform. Can be used with the options attribute.
- Chef::Provider::Package::Rpm, rpm_package
- The provider for the RPM Package Manager platform. Can be used with the options attribute.
- Chef::Provider::Package::Rubygems, gem_package
Can be used with the options attribute.
Warning
The gem_package resource must be specified as gem_package and cannot be shortened to package in a recipe.
- Chef::Provider::Package::Rubygems, chef_gem
- Can be used with the options attribute.
- Chef::Provider::Package::Smartos, smartos_package
- The provider for the SmartOS platform.
- Chef::Provider::Package::Solaris, solaris_package
- The provider for the Solaris platform.
- Chef::Provider::Package::Windows, package
- The provider for the Microsoft Windows platform.
- Chef::Provider::Package::Yum, yum_package
- The provider for the Yum package provider.
- Chef::Provider::Package::Zypper, package
- The provider for the openSUSE platform. a gems file from the local file system
gem_package 'right_aws' do source '/tmp/right_aws-1.11.0.gem' action :install end
Install a package
package 'tar' do action :install end
Install a package version
package 'tar' do version '1.16.1-1' action :install end
Install a package with options
package 'debian-archive-keyring' do action :install options '--force-yes' end
Install a package with a response_file
Use of a response_file is only supported on Debian and Ubuntu at this time. Custom resources must be written to support the use of a response_file, which contains debconf answers to questions normally asked by the package manager on installation. Put the file in /files/default of the cookbook where the package is specified and the chef-client will use the cookbook_file resource to retrieve it.
To install a package with a response_file:
package 'sun-java6-jdk' do response_file 'java.seed' end
Install a package using a specific provider
package 'tar' do action :install source '/tmp/tar-1.16.1-1.rpm' provider Chef::Provider::Package::Rpm end
Install a specified architecture using a named provider
yum_package 'glibc-devel' do arch 'i386' end
Purge a package
package 'tar' do action :purge end
Remove a package
package 'tar' do action :remove end
Upgrade a package
package 'tar' do action :upgrade end
Use the ignore_failure common attribute
gem_package 'syntax' do action :install ignore_failure true end
Use the provider common attribute
package 'some_package' do provider Chef::Provider::Package::Rubygems end
Avoid unnecessary string interpolation
Do this:
package 'mysql-server' do version node['mysql']['version'] action :install end
and not this:
package 'mysql-server' do version "#{node['mysql']['version']}" action :install end
Install a package in a platform
The following example shows how to use the package resource to install an application named app and ensure that the correct packages are installed for the correct platform:
package 'app_name' do action :install end case node[:platform] when 'ubuntu','debian' package 'app_name-doc' do action :install end when 'centos' package 'app_name-html' do action :install end end
Install sudo, then configure /etc/sudoers/ file
The following example shows how to install sudo and then configure the /etc/sudoers file:
# the following code sample comes from the ``default`` recipe in the ``sudo`` cookbook: package 'sudo' do action :install end if node['authorization']['sudo']['include_sudoers_d'] directory '/etc/sudoers.d' do mode '0755' owner 'root' group 'root' action :create end cookbook_file '/etc/sudoers.d/README' do source 'README' mode '0440' owner 'root' group 'root' action :create end end template '/etc/sudoers' do source 'sudoers.erb' mode '0440' owner 'root' group platform?('freebsd') ? 'wheel' : 'root' variables( :sudoers_groups => node['authorization']['sudo']['groups'], :sudoers_users => node['authorization']['sudo']['users'], :passwordless => node['authorization']['sudo']['passwordless'] ) end
where
- the package resource is used to install sudo
- the if statement is used to ensure availability of the /etc/sudoers.d directory
- the template resource tells the chef-client where to find the sudoers template
- the variables property is a hash that passes values to template files (that are located in the templates/ directory for the cookbook
Use a case statement to specify the platform
The following example shows how to use a case statement to tell the chef-client which platforms and packages to install using cURL.
package 'curl' case node[:platform] when 'redhat', 'centos' package 'package_1' package 'package_2' package 'package_3' when 'ubuntu', 'debian' package 'package_a' package 'package_b' package 'package_c' end end
where node[:platform] for each node is identified by Ohai during every chef-client run. For example:
package 'curl' case node[:platform] when 'redhat', 'centos' package 'zlib-devel' package 'openssl-devel' package 'libc6-dev' when 'ubuntu', 'debian' package 'openssl' package 'pkg-config' package 'subversion' end end
Use symbols to reference attributes
Symbols may be used to reference attributes:
package 'mysql-server' do version node[:mysql][:version] action :install end
instead of strings:
package 'mysql-server' do version node['mysql']['version'] action :install end
Use a whitespace array to simplify a recipe
The following examples show different ways of doing the same thing. The first shows a series of packages that will be upgraded:
package 'package-a' do action :upgrade end package 'package-b' do action :upgrade end package 'package-c' do action :upgrade end package 'package-d' do action :upgrade end
and the next uses a single package resource and a whitespace array (%w):
%w{package-a package-b package-c package-d}.each do |pkg| package pkg do action :upgrade end end
where |pkg| is used to define the name of the resource, but also to ensure that each item in the whitespace array has its own name.
Specify the Homebrew user with a UUID
homebrew_package 'emacs' do homebrew_user 1001 end
Specify the Homebrew user with a string
homebrew_package 'vim' do homebrew_user 'user1' end | https://docs-archive.chef.io/release/12-0/resource_package.html | 2020-10-20T00:46:09 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs-archive.chef.io |
Installation Strategies
Sensu’s architecture is one of its most compelling features. It is flexible enough to be installed on a single system for development/testing/lab purposes (or small production environments), and sophisticated enough to support highly available configurations capable of monitoring infrastructure at scale.
Please review the following definitions of standalone, distributed, and high-availability installation strategies to help you select which one will be the most appropriate for your installation. If you’re just getting started with Sensu and/or if you’re not sure which strategy to choose, follow the instructions for a standalone installation.
Standalone
Install all of Sensu’s dependencies and services on a single system. For the purposes of this installation guide (which is designed to help new users learn how Sensu works and/or setup Sensu in a development environment), a standalone installation is recommended.
To proceed with a standalone installation, please select a single compute resource with a minimum of 2GB of memory (4GB recommended) (e.g. a physical computer, virtual machine, or container) as your installation target, and continue to the next step in the guide.
NOTE: Sensu’s modular design makes it easy to upgrade from a standalone installation to a distributed or high-availability installation, so unless you have some specific technical requirement that demands a distributed or high availability installation, there’s usually no need to start with a more complex installation.
Distributed
Install Sensu’s dependencies (e.g. RabbitMQ and/or Redis) and services (i.e.
the Sensu server and API) on separate systems. The only difference between
a Standalone installation and a Distributed installation is that Sensu’s
dependencies and services are running on different systems. As a result,
although this guide will explain how to perform a Distributed Sensu
installation, it will not cover such industry-standard concepts as networking,
etc (i.e. configuring services to communicate with other services installed
elsewhere on the network will be left as an exercise for the user; e.g.
replacing default
localhost configurations with the corresponding addresses
and/or ports, and ensuring that the appropriate network connections and firewall
rules will allow said services to communicate with one another).
To proceed with a distributed installation, please select a minimum of two (2) compute resources (e.g. physical computers, virtual machines, or containers) as your installation targets, and continue to the next step in the guide.
NOTE: for the purposes of this installation guide, distributed installation will be described in terms of two (2) installation targets. One system will act as the “transport and datastore” system, and one system will be act as the Sensu server. Advanced users who may wish to use more than two systems are welcome to do so (e.g. using one as the transport/RabbitMQ, one as the data store/Redis, one as the Sensu server, and one or more for running Sensu clients).
High Availability
Install Sensu’s dependencies across multiple systems, in a high-availability configuration (clustering, etc), and install the Sensu services on multiple systems in a clustered configuration. High availability configurations will be introduced at conclusion of this guide. | https://docs.sensu.io/sensu-core/1.7/installation/installation-strategies/ | 2020-10-20T00:47:38 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.sensu.io |
Using this feature you are able to view your most recent invoices sent out and your upcoming invoices to be sent for recurring/automated type invoices on each clients account.
This helps to ensure that all invoicing is in order and to foresee should any invoicing need correcting before the invoices are sent.
Go into the client’s account. In the left navigation menu, click on Account Statement ⇒ Future Invoices.
To adjust any of these invoices, click on the invoice number to open the invoice. Click on “edit invoice”.
See Also: Adding a Service; Service Invoice Rules; Recurring Invoice | https://docs.snapbill.com/viewing_recent_or_future_invoices | 2020-10-19T23:29:52 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.snapbill.com |
Contents:
Contents: Trifacta® Wrangler Enterprise."
This page has no comments. | https://docs.trifacta.com/display/r071/Create+Dataset+with+SQL | 2020-10-20T00:00:58 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.trifacta.com |
Custom user models¶
When django-registration was first developed, Django’s
authentication system supported only its own built-in user model,
django.contrib.auth.models.User. More recent versions of Django
have introduced support for custom user models.
Older versions of django-registration did not generally support custom user models due to the additional complexity required. However, django-registration now can support custom user models. Depending on how significantly your custom user model differs from Django’s default, you may need to change only a few lines of code; custom user models significantly different from the default model may require more work to support.
Overview¶
The primary issue when using django-registration with a custom
user model will be
RegistrationForm.
RegistrationForm is
a subclass of Django’s built-in
UserCreationForm, which in turn is
a
ModelForm with its model set to
django.contrib.auth.models.User. The only changes made by
django-registration are to apply the reserved name validator
(
registration.validators.ReservedNameValidator) and make the
RegistrationForm because two of
the three built-in workflows of django-registration require an
user). As a result, you will always be required to supply a custom
form class when using django-registration with a custom user
model.
In the case where your user model is compatible with the default
behavior of django-registration, (see below) registration.forms import RegistrationForm from mycustomuserapp.models import MyCustomUser class MyCustomUserForm(RegistrationForm): class Meta: model = MyCustomUser
You will also need to specify the fields to include in the form, via
the
fields declaration.
And then in your URL configuration (example here uses the HMAC activation workflow):
from django.conf.urls import include, url from registration.backends.hmac.views import RegistrationView from mycustomuserapp.forms import MyCustomUserForm urlpatterns = [ # ... other URL patterns here url(r'^accounts/register/$', RegistrationView.as_view( form_class=MyCustomUserForm ), name='registration_register', ), url(r'^accounts/', include('registration.backends.hmac.urls')), ]
If your custom user model is not compatible with the built-in workflows of django-registration (see next section), you will probably need to subclass the provided views (either the base registration views, or the views of the workflow you want to use) and make the appropriate changes for your user model.
Determining compatibility of a custom user model¶
The built-in workflows and other code of django-registration do as
much as is possible to ensure compatibility with custom user models;
django.contrib.auth.models.User is never directly imported or
referred to, and all code in django-registration instead uses
settings.AUTH_USER_MODEL or
django.contrib.auth.get_user_model() to refer to the user model,
and
USERNAME_FIELD when access to the username is required.
However, there are still some specific requirements you’ll want to be aware of.
The two-step activation workflows – both HMAC- and model-based – require that your user model have the following fields:
CharFieldor
TextField) holding the user’s email address. Note that this field is required by
RegistrationForm, which is a difference from Django’s default
UserCreationForm.
is_active– a
BooleanFieldindicating whether the user’s account is active.
You also must specify the attribute
USERNAME_FIELD on your user
model to denote the field used as the username. Additionally, your
user model must implement the
to the user.
The model-based activation workflow requires one additional field:
date_joined– a
DateFieldor
DateTimeFieldindicating when the user’s account was registered.
The one-step workflow requires that your
user model set
USERNAME_FIELD, and requires that it define a field
named
password for storing the user’s password (it will expect to
find this value in the
password1 field of the registration form);
the combination of
USERNAME_FIELD and
password must be
sufficient to log a user in. Also note that
RegistrationForm
requires the
RegistrationForm.
If your custom user model defines additional fields beyond the minimum
requirements, you’ll either need to ensure that all of those fields
are optional (i.e., can be
NULL in your database, or provide a
suitable default value defined in the model), or you’ll need to
specify the full list of fields to display in the
fields option of
your
RegistrationForm subclass. | https://django-registration.readthedocs.io/en/2.4/custom-user.html | 2020-10-20T01:06:46 | CC-MAIN-2020-45 | 1603107867463.6 | [] | django-registration.readthedocs.io |
Assembly
Record of Committee Proceedings
Committee on Mining and Rural Development
Assembly Bill 71
Relating to: aid payments on, and city, village, town, and county approval of, certain lands purchased by the Department of Natural Resources and restrictions on the purchase of land by the Board of Commissioners of Public Lands.
By Representatives Sanfelippo, Craig, August, Brandtjen, R. Brooks, Czaja, Edming, Hutton, Jacque, Kapenga, Kleefisch, Kremer, Kulp, T. Larson, Neylon, Quinn, Skowronski, Thiesfeldt and Weatherston; cosponsored by Senators Tiffany and Nass.
March 05, 2015 Referred to Committee on Mining and Rural Development
April 07, 2016 Failed to pass pursuant to Senate Joint Resolution 1
______________________________
James Emerson
Committee Clerk | https://docs.legis.wisconsin.gov/2015/related/records/assembly/mining_and_rural_development/1237967 | 2020-10-20T01:08:57 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.legis.wisconsin.gov |
How is NetworkWatcherRG (resource group) with out any resource in it related to Virtual Network diagram component? If i delete the NetworkWatcherRG the diagram setting will not render anything in the the Vnet Diagram setting and why? Can i consider the a ResourceGroup also as a resource as it is enabling the diagram component in the Vnet ? | https://docs.microsoft.com/en-us/answers/questions/3048/virtual-network-network-watcher.html | 2020-10-20T01:49:10 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.microsoft.com |
Sensu Client
Having successfully installed and configured a Sensu server and API (Sensu Core
or Sensu Enterprise), let’s now install and/or configure a Sensu client. The
Sensu client is run on every system you need to monitor, including those running
the Sensu server and API, and Sensu’s dependencies (i.e. RabbitMQ and/or
Redis). Both Sensu Core and Sensu Enterprise use the same Sensu client
process (i.e.
sensu-client), so upgrading from Sensu Core to Sensu
Enterprise does not require you to install a difference Sensu client.
Included in Sensu Core
The Sensu client process (
sensu-client) is part of the open source Sensu
project (i.e. Sensu Core) and it is included in the Sensu Core installer
packages along with the Sensu Core server and API processes (i.e.
sensu-server
and
sensu-api). This means that if you are following the instructions in this
guide for a standalone installation, your Sensu client is already
installed!
Disabled by default
The Sensu client process (
sensu-client) is disabled by default on all
platforms. Please refer to the corresponding configuration and operation
documentation corresponding to the platform where you have installed your Sensu
client(s) for instructions on starting & stopping the Sensu client process,
and/or enabling the Sensu client process to start automatically on system boot.
Platforms
To continue with this guide, please refer to the Install Sensu Core, Configure Sensu, and Operating Sensu instructions corresponding to the platform(s) where you will run your Sensu client(s). | https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/ | 2020-10-20T00:58:33 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.sensu.io |
DeleteFlowLogs
Deletes one or more flow logs.
- FlowLogId.N
One or more flow log IDs.
Constraint: Maximum of 1000 flow log IDs.
Type: Array of strings
Required: Yes
Response Elements
The following elements are returned by the service.
- requestId
The ID of the request.
Type: String
- unsuccessful
Information about the flow logs that could not be deleted successfully.
Type: Array of UnsuccessfulItem objects
Errors
For information about the errors that are common to all actions, see Common Client Errors.
Example
Example
This example deletes flow log fl-1a2b3c4d.
Sample Request &FlowLogId.1=fl-1a2b3c4d &AUTHPARAMS
Sample Response
<DeleteFlowLogsResponse xmlns=""> <requestId>c5c4f51f-f4e9-42bc-8700-EXAMPLE</requestId> <unsuccessful/> </DeleteFlowLogsResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DeleteFlowLogs.html | 2020-10-20T00:34:42 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.aws.amazon.com |
For automated deployment on AWS, see Ansible deployment.
Check your environment is prepared with General Requirements and Database Storage Requirements.
BlockScout requires a full archive node in order to import every state change for every address on the target network. For client specific settings related to a node running parity or geth, please see Client Settings.
1)
git clone
2)
cd blockscout
3) Provide DB URL:
export DATABASE_URL=postgresql://user:password@localhost:5432/blockscout
Linux: Update the database username and password configuration
Mac: Use logged-in user name and empty password
Optional: Change credentials in
apps/explorer/config/test.exs for test env
4) Generate a new secret_key_base for the DB by setting a corresponding ENV var:
export SECRET_KEY_BASE=VTIB3uHDNbvrY0+60ZWgUoUBKDn9ppLR8MI4CpRz4/qLyEFs54ktJfaNT6Z221No
In order to generate a new
secret_key_base run
mix phx.gen.secret
5) If you have deployed previously, remove static assets from the previous build
mix phx.digest.clean.
6) Set other environment variables as needed.
CLI Example:
export ETHEREUM_JSONRPC_VARIANT=parityexport ETHEREUM_JSONRPC_HTTP_URL= DATABASE_URL=postgresql://...export COIN=DAIexport ...
The
ETHEREUM_JSONRPC_VARIANT will vary depending on your client (parity, geth etc). More information on client settings.
7) Install Mix dependencies, compile them and compile the application:
mix do deps.get, local.rebar --force, deps.compile, compile
8) If not already running, start Postgres:
pg_ctl -D /usr/local/var/postgres start
To check postgres status:
pg_isready
9) Create and migrate database
mix do ecto.create, ecto.migrate
If you in dev environment and have run the application previously with the different blockchain, drop the previous database
mix do ecto.drop, ecto.create, ecto.migrate
Be careful since it will delete all data from the DB. Don't execute it on production if you don't want to lose all the data!
10) Install Node.js dependencies
cd apps/block_scout_web/assets; npm install && node_modules/webpack/bin/webpack.js --mode production; cd -
cd apps/explorer && npm install; cd -
11) Build static assets for deployment
mix phx.digest
12) Enable HTTPS in development. The Phoenix server only runs with HTTPS.
cd apps/block_scout_web; mix phx.gen.cert blockscout blockscout.local; cd -
Add blockscout and blockscout.local to your
/etc/hosts
127.0.0.1 localhost blockscout blockscout.local255.255.255.255 broadcasthost::1 localhost blockscout blockscout.local
If using Chrome, Enable
chrome://flags/#allow-insecure-localhost
13) Return to the root directory and start the Phoenix Server.
mix phx.server | https://docs.blockscout.com/for-developers/manual-deployment | 2020-10-20T00:48:05 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.blockscout.com |
Frontastic Coding Guide
Below are the common standards for Frontastic development. These guidelines are mandatory for development inside Frontastic and are highly recommended for Project development. If you have a valid reason for going against the guidelines, please document these reasons and let us know.
In this article:
Automatic static code analysis can be triggered using the below command in any module:
ant test-static
Be sure to run this command frequently. Some rules will even let the Continuous Integration build fail, so you'll notice then at the latest.
GIT Workflow
Frontastic follows a "Master based development" flow (originally known as "trunk based development"). This means, branches are generally discouraged. All code should go directly into Master. This requires each push (ideally each commit) to leave the code base fully functional.
Commit Guidelines
- Pull before you push
- Rebase unpushed changes in favor of merge (set
pull.rebaseglobally to
true)
- Structure your work in logical steps and commit parts of your work together which deal with a common purpose
- Frequent, smaller commits are preferred over large batches of work
- Push frequently, but always ensure a working state in
master
Commit Message Guidelines
- Every commit message consists of at least a subject line explaining the change in a few meaningful words
- Limit the subject line to 50 characters (soft limit) respectively 80 characters (hard limit)
- Capitalize the subject line
- Use past tense in the subject (
Fixed Schema to complete defaultsinstead of
Fixes Schema to complete defaults)
- If you're working on a ticket, prefix the subject by the ticket number using a
#(for example,
#4223 Implemented model for product types)
- Add a body to your commit to explain the reasons for your change if you feel it's necessary (for example, removing a feature, changing a behavior for certain reasons, etc.)
- Divide the subject from the body using a blank line
- Use of Markdown style elements in the body is permitted (for example, lists)
Master Based Development Guidelines
- Run (all/component) tests before pushing (
$ ant test)
- Use an iterative development approach (start with the smallest functional feature possible and extend it subsequently)
- Create tests for all new code to ensure it's basically working (no need for full code-coverage or similar)
- Implement code without integrating it directly into the app before it's functional (use tests!)
- Deactivate the effect of your code using a feature-flag if it could disturb others while being under development
- If you're unsure about changing existing code and it doesn't have (enough) tests: create tests first
- Always test the frontend parts that your change affects in your development VM/Container before pushing
If you're unsure if a specific part of your code is the right way of doing it, feel free to create a minimal branch for that specific piece of code and let us know.
Programming Language Crossing Coding Style
Frontastic encourages Clean Code as described in Robert Martin's book.
Most importantly, the following rules should by applied to any kind of code:
- Stick to the patterns you find in existing code
- If you find code places that can be optimized for cleanness, go ahead and optimize them (Boy Scout rules)
- Use meaningful names for all code entities (classes, methods, fields, variables, files, ...)
- Avoid and encapsulate side-effects
- Use exceptions for errors
- Avoid comments that repeat code
- Add comments where you do something unusual
- Keep comments short and to the point
- Frequently run the code analysis tools available (
$ ant test)
Docs for CSS structure
- BEM ()
- SMACSS ()
- ITCSS ()
- OOCSS ()
- Atomic Design ()
Principles
It's always hard to organize CSS really well so we use the principle of ITCSS (Inverted Triangle CSS) to separate our Folder Structure into different layers.
Folder Structure
Be sure to follow the ITCSS structure and a number should be given to each folder.
Importing files
- Don't use
_before the filename
- Separate each block with an own import statement
- Separate each file with comma
- Don't use
.scssending
@import "00-settings/settings.colors", "00-settings/settings.layout"; @import "01-tools/tools.border", "01-tools/tools.get-color";
File Structure
A file can have different sections. The first section is the configuration area where you can define component-based settings. Then you give the component the style.
Please use childs and states in the block. Under the block you can describe the variations.
// Config $button-border-color: blue; // Component /// @group Buttons .c-button { cursor: pointer; display: inline-block: padding: get-spacing-squished(m); vertical-align: top; -webkit-appearance: none; -moz-appearance: none; @include border(transparent); background: get-color-brand(primary); color: get-color(unique, white); line-height: 1; text-align: center; &:hover { } &__child { color: red; } // States &.is-active { color: blue } } /// Modifiers .c-button--ghost {} .c-button--boss {}
BEM
Block, Element, Modifier. Always use this naming convention when you name your classes.
// Block .c-accordion { // Trigger &__trigger {} // Modifier &--boss {} }
Prefix for Separation
// Object .o-grid {} // Component .c-accordion {} // Utility .u-hide {}
Using T-Shirt Sizes
If you have hierarchical values, then use the different steps as T-shirt sizes.
$font-size: ( xs: .6rem, s: .8rem, m: 1rem, l: 1.4rem, xl: 1.8rem ) !default;
JS Hooks
Sometimes we need a class for only JS Hooks. It's important that JavaScript doesn't give the elements a style. For all JS Hooks Classes we use the `js' Prefix.
<div class="c-accordion js-accordion"></div>
State Handling
We separate States from the block. You can remove or add these classes with JS for State handling. Mostly these classes have a prefix like
is-.
.c-button { &.is-active { } }
Categorizing CSS Rules
We use the SMACSS approach and categorize our CSS Rules/Properties in different sections. Please structure the properties in each block alphabetically.
- Box
- Border
- Background
- Font
- Other
c-button { // Box padding: 24px; position: relative; top: 0; // Border border: 1px solid #000; // Background background: red; // Font color: #fff; line-height: 1; // Other transform: scale(1); }
Separate Colors
A system with three colors will help to scale up the architecture and we use the groups below:
Palette Colors
Every color which you want to use in the project should be added into this palette.
$color-palette: ( unique: ( white: #fff, black: #000 ), orange: ( base: #FFCE18 ), gray: ( light: #C1C2C5, base: #98999F, ) ) !default;
Brand Colors
These colors are only for the brand specific colors and the key colors for the theme. In this level we need a more generic approach, so the naming is a bit different.
$color-brand: ( primary: get-color(blue), primaryGradation: get-color(blue, light), secondary: get-color(orange), secondaryGradation: get-color(orange, light) ) !default;
Brightness is available in lighter, light, base, dark and darker.
Layout Colors
Mostly used for areas like boxes with a background, promoboxes or whisperboxes.
$color-layout: ( ghost: get-color(unique, white), spot: get-color(blue) ) !default;
Semantic Colors
These colors define the semantic of a component, element, etc. For example, hints, success or error states. Use meaningful and unique colors for feedback like success, warning, danger, etc.
$color-semantic: ( success: #98C674, successGradation: #F0FAEA, error: #E07676, errorGradation: #FAEAEA ) !default;
Avoid !important and IDs
Please don't use !important or IDs for styling as we prefer classes. An ID should only be used for JS. The only exception for
!important is in utilities where it's possible to use it.
.u-hide { display: none !important; }
Line-Breaks, Whitespaces (One Tab = Two Whitespaces)
.c-button { margin: 12px; padding: 24px; }
Use the Function to Get the Color
// DO: use the function .c-button { background: get-color-brand(primary); } // DON'T: use hex value of the color in the component .c-button { background: #000; }
Handling Spacing
CSS use properties like margin, padding and absolute positioning to separate objects. We think that Spacing should have its own concept like the color section. Sometimes you need padding for a box, a space to the item left or a distance to the next row. So we should separate these into different categories.
Separate in these categories:
- Inset - A inset spacing offers indent spacing like boxes or a photo frame on a wall.
- Inline - We put elements in a row like a list of chips. So we need a space between these elements.
- Stack - In the general case you scroll vertically through the user interface. So we stack elements like a heading on a data table.
- Squish - The squish inset reduces the space top and bottom. It's mostly used in buttons and listitems.
Scaling
It always helps to use a value system for the creation of spacing. So we use a non-linear System. Starting from the base, we go in both directions to stops smaller (16, 8, 4, 2) and larger (16, 32, 64) on the scale.
Handling Indices
Please do not use an index directly as a property. You must add it to the map and use the
get-index() function.
// DO: using a map for a good overview $indices: ( navbar: 10, header: 20 ) !default; .c-header { z-index: get-index(header); } // DON'T: z-index in the component .c-header { z-index: 20; }
Alphabetical Order
.c-button { left: 0; margin: 0; padding: 0; position: relative; color: green; font-size: 21px; line-height: 1; }
Don't Use Very Generic or Specific Class Names
// DO: Super Light Button .c-button--ghost { } // DON'T: So specific .c-button--red { } // DON'T: It can contain everything .c-button--alternative { }
Don't Use ShortNames
// DO: someone will know what do you mean $font-weight-light: 300; // DON'T: a short name $fw-light: 300;
SCSS Only With a Map
If you write a SCSS Map, then you must create a function to get a value from it the easy way.
// Map $indices: ( navbar: 10, header: 20 ) !default; // Function @function get-index($key) { @if map-has-key($indices, $key) { @return map-get($indices, $key); } @warn "The key #{$key} is not in the map ’$indices’"; @return null; }
Atomic Design
We only use this principle for categorizing all kinds of components in the frontend: It doesn't reflect our CSS structure.
atoms/ molecules/ organisms/ templates/ objects/
Storybook
Design Tokens
Readable Articles
PHP Coding Conventions
The following conventions apply to PHP / backend code.
Basics
The following PSR (PHP Standardization Request) must be followed:
Besides that we are using Symfony 4.1 in all of our PHP based components and attempt to follow its rules.
General
- No mixed data type hints or mixed return type hints
- Tests for all important, critical or complex code, especially in services
- Session mustn't contain any data but the logged in users ID
- Never use Session beyond the controller
- Always use the Request object to access request data (Header, Body, Query String, Files, …)
- Only use request objects in controllers
- Don't suppress any error from PHP during runtime (@)
- Don't use static (except for factory methods in data objects)
- NEVER use static class properties
Functions and Methods
- Keep functions small and modular
- Function and methods have a limit of 50 LOC
Classes
- Properties first
- Try to limit the number of public methods on services to a maximum of 10
- Helper methods should be below the public method using them
- Classes shouldn't exceed 256 LOC
Naming
- Don't use types in names
- Keep abbreviations to a minimum
- Always use English naming
- Try to use variable names relating to a common understanding of the domain
- A variable name should describe what is contained in the variable
- Be nice to the reader and your co-worker
Use Exceptions
- Extend from component (Base Exception)
- Always throw exception in case of errors, don't return null or false
- Handle exceptions sensibly at latest in controllers
- Display sensible error messages depending on exception type
- Log technical failures, alert operations
- Try to avoid returning false in case of actual Exceptions
- Only in the case of Validation is it OK to return false
Gateways
- Get-Methods (like getById) are allowed to throw an Exception on not found
- Find-Methods aren't allowed to throw Exceptions, null should be returned
- By default use DQL. If a query needs optimization or something is not supported write raw SQL
- No Business Logic – simple checks or mapping is OK
- Save/Read data from/to database or any other data source
- Return and receive always data objects. Primitive types in documented edge-cases
- Services depend on Gateways (interfaces)
DataObjects
- Use them for all data -- never StdClass
- Never use arrays as data structures
- Data objects must not aggregate "active" dependencies (gateways, services)
- Only logic modeling eternal truth
- Avoid to create multiple DataObjects with same name
- Don't use getter / setters: Use public properties and direct property access
Service
- Max 4 dependencies
- Technical constraints like logging and caching should be moved into Decorators
- All dependencies are injected
- The business logic should be contained here
- No dependencies on externals – each external class should be wrapped behind a facade (Symfony, DB (Gateways), Webservices, …)
- Not even the Request object, but DataObjects created from the request data
- Respect Law Of Demeter – only work with direct dependencies
Controller
- Catch Exceptions
- Check Permissions
- Convert incoming data into object
- No (Business) Logic (only validation or Simple authorization like "is logged in")
- Use Request and Response objects
MySQL
- Write keywords in statements in uppercase
- No JOINs without condition
- No implicit JOINs
- All table names and column names are in singular
- All columns in a table (
user) are prefixed with the unique table prefix (
u_) -- especially also the id (
u_id)
- A foreign key reference will use the column name from the referenced table (
comment:u_id)
‹ Back to Article List | https://docs.frontastic.cloud/article/89-frontastic-coding-guide | 2020-10-19T23:36:41 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3abcb42c7d3a7e9ae74380/file-qM6aXsBtDd.jpg',
'Frontastic CSS Design Principle Triangle'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3ac4492c7d3a7e9ae743ec/file-V8JCyer9Oa.jpg',
'Frontastic Coding Guidelines Handling Spacing'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3ac56f04286364bc94e31d/file-L2pbuA6ScH.jpg',
'Frontastic Coding Guidelines Inset Definition'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3ac5742c7d3a7e9ae743f3/file-frmbaoQgfM.jpg',
'Frontastic Coding Guidelines Inline Definition'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3ac57b2c7d3a7e9ae743f4/file-uPuNBo2qn7.jpg',
'Frontastic Coding Guidelines Stack Definition'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3ac58404286364bc94e31e/file-D4u5VlY6jN.jpg',
'Frontastic Coding Guidelines Squish Definition'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3ac58a2c7d3a7e9ae743f6/file-V9pBjIZgab.jpg',
'Frontastic Coding Guidelines Stretch Definition'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5dfe56362c7d3a7e9ae56878/images/5e3ac60f2c7d3a7e9ae74401/file-5APEcSC2g7.jpg',
'Frontastic Coding Guidelines Scaling'], dtype=object) ] | docs.frontastic.cloud |
property
Declare a managed property.
modifier property type property_name; // property data member modifier property type property_name { // property block modifier void set(type); modifier type get(); } modifier property type property_name[,] { modifier void set(type); modifier type get(); }
Parameters
[,]
The notation for an indexed property. Commas are optional; for each additional parameter you want the accessor methods to take, add a comma.
modifier
A modifier that can be used on either the event declaration or on an event accessor method. Possible values are static and virtual.
property_name
Parameter(s) for the raise method, must match the signature of the delegate.
type
The type of the value represented by the property.
Remarks be of a form, such that, you cannot reference the member in the source as if it were an actual data member of the containing class. Use ildasm.exe to view the metadata for your type and see the compiler generated name for the property's backing store.
Different accessibility is allowed for the accessor methods in a property block. That is, the set method can be public and the get method can be private. However, it is an error for an accessor method to have a less restrictive accessibility than what is on the declaration of the property itself.
property is a context-sensitive keyword. See Context-Sensitive Keywords for more information.
For more information about properties, see
-
-
-
Multidimensional Properties
Overloading Property Accessor Methods
How to: Declare Abstract and Sealed Properties
The following example shows the declaration and use of a property data member and a property block. It also shows that a property accessor can be defined out of class.
Example
// mcppv2_property.cpp // compile with: /clr using namespace System; public ref class C { int MyInt; public: // property data member property String ^ Simple_Property; // property block property int Property_Block { int get(); void set(int value) { MyInt = value; } } }; int C::Property_Block::get() { return MyInt; } int main() { C ^ MyC = gcnew C(); MyC->Simple_Property = "test"; Console::WriteLine(MyC->Simple_Property); MyC->Property_Block = 21; Console::WriteLine(MyC->Property_Block); }
test 21
Requirements
Compiler option: /clr
See Also
Concepts
Language Features for Targeting the CLR | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/es7h5kch%28v%3Dvs.90%29 | 2020-02-17T07:31:10 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
When x-axis columns.
While viewing your answer as a chart, click Edit chart configuration near the top right.
In the X-Axis box, delete the values. Then re-add them in the new preferred order.
Your chart reorganizes itself to reflect the new label order. | https://docs.thoughtspot.com/5.2/end-user/search/reorder-values-on-the-x-axis.html | 2020-02-17T06:33:06 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.thoughtspot.com |
Core pexpect components¶ the Pexpect system; these are the function, run() and the class, spawn. The spawn class is more powerful. The run() function is simpler than spawn, and is good for quickly calling program. When you call the run() function it executes a given program and then returns the output. This is a handy replacement for os.system().
For example:
pexpect.run('ls -la')
The spawn class is the more powerful interface to the Pexpect system. You can use this to spawn a child program then interact with it by sending input and expecting responses (waiting for patterns in the child’s output).
For example:
child = pexpect.spawn('scp foo [email protected]:.') child.expect('Password:') child.sendline(mypassword)
This works even for commands that ask for passwords or other input outside of the normal stdio streams. For example, ssh reads input directly from the TTY device which bypasses stdin., Jacques-Etienne Baudoux,, John Spiegel, Jan Grant, and Shane Kerr. Let me know if I forgot anyone.
Pexpect is free, open source, and all that good stuff..
spawn class¶
- class
pexpect.
spawn(command, args=[], timeout=30, maxread=2000, searchwindowsize=None, logfile=None, cwd=None, env=None, ignore_sighup=False, echo=True, preexec_fn=None, encoding=None, codec_errors='strict', dimensions=None, use_poll=False)[source]¶
This is the main class interface for Pexpect. Use this class to start and control child applications.
__init__(command, args=[], timeout=30, maxread=2000, searchwindowsize=None, logfile=None, cwd=None, env=None, ignore_sighup=False, echo=True, preexec_fn=None, encoding=None, codec_errors='strict', dimensions=None, use_poll=False)[source]¶
expect()returns, the full buffer attribute remains up to size maxread irrespective of searchwindowsize value.
When the keyword argument
timeoutis specified as a number, (default: 30), then
TIMEOUTwill be raised after the value specified has elapsed, in seconds, for any of the
expect()family of method calls. When None, TIMEOUT will not be raised, and
expect(pattern, timeout=-1, searchwindowsize=-1, async_=False, **kw)¶ returns 1('foo') if parts of the final 'bar' arrive late
When a match is found for the given pattern, the class instance attribute match becomes an re.MatchObject result. Should an EOF or TIMEOUT pattern match, then the match attribute will be an instance of that exception class. The pairing before and after class instance attributes are views of the data preceding and following the matching pattern. On general exception, class attribute before is all data received up to the exception, while match and after attributes are value None.
When the keyword argument timeout is -1 (default), then TIMEOUT will raise after the default value specified by the class timeout attribute. When None, TIMEOUT will not be raised and may block indefinitely until match.
When the keyword argument searchwindowsize is -1 (default), then the value specified by the class maxread attribute is used.
A list entry may be EOF or TIMEOUT instead of a string. This will catch these exceptions and return the index of the list entry instead of raising the exception. The attribute ‘after’ will be set to the exception type. The attribute ‘match’().
On Python 3.4, or Python 3.3 with asyncio installed, passing
async_=Truewill make this return an
asynciocoroutine, which you can yield from to get the same result that this method would normally give directly. So, inside a coroutine, you can replace this code:
index = p.expect(patterns)
With this non-blocking form:
index = yield from p.expect(patterns, async_=True)
expect_exact(pattern_list, timeout=-1, searchwindowsize=-1, async_=False, **kw)¶
This is similar to expect(), but uses plain string matching instead of compiled regular expressions in ‘pattern_list’. The ‘pattern.
Like
expect(), passing
async_=Truewill make this return an asyncio coroutine.
expect_list(pattern_list, timeout=-1, searchwindowsize=-1, async_=False, **kw)¶().
Like
expect(), passing
async_=Truewill make this return an asyncio coroutine.
compile_pattern_list(patterns)¶(cpl, timeout) ...
send(s)[source]¶
Sends string
sto)
sendline(s='')[source]¶
Wraps send(), sending string
sto child process, with
os.linesepautomatically appended. Returns number of bytes written. Only a limited number of bytes may be sent for each line in the default terminal mode, see docstring of
send().
writelines(sequence)[source]¶
This calls write() for each element in the sequence. The sequence can be any iterable object producing strings, typically a list of strings. This does not add line separators. There is no return value.
sendcontrol(char)[source]¶
Helper method that wraps send() with mnemonic access for sending control character to the child (such as Ctrl-C or Ctrl-D). For example, to send Ctrl-G (ASCII 7, bell, ‘’):
child.sendcontrol('g')
See also, sendintr() and sendeof().
sendeof()[source]¶.
sendintr()[source]¶
This sends a SIGINT to the child. It does not require the SIGINT to be the first character on a line.
read(size=-1)¶(size=-1)¶
This reads and returns one entire line. The newline at the end of line is returned as part of the string, unless the file ends without a newline. An empty string is returned if EOF is encountered immediately. This looks for a newline as a CR/LF pair (rn) even on UNIX because this is what the pseudotty device returns. So contrary to what you may expect you will receive newlines as rn.
If the size argument is 0 then an empty string is returned. In all other cases the size argument is ignored, which is not standard behavior for a file-like object.
read_nonblocking(size=1, timeout=-1)[source]¶ ‘size’.
interact(escape_character='\x1d', input_filter=None, output_filter=None)[source]¶()
logfile¶
logfile_read¶
logfile_send¶
Set these to a Python file object (or
sys.stdout) to log all communication, data read from the child process, or data sent to the child process.
Note
With
spawnin bytes mode, the log files should be open for writing binary data. In unicode mode, they should be open for writing unicode text. See Handling unicode.
Controlling the child process¶
- class
pexpect.
spawn[source]
kill(sig)[source]¶
This sends the given signal to the child application. In keeping with UNIX tradition it has a misleading name. It does not necessarily kill the child unless you send the right signal.
terminate(force=False)[source]¶
This forces a child process to terminate. It starts nicely with SIGHUP and SIGINT. If “force” is True then moves onto SIGKILL. This returns True if the child was terminated. This returns False if the child could not be terminated.
isalive()[source]¶.
wait()[source]¶
wait()has already been called previously or
isalive()method returns False. It simply returns the previously determined exit status.
close(force=True)[source]¶).
getwinsize()[source]¶
This returns the terminal window size of the child tty. The return value is a tuple of (rows, cols).
setwinsize(rows, cols)[source]¶.
getecho()[source]¶
This returns the terminal echo mode. This returns True if echo is on or False if echo is off. Child applications that are expecting you to enter a password often set ECHO False. See waitnoecho().
Not supported on platforms where
isatty()returns False.
setecho(state)[source]¶.
waitnoecho(timeout=-1)[source]¶.
Handling unicode¶
By default,
spawn is a bytes interface: its read methods return bytes,
and its write/send and expect methods expect bytes. If you pass the encoding
parameter to the constructor, it will instead act as a unicode interface:
strings you send will be encoded using that encoding, and bytes received will
be decoded before returning them to you. In this mode, patterns for
expect() and
expect_exact() should also be unicode.
Changed in version 4.0:
spawn provides both the bytes and unicode interfaces. In Pexpect
3.x, the unicode interface was provided by a separate
spawnu class.
For backwards compatibility, some Unicode is allowed in bytes mode: the send methods will encode arbitrary unicode as UTF-8 before sending it to the child process, and its expect methods can accept ascii-only unicode strings.
run function¶
pexpect.
run(command, timeout=30, withexitstatus=False, events=None, extra_args=None, logfile=None, cwd=None, env=None, **kwargs)[source]¶n) combination even on UNIX-like systems because this is the standard for pseudottys. If you set ‘withexitstatus’ to true, then run will return a tuple of (command_output, exitstatus). If ‘withe ‘ls -l’ on the remote machine. The password ‘secret’ will be sent if the ‘( ‘events’. ‘extra_args’ is not used by directly run(). It provides a way to pass data to a callback function through run() through the locals dictionary passed to a callback.
Like
spawn, passing encoding will make it work with unicode instead of bytes. You can pass codec_errors to control how errors in encoding and decoding are handled.
Exceptions¶
- class
pexpect.
EOF(value)[source]¶
Raised when EOF is read from a child. This usually means the child has exited.
Utility functions¶
pexpect.
which(filename, env=None)[source]¶
This takes a given filename; tries to find it in the environment path; then checks if it is executable. This returns the full path to the filename if found and executable. Otherwise this returns None.
pexpect.
split_command_line(command_line)[source]¶
This splits a command line into a list of arguments. It splits arguments on spaces, but handles embedded quotes, doublequotes, and escaped characters. It’s impossible to do this with a regular expression, so I wrote a little state machine to parse the command line. | https://pexpect.readthedocs.io/en/latest/api/pexpect.html | 2020-02-17T07:50:20 | CC-MAIN-2020-10 | 1581875141749.3 | [] | pexpect.readthedocs.io |
This tutorial will guide you through a common model lifecycle in Domino. You will start by working with data from the Balancing Mechanism Reporting Service in the UK. We will be exploring the Electricty Generation by Fuel Type and predicting the electricty generation in the future. You’ll see examples of Jupyter, Dash, pandas, and Prophet used in Domino.
The following contents is mostly meant to be following in sequence. | https://docs.dominodatalab.com/en/3.6/get_started/index.html | 2020-02-17T06:05:39 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.dominodatalab.com |
Seamless configure-price-quote integrated experience
Note
These release notes describe functionality that may not have been released yet. To see when this functionality is planned to release, please review What’s new and planned for Dynamics 365 for Sales. Delivery timelines and projected functionality may change or may not ship (see Microsoft policy).
Delight your customers with fast turnaround times to consistently and accurately configure price and quote by leveraging Dynamics 365 for Sales integration with partner configure-price-quote (CPQ) solutions.
Business value
Dynamics 365 for Sales partners with the best CPQ solution providers to deliver deep product integration with Dynamics 365 for Sales. Customers can easily discover and install partner solutions to enable salespersons to quickly identify the right configuration of products that fit their customers’ needs and rapidly create accurate quotes and contracts with the right pricing, considering all the variable factors including discounts.
Personas
Administrators will be able to discover and choose the right CPQ partner solution that fits their needs.
Sales representatives can quickly generate accurate quotes and win more deals.
Features
Discover the best of third-party CPQ solution providers from within the Sales applications.
Enable salespersons to get an immersive and intuitive product configuration experience by bringing together services of the CPQ solution providers and multiple Microsoft services.
Seamless integrated experience between the selected CPQ solution provider and Dynamics 365 for Sales using Common Data Service as the data glue to both read product catalog data generated by Dynamics 365 Backoffice apps (Dynamics 365 for Finance and Operations and Dynamics 365 Business Central) and write back quotes, discounts, and product configuration data, thereby empowering organizations to build analytical and intelligent applications on top of the data coming from the different sources.
Enable salespersons to generate, save, and email quote PDF documents (in May-June).
Note
Partner CPQ solutions are available for both Sales Hub and Sales Professional applications.
This feature is available in Unified Interface only. | https://docs.microsoft.com/en-us/business-applications-release-notes/April19/dynamics365-sales/seamless-configure-price-quote-integrated-experience | 2020-02-17T08:31:26 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
.
Note
This example will only work if you disable request validation in the page by adding the @ Page attribute ValidateRequest="false". It is not recommended that you disable request validation in a production application, so make sure that you enable request validation again after viewing this example.
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e _ As System.EventArgs) Handles Button1.Click Label1.Text = Server.HtmlEncode(TextBox1.Text) Label2.Text = _ Server.HtmlEncode(dsCustomers.Customers(0).CompanyName) End Sub
private void Button1_Click(object sender, System.EventArgs e) { Label1.Text = Server.HtmlEncode(TextBox1.Text); Label2.Text = Server.HtmlEncode(dsCustomers1.Customers[0].CompanyName); }
See Also
Concepts
Overview of Web Application Security Threats
Basic Security Practices for Web Applications | https://docs.microsoft.com/en-us/previous-versions/aspnet/a2a4yykt(v=vs.100)?redirectedfrom=MSDN | 2020-02-17T07:44:32 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
Using your Repository dashboard¶
When you log in to Repository, your personal dashboard is displayed.
In the top navigation bar, the currently active user or organization is shown at the far right.
TIP: If the email address on your account is associated with a Gravatar account, Repository displays your profile photo. To associate your email address with Gravatar or to change your Gravatar profile photo, see gravatar.com.
Packages, notebooks, environments, projects and installers that you have created with this account appear on your Landscape.
Click the view button to see the following options:
- Landscape: Your home page.
- Favorites: Other users’ packages that you have starred.
- Packages: Only packages you have created.
- Notebooks: Only notebooks you have created.
- Environments: Only environments you have created.
- Installers: If you have created and uploaded installers using Cloudera, they are displayed here.
- Projects: If you have created and uploaded projects, they are displayed here.
| https://docs.anaconda.com/anaconda-repository/user-guide/tasks/use-dashboard/ | 2020-02-17T08:18:15 | CC-MAIN-2020-10 | 1581875141749.3 | [array(['../../../../_images/repo-managing-toolbar-bar.png',
'../../../../_images/repo-managing-toolbar-bar.png'], dtype=object)
array(['../../../../_images/repo-managing-toolbar-menu.png',
'../../../../_images/repo-managing-toolbar-menu.png'], dtype=object)] | docs.anaconda.com |
Getting started with Anaconda¶
Anaconda Distribution contains conda and Anaconda Navigator, as well as Python and hundreds of scientific packages. When you installed Anaconda, you installed all these too.
Conda works on your command line interface such as Anaconda Prompt on Windows and terminal on macOS and Linux.. You can even switch between them, and the work you do with one can be viewed in the other.
Try this simple programming exercise, with Navigator and the command line, to help you decide which approach is right for you.
When you’re done, see What’s next?.
Your first Python program: Hello, Anaconda!¶
Use Anaconda Navigator to launch an application. Then, create and run a simple Python program with Spyder and Jupyter Notebook.
Run Python in Spyder IDE (integrated development environment)¶
Tip
Navigator’s Home screen displays several applications for you to choose from. For more information, see links at the bottom of this page.
On Navigator’s Home tab, in the Applications pane on the right, scroll to the Spyder tile and click the Install button to install Spyder.
Note
If you already have Spyder installed, you can jump right to the Launch step..
Close Spyder¶
From Spyder’s top menu bar, select Spyder - Quit Spyder (In macOS, select Python - Quit Spyder).
Run Python in a Jupyter Notebook¶
On Navigator’s Home tab, in the Applications pane on the right, scroll to the Jupyter Notebook tile and click the Install button to install Jupyter Notebook.
Note
If you already have Jupyter Notebook installed, you can jump right to the Launch step.
Launch Jupyter Notebook by clicking Jupyter Notebook’s Launch button.
This will launch a new browser window (or a new tab) showing the Notebook Dashboard.
On the top of the right hand side, there is a dropdown menu labeled “New”. Create a new Notebook with the Python version you installed.
Rename your Notebook. Either click on the current name and edit it or find rename under File in the top menu bar. You can name it to whatever you’d like, but for this example we’ll use MyFirstAnacondaNotebook.
In the first line of the Notebook, type or copy/paste
print("Hello Anaconda").
Save your Notebook by either clicking the save and checkpoint icon or select File - Save and Checkpoint in the top menu.
Run your new program by clicking the Run button or selecting Cell - Run All from the top menu.
Write a Python program using Anaconda Prompt or terminal¶
Open Anaconda Prompt¶
Choose the instructions for your operating system.
Windows
From the Start menu, search for and open “Anaconda Prompt”:
macOS
Open Launchpad, then click the terminal icon.
Linux
Open a terminal window.
Start Python¶
At Anaconda Prompt (terminal on Linux or macOS), type
python
and press Enter.
The
>>> means you are in Python.
Write a Python program¶
At the
>>>, type
print("Hello Anaconda!") and press Enter.
When you press enter, your program runs. The words “Hello Anaconda!” print to the screen. You’re programming in Python!
Exit Python¶
On Windows, press CTRL-Z and press Enter. On macOS or Linux type
exit() and press Enter.
Optional: Launch Spyder or Jupyter Notebook from the command line¶
- At the Anaconda Prompt (terminal on Linux or macOS), type
spyderand press Enter. Spyder should start up just like it did when you launched it from Anaconda Navigator.
- Close Spyder the same way you did in the previous exercise.
- At the Anaconda Prompt (terminal on Linux or macOS), type
jupyter-notebookand press Enter.
Jupyter Notebook should start up just like it did when you launched it from Anaconda Navigator. Close it the same way you did in the previous exercise. | https://docs.anaconda.com/anaconda/user-guide/getting-started/ | 2020-02-17T08:15:28 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.anaconda.com |
Measuring Complexity and Maintainability of Managed Code
Note
This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2019. Download it here
Troubleshooting Code Metrics Issues Performing Common Development Tasks | https://docs.microsoft.com/en-us/visualstudio/code-quality/measuring-complexity-and-maintainability-of-managed-code?view=vs-2015&redirectedfrom=MSDN | 2020-02-17T07:44:09 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.microsoft.com |
Getting Started
If you do not have a tenant for Microsoft Dynamics 365 Business Central, you can sign up for a free trial.
Once you have completed the sign up and your tenant is up and running, you can add the NAV-X Search app from the AppSource Marketplace. If you have questions about the installation process of an app through Microsoft AppSource, you can find more information on installing apps on the Microsoft Docs site.
Permission Setup
Permissions for the app must be setup before the Search Search app, you will see a notification asking "Do you want to get started with NAV-X Search?"..
Search Tables Name”. You then can pick a new table from the list. Alternatively, you can also just enter the table name in this field..
Search Document Tables Sales Header contains all open sales documents, such as quotes, orders, return orders, unposted invoices or credit memos. The configuration of these tables is the same as in the previous step.
Complete the Setup. | https://docs.nav-x.com/en/business-central/search/getting-started.html | 2020-02-17T06:18:59 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.nav-x.com |
Schema Object Names¶
Certain objects within Nebula graph, including space, tag, edge, alias, customer variables and other object names are referred as identifiers. This section describes the rules for identifiers in Nebula Graph:
- Permitted characters in identifiers: ASCII: [0-9,a-z,A-Z,_] (basic Latin letters, digits 0-9, underscore), other punctuation characters are not supported.
- All identifiers must begin with a letter of the alphabet.
- Identifiers are case sensitive.
- You cannot use a keyword (a reserved word) as an identifier. | https://docs.nebula-graph.io/manual-EN/2.query-language/3.language-structure/schema-object-names/ | 2020-02-17T08:03:08 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.nebula-graph.io |
Returns the reference count of a value.
The count returned is generally one higher than you might expect, because it includes the (temporary) reference.
Different nodes might return different reference values since the reference counter can be higher of lower depending on how the value is stored and used.
This function does not generate an event.
refs(value)
Reference count of the given value.
Returns the reference count of a given value:
[ refs( 'some string' ), refs( a = b = c = 42 ), ];
Example return value in JSON format
[ 2, 5 ] | https://docs.thingsdb.net/v0/collection-api/refs/ | 2020-02-17T06:49:57 | CC-MAIN-2020-10 | 1581875141749.3 | [] | docs.thingsdb.net |
Drops FastTab
After the Scheduling FastTab has been filled in its time to move onto the Drops FastTab
The settings in this FastTab are related to regarding the values in the Shipment Management Routes screen and the Shipment of Orders.
Ticking the options to Show Open orders on Shipment Management will determine whether sales or return orders with a status of Open should be shown on the Shipment Management Routes page. When this is left as unticked only orders with a status in the header of the document as Released will show on Shipment Management Routes page
The Shipment Bin on a Shipment Management Route is the bin where picked goods for shipment will be placed. You may want this field to be automatically populated with a shipment bin or choose a shipment bin manually before creating picks. The ship bin will be populated from that specified on the location card if you enable Populate Ship Bin on New Route | https://docs.cleverdynamics.com/Shipment%20Management/User%20Guide/Shipment%20Management%20Setup/Drop%20FastTab/ | 2020-02-17T08:34:35 | CC-MAIN-2020-10 | 1581875141749.3 | [array(['../media/2d87e3d37f8f62d4c9bb86cad457c897.png', None],
dtype=object)
array(['../media/dd2f8bda1a52c923632ab172b2ce352e.png', None],
dtype=object) ] | docs.cleverdynamics.com |
postMessage
postMessage
This article is Ready to Use.
W3C Candidate Recommendation
Summary
Posts a message through the channel, from one port to the other.
Method of apis/web-messaging/MessagePort
Syntax
MessagePort.postMessage(message, transfer);
Parameters
Data-type: any
JavaScript primitive, such as a string, PixelArray, ImageData, Blob, File, or ArrayBuffer.
transfer
Data-type: any Optional
Objects listed in transfer are transferred, not just cloned, meaning that they are no longer usable on the sending side. Throws a DataCloneError if transfer array contains duplicate objects or the source or target ports, or if message could not be cloned.
Return Value
No return value
Compatibility
There is no data available for topic "webapi", feature "postMessage". If you think that there should be data available, consider opening an issue.
There is no data available for topic "webapi", feature "postMessage". If you think that there should be data available, consider opening an issue.
Examples
This example creates a new message channel and uses one of the ports to send a message, which will be received by the other port.
JavaScript
var msgChannel = new MessageChannel(); msgChannel.port1.postMessage('Hello world');
Notes.
Related specifications
Attribution
This article contains content originally from external sources.
Portions of this content come from the Microsoft Developer Network: Windows Internet Explorer API reference Article | https://docs.webplatform.org/wiki/apis/web-messaging/MessagePort/postMessage | 2015-04-18T11:37:42 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.webplatform.org |
Build System
The major options for modular build systems are:
- gradle
- mvn (+gmaven)
Modules
- compiler
- runtime
- groovysh (org.codehaus.groovy.tools.shell)
- groovyconsole (groovy.ui + friends)
- groovydoc
- swing
- jmx
- grape
- mock
- sql
- ant (org.codehaus.groovy.ant)
- javax.script
- bsf
- servlet
- inspect
- test/junit | http://docs.codehaus.org/pages/viewpage.action?pageId=136118401 | 2015-04-18T11:39:37 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.codehaus.org |
provides a number of preference panels to allow you to customise the behaviour of the editor.
Examples for script editor args:
Gvim/Vim
--remote-tab-silent +$(Line) "$File"
Notepad2
-g $(Line) "$(File)"
Sublime Text 2
"$(File)":$(Line)
Notepad++
-n$(Line) "$(File)"
This panel allows you to choose the colors that Unity uses when displaying various user interface elements.
This panel allows you to set the keystrokes that activate the various commands in Unity. | http://docs.unity3d.com/Manual/Preferences.html | 2015-04-18T11:36:20 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.unity3d.com |
Information for "Upgrade Package/es" Basic information Display titlePaquete de actualización Default sort keyUpgrade Package/es Page length (in bytes)912 Page ID32043 Page content languageSpanish (es)armyman (Talk | contribs) Date of page creation13:16, 1 March 2014 Latest editorFuzzyBot (Talk | contribs) Date of latest edit07:10, 12 January 2015 Total number of edits11 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Chunk:Upgrade_Package/es&action=info | 2015-04-18T11:49:25 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.joomla.org |
Evaluators/Resources From Joomla! Documentation EvaluatorsRevision as of 09:29, 15 September 2013 by Tom Hutchison (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) There are several resources that can help you get a view of the overall capabilities of Joomla!. Here are some of the main considerations you might need more information on while evaluating Joomla! Retrieved from ‘’ Category: Landing subpages | https://docs.joomla.org/index.php?title=Evaluators/Resources&direction=next&oldid=73375 | 2015-04-18T11:39:04 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.joomla.org |
Information for "Joomla! Code Contributors" Basic information Display titleJoomla! Code Contributors Redirects toPortal:Joomla! Code Contributors (info) Default sort keyJoomla! Code Contributors Page length (in bytes)46 Page ID34915 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page1 Number of subpages of this page7 ’ | https://docs.joomla.org/index.php?title=Joomla!_Code_Contributors&action=info | 2015-04-18T12:23:59 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.joomla.org |
Changes related to "Adding ACL rules to your component"
← Adding ACL rules to your component
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20140525233448&target=Adding_ACL_rules_to_your_component | 2015-04-18T12:55:20 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.joomla.org |
OEChem 1.5.0 is a new release including many major and minor bug fixes along with several new features. This is also a continuation of a complete release of all OpenEye toolkits as a consolidated set so that there are no chances of incompatibilities between libraries.
Note, that in this release the directory structure has been changed to allow multiple versions of the toolkits to be installed in the same directory tree without conflicts. From this release on, all C++ releases will be under the openeye/toolkits main directory. There is then a directory specific to the version of the release and then below that, directories for each architecture/compiler combination. To simplify end user Makefiles, openeye/toolkits/lib, openeye/toolkits/include'', and ``openeye/toolkits/examples are all symlinks to the specific last version and architecture that was installed.
New users should look in openeye/toolkits/examples for all the examples. Existing users updating existing Makefiles should change their include directory from openeye/include to openeye/toolkits/include. As well, existing Makefiles should change the library directory from openeye/lib to openeye/toolkits/lib.
OEChem now has a 2D similarity implementation using the Lingos method of similarity. Lingos compares Isomeric SMILES strings instead of pre-computed fingerprints. This combination leads to very rapid 2D similarity calculation without any upfront cost to calculate fingerprints and without any storage requirements to store fingerprints.
MMFF94 charges are now available in OEChem. While we recommend AM1-BCC charges as the best available charge model, having MMFF94 charges available at the OEChem level means that decent charges are available to all toolkit users.
In OEChem 1.2, there was an alternate implementation of MCS that used a fast, approximate method for determining the MCS. While it is less than exhaustive, the speed does have some appealing uses. In OEChem 1.5, we’ve restored this older algorithm and now both are available.
namespace OEMCSType { static const unsigned int Exhaustive = 0; static const unsigned int Approximate = 1; static const unsigned int Default = Exhaustive; }
OEMCSType::Exhaustive implies the current, exhaustive algorithm from OEChem 1.3 and later, while OEMCSType::Approximate implements the older, fast but approximate algorithm.
The ability to get the license expiration date when calling OEChemIsLicensed has been added.
Molecules (OEMol, OEGraphMol) can now be attached to an existing OEBase as generic data and they will be written to OEB and read back in. Additional support for attaching grids and surfaces to molecules has been added to Grid and Spicoli.
There is a new retain Isotope flag to OESuppressHydrogens. If false, [2H] and [3H] will also be removed by this call. By default, this is true so that the current behavior of OESuppressHydrogens is identical to the previous version.
The OEChem CDX file reader can now Kekulize aromatic (single) bonds in the input ChemDraw file. It switches the internal bond order processing to use the bond’s integer type field, and then calls OEKekulize to do all of the heavy lifting.
Tweaks to the algorithm used for determining which bond(s) around a stereocenter should bear a wedge or a hash. The bug fixed here includes an example where all three neighbors are stereocenters, but two are in a ring and one isn’t.
There are new versions of OEIsReadable and OEIsWriteable that take a filename directly.
More exceptional atom naming support for the PDB residues CO6 (pdb2ii5), SFC (pdb2gce), RFC (pdb2gce), MRR (pdb2gci), MRS (pdb2gd0), FSM (pdb2cgy) and YE1 (pdb2np9) has been added. | https://docs.eyesopen.com/toolkits/cpp/oechemtk/releasenotes/version1_5_0.html | 2018-05-20T15:19:11 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.eyesopen.com |
What is Custom Decision Service?
Azure Custom Decision Service helps you create intelligent systems with a cloud-based, contextual decision-making API that sharpens with experience. Custom Decision Service uses reinforcement learning and adapts the content in your application to maximize user engagement. The system includes user feedback into its decisions in real time and responds to emergent trends and breaking stories in minutes.
In a typical application, a front page links to several articles or other types of content. As the front page loads, it requests the Custom Decision Service to rank articles included on the page. When you choose an article, a second request is sent to the Custom Decision Service that logs the outcome of that decision.
Custom Decision Service is easy to use. The easiest integration mode requires only an RSS feed for your content and a few lines of JavaScript to be added into your application.
Custom Decision Service converts your content into features for machine learning. The system uses these features to understand your content in terms of its text, images, videos, and overall sentiment. It uses several other Microsoft Cognitive Services, like Entity Linking, Text Analytics, Emotion, and Computer Vision.
Some common-use cases for Custom Decision Service include:
- Personalizing articles on a news website
- Personalizing video content on a media portal
- Optimizing ad placements or web pages that the ad directs to
- Ranking recommended items on a shopping website.
Custom Decision Service is currently in free public preview. It can personalize a list of articles on a website or an app. The feature extraction works best for English language content. Limited functionality is offered for other languages, like Spanish, French, German, Portuguese, and Japanese. This documentation will be revised as new functionality becomes available.
Custom Decision Service can be used in applications that are not in the content personalization domain. These applications might be a good fit for a custom preview. Contact us to learn more.
API usage modes
Custom Decision Service can be applied to both webpages and apps. The APIs can be called from either a browser or an app. The API usage is similar on both modes, but some of the details are different.
Glossary of terms
Several terms frequently occur in this documentation:
- Action set: The set of content items for Custom Decision Service to rank. This set can be specified as an RSS or Atom endpoint.
- Ranking: Each request to Custom Decision Service specifies one or more action sets. The system responds by picking all the content options from these sets and returns them in ranked order.
- Callback function: This function, which you specify, renders the content in your UI. The content is ordered by the rank ordering returned by Custom Decision Service.
- Reward: A measure of how the user responded to the rendered content. Custom Decision Service measures user response by using clicks. The clicks are reported to the system by using custom code inserted in your application.
Next steps
- Register your application with Custom Decision Service
- Get started to optimize a webpage or a smartphone app.
- Consult the API reference to learn more about the provided functionality. | https://docs.microsoft.com/es-es/azure/cognitive-services/custom-decision-service/custom-decision-service-overview | 2018-05-20T15:42:21 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Detect Outlier (Densities)
(RapidMiner Studio Core)
SynopsisThis operator identifies outliers in the given ExampleSet based on the data density. All objects that have at least p proportion of all objects farther away than distance D are considered outliers.
Description
The Detect Outlier (Densities) operator is an outlier detection algorithm that calculates the DB(p,D)-outliers for the given ExampleSet. A DB(p,D)-outlier is an object which is at least D distance away from at least p proportion of all objects. The two real-valued parameters p and D can be specified through the proportion and distance parameters respectively. The DB(p,D)-outliers are distance-based outliers according to Knorr and Ng. This operator implements a global homogenous outlier search.
This operator adds a new boolean attribute named 'outlier' to the given ExampleSet. If the value of this attribute is true, that example is an outlier and vice versa. ExampleSet is delivered through this output port.
original (Data Table)
The ExampleSet that was given as input is passed without changing to the output through this port. This is usually used to reuse the same ExampleSet in further operators or to view the ExampleSet in the Results Workspace.
Parameters
- distanceThis parameter specifies the distance D parameter for calculation of the DB(p,D)-outliers. Range: real
- proportionThis parameter specifies the proportion p parameter for calculation of the DB(p,D)-outliers. Range: real
- (Densities) operator is applied on the ExampleSet. The distance and proportion parameters are set to 4.0 and 0.8 respectively. The resultant ExampleSet can be viewed in the Results Workspace. For better understanding switch to the 'Plot View' tab. Set Plotter to 'Scatter', x-Axis to 'att1', y-Axis to 'att2' and Color Column to 'outlier' to view the scatter plot of the ExampleSet (the outliers are marked red). The number of outliers may differ depending on the randomization, if the random seed parameter of the process is set to 1997, you will see 5 outliers. | https://docs.rapidminer.com/latest/studio/operators/cleansing/outliers/detect_outlier_densities.html | 2018-05-20T15:30:33 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.rapidminer.com |
Decision Insight 20180319 Attribute Attributes, just like relationships, contain information about an entity. An attribute is uniquely identified based on its name, its declared entity, and its declared space. In an entity, one or more attributes can be defined as the Key value, which is then used to locate an instance of the entity. Related Links | https://docs.axway.com/bundle/DecisionInsight_20180319_allOS_en_HTML5/page/attribute.html | 2018-05-20T15:39:59 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.axway.com |
NOTICE: Our WHMCS Addons are discontinued and not supported anymore.
Most of the addons are now released for free in github -
You can download them from GitHub and conribute your changes ! :)
WHMCS Joomla Bridge :: Troubelshooting
You can use WHMCS internal system module debug log in order to troubleshoot communication problems with the remote joomla site. It will record and display the raw API data being sent to, and received back from the remote system. Logging should only ever be enabled for testing, and never left enabled all the time.
In order to activate it, please navigate to “Utilities” -> “Logs” -> “Module Logs” -> Click on “Enable Debug Logging“.
“Access Forbidden” error message from Joomla while WHMCS is trying to connect with Joomla
This might indicate that the communication from Joomla to WHMCS is working, but Joomla (or the server that hosts joomla) is blocking the access.
Could be couple of reasons –
- 1. You whitelisted the wrong ip
- 2. Username / pass is incorrect
- 3. Joomla sh404 security function is blocking your request
- 4. Joomla rsfirewall security function is blocking your request
- 5. Some kind of mod_security rule is blocking your request.
- 6. Avoid using special chars ($&?%) in your joomla user password, It can interfere with with the login post from WHMCS module.
Error from WHMCS: You can’t acces with the user group.
Navigate to “Components” -> “Joomla bridge” -> “Settings“, change access level to “Super Administrator“.
* If you are getting blank screens / no communication at all – make sure that the ips of the server are white listed in the server’s firewall on both sides | https://docs.jetapps.com/whmcs-joomla-bridge-troubelshooting | 2018-05-20T15:41:16 | CC-MAIN-2018-22 | 1526794863626.14 | [array(['https://docs.jetapps.com/wp-content/plugins/lazy-load/images/1x1.trans.gif',
'jbridge3'], dtype=object) ] | docs.jetapps.com |
PredictSupport
Returns the support value for a specified state.
Syntax
PredictSupport(<scalar column reference>, [<predicted state>])
Applies To
A scalar column.
Return Type
A scalar value of the type that is specified by <scalar column reference>.
Remarks
If the predicted state is omitted, the state that has the highest predictable probability is used, excluding the missing states bucket. To include the missing states bucket, set the <predicted state> to INCLUDE_NULL.
To return the support for the missing states, set the <predicted state> to NULL.
Examples
The following example uses a singleton query to predict whether an individual will be a bike buyer, and also determines the support for the prediction based on the TM Decision Tree mining model.
SELECT [Bike Buyer], PredictSupport([Bike Buyer]) AS [Support], From [TM Decision Tree] NATURAL PREDICTION JOIN (SELECT 28 AS [Age], '2-5 Miles' AS [Commute Distance], 'Graduate Degree' AS [Education], 0 AS [Number Cars Owned], 0 AS [Number Children At Home]) AS t
See Also
Reference
Data Mining Extensions (DMX) Function Reference
Functions (DMX)
Mapping Functions to Query Types (DMX)
Help and Information
Getting SQL Server 2005 Assistance | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms132140(v=sql.90) | 2018-05-20T16:51:23 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Configure the memory, storage, and network paths available to machines provisioned through this reservation.
About this task
You can select a FlexClone datastore in your reservation if you have a vSphere environment and storage devices that use Net App FlexClone technology. SDRS is not supported for FlexClone storage devices.
Prerequisites
Specify Reservation Information.
Procedure
- Click the Resources tab.
- Specify the amount of memory, in GB, to be allocated to this reservation from the Memory table.
- Configure a storage path in the Storage table.
- Select a storage path from the Storage Path column.
- (Optional) Select a storage endpoint from the Endpoint drop-down menu to specify a storage device that uses FlexClone technology.
SDRS is not supported for FlexClone storage devices..
- Type a value in This Reservation Reserved to specify how much storage to allocate to this reservation.
- Specify the Priority for the storage path.
The priority is used for multiple storage paths. A storage path with priority 0 is used before a path with priority 1.
- Repeat this step to configure clusters and datastores as needed.
- Click the Network tab.
- Configure a network path for machines provisioned by using this reservation.
- Select a network path for machines provisioned on this reservation from the Network table.
- (Optional) Select a network profile from the Network Profile drop-down menu.
This option requires additional configuration to configure network profiles.
You can select more than one network path on a reservation, but only one network is selected when provisioning a machine.
Results
At this point, you can save the reservation by clicking OK. Optionally, you can configure email notifications to send alerts out when resources allocated to this reservation become low. | https://docs.vmware.com/en/vRealize-Automation/6.2/com.vmware.vra.iaas.virtual.doc/GUID-6F918286-9831-4DFC-8A37-D421DE4073E4.html | 2018-05-20T15:49:55 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.vmware.com |
NOTICE: Our WHMCS Addons are discontinued and not supported anymore.
Most of the addons are now released for free in github -
You can download them from GitHub and conribute your changes ! :)
WHMCS CSF Manager :: Changelog
1.1.405/07/2016
- Case 969 - Fixed - ioncube package issues on version 1.1.3
- Case 968 - Fixed - tigger error if no response from cPanel
1.1.305/07/2016
- Case 865 - Fixed - Unblock failed message from CSF and Brute Force
- Case 864 - Compatibility - Compatible to WHMCS 6.3
- Case 863 - Removed - Broadcast Configuration was removed
- Case 481 - Fixed - The function "mysql_escape_string" was deprecated
1.1.006/12/2015
- Case 577 - Language file is now not encoded
- Case 576 - Bugfix - Module doesn't work when "Also check & release from cPanel's Brute Force" set to "yes"
- Case 575 - Change - You can now fully manage CSF directly from WHMCS (all options are available).
- Case 574 - Change - WHMCS 6 Compatible
1.0.1306/12/2015
- Case 573 - New feature - Broadcast config, this feature will let you change settings in multiple servers using predefined template.
- Case 572 - Bugfix - Fixed (another) ip removal issue
1.0.1106/12/2015
- Case 571 - Some minor GUI bugfix
- Case 570 - Bugfix - Fixed ip removal issue
1.0.1006/12/2015
- Case 569 - Bugfix - Error when viewing iptables log from admin console
- Case 568 - Bugfix - WHMCS 5.3.x compatibility issues.
1.0.906/12/2015
- Case 567 - Some more improvements to the error handling
- Case 566 - Bugfix - Client side GUI crashes when disabling the first "firewall" tab in the settings.
1.0.806/12/2015
- Case 565 - Improvement to Error handling - will show output from remote server on error (fixes the php fatal error issue).
- Case 564 - Template adjustments to make the module compatible with the "classic" and "portal" templates (besides the default).
1.0.706/12/2015
- Case 563 - New feature - Whitelist ip by email. Client can send an activation email to himself / someone else to whitelist his ip. This saves the need to login the client area on a dynamic ip.
- Case 562 - Bugfix - trim inputs on whitelist form
1.0.606/12/2015
- Case 561 - Fixed compatibility issue with CSF v6.32 | https://docs.jetapps.com/changelog-2 | 2018-05-20T15:19:18 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.jetapps.com |
For cPanel & WHM version 68
Overview
Hosting providers often prefer to standardize their software and application environment, which includes the deployed version of cPanel & WHM. Usually, a specific release tier (such as STABLE, RELEASE, or CURRENT) ensures a consistent environment across many servers.
In some circumstances, providers prefer not to rely on cPanel, Inc.'s tiered release system to select the installed version of cPanel & WHM. Additionally, providers who require a specific version of cPanel & WHM need an assured method to deploy new installations of cPanel & WHM, even when that version is no longer the current production release.
The purpose of Long-Term Support
cPanel, Inc. actively develops cPanel & WHM, and we release new builds frequently. Traditionally, more conservative system administrators choose to completely disable software updates in order to maintain control over their deployments of cPanel & WHM. The Long-Term Support (LTS) tier provides an alternative to disabling updates, and will help ensure that cPanel & WHM servers receive important updates and fixes.
To view the cPanel & WHM version on each release tier, visit our httpupdate.cpanel.net site..
Note:
Development releases of cPanel & WHM do not qualify for LTS. We do not consider their publication dates when we calculate an LTS version's.
Third-party applications
cPanel & WHM provides various third-party applications (for example, MySQL® and Roundcube).
cPanel, Inc. will continue to provide critical and security-related updates to these third-party applications when a customer installs them with an LTS version of cPanel & WHM. However, cPanel, Inc. may provide these fixes as back-ported patches instead of as upgrades to the latest version of the application.
EasyApache
cPanel, Inc. develops and updates EasyApache with a separate release schedule from cPanel & WHM's release schedule.
cPanel, Inc. provides EasyApache updates for an LTS release until the LTS release reaches End of Life (EOL). If the currently-installed version of cPanel & WHM does not meet the requirements of a new EasyApache function or feature, EasyApache will update but the functionality is not guaranteed or supported.
Important:
cPanel & WHM releases that reach End of Life will become unavailable for installation, no longer receive fixes or patches from cPanel, Inc.®, and documentation is archived.
Additional documentation | https://dal-1.docs.confluence.prod.cpanel.net/display/68Docs/cPanel+Long-Term+Support | 2018-05-20T15:23:13 | CC-MAIN-2018-22 | 1526794863626.14 | [] | dal-1.docs.confluence.prod.cpanel.net |
Apply the latest platform update to your Microsoft Dynamics 365 Finance and Operations environment
This topic explains how to apply the latest platform release to your Microsoft Dynamics 365 for Finance and Operations environment.
Overview
The Microsoft Dynamics 365 for Finance and Operations platform consists of the following components:
- Finance and Operations platform binaries such as Application Object Server (AOS), the data management framework, the reporting and business intelligence (BI) framework, development tools, and analytics services.
- The following Application Object Tree (AOT) packages:
- Application Platform
- Application Foundation
- Test Essentials
Important
To move to the latest Finance and Operations platform, your Finance and Operations implementation cannot have any customizations (overlayering) of any of the AOT packages that belong to the platform. This restriction was introduced in Platform update 3, so that seamless continuous updates can be made to the platform. If you are running on a platform that is older than Platform update 3, see the section Upgrading to Platform update 3 from an earlier build section at the end of this topic.
Overall flow
The following illustration shows the overall process for upgrading the Finance and Operations platform to the latest update.
If you are already running on platform update 4 or later, updating the Finance and Operations platform to the latest release is a simple servicing operation. Once the platform update package is in your LCS asset library, follow the flow to apply an update from the LCS environment page: Select Apply updates under Maintain then select the platform update package.
Learn how to get the latest platform package and apply it to an environment deployed through LCS in the next section.
Apply the latest platform update package
There are two ways to get the latest platform update package in LCS from your environment page.
- Click the Platform binary updates tile
- Click the All Binary Updates tile to see a list of combined package of application and platform binary updates. (As of Platform update 4, binary updates from LCS include an upgrade to the latest platform).
Note
Tiles on an environment's page in LCS show only the updates that are applicable to your environment based on the current version and state of the environment.
Get the latest platform update package by clicking on one of the two tiles as mentioned above. After reviewing the fixes included in the platform, click Save Package to save the package to the project asset library.
From a process perspective, deploying a platform upgrade package resembles a binary hotfix deployable package.
- To apply a platform update package to your cloud development, build, demo, tier-2 sandbox, or production environment, update directly from LCS.
For more details, follow the instructions for applying a binary hotfix in Apply a deployable package.
Note
Migrate files for Document management: After upgrading to Platform update 6 or later, an administrator needs to click the Migrate Files button on the Document management parameters page to finish the upgrade process. This will migrate any attachments stored in the database to blob storage. The migration will run as a batch process and could take a long time, depending on the number and size of the files being moved from the database into Azure blob storage. The attachments will continue to be available to users while the migration process is running, so there should be no noticeable effects from the migration. To check if the batch process is still running, look for the Migrate files stored in the database to blob storage process on the Batch jobs page.
Apply a platform update to environments that are not connected to LCS
This section describes how to apply a platform update package to a local development environment (one that that is not connected to LCS).
How to get the platform update package
Platform update packages are released by Microsoft and can be imported from the Shared asset library in Microsoft Dynamics Lifecycle Services (LCS). The package name is prefixed with Dynamics 365 Unified Operations Platform Update. Use these steps to import the platform update package:
- Go to your LCS project's Asset library.
- On the Software deployable package tab, click Import to create a reference to the platform update package.
- Select the desired platform update package.
Note
The package in the Shared Asset library may not correspond to the latest build (with hotfixes) of the desired platform release. To guarrantee the latest build, use the LCS environment page as described earlier in this article.
Apply the platform update package to your development environment
Note
These instructions apply only to environments that cannot be updated directly from LCS.
Install the deployable package
- Download the platform update package (AXPlatformUpdate.zip) to your virtual machine (VM).
- Unzip the contents to a local directory.
- Depending on the type of environment that you're upgrading, open the PlatformUpdatePackages.Config file under \AOSService\Scripts, and change the MetaPackage value.
- If you're upgrading a development or demo environment that contains source code, change the MetaPackage value to dynamicsax-meta-platform-development.
- If you're upgrading a runtime environment, such as a tier-2 sandbox environment or another environment that doesn't contain source code, the default value, dynamicsax-meta-platform-runtime, is correct.
Note
Step 3 is not applicable when upgrading to platform update 4 or later.
- Follow the instructions for installing a deployable package. See Install a deployable package.
- If you're working in a development environment, rebuild your application’s code.
Example
AXUpdateInstaller.exe import -runbookfile=OneBoxDev-runbook.xml AXUpdateInstaller.exe execute -runbookid=OneBoxDev
Install the Visual Studio development tools (Platform update 3 or earlier)
Note
Skip this section if you are updating to platform update 4 or later, development tools are automatically installed as part of installing the deployable package.
Update the Visual Studio development tools as described in Updating the Visual Studio development tools.
Regenerate form adaptor models
Form adaptor models are required for test automation. Regenerate the platform form adaptor models, based on the newly updated platform models. Use the xppfagen.exe tool to generate the form adaptor models. This tool is located in the package's bin folder (typically, j:\AosService\PackagesLocalDirectory\bin). Here is a list of the platform form adaptor models:
- ApplicationPlatformFormAdaptor
- ApplicationFoundationFormAdaptor
- DirectoryFormAdaptor
The following examples show how to generate the form adaptor models.
xppfagen.exe -metadata=j:\AosService\PackagesLocalDirectory -model="ApplicationPlatformFormAdaptor" -xmllog="c:\temp\log1.xml" xppfagen.exe -metadata=j:\AosService\PackagesLocalDirectory -model="ApplicationFoundationFormAdaptor" -xmllog="c:\temp\log2.xml" xppfagen.exe -metadata=j:\AosService\PackagesLocalDirectory -model="DirectoryFormAdaptor" -xmllog="c:\temp\log3.xml"
Install the Data Management service (Platform update 3 or earlier)
Note
Skip this section if you are updating to platform update 4 or newer, the data management service is automatically installed as part of installing the deployable package.
After the deployable package is installed, follow these instructions to install the new Data Management service. Open a Command Prompt window as an administrator, and run the following commands from the .\DIXFService\Scripts folder.
msiExec.exe /uninstall {5C74B12A-8583-4B4F-B5F5-8E526507A3E0} /passive /qn /quiet
If you're connected to Microsoft SQL Server Integration Services 2016 (13.0), run the following command.
msiexec /i "DIXF_Service_x64.msi" ISSQLSERVERVERSION="Bin\2012" SERVICEACCOUNT="NT AUTHORITY\NetworkService" /qb /lv DIXF_log.txt
If you're connected to an earlier release of Microsoft SQL Server Integration Services, run the following command.
msiexec /i "DIXF_Service_x64.msi" ISSQLSERVERVERSION="Bin" SERVICEACCOUNT="NT AUTHORITY\NetworkService" /qb /lv DIXF_log.txt
Apply the platform update package on a build environment (Platform update 6 or earlier)
Note
Skip this section if you are updating to platform update 7 or newer. This was a pre-requesite step for build environments.
If the build machine has been used for one or more builds, you should restore the metadata packages folder from the metadata backup folder before you upgrade the VM to a newer Dynamics 365 for Finance and Operations platform. You should then delete the metadata backup. These steps help ensure that the platform update will be applied on a clean environment. The next build process will then detect that no metadata backup exists and will automatically create a new one. This new metadata backup will include the updated platform. To determine whether a complete metadata backup exists, look for a BackupComplete.txt file in I:\DynamicsBackup\Packages (or C:\DynamicsBackup\Packages on a downloadable virtual hard disk [VHD]). If this file is present, a metadata backup exists, and the file will contain a timestamp that indicates when it was created. To restore the deployment's metadata packages folder from the metadata backup, open an elevated Windows PowerShell Command Prompt window, and run the following command. This command will run the same script that is used in the first step of the build process.
if (Test-Path -Path "I:\DynamicsBackup\Packages\BackupComplete.txt") { C:\DynamicsSDK\PrepareForBuild.ps1 }
If a complete metadata backup doesn't exist, the command will create a new backup. This command will also stop the Finance and Operations deployment services and Internet Information Services (IIS) before it restores the files from the metadata backup to the deployment's metadata packages folder. You should see output that resembles the following example.
6:17:52 PM: Preparing build environment...* <em>6:17:53 PM: Updating Dynamics SDK registry key with specified values...</em> <em>6:17:53 PM: Updating Dynamics SDK registry key with values from AOS web config...</em> <em>6:17:53 PM: Stopping Finance and Operations deployment...</em> <em>6:18:06 PM: **A backup already exists at: I:\\DynamicsBackup\\Packages. No new backup will be created</em><em>.</em> <em>6:18:06 PM: **Restoring metadata packages from backup...</em>** <em>6:22:56 PM: **Metadata packages successfully restored from backup</em><em>.</em> <em>6:22:57 PM: Preparing build environment complete.</em> <em>6:22:57 PM: Script completed with exit code: 0</em>
After the metadata backup has been restored, delete (or rename) the metadata backup folder (DynamicsBackup\Packages), so that it will no longer be found by the build process.
Apply the platform update package
After you've prepared your build environment for this update, apply the platform update package by using the same method that you use on other environments.
Upgrading to platform update 3 from an earlier build
When upgrading to platform update 3 from an earlier build, there are some very important considerations because of two key changes in update 3:
- It is no longer possible to overlayer platform models (Application Platform, Application Foundation, Test Essentials).
- You need to delete all X++ hotfixes to the platform that are in you version control (see the section below)
- The Directory model is no longer in platform, it has moved to the application in Finance and Operations release 1611.
This means two things:
If taking only platform update 3 and not taking the application update (Finance and Operations version 1611), then you cannot have overlayering on any of the following models. All overlayering on these models must be removed before attempting to install update 3:
- Application Platform
- Application Foundation
- Test Essentials
- Directory
If you cannot remove over-layering from the Directory model, and you still want to upgrade, you will have to do a complete upgrade of the platform and the application (Finance and Operations version 1611) as described in Overview of moving to the latest update of Finance and Operations.
Delete platform metadata hotfixes from your VSTS project (Platform update 2 or earlier)
Note
This section is not relevant if you are already on Platform update 3 and updating to a newer platform.
Before you install the new platform update, you must clean up your Microsoft Visual Studio Team Services (VSTS) source control project. Remove any X++ or metadata hotfixes that you've installed on your existing platform. If you have any X++ or metadata hotfixes that are checked in to your VSTS project for any of the following Microsoft models, delete them from your project by using the Microsoft Visual Studio Source Control Explorer.
- Application Platform
- Application Foundation
- TestEssentials
- Directory
You can find these hotfixes by browsing the check-in history of these Microsoft models. For example, use Source Control Explorer to browse the check-in history of the Trunk\Main\Metadata\ApplicationFoundation\ApplicationFoundation folder, and delete all XML files that have been checked in to it.
Additional resources
Overview of moving to the latest update of Microsoft Dynamics 365 for Finance and Operations | https://docs.microsoft.com/ar-sa/dynamics365/unified-operations/dev-itpro/migration-upgrade/upgrade-latest-platform-update | 2018-05-20T16:04:25 | CC-MAIN-2018-22 | 1526794863626.14 | [array(['media/checkinhistory.png', 'View History'], dtype=object)] | docs.microsoft.com |
Add example utterances and label with entities
Utterances are examples of user questions or commands. To teach Language Understanding (LUIS), you need to add example utterances to an intent.
Generally, you add an utterance first, and then you create entities and label utterances on the intent page. If you would rather create entities first, see Add entities.
Add an utterance
On an intent page, enter a relevant utterance you expect from your users, such as
book 2 adult business tickets to Paris tomorrow on Air France in the text box below the intent name, and then press Enter.
Note
LUIS converts all utterances to lowercase.
Utterances are added to the utterances list for the current intent.
Add simple entity label
In the following procedure, you create and label custom entities within the following utterance on the intent page:
book me 2 adult business tickets to Paris tomorrow on Air France
Select "Air France" in the utterance to label it as a simple entity.
Note
When selecting words to label them dialog box, verify the entity name and select the simple entity type, and then select Done.
See Data Extraction to learn more about extracting the simple entity from the endpoint JSON query response. Try the simple entity quickstart to learn more about how to use a simple entity.
Add list entity and label
List entities represent a fixed, closed set (exact text matches) of related words in your system.
For a drinks list entity, you can have two normalized values: water and soda pop. Each normalized name has synonyms. For water, synonyms are H20, gas, flat. For soda pop, synonyms are fruit, cola, ginger. You don't have to know all the values when you create the entity. You can add more after reviewing real user utterances with synonyms.
When creating a new list entity from the intent page, you are doing two things that may not be obvious. First, you are creating a new list by adding the first list item. Second, the first list item is named with the word or phrase you selected from the utterance. While you can change these later from the entity page, it may be faster to select an utterance that has the word that you want for the name of the list item.
For example, if you wanted to create a list of types of drink and you selected the word
h2o from the utterance to create the entity, the list would have one item, whose name was h20. If you wanted a more generic name, you should choose an utterance that uses the more generic name.
In the utterance, select the word that is the first item in the list, and then enter the name of the list in the textbox, then select Create new entity.
In the What type of entity do you want to create? dialog box, add synonyms of this list item. For the water item in a drink list, add
h20,
perrier, and
waters, and select Done. Notice that "waters" is added because the list synonyms are matched at the token level. In the English culture, that level is at the word level so "waters" would not be matched to "water" unless it was in the list.
This list of drinks has only one drink type, water. You can add more drink types by labeling other utterances, or by editing the entity from the Entities in the left navigation. Editing the entities gives you the options of entering additional items with corresponding synonyms or importing a list.
See Data Extraction to learn more about extracting list entities from the endpoint JSON query response. Try the quickstart to learn more about how to use a list entity.
Add synonyms to the list entity
Add a synonym to the list entity by selecting the word or phrase in the utterance. If you have a Drink list entity, and want to add
agua as a synonym for water, follow the steps:
In the utterance, select the synonymous word, such as
aqua for water, then select the list entity name in the drop-down list, such as Drink, then select Set as synonym, then select the list item it is synonymous with, such as water.
Create new item for list entity
Create a new item for an existing list entity by selecting the word or phrase in the utterance. If you have a Drink list, and want to add
tea as a new item, follow the steps:
In the utterance, select the word for the new list item, such as
tea, then select the list entity name in the drop-down list, such as Drink, then select Create a new synonym.
The word is now highlighted in blue. If you hover over the word, a tag displays showing the list item name, such as tea.
Wrap entities in composite label
Composite entities are created from Entities. You can't create a composite entity from the Intent page. Once the composite entity is created, you can wrap the entities in an utterance on the Intent page.
Assuming the utterance,
book 2 tickets from Seattle to Cairo, a composite utterance can return entity information of the count of tickets (2), the origin (Seattle), and destination (Cairo) locations in a single parent entity.
Follow these steps to add the number prebuilt entity. After the entity is created, the
2 in the utterance is blue, indicating it is a labeled entity. Prebuilt entities are labeled by LUIS. You can't add or remove the prebuilt entity label from a single utterance. You can only add or remove all the prebuilt labels by adding or removing the prebuilt entity from the application.
Follow these steps to create a Location hierarchical entity. Label the origin and destination locations in the example utterance.
Before you wrap the entities in a composite entity, make sure all the child entities are highlighted in blue, meaning they have been labeled in the utterance.
To wrap the individual entities into a composite, select the first labeled entity in the utterance for the composite entity. In the example utterance,
book 2 tickets from Seattle to Cairo, the first entity is the number 2. A drop-down list appears showing the choices for this selection.
Select Wrap composite entity from the drop-down list.
Select the last word of the composite entity. In the utterance of this example, select "Location::Destination" (representing Cairo). The green line is now under all the words, including non-entity words, in the utterance that are the composite.
Select the composite entity name from the drop-down list. For this example, that is TicketOrder.
When you wrap the entities correctly, a green line is under the entire phrase.
See Data Extraction to learn more about extracting the composite entity from the endpoint JSON query response. Try the composite entity tutorial to learn more about how to use a composite entity.
Add hierarchical entity and label.
On the Intent page, in the utterance, select "Seattle", then enter the entity name `Location, and then select Create new entity.
In the pop-up dialog box, select hierarchical for Entity type, then add
Originand
Destinationas children, and then select Done.
The word in the utterance was labeled with the parent hierarchical entity. You need to assign the word to a child entity. Return to the utterance on the intent
If you add the prebuilt entities to your LUIS app, you don't need to label utterances with these entities. To learn more about prebuilt entities and how to add them, see Add entities.
Add regular expression entity label
If you add the regular expression entities to your LUIS app, you don't need to label utterances with these entities. To learn more about regular expression entities and how to add them, see Add entities.
Create a pattern from an utterance
See Add pattern from existing utterance on intent or entity page.
Add pattern.any entity label. | https://docs.microsoft.com/en-in/azure/cognitive-services/luis/add-example-utterances | 2018-05-20T16:06:35 | CC-MAIN-2018-22 | 1526794863626.14 | [array(['media/add-example-utterances/add-new-utterance-to-intent.png',
'Screenshot of Intents details page, with utterance highlighted'],
dtype=object)
array(['media/add-example-utterances/set-agua-as-synonym.png',
'Screenshot of Intents details page, with Create a new synonym highlighted'],
dtype=object)
array(['media/add-example-utterances/list-entity-create-new-item.png',
'Screenshot of adding new list item'], dtype=object)
array(['media/add-example-utterances/list-entity-item-name-tag.png',
'Screenshot of new list item tag'], dtype=object)
array(['media/add-example-utterances/remove-label.png',
'Screenshot of Intents details page, with Remove Label highlighted'],
dtype=object) ] | docs.microsoft.com |
New-Net
Neighbor
Syntax
New-NetNeighbor [-IPAddress] <String> [-AddressFamily <AddressFamily>] [-AsJob] [-CimSession <CimSession[]>] [-LinkLayerAddress <String>] [-PolicyStore <String>] [-State <State>] [-ThrottleLimit <Int32>] -InterfaceAlias <String> [-Confirm] [-WhatIf]
New-NetNeighbor [-IPAddress] <String> [-AddressFamily <AddressFamily>] [-AsJob] [-CimSession <CimSession[]>] [-LinkLayerAddress <String>] [-PolicyStore <String>] [-State <State>] [-ThrottleLimit <Int32>] -InterfaceIndex <UInt32> [-Confirm] [-WhatIf]
Description
The New-NetNeighbor cmdlet creates a neighbor cache entry for IPv4 or IPv6. The Neighbor cache maintains a list of information for each on-link neighbor, including the IP address and the associated link-layer address. Note: For successful creation of a neighbor entry, the address family of the neighbor cache entry must match the address family of the IP interface.
Examples
EXAMPLE 1
PS C:\>New-NetNeighbor -InterfaceIndex 12 -IPAddress 192.168.0.5 -MACaddress 00-00-12-00-00-ff
This example creates a new neighbor cache entry with an IPv4 address.
EXAMPLE 2
PS C:\>New-NetNeighbor -InterfaceIndex 13 -IPAddress fe80::5efe:192.168.0.5
This example creates a new neighbor cache entry on a virtual ISATAP interface.
EXAMPLE 3
PS C:\>Get-NetNeighbor -State Reachable | Get-NetAdapter
This example gets NetAdapter information for all adapters that have reachable neighbors.
Required Parameters
Specifies the IP address of the neighbor cache entry.
Specifies the interface to which the neighbor is connected, using the InterfaceAlias property.
Specifies the interface to which the neighbor is connected, using the InterfaceIndex property.
Optional Parameters
Specifies an IP address family of the neighbor cache entry. This property is automatically generated if unspecified. The acceptable values for this parameter are:
-- IPv4: IPv4 address information.
-- IPv6: IPv6 address link layer address of the neighbor cache entry. This is also known as a MAC address. A link-layer address that uses IPv4 address syntax is a tunnel technology that encapsulates packets over an IPv4 tunnel, such as. the state of the neighbor cache entry. A manually created entry in the neighbor cache only has one allowable state. That state is permanent: The neighbor is statically provisioned and will not expire unless deleted through configuration.
Microsoft.Management.Infrastructure.CimInstance#root\StandardCimv2\MSFT_NetNeighbor
The
Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects.
The path after the pound sign (
#) provides the namespace and class name for the underlying WMI object. | https://docs.microsoft.com/en-us/powershell/module/nettcpip/new-netneighbor?view=winserver2012-ps | 2018-05-20T16:55:04 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
MSlogreader_history (Transact-SQL)
The MSlogreader_history table contains history rows for the Log Reader Agents associated with the local Distributor. This table is stored in the distribution database.
See Also
Reference
Mapping SQL Server 2000 System Tables to SQL Server 2005 System Views
Other Resources
Integration Services Tables
Backup and Restore Tables
Log Shipping Tables
Help and Information
Getting SQL Server 2005 Assistance | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms176065(v=sql.90) | 2018-05-20T16:28:08 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
class in UnityEngine.AI
/
/
Implemented in:UnityEngine.AIM
Navigation mesh agent.
This component is attached to a mobile character in the game to allow it to navigate the scene using the NavMesh. See the Navigation section of the manual for further details.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent.html | 2018-05-20T15:59:56 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.unity3d.com |
In AppBuilder, you can choose which Apache Cordova plugins to enable in your Debug and Release build configurations. For example, during the testing phase, you might want to build your app with enabled Telerik Mobile Testing and Console plugins and when you build your app for distribution, you might want to have these plugins disabled and the Progress AppFeedback enabled. You can also set different plugin variables for the two build configurations.
When you build your app, AppBuilder includes in your application package only the plugins enabled for the currently set build configuration and sets their corresponding plugin variables, if any.
IMPORTANT: When you modify your plugin configuration, AppBuilder stores your settings in hidden configuration-specific app files (
.debug.abrojectand
.release.abproject). Always make sure to commit changes in these files.
Configure Plugins
When you enable a core, integrated or a verified plugin in your app, you can choose if you want to enable it for the Debug or the Release build configuration or for both.
This is especially useful when you work with plugins which bring value during a specific phase of the application lifecycle. For example, the Telerik Mobile Testing plugin is helpful during the development phase, while the Progress AppFeedback might be more beneficial after the release of your app.
You cannot choose which custom plugins to enable for the Debug and the Release build configurations. Custom plugins are enabled for all build configurations.
For more information how to manage the plugins in your app, see Working with Plugins.
Configure Plugin Variables
AppBuilder lets you configure the plugin variables for all Apache Cordova plugins. You can set different plugin variables for the Debug and the Release build configurations.
For more information about configuring plugin variables, see Set Plugin Variables.
Specifics and Limitations
When you set your plugins and plugin variables for the different build configurations, keep in mind the following specifics and limitations.
Specifics
- AppBuilder creates new apps with a disabled Telerik Mobile Testing in the Release configuration.
You can enable it manually.
- AppBuilder creates new apps with a disabled Console plugin in the Release configuration.
You can enable it manually.
- AppBuilder always disables the Telerik Mobile Testing when you build your app for publishing.
To create a release build with an enabled Telerik Mobile Testing, build your app with the Build or Build in Cloud operation.
Limitations
- When you add a custom plugin, you enable it for all build configurations.
To disable a custom plugin when you build your app, you need to remove it from your app. For more information, see Remove Custom Plugins. | http://docs.telerik.com/platform/appbuilder/cordova/build-configurations/plugins-and-build-configurations.html | 2018-05-20T15:58:08 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.telerik.com |
When you manage a cached RDBMS with GemFire XD, you may occasionally need to modify table data without triggering a configured AsyncEventListener (including DBSynchronizer) or cache plug-in implementation. GemFire XD provides the skip-listeners connection property to disable DML event propagation for a particular connection to the GemFire XD cluster.
You can use the skip-listeners property with either a peer client or a thin client connection. When you set the property to "true," GemFire XD:gemfirexd:", props);Or, adding the property directly to the connection string:
final Connection conn = DriverManager.getConnection("jdbc:gemfirexd:;skip-listeners=true"); | http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/caching_database/suppressing-dml-events.html | 2018-05-20T15:56:02 | CC-MAIN-2018-22 | 1526794863626.14 | [] | gemfirexd.docs.pivotal.io |
Aeros.WebServer¶
Main web server instance
Module Contents¶
Functions¶
Aeros.WebServer.
make_config_from_hypercorn_args(hypercorn_string: str, config: Config = Config()) → Config[source]¶
Overrides a given config’s items if they are specified in the hypercorn args string
- class
Aeros.WebServer.
WebServer(import_name: str, host: str = '0.0.0.0', port: int = 80, include_server_header: bool = True, hypercorn_arg_string: str = '', worker_threads: int = 1, logging_level: Union[int, str] = 'INFO', cache: Cache = Cache(), compression: Compression = Compression(level=2, min_size=10), global_headers: Dict[str, str] = None, *args, **kwargs)[source]¶
Bases:
Aeros.patches.quart.app.Quart
This is the main server class which extends a standard Flask class by a bunch of features and major performance improvements. It extends the Quart class, which by itself is already an enhanced version of the Flask class. This class however allows production-grade deployment using the hypercorn WSGI server as production server. But instead of calling the hypercorn command via the console, it can be started directly from the Python code itself, making it easier to integrate in higher-level scripts and applications without calling os.system() od subprocess.Popen().
_get_own_instance_path(self)[source]¶
Retrieves the file and variable name of this instance to be used in the Hypercorn CLI.
Since hypercorn needs the application’s file and global variable name, an instance needs to know it’s own origin file and name. But since this class is not defined in the same file as it is called or defined from, this method searches for the correct module/file and evaluates it’s instance name.
Warning
Deprecation warning: This method will be removed in future versions. Usage is highly discouraged.
cache(self, timeout=None, key_prefix='view/%s', unless=None, forced_update=None, response_filter=None, query_string=False, hash_method=hashlib.md5, cache_none=False)[source]¶
A simple wrapper that forwards cached() decorator to the internal Cache() instance. May be used as the normal @cache.cached() decorator. | https://aeros.readthedocs.io/en/latest/autoapi/Aeros/WebServer/ | 2022-01-16T23:03:18 | CC-MAIN-2022-05 | 1642320300244.42 | [] | aeros.readthedocs.io |
Automation controller ships with an admin utility script,
automation-controller-service, that can start, stop, and restart all the controller services running on the current single controller node (including the message queue components, and the database if it is an integrated installation). External databases must be explicitly managed by the administrator. The services script resides in
/usr/bin/automation-controller-service and can be invoked as follows:
Note
In clustered installs,
automation-controller-service restart does not include PostgreSQL as part of the services that are restarted because it exists external to the controller, and because PostgreSQL does not always require a restart. Use
systemctl restart ansible-controller to restart services on clustered environments instead. Also you must restart each cluster node for certain changes to persist as opposed to a single node for a localhost install. For more information on clustered environments, see the Clustering section.
You can also invoke the services script via distribution-specific service management commands. Distribution packages often provide a similar script, sometimes as an init script, to manage services. Refer to your distribution-specific service management system for more information.
Note
When running the controller in a container, do not use the
automation-controller-service script. Restart the pod using the container environment instead. | https://docs.ansible.com/automation-controller/latest/html/administration/init_script.html | 2022-01-16T21:25:40 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.ansible.com |
Machine learning with the Open Data Cube
¶
Sign up to the DEA Sandbox to run this notebook interactively from a browser
Compatibility: Notebook currently compatible with both the
NCIand
DEA Sandboxenvironments
Products used: ls8_nbart_geomedian_annual and ls8_nbart_tmad_annual
Special requirements: A shapefile of labelled data in shapefile format is required to use this notebook. An example dataset is provided.
Prerequisites: A basic understanding of supervised learning techniques is required. Introduction to statistical learning is a useful resource to begin with - it can be downloaded for free here. The Scikit-learn documentation provides information on the available models and their parameters.
Description¶
This notebook demonstrates a potential workflow using functions from the dea_tools.classification script to implement a supervised learning landcover classifier within the ODC (Open Data Cube) framework.
For larger model training and prediction implementations this notebook can be adapted into a Python file and run in a distributed fashion.
This example predicts a single class of cultivated / agricultural areas. The notebook demonstrates how to:
Extract the desired ODC data for each labelled area (this becomes our training dataset).
Train a simple decision tree model and adjust parameters.
Predict landcover using trained model on new data.
Evaluate the output of the classification using quantitative metrics and qualitative tools.
This is a quck reference for machine learning on the ODC captured in a single notebook. For a more indepth exploration please use the Scalable Machine Learning series of notebooks. ***
Getting started¶
To run this analysis, run all the cells in the notebook, starting with the “Load packages” cell.
Load packages¶
Import Python packages that are used for the analysis.
[1]:
%matplotlib inline import subprocess as sp import shapely import xarray as xr import rasterio import datacube import matplotlib import pydotplus import numpy as np import geopandas as gpd import matplotlib.pyplot as plt from io import StringIO from odc.io.cgroups import get_cpu_quota from sklearn import tree from sklearn import model_selection from sklearn.metrics import accuracy_score from IPython.display import Image from datacube.utils import geometry from datacube.utils.cog import write_cog import sys sys.path.insert(1, '../Tools/') from dea_tools.classification import collect_training_data, predict_xr import warnings warnings.filterwarnings("ignore")
Connect to the datacube¶
Connect to the datacube so we can access DEA data.
[2]:
dc = datacube.Datacube(app='Machine_learning_with_ODC')
Analysis parameters¶
path: The path to the input shapefile. A default shapefile is provided.
field: This is the name of column in your shapefile attribute table that contains the class labels
time: The time range you wish to extract data for, typically the same date the labels were created.
zonal_stats: This is an option to calculate the
'mean',
'median', or
'std'of the pixel values within each polygon feature, setting it to
Nonewill result in all pixels being extracted.
resolution: The spatial resolution, in metres, to resample the satellite data too e.g. if working with Landsat data, then this should be
(-30,30)
output_crs: The coordinate reference system for the data you are querying.
ncpus: Set this value to > 1 to parallize the collection of training data. eg.
npus=8
If running the notebook for the first time, keep the default settings below. This will demonstrate how the analysis works and provide meaningful results.
[3]:
path = '../Supplementary_data/Machine_learning_with_ODC/example_training_data.shp' field = 'classnum' time = ('2015') zonal_stats = 'median' resolution = (-25, 25) output_crs = 'epsg:3577'
Preview input data and study area¶
We can load and preview our input data shapefile using
geopandas. The shapefile should contain a column with class labels (e.g.
classnum below). These labels will be used to train our model.
[5]:
# Load input data shapefile input_data = gpd.read_file(path) # Plot first five rows input_data.head()
[5]:
The data can also be explored using the interactive map below. Hover over each individual feature to see a print-out of its unique class label number above the map.
[6]:
# Plot training data in an interactive map input_data.explore(column=field, legend=False)
[6]: | https://docs.dea.ga.gov.au/notebooks/Frequently_used_code/Machine_learning_with_ODC.html | 2022-01-16T21:38:30 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.dea.ga.gov.au |
Plan a canvas app and start to build
It might be tempting to immediately begin building your Power Apps canvas app, but you need to complete the essential first steps in planning the app. In this module, you'll learn about those steps and how to build the simpler elements of the Expense Report app and connect it to your data.
Learning objectives
In this module, you'll:
- Learn how to create a new app.
- Discover how to plan your app.
- Learn how to add and set up controls in a canvas app.
Prerequisites
- Basic understanding of Microsoft canvas apps and Microsoft Dataverse
- Basic knowledge of how to build a data model
- Introduction min
-
-
-
-
- | https://docs.microsoft.com/en-us/learn/modules/plan-canvas-app/ | 2022-01-16T23:43:45 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.microsoft.com |
netapp_eseries.santricity.na_santricity_alerts_syslog – NetApp E-Series manage syslog servers receiving storage system alerts._alerts_syslog.
Notes
Note
Check mode is supported.
This API is currently only supported with the Embedded Web Services API v2.12 (bundled with SANtricity OS 11.40.2): Add two syslog server configurations to NetApp E-Series storage array. na_santricity_alerts_syslog: ssid: "1" api_url: "" api_username: "admin" api_password: "adminpass" validate_certs: true servers: - address: "192.168.1.100" - address: "192.168.2.100" port: 514 - address: "192.168.3.100" port: 1000
Return Values
Common return values are documented here, the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/collections/netapp_eseries/santricity/na_santricity_alerts_syslog_module.html | 2022-01-16T21:55:06 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.ansible.com |
Pre-Defined User Variables
Automation Anywhere provides two user variables that are pre-defined for your use.
The pre-defined variables are:
- my-list-variable (type: List)
This variable provides a container for a list of values. For more details see List Type Variables
- Prompt-Assignment (type: Value)This variable provides a container for a single value. For more details see Value Type variables .
These variables can be used quickly by pressing the F2 key.
| https://docs.automationanywhere.com/de-DE/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/using-variables/pre-defined-user-variables.html | 2022-01-16T21:34:14 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['https://automationanywhere-be-prod.automationanywhere.com/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/using-variables/../../img/Variables/varmanagermain.png?_LANG=enus',
'Image displaying Variable Manager'], dtype=object) ] | docs.automationanywhere.com |
Overview: The Daily Hours report will show you how many hours an employee worked per day over a specific timeframe. No detailed punch information is shown on this report, just the number of hours worked per day.
Export options include:
CSV
Excel
Additional Resources:
Daily Hours Report Example
Run The Daily Hours Report
1. Start by clicking Reports in the top navigation followed by Daily Hours:
2. From there you can:
Select Employees
Choose Location, Department, or Position Codes
Specify the Start/End Date
Split hours by code
And Submit once done:
3. If you click the header, for example, "Status", you can change the sorting of the report. You can then export via CSV, Excel, PDF, or Print the report.
The Status column is for punch approvals and the Time Card Status is for time card approvals:
> | https://docs.buddypunch.com/en/articles/1064260-daily-hours-report | 2022-01-16T21:57:27 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['https://downloads.intercomcdn.com/i/o/410112684/02967f43a58ab8695eed7ecd/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/410112880/719e65c5b1acf0f2c9e3b9a0/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/410113326/0960e04e91b843d553d99594/image.png',
None], dtype=object) ] | docs.buddypunch.com |
External Tools¶
Start External Application¶
This example will create a entry that will launch and login to server using filezilla and sftp. Start with opening up external tools from:And create a New entry. Change Display Name to FileZilla and Filename to C:\Program Files\FileZilla FTP Client\filezilla.exe. See image below:
We then need to arguments to use for filezilla, which we can find out either by searching for it on the great wide internet or by called the -h parameter to filezilla.exe in powershell:
& 'C:\Program Files\FileZilla FTP Client\filezilla.exe' -h
This will open a small dialog showing the various input parameters. What we are going to use is the following for our entry:
- Application: FileZilla
- Protocol - sftp://
- Input Parameters (variables) - %HOSTNAME%, %USERNAME%,%PASSWORD% and %PORT%
All of the variables are parsed from mRemoteNG connection item to the filezilla command line. So lets build this entry up in External Tools where we add all these items.
Try the launch the FileZilla based external tool now against the server you want to login too and you will notice that the application is launched with the variables.
Traceroute¶
This example will create a traceroute which you can call on for a connection to get the traceroute to the connection. Start with opening up external tools from: External Tools Change Display Name to Traceroute and Filename to cmd.And create a New entry. See
See image below:
Figure 1.0: Showing traceroute init settings
Now comes the interesting part where we fill in arguments that tells the console what to launch. Here are the parts we need:
- Keep the console open - /K
- Program to run - tracert
- Variable to use - %HOSTNAME%
So lets fill these options in to the arguments like so:
This is all we really need in order to do a traceroute. Right click on a connection in the connection list and go towhich will open a cmd prompt and run a tracert against the host using hostname variable.
A console like below will appear that show the traceroute and will not exit until you close the window.
If you want to use powershell instead. Then follow information below:
- Filename - powershell.exe
- Arguments - -NoExit tracert %HOSTNAME%
Notice that we replaced the /K with -NoExit and changed cmd with powershell.exe. See image below:
| https://mremoteng.readthedocs.io/en/v1.77.3-dev/howtos/external_tools.html | 2022-01-16T21:24:19 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['../_images/example_et_start_application_01.png',
'../_images/example_et_start_application_01.png'], dtype=object)
array(['../_images/example_et_start_application_02.png',
'../_images/example_et_start_application_02.png'], dtype=object)
array(['../_images/example_et_traceroute_01.png',
'../_images/example_et_traceroute_01.png'], dtype=object)
array(['../_images/example_et_traceroute_02.png',
'../_images/example_et_traceroute_02.png'], dtype=object)
array(['../_images/example_et_traceroute_03.png',
'../_images/example_et_traceroute_03.png'], dtype=object)
array(['../_images/example_et_traceroute_04.png',
'../_images/example_et_traceroute_04.png'], dtype=object)
array(['../_images/example_et_traceroute_05.png',
'../_images/example_et_traceroute_05.png'], dtype=object)] | mremoteng.readthedocs.io |
Playground
Here is a basic template to get started querying the Tracking Stream API.
info
An API token will be necessary to run the code below.
To get it working with your API token and do modifications to the code, you will have to register on Replit and fork the code sample. You can also directly call the API and see its output by entering your API token in the default prompt. | https://aviation-docs.spire.com/api/tracking-stream/playground/ | 2022-01-16T23:06:32 | CC-MAIN-2022-05 | 1642320300244.42 | [] | aviation-docs.spire.com |
Getting Involved
Table of Contents
- Development
- Continuous Integration
- Agent troubleshooting
Development
Local dev environment
For dev purposes, it is important to be able to run and test the code directly on your dev environment without using the package manager.
In order to run the agent without using the RPM package, you need to move the
three configuration files (
settings.yml,
dcirc.sh and
hosts) in the
directory of the git repo.
Then, you need to modify dev-ansible.cfg two variables:
inventory and
roles_path (baremetal_deploy_repo).
Also, in order to install package with the ansible playbook, you need to add
rights to
dci-openshift-agent user:
# cp dci-openshift-agent.sudo /etc/sudoers.d/dci-openshift-agent
Finally, you can run the script:
# Option -d for dev mode # Overrides variables with group_vars/dev % ./dci-openshift-agent-ctl -s -c settings.yml -d -- -e @group_vars/dev
Libvirt environment
Please refer to the full libvirt documentation to setup your own local libvirt environment
Testing a change
If you want to test a change from a Gerrit review or from a GitHub PR,
use the
dci-check-change command. Example:
$ dci-check-change 21136
to check or from a GitHub PR:
$ dci-check-change
Regarding Github, you will need a token to access private repositories
stored in
~/.github_token.
dci-check-change will launch a DCI job to perform an OCP
installation using
dci-openshift-agent-ctl and then launch another
DCI job to run an OCP workload using
dci-openshift-app-agent-ctl if
dci-openshift-app-agent-ctl is present on the system.
You can use
dci-queue from the
dci-pipeline package to manage a
queue of changes. To enable it, add the name of the queue into
/etc/dci-openshift-agent/config:
DCI_QUEUE=<queue name>
If you have multiple prefixes, you can also enable it in
/etc/dci-openshift-agent/config:
USE_PREFIX=1
This way, the resource from
dci-queue is passed as the prefix for
dci-openshift-app-agent-ctl.
Advanced
Dependencies
If the change you want to test has a
Depends-On: or
Build-Depends:
field,
dci-check-change will install the corresponding change and
make sure all the changes are tested together.
Prefix
If you want to pass a prefix to the
dci-openshift-agent use the
-p
option and if you want to pass a prefix to the
dci-openshift-app-agent use the
-p2 option. For example:
$ dci-check-change -p prefix -p2 app-prefix
Hints
You can also specify a
Test-Hints: field in the description of your
change. This will direct
dci-check-change to test in a specific way:
Test-Hints: snovalidate the change in SNO mode.
Test-Hints: libvirtvalidate in libvirt mode (3 masters).
Test-Hints: no-checkdo not run a check (useful in CI mode).
Test-Args-Hints: can also be used to specify extra parameters to
pass to
dci-check-change.
Test-App-Hints: can also be used to change the default app to be
used (
basic_example). If
none is specified in
Test-App-Hints:,
the configuration is taken from the system.
Hints need to be activated in the
SUPPORTED_HINTS variable in
/etc/dci-openshift-agent/config like this:
SUPPORTED_HINTS="sno|libvirt|no-check|args|app"
Continuous integration
You can use
/var/lib/dci-openshift-agent/samples/ocp_on_libvirt/ci.sh to setup
your own CI system to validate changes.
To do so, you need to set the
GERRIT_SSH_ID variable to set the ssh
key file to use to read the stream of Gerrit events from
softwarefactory-project.io. And
GERRIT_USER to the Gerrit user to
use.
The
ci.sh script will then monitor the Gerrit events for new changes
to test with
dci-check-change and to report results to Gerrit.
For the CI to vote in Gerrit and comment in GitHub, you need to set
the
DO_VOTE variable in
/etc/dci-openshift-agent/config like this:
DO_VOTE=1
Agent troubleshooting
Launching the agent without DCI calls
The
dci tag can be used to skip all DCI calls. You will need to
provide fake
job_id and
job_info variables in a
myvars.yml file
like this:
job_id: fake-id job_info: job: components: - name: 1.0.0 type: my-component
and then call the agent like this:
# su - dci-openshift-agent $ dci-openshift-agent-ctl -s -- --skip-tags dci -e @myvars.yml | https://docs.distributed-ci.io/dci-openshift-agent/docs/development.html | 2022-01-16T21:17:17 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.distributed-ci.io |
Users page
Overview
The Users page displays the list of users created in inSync Management Console.
Users page provides settings to configure and manage user provisioning. As an administrator, you can add, update, and manage users and their profiles. Based on the user provisioning method that you have opted for, you can create new mappings, update existing mappings, and manage settings for AD/LDAP, SCIM, and Azure AD-based user provisioning methods. You can also view the license statistics from this page.
Access Path
On the inSync Management Console menu bar, click Users.The cards on the Users page enable you to view the total number of users and workload-based license statistics. The license statistics will be refreshed every 7-10 minutes. For more information about how the active license is consumed, see active license consumption rationale.
Considerations
As a Cloud Administrator, you can view the license statistics for all configured workloads and profiles.
As a non-cloud administrator, you can view the license statistics for only those workloads and profiles which are assigned to you.
If you have Manage User permission, you can view both active and preserve license statistics for the workloads and profiles assigned to you.
If you do not have Manage User permission, you can view only active license statistics for the workloads and profiles assigned to you.
The following table lists the fields on the cards of the Users page.
Filter
Considerations:
As a Cloud Administrator, you can view all Workload license filters for all configured workloads.
As a non-cloud administrator, you can view license filters for only those workloads which are assigned to you.
Apply Filters:
Use the filter to narrow down the search and listing of users in inSync by typing their user name, email address, and custom attribute in the Search box.
Click the Active or Preserve license count for the respective workload, users consuming active or preserve license count will be displayed. For example, as shown in the above image, if you click 3 Active from Endpoints, all 3 users who have consumed Endpoints active license are displayed in the list view.
Applying filters - To apply filters, click the
icon, select the filters you would like to apply, and click Apply. To cancel the filters applied, click Reset. You can filter based on the following criteria:
Profile
Storage
Workload License (Endpoints License, Microsoft 365 License, and Google Workspace License): Active, Preserved, or Not Licensed. You can also filter using multiple license states, for example, Active and Preserved.
Legal Hold: Enabled or Disabled
User Status: Active or Preserved
Users
The following fields are displayed in the User listing:
Considerations:
As a Cloud Administrator, you can view all users but as a non-cloud administrator, you can view users based on the workload and profile assigned.
As a Cloud Administrator, you can view all Workload usage columns for all configured workloads. As a non-cloud administrator, you can view Workload usage columns for only those workloads which are assigned to you.
Note: The icon - in the workload usage column indicates that the workload is not licensed for the user.
Actions on the Users page
Deployment
Based on the user provisioning method (AD/LDAP, SCIM, or Azure AD) that you have selected, the Deployment page displays the following information:
Mappings
Accounts
Settings
The following table lists the fields in the Settings tab.
Mapping Priority Order
This area displays the priority of all the available Mappings. The mapping at the top has the highest priority with the one at the bottom lowest.
The following table lists the fields in the Mapping Priority Order section:
Actions on the Deployment page
The following table lists the actions on the AD/LDAP Mapping Details page.
Related topic
active license consumption rationale. | https://docs.druva.com/Endpoints/020_Introduction/020_About_inSync_Management_Console_User_Interface/Users_page | 2022-01-16T21:16:53 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.druva.com |
The explorer views have flexible filters, groupings, and other elements that enable you to navigate the data in ways that best help you pinpoint and understand the risk associated with the application under development.
In this section:
Your Custom Views are Preserved in the Violations and Test Explorers
Customizations you make to the search results panel, such as the changing the column order, selections, and grouping, are preserved when you refresh or leave and return to the page. This functionality applies to the Violations and Test explorers.
Changing Panel Sizes
You can click and drag handles between panels to customize work spaces.
Sorting Search Results
Click on the ellipses menu of a column header to sort data into ascending or descending order.
Grouping Search Results
- Click on a column header and drag it into the groupings area.
- Click the close button (X) to remove the parameter from the grouped results.
Customizing Search Results Table
You can add or remove columns from search results tables (the Violations Explorer shown):
- Click on the ellipsis menu of any column header to open the drop down menu.
- Choose Columns and enable/disable parameters.
You can change the width of a column a column so that the entire message, file name, or other field is visible by clicking on the margin between two columns headers and dragging it to the desired width.
Quick Layout Options
Each explorer view has a default layout that you can customize by dragging panel anchor points. You can also click on a quick layout button to instantly changed to a view optimized for viewing specific aspects of the data.
Quick Layout Options for Violations and Test Explorers
The Violations and Test Explorers have the same layout options. Click on the Split, Table, or Code and forms icon to change the layout.
Quick Layout Options for Coverage Explorer
Click on the Split, Code, or Test icon to change the layout.
Quick Layout Options for Change Explorer
Click the Split, Code and findings, or Code icon to change the layout.
Quick Layout Options for Metrics Explorer
Click on the Split, Table, or Table and details icon to change the layout. | https://docs.parasoft.com/display/DTP20202/Navigating+Explorer+Views | 2022-01-16T21:56:20 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.parasoft.com |
numpy.polynomial.chebyshev.chebsub¶
- numpy.polynomial.chebyshev.chebsub(c1, c2)[source]¶
Subtract one Chebyshev series from another.
Returns the difference of two Chebyshev series c1 - c2. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series T_0 + 2*T_1 + 3*T_2.
Notes
Unlike multiplication, division, etc., the difference of two Chebyshev series is a Chebyshev series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.”
Examples
>>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebsub(c1,c2) array([-2., 0., 2.]) >>> C.chebsub(c2,c1) # -C.chebsub(c1,c2) array([ 2., 0., -2.]) | https://docs.scipy.org/doc/numpy-1.8.0/reference/generated/numpy.polynomial.chebyshev.chebsub.html | 2022-01-16T21:53:14 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.scipy.org |
Overview: The Hours Report by Location, Department, or Position reports allows you to run time on a specific Location, Department, or Position code. This report is particularly useful for job costing.
No detailed punch information is shown on this report, just the number of hours worked under each code and over the specified timeframe.
Important: Position codes are only available with scheduling.
Export options include:
CSV
Excel
Additional Resources:
Hours Report by Location Example
Hours Report by Department Example
Run The Hours Report by Report
1. Start by clicking Reports in the top navigation followed by Hours Report By --> Location | Department | Position:
2. Choose the correct filters and Submit once done:
3. Employee data will populate. You can then export via CSV, Excel, PDF or Print the report:
4. If you want to view detailed information for a specific employee or location, click the blue highlighted text and you'll be taken to the In/Out Activity report:
Q: On the Hours Report by Location, Department, or Position reports why is time off separate from the rest of the hours?
A: Time off is not associated with a specific location, department or position so will not be associated with hours under those categories. | https://docs.buddypunch.com/en/articles/3578226-hours-report-by-location-department-position-report | 2022-01-16T22:42:40 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['https://downloads.intercomcdn.com/i/o/410071861/d93814f86477036d559df750/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/410080622/120eb641c148b105a3ea3051/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/410080926/a68ac597ee9997712982d70d/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/410081919/9f834a6b4fd9518a85693834/image.png',
None], dtype=object) ] | docs.buddypunch.com |
Persistence¶
You can use the dsl library to define your mappings and a basic persistent layer for your application.
Mappings¶
The mapping definition follows a similar pattern to the query dsl:
from elasticsearch_dsl import Keyword, Mapping, Nested, Text # name your type m = Mapping('my-type') # add fields m.field('title', 'text') # you can use multi-fields easily m.field('category', 'text', fields={'raw': Keyword()}) # you can also create a field manually comment = Nested() comment.field('author', Text()) comment.field(', 'my-type',).
DocType¶
If you want to create a model-like wrapper around your documents, use the
DocType class:
from datetime import datetime from elasticsearch_dsl import DocType, Date, Nested, Boolean, \ analyzer, InnerObjectWrapper, Completion, Keyword, Text html_strip = analyzer('html_strip', tokenizer="standard", filter=["standard", "lowercase", "stop", "snowball"], char_filter=["html_strip"] ) class Comment(InnerObjectWrapper): def age(self): return datetime.now() - self.created_at class Post(DocType): title = Text() title_suggest = Completion() created_at = Date() published = Boolean() category = Text( analyzer=html_strip, fields={'raw': Keyword()} ) comments = Nested( doc_class=Comment, properties={ 'author': Text(fields={'raw': Keyword()}), 'content': Text(analyzer='snowball'), 'created_at': Date() } ) class Meta: index = 'blog' def add_comment(self, author, content): self.comments.append( {'author': author, 'content': content}) def save(self, ** kwargs): self.created_at = datetime.now() return super().save(** kwargs),
parent,
routing,
index etc) can be
accessed (and set) via a
meta attribute or directly using the underscored
variant:
post = Post(meta={'id': 42}) # prints 42, same as post._id print(post.meta.id) # override default index, same as post.() # you can also update just individual fields which will call the update API # and also update the document in place first.update(published=True, published_by='me').
All the information about the
DocType, including its
Mapping can be
accessed through the
_doc_type attribute of the class:
# name of the type and index in elasticsearch Post._doc_type.name Post._doc_type.index # the raw Mapping object Post._doc_type.mapping # the optional name of the parent type (if defined) Post._doc_type.parent
The
_doc_type attribute is also home to the
refresh method which will
update the mapping on the
DocType from elasticsearch. This is very useful
if you use dynamic mappings and want the class to be aware of those fields (for
example if you wish the
Date fields to be properly (de)serialized):
Post._doc_type.refresh()
To delete a document just call its
delete method:
first = Post.get(id=42) first.delete()
DocType
subclasses and each document in the response will be wrapped in it’s class.
If you want to run suggestions, just use the
suggest method on the
Search object:
s = Post.search() s = s.suggest('title_suggestions', 'pyth', completion={'field': 'title_suggest'}) # you can even execute just the suggestions via the _suggest API suggestions = s.execute_suggest() for result in suggestions.title_suggestions: print('Suggestions for %s:' % result.text) for option in result.options: print(' %s (%r)' % (option.text, option.payload))
class Meta options¶
In the
Meta class inside your document definition you can define various
metadata for your document:
doc_type
- name of the doc_type in elasticsearch. By default it will be constructed from the class name (MyDocument -> my_document)
index
- default index for the document, by default it is empty and every operation such as
getor
saverequires an explicit
indexparameter
using
- default connection alias to use, defaults to
'default'
mapping
- optional instance of
Mappingclass to use as base for the mappings created from the fields on the document class itself.
Any attributes on the
Meta class that are instance of
MetaField will be
used to control the mapping of the meta fields (
_all,
_parent etc).
Just name the parameter (without the leading underscore) as the field you wish
to map and pass any parameters to the
MetaField class:
class Post(DocType): title = Text() class Meta: all = MetaField(enabled=False) parent = MetaField(type='blog') dynamic = MetaField('strict')
Index¶, DocType, Text, analyzer blogs = Index('blogs') # define custom settings blogs.settings( number_of_shards=1, number_of_replicas=0 ) # define aliases blogs.aliases( old_blogs={} ) # register a doc_type with the index blogs.doc_type(Post) # can also be used as class decorator when defining the DocType @blogs.doc_type class Post(DocType):.doc_type(Post) # create a copy of the index with different name company_blogs = blogs.clone('company-blogs') # create a different copy on different cluster dev_blogs = blogs.clone('blogs', using='dev') # and change its settings dev_blogs.setting(number_of_shards=1) | https://elasticsearch-dsl.readthedocs.io/en/5.3.0/persistence.html | 2022-01-16T22:46:12 | CC-MAIN-2022-05 | 1642320300244.42 | [] | elasticsearch-dsl.readthedocs.io |
Connect to virtual machine on Hyper-V¶
Introduction¶
When set up properly, you can use mRemoteNG to connect to virtual machines running on Hyper-V. This how to provides you with all the information you need to get things running.
To be able to connect to the virtual machine we need its’ id. You can find it by executing the following powershell command on the Hyper-V server:
Create a new connection, set the protocol to RDP and set the “Use VM ID” property to true. Enter the id in the new property field that just appeared in the connection section and set the port to 2179.
Enter the id of the virtual machine you found out earlier and you are able to connect to the virtual machine.
Prerequisites¶
For the scenario above to work there is some configuration that may be necessary for you to set up, depending on your environment.
You must be a member of the Administrators and Hyper-V Administrators groups on the Hyper-V Server to be able to remotely connect to any virtual machine running on the host via VMRDP. If this is not the case your user has to be granted access to remotely access the machine. The following Powershell command achieves this:
Port 2179 must be open on Hyper-V server and on the machine you are connecting from. Use the following command to open the ports on the firewall if needed:
In case you are facing “Unknown disconnection reason 3848” error when connecting, you need to configure a number of registry settings on your client and the Hyper-V Server to make the connection work. This problem occurs because of the CredSSP (Credential Security Service Provider) policy on the client and/or Hyper-V Server not allowing to authentication of remote users by default.
Note
For more information on RDP error codes see this Microsoft article.
Start the PowerShell console with administrative privileges and run the following commands: | https://mremoteng.readthedocs.io/en/v1.77.3-dev/howtos/vmrdp.html | 2022-01-16T23:07:29 | CC-MAIN-2022-05 | 1642320300244.42 | [] | mremoteng.readthedocs.io |
Hello ,
I am able to successfull create a key vault using below azure cli command
az keyvault create --name $kvname --resource-group $rgname --location $location --enable-soft-delete true
I have a business case to add existing virtual network and subnet using private endpoint to out new key vault using CLI command . I didnt find any command that does this , could anyone please let me know if we can achieve this using powershell or azure cli? | https://docs.microsoft.com/en-us/answers/questions/556956/ad-existing-virtual-network-and-subnet-to-keyvault.html | 2022-01-16T23:36:40 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.microsoft.com |
gpdbrestore] [-T schema.table [,...]] [--table-file file_name] [--truncate] [-e] [-G] [-B parallel_processes] [-d master_data_directory] [-a] [-q] [-l logfile_directory] [-v] [--ddboost] [-.
Greenplum Database must be configured to communicate with the Symantec.
NetBackup is not compatible with DDBoost. Both NetBackup and DDBoost cannot be used in a single back up operation.
If you used named pipes when you backed up a database with gpcrondump, named pipes with the backup data must be available when restoring the database from the backup. Boost for this restore, if the --ddboost option was passed when the data was dumped. Before using Data Domain Boost, make sure the one-time Data Domain Boost credential setup is complete. See "Backing Up and Restoring Databases" in the Greenplum Database Administrator Guide for details.
- GPDB/backup_directory/date. The backup_directory is set when you specify the Data Domain credentials with gpcrondump.
- This option is not supported if --netbackup-service-host is specified.
- .
- --netbackup-block-size size
- Specify the block size, in bytes, of data being transferred from the Symantec -T or --table-file.
- | http://gpdb.docs.pivotal.io/4350/utility_guide/admin_utilities/gpdbrestore.html | 2019-01-16T07:58:27 | CC-MAIN-2019-04 | 1547583657097.39 | [array(['/images/icon_gpdb.png', None], dtype=object)] | gpdb.docs.pivotal.io |
Bitnami Plone Stack for Windows / Linux / MacOS
Plone is an open source content management system built on the Zope application server. Plone can be used for all types of websites such as a blog, e-commerce site, and even an intranet.
Getting started
Need more help? Find below detailed instructions for solving complex issues. | https://docs.bitnami.com/installer/apps/plone/ | 2019-01-16T09:08:25 | CC-MAIN-2019-04 | 1547583657097.39 | [] | docs.bitnami.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.